sfba.social is one of the many independent Mastodon servers you can use to participate in the fediverse.
A Mastodon instance for the San Francisco Bay Area. Come on in and join us!


#effectivealtruism


I will be attending the EAGxPrague conference in May.

I have been a big fan of 80000hours.org for some time, and given my background I am interested in AI safety and in "AI for good".

This is my first in-person involvement with the effective altruism community. I am well aware that there are some controversies around the movement, so I am quite curious about what I will find when I finally meet the community in person.

80,000 Hours · You have 80,000 hours in your career. This makes it your best opportunity to have a positive impact on the world. If you’re fortunate enough to be able to use your career for good, but aren’t sure how, we can help.

"A couple years ago, Oliver Habryka, the CEO of Lightcone, a company affiliated with LessWrong, published an essay asking why people in the rationalism, effective altruism and AI communities “sometimes go crazy”.

Habryka was writing not long after Sam Bankman-Fried, a major funder of AI research, had begun a spectacular downfall that would end in his conviction for $10bn of fraud. Habryka speculated that when a community is defined by a specific, high-stakes goal (such as making sure humanity isn’t destroyed by AI), members feel pressure to conspicuously live up to the “demanding standard” of that goal.

Habryka used the word “crazy” in the non-clinical sense, to mean extreme or questionable behavior. Yet during the period when Ziz was making her way toward what she would call “the dark side”, the Berkeley AI scene seemed to have a lot of mental health crises.

“This community was rife with nervous breakdown,” a rationalist told me, in a sentiment others echoed, “and it wasn’t random.” People working on the alignment problem “were having these psychological breakdowns because they were in this environment”. There were even suicides, including of two people who were part of the Zizians’ circle.

Wolford, the startup founder and former rationalist, described a chicken-and-egg situation: “If you take the earnestness that defines this community, and you look at civilization-ending risks of a scale that are not particularly implausible at this point, and you are somebody with poor emotional regulation, which also happens to be pretty common among the people that we’re talking about – yeah, why wouldn’t you freak the hell out? It keeps me up at night, and I have stuff to distract me.”

A high rate of pre-existing mental illnesses or neurodevelopmental disorders was probably also a factor, she and others told me."

theguardian.com/global/ng-inte

The Guardian · They wanted to save us from a dark AI future. Then six people were killed. By J Oliver Conroy

As a wonk, I've been an effective altruism skeptic since I first learned about it. To many people it represents a sense of arrogance: that morality can be distilled into utilitarian quantitative calculations. While there's some truth to that, I think critics forget that it was a direct response to another kind of moral arrogance that came before it, one that treated every local nonprofit 'pet cause' as unimpeachable and equally urgent while overlooking the most vulnerable populations in corners of the world, suffering at the lowest socioeconomic rung largely out of sight and out of mind. If you are donating to causes under the mantle of "impact", there's no right way to grapple with this, but you're gonna have to grapple with it nonetheless.
vox.com/future-perfect/372519/

Vox · I give to charity — but never to people on the street. Is that wrong? By Sigal Samuel

Another great opinion piece on AI and the lure of treating it as a religion.

“When we put all these ideas together and boil them down, we get this basic proposition:

1. We may not have much time until life as we know it is over.
2. So we need to place a bet on something that can save us.
3. Since the stakes are so high, we should ante up and go all in on our bet.

Any student of religion will immediately recognize this for what it is: apocalyptic logic.”

“Silicon Valley’s vision for AI? It’s religion, repackaged.”

vox.com/the-highlight/23779413

Vox · Silicon Valley’s vision for AI? It’s religion, repackaged. By Sigal Samuel

@TimothyNoah
This mistaken perspective with regard to effective altruism - that the people cannot handle more money - would be completely undermined if the left focused on incentivizing development of any form of shared ownership. Our movement should drive equity to the people and teach them to take control of the market, not constantly beg for it at the union table or beneath the philanthropist's table. t.co/yuKA3ggt1N

The New Republic · Sam Bankman-Fried and the Moral Emptiness of Effective Altruism. The most distinctive characteristic of E.A.? The deftness with which it tiptoes past targets likely to offend billionaires.