The more I hear about #EffectiveAltruism, #Rationalism, and the followers thereof, the luckier I feel that I didn't accept a Silicon Valley job when I was young.
That shit is beyond insane, and I think the person I was when I was considering SV jobs would have fallen for it super hard, because I was a naive, arrogant little shit.
I will be attending the EAGxPrague conference in May.
I have been a big fan of https://80000hours.org for some time, and given my background, I am interested in AI safety and also in "AI for good".
This will be my first in-person involvement with the effective altruism community. I am well aware that there are some controversies around the movement, so I am quite curious about what I will find when I finally meet the community in person.
The two most recent episodes of #BioUnethical with David Thorstad, Emily Largent, and @GovindPersad were very (very!) good.
Hosts Leah Pierson and Sophie Gibert may be doing reflective discussion better than anyone — outstanding stage setting, questioning, improvising, etc.
"A couple years ago, Oliver Habryka, the CEO of Lightcone, a company affiliated with LessWrong, published an essay asking why people in the rationalism, effective altruism and AI communities “sometimes go crazy”.
Habryka was writing not long after Sam Bankman-Fried, a major funder of AI research, had begun a spectacular downfall that would end in his conviction for $10bn of fraud. Habryka speculated that when a community is defined by a specific, high-stakes goal (such as making sure humanity isn’t destroyed by AI), members feel pressure to conspicuously live up to the “demanding standard” of that goal.
Habryka used the word “crazy” in the non-clinical sense, to mean extreme or questionable behavior. Yet during the period when Ziz was making her way toward what she would call “the dark side”, the Berkeley AI scene seemed to have a lot of mental health crises.
“This community was rife with nervous breakdown,” a rationalist told me, in a sentiment others echoed, “and it wasn’t random.” People working on the alignment problem “were having these psychological breakdowns because they were in this environment”. There were even suicides, including those of two people who were part of the Zizians’ circle.
Wolford, the startup founder and former rationalist, described a chicken-and-egg situation: “If you take the earnestness that defines this community, and you look at civilization-ending risks of a scale that are not particularly implausible at this point, and you are somebody with poor emotional regulation, which also happens to be pretty common among the people that we’re talking about – yeah, why wouldn’t you freak the hell out? It keeps me up at night, and I have stuff to distract me.”
A high rate of pre-existing mental illnesses or neurodevelopmental disorders was probably also a factor, she and others told me."
https://www.theguardian.com/global/ng-interactive/2025/mar/05/zizians-artificial-intelligence
@georgetakei @JosephMeyer Perhaps those unfortunates are part of his planned prey. #Musk #EffectiveAltruism #eugenics
As a #philanthropy wonk, I've been an #effectivealtruism skeptic since I first learned about it. To many people it represents a kind of arrogance: the idea that morality can be distilled into utilitarian quantitative calculations. While there's some truth to that, I think critics forget that it was a direct response to an earlier kind of moral arrogance: treating all local nonprofit 'pet causes' as unimpeachable and equally urgent, while overlooking the most vulnerable populations in corners of the world, suffering at the lowest socioeconomic rung, largely out of sight and out of mind. If you are donating to causes under the mantle of "impact", there's no right way to grapple with this, but you're gonna have to grapple with it nonetheless.
https://www.vox.com/future-perfect/372519/charity-giving-effective-altruism-mutual-aid-homeless
Any day when a #TESCREAL #effectiveAltruism #longtermism proponent loses their funding and platform is a good day.
“Oxford shuts down institute run by Elon Musk-backed philosopher”
https://www.theguardian.com/technology/2024/apr/19/oxford-future-of-humanity-institute-closes
‘#EffectiveAltruism is a philosophical and social movement that advocates "using evidence and reason to figure out how to benefit others as much as possible”’ https://en.wikipedia.org/wiki/Effective_altruism
This is just utilitarianism…?
Another great opinion piece on AI and the lure of #TESCREAL #transhumanism #effectiveAltruism as religion:
“When we put all these ideas together and boil them down, we get this basic proposition:
1. We may not have much time until life as we know it is over.
2. So we need to place a bet on something that can save us.
3. Since the stakes are so high, we should ante up and go all in on our bet.
Any student of religion will immediately recognize this for what it is: apocalyptic logic.”
“Silicon Valley’s vision for AI? It’s religion, repackaged.”
https://www.vox.com/the-highlight/23779413/silicon-valleys-ai-religion-transhumanism-longtermism-ea
@TimothyNoah
This mistaken perspective that #EffectiveAltruism holds with regard to #EconomicJustice - that the people cannot handle more money - would be completely undermined if the left focused on incentivizing #SmallBiz development, #coop, #esop, or any other form of #EmployeeOwnership. Our #TaxPolicy should drive equity to the people and teach them to take control of the market, not constantly beg for it at the union table or beneath the philanthropist's https://t.co/yuKA3ggt1N