Any Ex-EAs or EA Splinter Groups Out There?

@tags:: #lit✍/📰️article/highlights
@links::
@ref:: Any Ex-EAs or EA Splinter Groups Out There?
@author:: reddit.com

=this.file.name


Reference

Notes

Quote

(highlight:: EA was so good pre-longtermism.
A friend introduced me a year or so ago. I read Doing Good Better, then The Life You Can Save, started donating based on GiveWell's recommendations, and shared those books and ideas with others.
Then I read What We Owe the Future, along with different perspectives on longtermism, and looked at EA efforts in far-fetched existential-risk areas like AI governance. These really turned me off; it is hard to fathom why so many people collectively decided that hypothetical future people matter more than people today, such as the billions at risk from climate change.
I think that this is rooted in a few things:
MacAskill has a lot of weight in EA, and his philosophical wrong turn toward longtermism was contagious.
The idea that a life in the far future is as valuable as a life now, or even a fraction as valuable, has an obvious egalitarian appeal.
To get credibility within a community, one usually needs to adopt its values and beliefs. That isn't to say EA criticism of longtermism hasn't always existed, but there was clearly a social and financial incentive to cater to the longtermist funders (SBF among them).
If I were Will, I'd write a book called "What We Owe the Present". It would argue that if we apply even a small discount rate to future lives, their value approaches zero rapidly over long time horizons (a quick numerical illustration follows this quote).
It would review how consistently bad humans have been at predicting the future, from the tendency of militaries to plan to fight the last war to the many historical predictions by business, scientific, and religious thinkers that turned out to be completely wrong.
It would reckon with the fact that a lot of longtermist projects have completely unverifiable impacts. If we quintupled funding for EA work on AGI or nuclear weapons, would it meaningfully decrease the probability of rogue AI or Fallout-style scenarios? It isn't possible to say, which means that from an original EA perspective the work can't be shown to be effective. In a similar vein, SBF could very easily have justified his financial misdeeds with a hand-wavy longtermist answer: "the people who get screwed in the present are inconsequential compared to the trillions I'm helping in the year 3000".
It would discuss how current problems interact with long-term risk. For example, if climate change drives global conflict over scarce resources, then nuclear war, rogue bioweapons, and runaway AI all become dramatically more likely. Relatedly, working on current problems affects long-term ones: the African infant who survives because of charity might come up with a revolutionary invention later in life. If a good impact now compounds into an unpredictable stream of future benefits, what advantage is there in trying to address hypothetical problems that may exist far in the future?
Finally, it would explore the cognitive nature of empathy and what motivates people in general to act altruistically. For most people, giving to that homeless guy or local food bank feels satisfying, even though that dollar would go further in Africa. Psychological satisfaction incentivizes the altruist to keep acting altruistically, so that type of giving shouldn't necessarily be dismissed as ineffective.
These aren't new arguments, but if MacAskill were to make them, I think it would go a long way toward bringing EA back to its roots and getting past the FTX fiasco.)
- No location available
-
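
The discount-rate argument in the highlight can be made concrete with a quick calculation. The sketch below is not from the post; it assumes simple exponential discounting and picks an illustrative 1% annual rate and a few time horizons purely to show how fast present value decays.

```python
# Illustrative sketch (assumptions, not from the post): present value of one
# future life under exponential discounting, value = 1 / (1 + rate) ** years.
# The 1% rate and the horizons below are arbitrary choices for illustration.

def discounted_value(years: float, rate: float = 0.01) -> float:
    """Present value today of one unit of value realized `years` from now."""
    return 1.0 / (1.0 + rate) ** years

for years in (100, 500, 1_000, 10_000):
    print(f"{years:>6} years out: {discounted_value(years):.2e}")
```

At a 1% annual rate, a life 1,000 years out counts for roughly 1/20,000 of a present life, and a life 10,000 years out for effectively nothing, which is the post's point that even mild discounting makes far-future value vanish.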