How Effective Altruists Ignored Risk

@tags:: #lit✍/📰️article/highlights
@links::
@ref:: How Effective Altruists Ignored Risk
@author:: vox.com

2023-10-05 vox.com - How Effective Altruists Ignored Risk

Reference

Notes

Quote

over the last years (the Washington Post fittingly called it “Altruism, Inc.”) would have noticed them becoming increasingly insular, confident, and ignorant. Anyone would expect doom to lurk in the shadows when institutions turn stale.
- No location available
-

Quote

The collapse of FTX is a vindication of the view that institutions, not individuals, must shoulder the job of keeping excessive risk-taking at bay. Institutional designs must shepherd safe collective risk-taking and help navigate decision-making under uncertainty.
- No location available
-

The epistemics of risk-taking

Quote

While superficially obsessed with epistemic standards and intelligence (an interest that can take ugly forms), real expertise is rare among this group of smart but inexperienced young people who only just entered the labor force. For reasons of “epistemic modesty” or a fear of sounding stupid, they often defer to high-ranking EAs as authority. Doubts might reveal that they just didn’t understand the ingenious argumentation for a fate determined by technology.
- No location available
-

Quote

They would certainly not be sufficient for the longtermist project, which, if taken seriously, would mean EAs trying to shoulder risk management for all of us and our descendants. We should not be happy to give them this job as long as their risk estimates are done in insular institutions with epistemic infrastructures that are already beginning to crumble. My proposals and research papers broadly argued that increasing the number of people making important decisions will on average reduce risk, both to the institution of EA and to those affected by EA policy. The project of managing global risk is — by virtue of its scale — tied to using distributed, not concentrated, expertise.
- No location available
-

Morality, a shape-shifter

Quote

(highlight:: This altruistic inclination is dangerously easy to repurpose. We all burn for an approving hand on our shoulder, the one that assures us that we are doing good by our peers. The question is, how badly do we burn for approval? What will we burn to the ground to attain it?
If your peers declare “impact” as the signpost of being good and worthy, then your attainment of what looks like ever more “good-doing” is the locus of self-enrichment. Being the best at “good-doing” is the status game. But once you have status, your latest ideas of good-doing define the new rules of the status game.
EAs with status don’t get fancy, shiny things, but they are told that their time is more precious than others’. They get to project themselves for hours on the 80,000 Hours podcast, their sacrificial superiority in good-doing is hailed as the next level of what it means to be “value-aligned,” and their often incomprehensible fantasies about the future are considered too brilliant to fully grasp. The thrill of beginning to believe that your ideas might matter in this world is priceless and surely a little addictive.)
- No location available
-

Quote

The list showed just how much what it means to be “a good EA” has changed over the years. Early EAs were competing for status by counting the number of mosquito nets they had funded out of their own pocket; later EAs competed on the number of machine learning papers they co-authored at big AI labs.
- No location available
-

The optimization curse

Quote

Futurism gives rationalization air to breathe because it decouples arguments from verification. You might, by chance, be right on how some intervention today affects humans 300 years from now. But if you were wrong, you’ll never know — and neither will your donors. For all their love of Bayesian inference, their endless gesturing at moral uncertainty, and their norms of superficially signposting epistemic humility, EAs became more willing to venture into a far future where they were far more likely to end up in a space so vast and unconstrained that the only feedback to update against was themselves.
- No location available
-

The locus of blame

Quote

I will not blame EAs for having been wrong about the trustworthiness of Bankman-Fried, but I will blame them for refusing to put enough effort into constructing an environment in which they could be wrong safely. Blame lies in the audacity to take large risks on behalf of others, while at the same time rejecting institutional designs that let ideas fail gently.
- No location available
-

Quote

EA contains at least some ideological incentive to let epistemic risk slide. Institutional constraints, such as transparency reports, external audits, or testing big ideas before scaling, are deeply inconvenient for the project of optimizing toward a world free of suffering.
- No location available
-

Epistemic mechanism design

Quote

Instead of lecturing students on the latest sexy cause area, local EA student chapters could facilitate online deliberations on any of the many outstanding questions about global risk and test how the integration of large language models affects the outcome of debates. They could organize hackathons to extend open source deliberation software and measure how proposed solutions changed relative to the tools that were used. EA think tanks, such as the Centre for Long-Term Resilience, could run citizen assemblies on risks from automation. EA career services could err on the side of providing information rather than directing graduates: 80,000 Hours could manage an open source wiki on different jobs, available for experts in those positions to post fact-checked, diverse, and anonymous advice. Charities like GiveDirectly could build on their recipient feedback platform and their US disaster relief program, to facilitate an exchange of ideas between beneficiaries about governmental emergency response policies that might hasten recovery.
- No location available
-

Quote

(highlight:: My article is clearly an attempt to make EA members demand they be treated less like sheep and more like decision-makers. But it is also a question to the public about what we get to demand of those who promise to save us from any evil of their choosing. Do we not get to demand that they fulfill their role, rather than rule?
The answers will lie in data. Open Philanthropy should fund a new organization for research on epistemic mechanism design. This central body should receive data donations from a decade of epistemic experimentalism in EA. It would be tasked with making this data available to researchers and the public in a form that is anonymized, transparent, and accessible. It should coordinate, host, and connect researchers with practitioners and evaluate results across different combinations, including variable group sizes, integrations with discussion and forecasting platforms, and expert selections. It should fund theory and software development, and the grants it distributes could test distributed grant-giving models.)
- No location available
-