---
dg-publish: true
created: 2024-07-01
modified: 2024-07-01
title: How Effective Altruists Ignored Risk
source: hypothesis
---
How Effective Altruists Ignored Risk
!tags:: #lit/📰article/highlights
!links::
!ref:: How Effective Altruists Ignored Risk
!author:: vox.com
=this.file.name
Reference
=this.ref
Notes
over the last years (the Washington Post fittingly called it "Altruism, Inc.") would have noticed them becoming increasingly insular, confident, and ignorant. Anyone would expect doom to lurk in the shadows when institutions turn stale.
- No location available
-
The collapse of FTX is a vindication of the view that institutions, not individuals, must shoulder the job of keeping excessive risk-taking at bay. Institutional designs must shepherd safe collective risk-taking and help navigate decision-making under uncertainty.
- No location available
-
The epistemics of risk-taking
While they are superficially obsessed with epistemic standards and intelligence (an interest that can take ugly forms), real expertise is rare among this group of smart but inexperienced young people who have only just entered the labor force. For reasons of "epistemic modesty" or a fear of sounding stupid, they often defer to high-ranking EAs as authority. Doubts might reveal that they just didn't understand the ingenious argumentation for fate determined by technology.
- No location available
-
They would certainly not be sufficient for the longtermist project, which, if taken seriously, would mean EAs trying to shoulder risk management for all of us and our descendants. We should not be happy to give them this job as long as their risk estimates are done in insular institutions with epistemic infrastructures that are already beginning to crumble. My proposals and research papers broadly argued that increasing the number of people making important decisions will on average reduce risk, both to the institution of EA and to those affected by EA policy. The project of managing global risk is, by virtue of its scale, tied to using distributed, not concentrated, expertise.
- No location available
-
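A minimal sketch of the statistical intuition behind the claim above that more decision-makers reduce risk on average: if individual judgments are independent and each slightly better than chance, the majority verdict of a larger group is right more often (a Condorcet-jury-style toy model; the code, its function names, and its parameters are illustrative, not from the article).

```python
# Toy Condorcet-jury simulation: independent voters, each correct with
# probability p_correct on a binary question; how often is the majority right?
import random

def majority_is_correct(n_voters: int, p_correct: float) -> bool:
    """Simulate one decision: count correct votes and check for a majority."""
    correct_votes = sum(random.random() < p_correct for _ in range(n_voters))
    return correct_votes > n_voters / 2

def estimate_accuracy(n_voters: int, p_correct: float = 0.6, trials: int = 10_000) -> float:
    """Estimate the probability that the majority verdict is correct."""
    return sum(majority_is_correct(n_voters, p_correct) for _ in range(trials)) / trials

if __name__ == "__main__":
    for n in (1, 5, 25, 101):  # odd group sizes avoid ties
        print(f"{n:>3} voters -> majority correct ~{estimate_accuracy(n):.2f}")
```

Under these assumptions the majority's accuracy climbs with group size, which is the sense in which distributed judgment reduces risk on average; correlated errors and deference to high-status figures break the independence assumption, which is exactly the failure mode described here.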
Morality, a shape-shifter
(highlight:: This altruistic inclination is dangerously easy to repurpose. We all burn for an approving hand on our shoulder, the one that assures us that we are doing good by our peers. The question is, how badly do we burn for approval? What will we burn to the ground to attain it?
If your peers declare "impact" as the signpost of being good and worthy, then your attainment of what looks like ever more "good-doing" is the locus of self-enrichment. Being the best at "good-doing" is the status game. But once you have status, your latest ideas of good-doing define the new rules of the status game.
EAs with status don't get fancy, shiny things, but they are told that their time is more precious than others'. They get to project themselves for hours on the 80,000 Hours podcast, their sacrificial superiority in good-doing is hailed as the next level of what it means to be "value-aligned," and their often incomprehensible fantasies about the future are considered too brilliant to fully grasp. The thrill of beginning to believe that your ideas might matter in this world is priceless and surely a little addictive.)
- No location available
-
The list showed just how much what it means to be "a good EA" has changed over the years. Early EAs were competing for status by counting the number of mosquito nets they had funded out of their own pocket; later EAs competed on the number of machine learning papers they co-authored at big AI labs.
- No location available
-
The optimization curse
Futurism gives rationalization air to breathe because it decouples arguments from verification. You might, by chance, be right on how some intervention today affects humans 300 years from now. But if you were wrong, you'll never know – and neither will your donors. For all their love of Bayesian inference, their endless gesturing at moral uncertainty, and their norms of superficially signposting epistemic humility, EAs became more willing to venture into a far future where they were far more likely to end up in a space so vast and unconstrained that the only feedback to update against was themselves.
- No location available
-
The locus of blame
I will not blame EAs for having been wrong about the trustworthiness of Bankman-Fried, but I will blame them for refusing to put enough effort into constructing an environment in which they could be wrong safely. Blame lies in the audacity to take large risks on behalf of others, while at the same time rejecting institutional designs that let ideas fail gently.
- No location available
-
EA contains at least some ideological incentive to let epistemic risk slide. Institutional constraints, such as transparency reports, external audits, or testing big ideas before scaling, are deeply inconvenient for the project of optimizing toward a world free of suffering.
- No location available
-
Epistemic mechanism design
Instead of lecturing students on the latest sexy cause area, local EA student chapters could facilitate online deliberations on any of the many outstanding questions about global risk and test how the integration of large language models affects the outcome of debates. They could organize hackathons to extend open source deliberation software and measure how proposed solutions changed relative to the tools that were used. EA think tanks, such as the Centre for Long-Term Resilience, could run citizens' assemblies on risks from automation. EA career services could err on the side of providing information rather than directing graduates: 80,000 Hours could manage an open source wiki on different jobs, available for experts in those positions to post fact-checked, diverse, and anonymous advice. Charities like GiveDirectly could build on their recipient feedback platform and their US disaster relief program to facilitate an exchange of ideas between beneficiaries about governmental emergency response policies that might hasten recovery.
- No location available
-
(highlight:: My article is clearly an attempt to make EA members demand they be treated less like sheep and more like decision-makers. But it is also a question to the public about what we get to demand of those who promise to save us from any evil of their choosing. Do we not get to demand that they fulfill their role, rather than rule?
The answers will lie in data. Open Philanthropy should fund a new organization for research on epistemic mechanism design. This central body should receive data donations from a decade of epistemic experimentalism in EA. It would be tasked with making this data available to researchers and the public in a form that is anonymized, transparent, and accessible. It should coordinate, host, and connect researchers with practitioners and evaluate results across different combinations, including variable group sizes, integrations with discussion and forecasting platforms, and expert selections. It should fund theory and software development, and the grants it distributes could test distributed grant-giving models.)
- No location available
-