---
dg-publish: true
created: 2024-07-01
modified: 2024-07-01
title: AI Could Defeat All of Us Combined
source: hypothesis
---
AI Could Defeat All of Us Combined
!tags:: #lit✍/📰️article/highlights
!links:: artificial intelligence (ai), existential risk
!ref:: AI Could Defeat All of Us Combined
!author:: cold-takes.com
=this.file.name
Reference
=this.ref
Notes
We generally don't have a lot of things that could end human civilization if they "tried" sitting around. If we're going to create one, I think we should be asking not "Why would this be dangerous?" but "Why wouldn't it be?"
- No location available
-
- [note::Hmmm... this is an important framing. If it turns out that AI does have the capability to destroy humanity, it would be really silly for humanity to have underestimated it to the extent the general population does now. Perhaps humans are naturally biased against taking low-probability but extremely bad outcomes seriously -- there's probably a term for this.]
I don't think the danger relies on the idea of "cognitive superpowers" or "superintelligence" - both of which refer to capabilities vastly beyond those of humans. I think we still have a problem even if we assume that AIs will basically have similar capabilities to humans, and not be fundamentally or drastically more intelligent or capable.
- No location available
- ai cognition, ai intelligence, ai alignment
Using these estimates (details in footnote [5]) implies that once the first human-level AI system is created, whoever created it could use the same computing power it took to create it in order to run several hundred million copies for about a year each. [6]
- No location available
-
- [note::Based on this assumption, it seems like there's an enormous economic incentive for companies to create human-level AI. Why hasn't this already been done?]
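The claim in this highlight is straightforward arithmetic: (total training compute) ÷ (compute to run one copy × seconds in a year). A minimal sketch follows; every number in it is a placeholder I'm assuming purely for illustration (the article's footnote 5 has the actual estimates, drawn from the Bio Anchors report), chosen only to show the shape of the calculation:

```python
# Rough reproduction of the highlight's arithmetic. All numbers below are
# illustrative placeholders, NOT the footnote's actual figures.

TRAIN_FLOP = 1e31            # assumed: total compute to train the first human-level AI
RUN_FLOP_PER_SEC = 1e15      # assumed: compute to run one copy in real time
SECONDS_PER_YEAR = 365 * 24 * 3600  # ~3.15e7

# If training cost TRAIN_FLOP, that same budget could instead run this
# many copies, each for about one year:
copies_for_a_year = TRAIN_FLOP / (RUN_FLOP_PER_SEC * SECONDS_PER_YEAR)
print(f"{copies_for_a_year:,.0f}")  # ~317,000,000 -- "several hundred million"
```

The key structural point survives any reasonable choice of placeholder values: whenever training compute vastly exceeds the compute needed to run one copy for a year, the ratio of copies to training runs is enormous.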
something like 5-10% the size of the world's total working-age population. [8]
- No location available
-
- [note::So, mass job loss (?)]
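Same arithmetic, one step further. Assuming a round figure of roughly 5 billion working-age people worldwide (my assumption, not the article's), several hundred million copies does land in the quoted range:

```python
WORKING_AGE_POPULATION = 5e9   # assumed round figure for ages 15-64 worldwide
copies_for_a_year = 3.2e8      # carried over from the sketch above
print(f"{copies_for_a_year / WORKING_AGE_POPULATION:.1%}")  # ~6.4%, within 5-10%
```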
Each of these AIs might have skills comparable to those of unusually highly paid humans, including scientists, software engineers and quantitative traders. It's hard to say how quickly a set of AIs like this could develop new technologies or make money trading markets, but it seems quite possible for them to amass huge amounts of resources quickly.
- No location available
-
- [note::I'm confused by this. My idea of a "human-level AI" doesn't imply it has all the skills, or even the capacity to learn the skills, that a scientist, software engineer, or quantitative trader would apply. Perhaps Holden is assuming that the AI either starts out as a "general-purpose highly skilled human" or is able to improve itself rapidly?]
if there's something with human-like skills, seeking to disempower humanity, with a population in the same ballpark as (or larger than) that of all humans, we've got a civilization-level problem.
- No location available
-