Intelligence 2 — Is Artificial Intelligence Really Intelligent?

@tags:: #lit✍/🎧podcast/highlights
@links::
@ref:: Intelligence 2 — Is Artificial Intelligence Really Intelligent?
@author:: Simplifying Complexity

2023-06-14 Simplifying Complexity - Intelligence 2 — Is Artificial Intelligence Really Intelligent?

Book cover of "Intelligence 2 — Is Artificial Intelligence Really Intelligent?"

Reference

Notes

Quote

(highlight:: What is GPT-3? A high-level introduction
Key takeaways:
• LaMDA and GPT-3 are extraordinary programs that use a neural network called a transformer to scan through a vast corpus of text and predict the most likely next word.
• GPT-3 has hundreds of billions of parameters and was trained on the order of 45 terabytes of text, making it a massive digital library.
• GPT-3 can convince you that it's sentient when you talk to it through a terminal.
Transcript:
Speaker 1
We have these programs like LaMDA, GPT-3, that are, to be honest, extraordinary. I don't know if you've interacted with them; I have. They use a neural network called a transformer. There's a zoo, by the way, of neural nets, that's something else to talk about, many different varieties, but this one's called a transformer. Essentially, it scans through a vast corpus of text. What it is essentially programmed to do is take a series of words and then predict the most likely next word. GPT-3 is slightly different. LaMDA is mainly dialogue-based, whereas GPT-3 is mainly text-based. I mean, just for the listeners to understand, I always say that the standard model of physics, which is the most powerful theory we have for the structure of the universe, has on the order of tens of parameters. GPT-3, which can convince you that it's sentient by talking to it through a terminal, has hundreds of billions of parameters, right? So this is an important point. What does GPT-3 know? It's on the order of 45 terabytes of text, a massive library, like the Library of Congress, about 15. So this is what it has in its digital memory.)
- Time 0:10:35
-
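The next-word-prediction objective described above can be sketched with a toy bigram counter. This is only an illustration of the objective, not of a transformer, which learns the same kind of mapping with billions of parameters rather than raw counts; the corpus and function name here are invented for the example.

```python
from collections import Counter, defaultdict

# Toy next-word prediction: count which word most often follows each
# word in a tiny corpus, then predict the most frequent follower.
corpus = "the cat sat on the mat the cat ate the fish".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" — it follows "the" twice; "mat" and "fish" once each
```

The point of the illustration: the predictor "knows" nothing beyond co-occurrence statistics, yet with a large enough corpus that alone produces fluent-looking continuations.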

Quote

(highlight:: The "Parrot Intelligence" of Large Language Models (LLMs)
Key takeaways:
• Most people distinguish between those who memorize everything and those who work things out; employers typically look for people who can think on their feet.
• GPT-3 mimics information and intelligence, but it's not doing what humans do.
• GPT-3 can find the right answer statistically, but it can't answer complex questions that require working out the physics.
Transcript:
Speaker 1
I mean, this is very interesting because if you ask someone, it's funny, we talked about this yesterday, or first, I can't remember when. What is intelligence and stupidity and so on? Most of us make a distinction between people who know a lot, who have memorized everything, and people who work things out. Now sometimes that's a bit unfair, it's sort of a lazy way of being critical. But there is something to that. You know, we've all gone to school with friends and they just memorize absolutely bloody everything, hoping that the right question would come up so they get an A, while the others, the lazy bastards, sort of sit down and work it out. It's much more impressive to us, I think. And that's typically what we're looking for when we're interviewing people for jobs, the latter, which is, you know, can you think on your feet? Can you work this out for yourself? That sort of thing. This is the opposite of that, right? This has libraries and libraries, bigger than the history of libraries, on its little hard drives. And it's spitting out in some sense fractured projections through that library that convince us of its intelligence and sentience. It's very important to point out that if you are willing to call that intelligence in the Alan Turing sense, it certainly isn't intelligence in our sense, because most of us have not read even the books in our own rather meager libraries, with hundreds, perhaps maybe thousands of books in them. So whatever it's doing, it's not doing what we do. Now to be fair, in the spirit of pluralism, perhaps, you know, we've discovered a different kind of intelligence, a parrot intelligence, and we should respect it.
Speaker 2
I like that it parrots intelligence. And it mimics. That's what the term is. It mimics information and intelligence. Would you even go as far as to say it mimics intelligence, or just mimics information?
Speaker 1
Well, but this is the interesting question people are now asking, right? Because, for example, to what extent do GPT-3 and LaMDA encode physics? They encode text, right? So if you ask a question in text, somewhere in its vast corpus there are the elements of the right answer that it can find statistically. But you can ask a harder question, which is: what happens when you throw a basketball in the air on the surface of Mars? How long does it take for it to reach its maximum height? How long does it take to hit the ground? How many times does it bounce? Ask that of GPT-3, and it can't do it. Now, it's not that we can all do it, but it's amazing what it can do that we can't do. So that's the kind of interrogation required, in the sophisticated Turing-test sense, to demonstrate that it's doing something very differently.
- Time 0:11:59
-
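The "basketball on Mars" question the speaker poses can actually be worked out from first principles with constant-acceleration kinematics, ignoring Mars's thin atmosphere and the bounce. The launch speed of 10 m/s is an assumed value for illustration only.

```python
# Projectile thrown straight up on Mars, no air resistance.
g_mars = 3.71   # surface gravity of Mars, m/s^2
v0 = 10.0       # assumed initial upward speed, m/s (illustrative)

t_up = v0 / g_mars            # time to max height, from v = v0 - g*t = 0
h_max = v0**2 / (2 * g_mars)  # max height, from v0^2 = 2*g*h
t_total = 2 * t_up            # time back to launch height (symmetric flight)

print(f"time to max height: {t_up:.2f} s")    # ~2.70 s
print(f"max height:         {h_max:.2f} m")   # ~13.48 m
print(f"time to land:       {t_total:.2f} s") # ~5.39 s
```

This is exactly the kind of "working it out" the speaker contrasts with statistical retrieval: three lines of deduction from one physical constant, rather than a search over memorized text.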

Quote

(highlight:: GPT is like a super-intelligent bacterium — it stores language like bacteria store DNA
Key takeaways:
• Many humans are principles-first when it comes to learning; their approach to understanding is more theoretical than practical.
• With deep neural networks we may have created a super-intelligent bacterium that stores all its information through input-output mappings.
• GPT-3 is intelligent like a virus, not like a human.
Transcript:
Speaker 1
And I think, "I understand, and now these are the rules," as opposed to, "here are all the rules, and then you learn the principles." And I think humans, many, not all, are principles-first. And I think that's a very different, much more theoretical approach to the nature of intelligence. And then you get to nonhumans. And this gets to a very interesting point that is worth bearing in mind, and many people have made this point: what we might have created with deep neural networks is a super-intelligent bacterium. Because if you think about intelligence in a virus, it comes about through natural selection, which by the way is mathematically equivalent to reinforcement learning, which is the preferred method of training. They're in a sense storing all of this information in a vast genome, like a bacterium. They've turned declarative knowledge into reflexes. What we think requires abduction and deduction and inference, they're showing us can be produced by input-output mappings. It's actually quite a startling discovery. For me, GPT-3 is like a super clever virus. It's not like a human.)
- Time 0:18:50
-
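The contrast the speaker draws between deduction and "input-output mappings" can be illustrated by memorizing a function as a lookup table. The table answers instantly for inputs it has stored, like a reflex encoded in a genome, but has no entry for anything outside its memorized range, whereas the rule generalizes to any input. The function and table here are invented for the example.

```python
def by_rule(n):
    """Work it out: squaring derived from the definition, works for any n."""
    return n * n

# "Reflex": exhaustively memorized input-output pairs, no rule involved.
by_memory = {n: n * n for n in range(100)}

print(by_rule(7), by_memory[7])  # both answer 49
print(by_rule(1000))             # the rule still works: 1000000
print(1000 in by_memory)         # the memorized map has no entry: False
```

The startling discovery the speaker refers to is that, at sufficient scale, the second strategy starts to look like the first from the outside.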