To some, GPT-3 is a marvel. To others, it’s a faceplant.
What’s the big deal?
- GPT-3 is an AI system that has ingested 45 terabytes of English text and, from that, “learned” to read and write.
- It’s the creation of OpenAI, an AI lab dedicated to making sure “artificial general intelligence benefits all of humanity”.
- When OpenAI opened a private beta of GPT-3 in July, participants used it to churn out short stories, songs, press releases, and even HTML code. It can also answer questions and translate languages, all without changing the algorithm – the prompt alone sets the task – which in itself makes GPT-3 feel like magic (a sketch of that prompting follows this list). One developer with hands-on access described playing with GPT-3 as “seeing the future.”
- Others don’t think so. According to Prof. Gary Marcus “…its comprehension of the world is often seriously off, which means you can never really trust what it says.”
- And “it has only the dimmest sense of what those words mean, and no sense whatsoever about how those words relate to the world.”
- Put another way, GPT-3 generates text; it does not write.
- Worse, the reasons for GPT-3’s shortcomings are opaque. [GPT-3 has] “no transparency as to why it performs well or makes certain errors,” says natural language processing researcher Melanie Mitchell.
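For the curious, “without changing the algorithm” meant steering the model entirely through its prompt. Here’s a minimal sketch of what beta participants were doing, using the beta-era openai Python client (v0.x; the engine name and the API itself have since changed):

```python
import openai  # beta-era client (v0.x); today's API differs

openai.api_key = "YOUR_API_KEY"  # placeholder for a beta access key

# One model, many tasks, no retraining: the prompt alone sets up the
# task -- here, English-to-French translation by example ("few-shot").
prompt = (
    "English: Where is the library?\n"
    "French: Où est la bibliothèque ?\n"
    "English: The weather is nice today.\n"
    "French:"
)

response = openai.Completion.create(
    engine="davinci",   # the base GPT-3 engine offered in the 2020 beta
    prompt=prompt,
    max_tokens=32,
    temperature=0.3,    # low temperature for a more literal completion
    stop=["\n"],        # stop at the end of the translated line
)
print(response.choices[0].text.strip())
```

Swap the prompt and the same call writes a press release or answers trivia; that prompt-is-everything property is what made the beta feel like magic.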
So, GPT-3 – since licensed exclusively to Microsoft(!) – adds fuel to the hype around AI while simultaneously undermining its credibility.
Both the triumphs and the failures seem to stem from OpenAI’s approach: bigger is better. OpenAI had GPT-3 analyze hundreds of billions of words, and the model it built from them – 175 billion parameters, the learned connections between words – is more than 100x the size of its predecessor, GPT-2. As a result, GPT-3 is simultaneously wowing us and falling flat on the strength of those 175 billion statistical associations pulled from the Internet.
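To make “connections between words” concrete, here is a minimal sketch – my illustration at toy scale, not OpenAI’s code – of the underlying statistical idea: count which word follows which in a corpus, then generate text by sampling those counts. GPT-3’s machinery is vastly more sophisticated, but the spirit is the same: predict the next word from learned associations, with no model of the world behind them.

```python
import random
from collections import defaultdict

# Toy illustration: a bigram "language model" that learns only
# which word tends to follow which -- associations, not meaning.
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog . the dog chased the cat ."
).split()

# Count word-to-next-word transitions (the "connections").
transitions = defaultdict(list)
for word, nxt in zip(corpus, corpus[1:]):
    transitions[word].append(nxt)

def generate(start="the", length=12):
    """Generate text by repeatedly sampling a plausible next word."""
    out = [start]
    for _ in range(length):
        followers = transitions.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

print(generate())  # e.g. "the dog sat on the rug . the cat chased the dog"
```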
It’s this approach – the size and the method – that inspires some interesting ideas.
Does GPT-3 parallel our own creation? Did the “creator” – whoever/whatever that may be – set up an environment where the fundamental components of the universe had the opportunity to establish some astronomical number of connections between them, then sit back to see what would happen?
There is rich symbolism to be mined in looking at the genesis and evolution of GPT-3 and of life in the universe. That holds true right up to the present day, especially if you’re underwhelmed with the state of the world (humans, mostly): researchers suspect that the bigger-is-better approach isn’t going to produce the best AI, or maybe even a real one. Similarly, the creator could be looking down upon us right now, shaking her head, realizing her creation hit a dead end. “I set up the opportunity for limitless possibilities and I end up with this?”
OK, so this might not be the most hopeful picture of our world, but it could be a highly insightful one.
Here’s the second itch that GPT-3 asks us to scratch: The biggest danger GPT-3 presents – bigger even than built-in biases[i] – is that it will end up feeding itself.
We’ve already seen that GPT-3 can generate text with little effort – and often little meaning. There is, therefore, no shortage of “food” it can produce. How easy it would be for all of that to be empty calories. And if that’s the case, what’s to stop GPT-3, or its descendants, from eating its own garbage?
In a sense this is happening already: thanks to technology we are producing and ingesting increasingly derivative and repetitive memes, music, videos, jokes, movies, articles, and TV. As GPT-3 and its brethren increasingly take over that production for us, and then base more of that production on what they have already produced and put in the wild, we could quickly have one ginormous echo chamber and, in time, one single story.
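As a toy illustration of that feedback loop – my sketch, not a claim about how GPT-3 is actually trained – imagine a “model” that is nothing but a distribution over stories. Each generation it publishes samples of its own output, then retrains on what it published. Sampling noise means rare stories drop out and, once gone, never come back:

```python
import random
from collections import Counter

# Toy feedback loop: the "model" is just a distribution over stories.
# Each generation it publishes samples of its own output, then
# retrains on those samples alone. Watch the variety shrink.
random.seed(42)

stories = [f"story_{i}" for i in range(100)]
weights = [1.0] * len(stories)  # 100 equally likely stories to start

for generation in range(1, 101):
    published = random.choices(stories, weights=weights, k=500)
    counts = Counter(published)
    weights = [counts[s] for s in stories]  # retrain on own output only
    surviving = sum(1 for w in weights if w > 0)
    if generation % 10 == 0:
        print(f"generation {generation:3d}: {surviving:3d} distinct stories left")
```

The count of distinct stories only ever falls – a story that misses one round of sampling is gone for good – and, given enough generations, a single story is certain to crowd out all the rest.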
In more than one podcast, I’ve heard Donald Hoffman, discussing his fascinating investigation into the nature of consciousness, mention Gödel’s contention that mathematical investigation is inexhaustible. With that in mind, Hoffman says, a consciousness that is a fundamental feature of the universe is really a “kid in a candy store” – a creation whose purpose is to endlessly explore and experience.
If true – and I have to say it’s a pretty cool and hopeful view of the world – then the reality is that today we’re working against our nature. And we’re continuing to do so by working against the upside offered by our own technology.
GPT-3 and the Internet itself should be a means for that endless exploration, but instead they are becoming a candy store with diminishing variety. If the trend continues, and we let tools like GPT-3 be guided by quantity over quality, we’ll eventually be staring at and ingesting nothing but shelves full of Almond Joy – or worse.
[i] There is a fear, even a warning, that the process OpenAI uses to train GPT-3 leaves it inherently at risk of adopting biases. In my view, this is a problem of degree, not kind. There is no neutral source and no neutral way to train an AI, just as there is no neutral person. If we saw an AI that was truly bias-free, we wouldn’t know what it was – and even if one were possible, would anyone want it or like it? The key is to keep it free of extreme bias and idiocy.