
Strong AI, Weak AI & Super Intelligence

The press, the machine, the railway, the telegraph are premises whose thousand-year conclusion no one has yet dared to draw. - Friedrich Nietzsche

The strange thing is that all of this took so long and happened so suddenly. - Ted Nelson, author of “Computer Lib”, on the advent of the personal computer

Technological progress, like evolution, is non-linear. It alternates between stagnation and explosion, operating at different speeds and sometimes accelerating abruptly. Why? Because both evolution and new technologies emerge from the slow accumulation of many causes, a confluence of conditions. Each of those factors is necessary but, by itself, insufficient to trigger a breakthrough. Only when all are present at once is the event unleashed.

What Is Strong AI? What Is General AI?

Let’s talk synonyms: strong AI, general AI, artificial general intelligence (AGI) and superintelligence all refer to roughly the same thing: an algorithm, or set of algorithms, that can perform all tasks as well as or better than humans. And to be clear, no such thing exists. It is an idea. Some AI researchers think they know how to get there, others are skeptical that getting there is possible, and still others think it is possible but undesirable.

We call a certain type of AI “strong” because we imagine it will be stronger than us. We call it “general” because it will apply to all problems; i.e. it will solve all or most problems better than humans do. The opposite of strong AI is weak AI. The opposite of general AI is narrow AI.

As these words are being written in 2018, we live in an age of weak and narrow AI. Weak AI is an algorithm that has been trained to do one thing, and it does that one thing very well. Weak AI is like a prodigy whose talent in one domain surpasses average human performance, but who may lag in other areas. At a local chess club I know, the inside joke was: “Good at chess, bad at life.” That’s weak AI. It’s bad at life, but very good at a few focused tasks. A given AI model may be able to win at Go, but does that same model know how to navigate a simple social situation? No.

The AI that data scientists are deploying to the world right now is a bunch of machine-learning models, each of which performs one task well. They are like a crowd of savants babbling their narrow responses to the world. That said, DeepMind’s algorithms, most recently AlphaZero, are able to master a wider and wider array of games. They are generalizing beyond a single problem.
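The narrowness described above can be illustrated with a toy sketch (everything here is hypothetical, not any real model): a tiny keyword-based sentiment scorer that looks competent inside the one domain it was built for, and has no notion that any other task exists.

```python
# A toy "narrow AI": a keyword sentiment scorer (hypothetical example).
# It does one thing -- classify review sentiment -- and nothing else.

POSITIVE = {"great", "excellent", "love", "wonderful"}
NEGATIVE = {"terrible", "awful", "hate", "boring"}

def review_sentiment(text: str) -> str:
    """Classify a review as 'positive' or 'negative' by keyword counts."""
    words = set(text.lower().split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return "positive" if score >= 0 else "negative"

# In its narrow domain, the model looks competent:
print(review_sentiment("I love this film, excellent acting"))  # positive
print(review_sentiment("terrible plot, boring and awful"))     # negative

# Ask it anything outside that domain -- chess, navigation, small talk --
# and it can only keep scoring keywords. "Good at chess, bad at life."
```

A real narrow model (a trained image classifier, a Go engine) is vastly more sophisticated, but the shape of its limitation is the same: one input type, one output type, one task.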


AI Isn’t Strong Yet, but Is It Getting Stronger?

The short answer is “yes.”

The two organizations doing the most interesting work on general AI are probably Google and OpenAI, a research lab co-founded by Elon Musk and Sam Altman, among others. Google’s AI research mostly happens at DeepMind and Google Brain.

If AI is getting stronger, it is because of organizations like those. And AI is getting stronger, at least in the sense that it produces increasingly accurate predictions about the data you feed it. The progress made in computer vision over the last decade, approaching human-level accuracy at recognizing objects in images, is one indicator of increasingly strong AI. The ability of DeepMind’s algorithms to win more and more games, and to transfer learning from one game to another, is a second. The ability of OpenAI’s GPT-2 to solve problems it was not trained to solve is yet another. But we’re not there yet.[1]

The discussion about AI, and whether we will be able to create a superintelligence, is fundamentally fideistic. That is, it has the characteristics of a faith-based argument, as much as we might prefer that it resemble a scientific debate. As G.K. Chesterton said, “The special mark of the modern world is not that it is skeptical, but that it is dogmatic without knowing it.” AI is like many other powerful technologies in its religious connotations: Prometheus stole fire from Zeus; the railroad gave us the gospel train.

Like religious debates or questions of taste, debate about AI often seems like a “dialog of the deaf,” as the French call it: different camps talk past each other. One of the fundamental differences that keep them from reaching an understanding, aside from sheer tribalism, is what they focus their attention on. AI skeptics tend to focus on our position (where the state of the art is now), while superintelligence believers tend to focus on our velocity (i.e. how quickly new advances are made). Both are right. We do not presently seem to be close to strong AI, and progress in the field is happening very quickly.

Superintelligence Quotes

A number of respected figures in science and technology have attempted to warn humanity about the dangers of strong AI. They are the AI millenarians.

The genie is out of the bottle. We need to move forward on artificial intelligence development but we also need to be mindful of its very real dangers. I fear that AI may replace humans altogether. If people design computer viruses, someone will design AI that replicates itself. This will be a new form of life that will outperform humans. - Stephen Hawking in WIRED

I think we should be very careful about artificial intelligence. If I were to guess like what our biggest existential threat is, it’s probably that…. Increasingly scientists think there should be some regulatory oversight maybe at the national and international level, just to make sure that we don’t do something very foolish. With artificial intelligence we are summoning the demon. In all those stories where there’s the guy with the pentagram and the holy water, it’s like yeah he’s sure he can control the demon. Didn’t work out. - Elon Musk at MIT

I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don’t understand why some people are not concerned. - Bill Gates during his Reddit AMA

It is the business of the future to be dangerous. - Alfred North Whitehead

Thinking about AI is the cocaine of technologists: it makes us excited, and needlessly paranoid. - Chris Nicholson in WIRED ;)

Footnotes

[1] In his review of Steven Pinker’s book, “Enlightenment Now”, Scott Aaronson analyzes Pinker’s AI optimism.

Chris Nicholson

Chris Nicholson is the CEO of Pathmind. He previously led communications and recruiting at the Sequoia-backed robo-advisor, FutureAdvisor, which was acquired by BlackRock. In a prior life, Chris spent a decade reporting on tech and finance for The New York Times, Businessweek and Bloomberg, among others.

