What Is AI Superintelligence? Could It Destroy Humanity? And Is It Really Almost Here?

Find out what AI superintelligence is, what its risks are to humanity, and how soon it could become a reality.

In 2014, Oxford philosopher Nick Bostrom published a book called Superintelligence: Paths, Dangers, Strategies, which raised a serious question: could highly advanced artificial intelligence (AI) one day surpass human intelligence and pose a threat to humanity? The book argued that superintelligent AI – a system with intelligence beyond that of any human – might take over the world, potentially harming us in the process.

Fast forward a decade, and today Sam Altman, CEO of OpenAI, suggests superintelligence may be only a decade away – or "a few thousand days," as he puts it. Altman's OpenAI cofounder, Ilya Sutskever, also believes superintelligence is within reach: he recently left OpenAI to cofound Safe Superintelligence Inc., a company dedicated to building "safe superintelligence," which has raised $1 billion to pursue that goal. But what does "superintelligence" really mean, and how close are we to seeing it? And most importantly, could it truly be a danger to humanity?

Levels of AI Explained

One of the best ways to understand AI’s different capabilities comes from Meredith Ringel Morris, a computer scientist who, along with colleagues at Google, developed a framework with six levels of AI performance: no AI, emerging, competent, expert, virtuoso, and superhuman. It also distinguishes between narrow AI (AI that performs specific tasks) and general AI (AI that’s versatile and can learn new tasks).

For example, a simple calculator is a “no AI” system: it performs mathematical calculations based on rules, without any understanding or intelligence. On the other hand, some narrow AI systems have advanced significantly, with a well-known example being Deep Blue, the chess program that defeated world champion Garry Kasparov in 1997. Deep Blue is a “virtuoso-level” narrow AI: it excels at one task but lacks any broader intelligence.

Some narrow systems can even perform at superhuman levels in their specific areas. For instance, DeepMind’s AlphaFold, which predicts protein structures, has achieved breakthroughs that many human scientists could not, earning its creators the Nobel Prize in Chemistry.

General AI, which is much more versatile, has so far progressed far more slowly: these systems can attempt a wide range of tasks, but with little depth. According to Morris, today's most advanced language models, such as those behind ChatGPT, are at an "emerging" level of general AI, meaning they perform roughly like unskilled humans across many tasks. They haven't yet reached "competent," which would mean performing at least as well as 50% of skilled adults. By this measure, general superintelligence – a system that outperforms any human across virtually every task – is still a long way off.
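To make the framework concrete, here is a minimal sketch in Python that slots the article's examples into the two dimensions Morris and colleagues describe: performance level and narrow versus general. The mapping simply restates the article's own examples; it is illustrative, not the paper's official table.

```python
# A minimal sketch of the six performance levels described by Morris
# and colleagues, with the article's examples slotted in.
from enum import IntEnum

class Performance(IntEnum):
    NO_AI = 0       # rule-following only, no intelligence (e.g. a calculator)
    EMERGING = 1    # comparable to an unskilled human
    COMPETENT = 2   # at least as good as 50% of skilled adults
    EXPERT = 3      # well above the typical skilled adult
    VIRTUOSO = 4    # near the very top of human performance
    SUPERHUMAN = 5  # outperforms all humans

# Each entry: (performance level, is the system general-purpose?)
examples = {
    "pocket calculator": (Performance.NO_AI, False),      # rule-based, narrow
    "Deep Blue":         (Performance.VIRTUOSO, False),   # narrow chess master
    "AlphaFold":         (Performance.SUPERHUMAN, False), # narrow but superhuman
    "ChatGPT":           (Performance.EMERGING, True),    # general but shallow
}

for system, (level, is_general) in examples.items():
    kind = "general" if is_general else "narrow"
    print(f"{system}: {level.name.lower()} {kind} AI")
```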

How Advanced Is AI Right Now?

Determining AI’s current level of intelligence can be challenging, as it depends on the benchmarks we use. For example, DALL-E, an AI image generator, might be considered virtuoso-level for producing images that most humans couldn’t create. But its strange errors, like extra fingers or distorted objects, could place it closer to the emerging level.

There’s also debate over the true capabilities of current systems. Some 2023 studies suggest models like GPT-4 display “sparks” of artificial general intelligence, but others argue these models are mostly sophisticated pattern-matchers, lacking true intelligence. OpenAI claims its latest model, “o1,” can perform complex reasoning and reach the level of human experts in many areas. Yet, recent research from Apple suggests that o1, like other models, struggles with mathematical reasoning, indicating it may be less advanced than some claim.

Will AI Continue to Improve?

Some researchers believe that AI's rapid progress over the last few years will continue, and may even accelerate. Tech companies are investing hundreds of billions of dollars in AI development, and breakthroughs in deep learning (a technique that finds patterns in large datasets) have driven many recent AI successes. In fact, this year's Nobel Prize in Physics recognized foundational work in deep learning by John Hopfield and Geoffrey Hinton, the pioneer known as the "Godfather of AI."

Most modern general AI models, like ChatGPT, rely on human-generated text data, but this approach may have its limits. If we exhaust the available human-generated data, improvement may slow down. AI developers are exploring solutions like generating synthetic data and refining “transfer learning” (helping AI transfer knowledge between tasks), but it’s unclear if these will be enough to reach superintelligence.
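As an illustration of what "transfer learning" means in practice, here is a minimal sketch using PyTorch. It assumes torch and torchvision are installed; the ten-class downstream task is a hypothetical placeholder, not a method from the article.

```python
# A minimal sketch of transfer learning: reuse patterns a model learned
# on one task (ImageNet classification) to bootstrap a new task.
# Assumes torch and torchvision are installed; the 10-class target
# task is a hypothetical placeholder.
import torch
import torch.nn as nn
from torchvision import models

# Start from a network pretrained on ImageNet; its early layers already
# encode general-purpose visual features.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pretrained backbone so its knowledge is reused, not relearned.
for param in model.parameters():
    param.requires_grad = False

# Replace the final layer with a fresh head for the new (hypothetical) task,
# e.g. classifying 10 categories the original model never saw.
model.fc = nn.Linear(model.fc.in_features, 10)

# Only the new head's parameters are trained on the new task's data.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
```

The design point is the freeze-and-replace step: most of the network's knowledge transfers unchanged, and only a small new component must learn from scratch.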

For true superintelligence, some experts believe AI would need open-ended learning abilities, meaning it could continuously generate novel outputs and learn in ways that surprise humans. Current models are not built for this; instead, they specialize in reproducing the tasks and patterns found in their training data. This limitation suggests that new AI architectures may be needed to make superintelligence possible.

What Risks Could AI Pose?

Even if superintelligent AI isn't just around the corner, today's AI still carries real risks. As AI becomes more capable, it may also become more autonomous, meaning it can make decisions or take actions on its own. For now, AI systems remain largely under human control, used mainly as consultants or aids: people ask ChatGPT to summarize documents, or let YouTube's algorithm recommend their next video. However, relying too heavily on AI even in these ways could lead us to trust it more than we should.
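To make the "consultant" role concrete, here is a minimal sketch of asking a model to summarize a document via OpenAI's Python client. It assumes the openai package is installed and an API key is configured; the model name and document text are placeholders.

```python
# A minimal sketch: an AI model as a consultant that summarizes a
# document, with a human deciding what to do with the output.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set;
# the model name and document text are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

document = "...full text of the document to be summarized..."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; any chat-capable model works
    messages=[
        {"role": "system", "content": "Summarize the user's document in three sentences."},
        {"role": "user", "content": document},
    ],
)

# The human stays in the loop: the summary is advice, not an action.
print(response.choices[0].message.content)
```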

Other risks include people forming “parasocial” relationships with AI (treating it like a friend or mentor) and significant job displacement, as automation could affect a wide range of industries. Society could face unexpected challenges as these systems become more advanced and integrated into everyday life.

What Happens Next?

If fully autonomous superintelligent AI systems ever become a reality, the question is whether they could threaten human interests. However, experts point out that even highly autonomous systems can be designed to leave humans a high level of control. Many AI researchers believe it's possible to create "safe superintelligence," but doing so will require a complex, multidisciplinary effort to keep AI systems aligned with human values and goals.

While we might see superintelligence within a decade, there’s hope that careful research and design will make it possible to harness this powerful technology safely. For now, though, experts generally agree that we don’t have to worry about AI taking over the world anytime soon.
