
What Is AGI and How Would We Know It's Here?

Artificial general intelligence promises machines that think like humans across all domains — but scientists cannot even agree on what AGI means, let alone how to measure it.

Redakcia
4 min read

The Term Everyone Uses but Nobody Agrees On

Artificial general intelligence — AGI — has become the north star of the technology industry. Companies from OpenAI to Google DeepMind to Nvidia pour billions into the pursuit of a machine that can match or surpass human cognition across virtually any task. Yet beneath the hype lies a fundamental problem: nobody agrees on what AGI actually means, and without a shared definition, claims about achieving it remain impossible to verify.

Narrow AI vs. General Intelligence

Every AI system in use today — from voice assistants to image generators to medical diagnostic tools — is classified as narrow AI (also called artificial narrow intelligence, or ANI). These systems excel at specific, well-defined tasks but cannot transfer their skills to new domains. A chess engine that defeats grandmasters knows nothing about language; a chatbot that writes poetry cannot drive a car.

AGI, by contrast, would be a system capable of learning, reasoning, and adapting across virtually all cognitive tasks a human can perform. It could read a legal brief, diagnose a disease, compose a symphony, and negotiate a business deal — all without being specifically programmed for each task. The concept emerged to describe a system with a human mind's flexibility, one that can take a small amount of information and generalize it to entirely novel situations.

Why the Definition Matters

The stakes of defining AGI extend far beyond academic debate. As the journal Science has documented, researchers from computer science, cognitive science, policy, and ethics each bring fundamentally different understandings of the concept. Some define AGI by performance on benchmarks, others by internal workings, economic impact, or — as critics note — simply by "vibes."

This ambiguity has real consequences. At companies like OpenAI and Microsoft, contractual clauses and profit-sharing agreements are tied to whether AGI has been officially achieved. If the definition is elastic enough to mean anything, it can be deployed strategically — or prematurely.

Frameworks for Measuring Progress

Several organizations have tried to impose structure on the chaos. Google DeepMind published a "Levels of AGI" framework that defines five performance tiers — Emerging, Competent, Expert, Virtuoso, and Superhuman — crossed with breadth of capability, from narrow (single domain) to general (broad cognitive tasks). A "Competent AGI" would outperform 50% of skilled adults across a wide range of tasks, while a "Superhuman AGI" would surpass 100%.

Crucially, DeepMind separates performance from autonomy, arguing that how capable a system is and how independently it operates are two different questions that must be evaluated separately.
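The framework's two-axis grid can be sketched as a small lookup that maps a system's percentile among skilled adults to a performance tier, then combines it with breadth. This is a minimal illustration, not DeepMind's implementation: the 50th and 100th percentile cutoffs are stated above, while the Expert (90th) and Virtuoso (99th) cutoffs come from the published framework, and the example systems are hypothetical. Autonomy, per the framework, would be scored separately.

```python
# Percentile thresholds for the five performance tiers in DeepMind's
# "Levels of AGI" framework. Competent (50th) and Superhuman (100th)
# are described in the article; Expert (90th) and Virtuoso (99th)
# follow the published framework. Illustrative sketch only.
TIERS = [
    (100.0, "Superhuman"),
    (99.0, "Virtuoso"),
    (90.0, "Expert"),
    (50.0, "Competent"),
    (0.0, "Emerging"),
]

def performance_tier(percentile: float) -> str:
    """Map a skilled-adult percentile to a performance tier."""
    for cutoff, name in TIERS:
        if percentile >= cutoff:
            return name
    raise ValueError("percentile must be non-negative")

def classify(percentile: float, general: bool) -> str:
    """Combine performance with breadth: narrow (single domain) vs. general.

    Note: autonomy is deliberately NOT part of this label, mirroring the
    framework's separation of capability from independent operation.
    """
    breadth = "General" if general else "Narrow"
    return f"{performance_tier(percentile)} {breadth} AI"

# A chess engine that beats nearly all humans, but only at chess:
print(classify(99.9, general=False))  # Virtuoso Narrow AI
# A hypothetical system at the 60th percentile across broad tasks:
print(classify(60.0, general=True))   # Competent General AI
```

Under this scheme, today's frontier systems land in different cells depending on the domain measured, which is one reason the framework resists a single yes/no verdict on AGI.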

OpenAI maintains its own internal five-level scale for tracking progress. Meanwhile, benchmarks like ARC-AGI attempt to test genuine reasoning ability rather than pattern memorization — though critics argue no single test can definitively prove general intelligence, just as no single IQ test perfectly captures human cognition.

Why Skeptics Push Back

Cognitive scientists point out a deeper issue: science has no rigorous definition of "general intelligence" even in humans. Intelligence is not a single quantity that can be dialed up or down but a complex integration of specialized and general capabilities shaped by evolution, culture, and experience.

As Scientific American has reported, current AI systems still hallucinate facts, struggle with novel reasoning, and lack genuine understanding in the way humans build it through lived experience. Some researchers, including Meta's Yann LeCun, argue that intelligence — even human intelligence — is fundamentally specialized and task-optimized, making the concept of "general" intelligence itself misleading.

The Safety Dimension

Whether or not AGI arrives soon, the pursuit raises urgent safety questions. Researchers warn about the control problem: how to ensure that a recursively self-improving system continues to behave in ways aligned with human values. Concerns range from job displacement and political manipulation to, at the extreme end, existential risk.

Many AI safety experts argue that the fixation on AGI as a future milestone distracts from present-day harms — algorithmic bias, deepfake misinformation, and the erosion of critical thinking skills — that demand attention now, regardless of when or whether machines achieve human-level cognition.

A Moving Target

Perhaps the most honest assessment of AGI is that it remains a moving target. Every time AI masters a task once thought to require general intelligence — playing Go, writing code, passing medical exams — the goalposts shift. What was once considered proof of AGI becomes, in hindsight, just another narrow achievement. Until scientists agree on a definition, the question "Have we achieved AGI?" may say more about the person answering than about the technology itself.
