Introduction to AGI
First, let’s get the pesky business of defining AGI out of the way. In practice, it’s a deeply hazy and changeable term shaped by the researchers or companies set on building the technology. But it usually refers to a future AI that outperforms humans on cognitive tasks. Which humans and which tasks we’re talking about makes all the difference in assessing AGI’s achievability, safety, and impact on labor markets, war, and society. That’s why defining AGI, though an unglamorous pursuit, is not pedantic but actually quite important, as illustrated in a new paper published this week by authors from Hugging Face and Google, among others. In the absence of a settled definition, my advice when you hear AGI is to ask yourself what version of the nebulous term the speaker means.
Recent Developments in AGI
Okay, on to the news. First, a new AI system from China called Manus launched last week. A promotional video for Manus, which is built to handle “agentic” tasks like creating websites or performing analysis, describes it as “potentially, a glimpse into AGI.” It is already taking on real-world tasks on crowdsourcing platforms like Fiverr and Upwork, and the head of product at Hugging Face, an AI platform, called it “the most impressive AI tool I’ve ever tried.”
The Concept of Agentic AI
It’s not clear just how impressive Manus actually is yet, but against this backdrop, with agentic AI framed as a stepping stone toward AGI, it was fitting that New York Times columnist Ezra Klein dedicated his podcast episode on Tuesday to AGI. It’s also a sign that the concept is moving quickly beyond AI circles and into the realm of dinner-table conversation. Klein was joined by Ben Buchanan, a Georgetown professor and former special advisor for artificial intelligence in the Biden White House.
Discussions on AGI
They discussed lots of things—what AGI would mean for law enforcement and national security, and why the US government finds it essential to develop AGI before China—but the most contentious segments were about the technology’s potential impact on labor markets. If AI is on the cusp of excelling at lots of cognitive tasks, Klein said, then lawmakers better start wrapping their heads around what a large-scale transition of labor from human minds to algorithms will mean for workers. He criticized Democrats for largely not having a plan.
Criticisms and Rebuttals
Consider this a case of inflating the fear balloon: the suggestion that AGI’s impact is imminent and sweeping. Following close behind and puncturing that balloon with a giant safety pin is Gary Marcus, a professor emeritus of psychology and neural science at New York University and an AGI critic who wrote a rebuttal to the points made on Klein’s show. Marcus points out that recent news, including the underwhelming performance of OpenAI’s new GPT-4.5, suggests that AGI is much more than three years away. He argues that core technical problems persist despite decades of research, and that efforts to scale training and computing capacity have run into diminishing returns. Large language models, dominant today, may not even be the thing that unlocks AGI.
Superintelligence Strategy
Just after Marcus tried to deflate it, the AGI balloon got blown up again. Three influential people—Google’s former CEO Eric Schmidt, Scale AI’s CEO Alexandr Wang, and director of the Center for AI Safety Dan Hendrycks—published a paper called “Superintelligence Strategy.” By “superintelligence,” they mean AI that “would decisively surpass the world’s best individual experts in nearly every intellectual domain,” Hendrycks told me in an email. “The cognitive tasks most pertinent to safety are hacking, virology, and autonomous-AI research and development—areas where exceeding human expertise could give rise to severe risks.”
Conclusion
The concept of AGI remains hazy and contested, with researchers and companies attaching different definitions and timelines to it. Some believe AGI is imminent and will reshape labor markets and society; others argue it is still far away and that predictions deserve far more caution. As the debate continues, it is worth weighing the technology’s potential risks and benefits carefully, and asking, each time the term comes up, which version of AGI the speaker actually means.
FAQs
Q: What does AGI stand for?
A: AGI stands for Artificial General Intelligence, which refers to a future AI that outperforms humans on cognitive tasks.
Q: What is the difference between AGI and other types of AI?
A: AGI would be able to perform any intellectual task a human can, whereas other types of AI are typically narrow systems built for specific tasks.
Q: Is AGI a realistic goal?
A: The feasibility of AGI is a topic of ongoing debate among researchers and experts, with some believing that it is achievable in the near future and others arguing that it is still far away.
Q: What are the potential risks and benefits of AGI?
A: The potential risks of AGI include job displacement, loss of human autonomy, and misuse, while the potential benefits include improved productivity, better decision-making, and solutions to complex problems.
Q: How can we prepare for the potential impact of AGI?
A: Preparing for the impact of AGI means developing a nuanced understanding of its implications, investing in education and retraining programs, and crafting policies and regulations that address both its risks and its benefits.