There's a lot of buzz these days around AI, and AGI in particular. It seems like everyone is busy sharing their thoughts on AGI, so I'd love to do the same in this small corner of the internet of mine.
In this post, I'm arguing that:
- AGI, as it stands, is not a very useful measure of technological advancement.
- If anything, AGI is mostly a function of social readiness.
Definitions
So, what is Artificial General Intelligence? There are many definitions. OpenAI defines it as "highly autonomous systems that outperform humans at most economically valuable work". Sam Altman further said that AGI is "the equivalent of a median human that you can hire as a co-worker... they could do anything that you'd be happy with a remote co-worker doing". DeepMind recently defined five levels of AGI, with level 1 meaning "equal to or somewhat better than an unskilled human", levels 2 to 4 meaning at least the 50th/90th/99th percentile of skilled adults, and level 5 reserved for outperforming all humans. They further claim we are currently at level 1.
There are many more. For the interested reader, I recommend the DeepMind paper above; it contains loads of relevant information. Also check out Microsoft's 'Sparks of AGI' paper from last year and Goertzel's AGI review paper from 2014.
In this blog post I'm mostly going to address OpenAI's definition, which frames AGI in terms of its capability to do work like a human. Note that this excludes many other intellectual things people do: reading a book, playing an instrument, engaging in a pointless yet interesting discussion. No, we are not talking about those in this post. Let's talk about work.
My parents
My mother is a primary school teacher. Or, sort of a teacher. She doesn't have the necessary qualifications, so she cannot teach the curricular material. One could say she works with primary school children. Her classes are all about learning about nature. She brings animals to the class - snakes, rabbits, insects, etc. - shows them to the children, and teaches them about the animals' senses and behaviours. She also gardens with the children and is very proud of the school garden she maintains.
My father runs a small accommodation business in our village. It's a Bedouin village. The Bedouin community doesn't usually mix with the Jewish community in Israel, so our story as a Jewish family living together with Bedouins is quite unique. This is one of the main attractions of my father's place: people come over to hear the stories.
Back to AI. Let's think about how AGI could do these types of work. Can it? What would it mean for AI if it could? And if it can, is that a capability of the AI, or of the primary school children and the visitors interested in stories of co-existence, who would now be satisfied listening to and learning from an AI?
Now, you can say "Tom, stop this emotional BS, it's just an irrelevant anecdote and it doesn't mean anything for the big picture. Besides, your parents exist on the fringes of society; they aren't the norm." and you would be totally right! You should say that. In fact, if you didn't say that, that's a shame. I expected more from you, my critical reader.
Let's dive deeper.
Interfacing with people and the physical world
There are many jobs in which interacting with other people is the main activity: teachers, consultants, managers, sales representatives, etc. Yes, AI might be changing the world, but to automate these jobs the goal is not just to replace the person; it's to replace the entire social system around them.
Many other jobs are about interfacing with the physical world; drivers, builders, and plant workers come to mind. Since the industrial revolution we have been pushing to automate more and more of these jobs, but that has nothing to do with AGI either.
In both of these cases I would argue that achieving AGI would say much more about the people and the social systems around the AI than about the AI itself.
Now, Sam Altman is a smart guy. His definition of AGI sidesteps this problem by framing AGI as a replacement for a remote worker, which I think is meant to exclude any physical aspect of the work. But interfacing with other people remains a tricky point for AGI.
The CEO dilemma
Now imagine GPT 5 comes out, and Sam Altman announces 'AGI is here!'. This model can do the work of the average remote worker. Large language models often exhibit a jagged frontier - they can do some tasks amazingly well and fail miserably on simpler ones (try playing rock-paper-scissors with an LLM; see the sketch below). So, it happens that this new GPT 5 has exactly the right capabilities to replace the CEO of your company. It will make better decisions, have a perfect understanding of the business, clients, and market, form an immaculate strategy, write highly motivational speeches, align people around its vision, and drive the company forward. This doesn't mean AI will replace the entire company. You still need people to physically meet with clients, IT support to fix devices when they break, cleaners, you name it. But the CEO? There's no need for them any more.

What happens in this situation? Will they accept the new reality and just leave? Will the employees want them to leave, knowing the AI can make better calls most of the time? Or do we still want to be guided by people? I think we do. I think we are not ready to be told what to do by an AI, even when we know its guidance is top notch.
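If you want to try the rock-paper-scissors probe yourself, here is a minimal sketch. It assumes the official OpenAI Python client (pip install openai) and an API key in your environment; the model name and prompt are illustrative, not a specific recipe.

```python
# A quick probe of the "jagged frontier": ask a model to play rock-paper-scissors
# many times and tally its moves. Assumes the official OpenAI Python client and
# OPENAI_API_KEY set in the environment; the model name is illustrative.
import random
from collections import Counter
from openai import OpenAI

client = OpenAI()
MOVES = ("rock", "paper", "scissors")
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def model_move() -> str:
    # The model commits to a move without seeing ours, as the game requires.
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": "We're playing rock-paper-scissors. "
                       "Answer with exactly one word: rock, paper, or scissors.",
        }],
    )
    return response.choices[0].message.content.strip().lower()

tally, score = Counter(), Counter()
for _ in range(20):
    theirs, ours = model_move(), random.choice(MOVES)
    tally[theirs] += 1
    if theirs not in MOVES:
        score["invalid"] += 1      # didn't even follow the one-word format
    elif theirs == ours:
        score["draw"] += 1
    else:
        score["model" if BEATS[theirs] == ours else "us"] += 1

print("moves:", dict(tally), "| score:", dict(score))
```

Over enough rounds, a player with no tells should produce a roughly uniform tally; a heavily skewed one is trivially exploitable, which is exactly the kind of failure on a "simple" task that the jagged frontier refers to.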
My point
We've covered examples of jobs that involve integration with social systems and human processes. For these, I don't think it's clear what AGI even means, because the AI-capabilities question is not the most important one. The main reason AI hasn't taken over from teachers, drivers, and CEOs is not technical; it's a question of social readiness. We, as humans, have some work to do to figure out what it means to live alongside AI.
So, what's next? I'm not sure. I think tech people generally underestimate the amount of social inertia we all operate in. Overall, I think things will evolve, but more slowly than most people predict. The technology will keep improving, but I don't think AGI, as currently defined, is something that will arrive alongside these advancements. The definitions don't take social context into account and are, therefore, not very useful. To be honest, I think we will either forget the term AGI or redefine it away from what it currently means.
Note that none of this is about the usefulness of AI. There are plenty of use cases, and I have no doubt it is going to have great implications for society.
This blog post was made with the help of ChatGPT and Claude 3.5 Sonnet.