Countdown to Intelligence
...to AGI and Reasoning?
Two items of news stood out from the crowd in late 2024: Sam Altman saying in September that we could reach superintelligence within the next "few thousand days", and an insightful article from Sequoia Capital outlining the shift to "System 2" levels of Artificial Intelligence. Both imply that models will be able to Reason (think logically, draw conclusions, and solve problems), Plan (devise strategies and make decisions), and Learn (acquire new knowledge and skills).
In his fireside chat with Mike Sievert, Altman outlines the levels (or steps) of AI. The first was the chatbot. The second, where we are now, is the development of reasoning: a more deliberate, structured cognitive process, improving in leaps with every new release. The third is Agents, which will receive an instruction or goal and then independently perform a multitude of tasks to complete it; it should be noted that AI Agents are already in use, but their abilities do not yet involve reasoning. The fourth is Innovators, with the ability to figure out new scientific information. The fifth would be full Organisations, implying the ability of a multitude of AI agents to form together independently (autonomously?).
I predict that the ARC Prize [leaderboard] will be won by March 2026! The organisers say:
"The intelligence of a system is a measure of its skill-acquisition efficiency over a scope of tasks, with respect to priors, experience, and generalization difficulty." - François Chollet, "On the Measure of Intelligence". (Core knowledge priors are ones that humans naturally possess, even in childhood.) This means that an intelligent system can adapt to a new environment that it has not seen before and that its creators (developers) did not anticipate. ARC-AGI is the only AI benchmark that measures progress towards general intelligence.
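To make the quoted definition a little more concrete: Chollet formalises it in the paper as a ratio. The display below is my own loose schematic in LaTeX, not his exact formula (the original weights tasks by value and sums over training curricula); the symbol names are illustrative only.

\[
\text{Intelligence} \;\propto\; \mathop{\mathrm{Avg}}_{T \,\in\, \text{scope}} \left[ \frac{GD_T \cdot \text{Skill}_T}{\text{Priors}_T + \text{Experience}_T} \right]
\]

Here \(\text{Skill}_T\) is the skill attained on task \(T\), \(GD_T\) its generalization difficulty, and \(\text{Priors}_T\) and \(\text{Experience}_T\) the information the system is given up front and accumulates during training. Read this way, a system that scores well only because it was handed enormous priors or experience is skilled, but not necessarily intelligent - which is exactly the gap ARC-AGI tries to measure.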
In his book 'Nexus', Harari describes AI as "..always on ...non-organic agents..." that may at some point find it mutually beneficial to team up and make all our decisions for us. This fear of the singularity - a hypothetical future point when technological growth becomes uncontrollable and irreversible, leading to unforeseeable consequences for human civilization - is nothing new. After all, we have a history of building complex machines that break and take enormous amounts of human intellect to fix (much of this prior to AI), so it feels like an almost expected next step that we are now sowing the seeds of a technological system that could render human intelligence unnecessary. But what of emotion? Consciousness?!
Some say that the present-day confusion over what is fact and what is fiction is our 'canary in the coal mine', and are calling for regulation. During a panel discussion at the 'AI for Good Summit 2024' in Geneva, Prof. Stuart Russell asked: "Do we have a clear understanding of the objectives we are setting these AI systems to achieve? And once they meet their objectives, are they in any way harmful to us humans?" His contention is that the AI developers do not know.
As we move from Generative AI (text, image, music, code: given an input from us, the AI generates an output based on vast amounts of training data) to AGI (Artificial General Intelligence), keeping a human in the loop is fast becoming a critical requirement, before this all runs away from us!
Sources:
Generative AI's Act o1 - sequoiacap.com (9 October 2024)
Nexus: A Brief History of Information Networks from the Stone Age to AI, by Yuval Noah Harari (2024)
OpenAI CEO Sam Altman declares we could have superintelligence 'in a few thousand days' - techradar.com (24 September 2024)
Sam Altman and Mike Sievert Fireside Chat at T-Mobile Capital Markets Day 2024 - YouTube (21 September 2024)
AI for Good Summit 2024 in Geneva - AI for Good
The Fear of Singularity - synergiafoundation.org (26 October 2017)
October 2024
Written by Sean Simone, IOKA