Artificial Super Intelligence: How far off are we?
- Curious
- Jun 25, 2024
- 5 min read
Updated: Aug 26, 2024
Noticeable advancements in machine learning and computational power have fuelled the widespread adoption of AI technology over the last 5-10 years. The development of transformer architectures like BERT and GPT has revolutionised natural language processing, demonstrating impressive capabilities in understanding and generating human-like text. This has led to the creation of conversational AI models like ChatGPT, powered by models such as GPT-4.
Specialised hardware, such as Graphics Processing Units (GPUs) and Tensor Processing Units (TPUs), has dramatically increased the speed and efficiency of training these large models, while scalable computing infrastructure, including cloud platforms and machine learning frameworks like TensorFlow and PyTorch, has made it easier to host and provision these large-scale models.
Thanks to these technologies, businesses have recognised the potential of AI to drive efficiency, enhance customer experiences, and create new revenue streams. AI applications in areas such as personalised recommendations, predictive analytics, automation, and fraud detection have demonstrated significant return on investment (ROI).
The Ultimate Goal: Artificial Super Intelligence
All of these factors may lead to what some experts believe is the ultimate goal: Artificial General Intelligence (AGI), followed by Artificial Super Intelligence (ASI). However, both of these concepts, and their attainment, remain highly speculative and uncertain at this point.
Predictions for when AGI and ASI will be achieved vary widely among experts, reflecting the complex nature of these advancements and the numerous technical, ethical, and philosophical challenges that remain unsolved. Here is an overview of how things may play out. For more detailed explanations of what some of these acronyms mean, refer to our earlier article.
Artificial General Intelligence (AGI)
Optimistic
Some experts and researchers suggest that AGI could be achieved within one to three decades. Ray Kurzweil, for example, points to the year 2045 as the critical point when transformative technologies become reality, discussing this at length in his book "The Singularity Is Near: When Humans Transcend Biology". Kurzweil outlines his vision of the future, including a timeline for achieving AGI and the concept of the technological singularity: a hypothetical point at which AI becomes more intelligent than its human creators.
SoftBank, a leading global investor focused on the AI sector, held its annual meeting in Tokyo on June 21, 2024. During the event, founder and CEO Masayoshi Son suggested that, on the current trajectory, AGI could be a reality within the next decade. Son asserted that by 2030, AI could be “one to 10 times smarter than humans,” and that by 2035 it might reach a staggering “10,000 times smarter” than human intelligence, heralding the age of ASI.
Moderate
More moderate predictions estimate that AGI will take until at least the end of the 21st century. Nick Bostrom, a prominent philosopher and AI researcher, offers a cautious perspective on the timeline in his book "Superintelligence: Paths, Dangers, Strategies". While he acknowledges the possibility of AGI within a few decades, he also highlights significant uncertainties and suggests that it could take much longer.
Rodney Brooks, an AI pioneer and former director of MIT's Computer Science and Artificial Intelligence Laboratory, has expressed scepticism about the near-term achievement of AGI in his essay "The Seven Deadly Sins of AI Predictions". He suggests that significant advances are still needed and that achieving AGI could take many decades or longer.
These perspectives consider the current limitations of AI, such as the need for more, and higher-quality, data to train AGI models. The difficulty of achieving true understanding and reasoning, and the challenge of integrating various cognitive capabilities, also remain unsolved. Researchers in this camp argue that while progress is steady, significant conceptual and technical breakthroughs are still required.
True to the spirit of advancement, organisations and research groups have stepped in to tackle these challenges, pioneering research in areas such as brain connectivity (connectomics) and the Neural Correlates of Consciousness (NCC). Both seek to identify the specific brain activities associated with conscious experience, in an effort to help develop AI systems that mimic human-like brain processing pathways and patterns, awareness, and self-reflection.
Pessimistic
In contrast, pessimistic predictions suggest that AGI will take at least several centuries to develop, and that we should at least consider the possibility that it might never be achieved. This view rests on the profound complexity of human intelligence and the possibility that some aspects of cognition and consciousness may be inherently difficult, or impossible, to replicate in machines.
These attitudes raise increasingly complex existential questions: what it means to be human, the nature of consciousness, and the rights of ASI systems and their place within society and culture. Proponents point to the potential limitations of current AI paradigms and the possibility that entirely new, as-yet-unexplored approaches are needed. Erik J. Larson makes this argument in "The Myth of Artificial Intelligence", contending that the current trajectory of AI research is unlikely to lead to AGI.
OK Then: What About Artificial Super Intelligence?
Given such views on AGI, where does that put ASI?
To answer this question, let us assume for a moment that AGI is achieved. Optimistically, some might favour the SoftBank CEO's outlook that ASI could follow relatively quickly. This proposal is based on the common belief that any realisation of AGI would include the inherent ability to improve its own intelligence, leading to an intelligence explosion: a rapid transition to super intelligence that places its evolution on an exponential curve.
In a blog post on OpenAI’s website, Sam Altman, CEO of OpenAI, arguably the world's leading organisation in the field of AI research, wrote, "Predicting timelines for ASI is fraught with uncertainty. We might achieve AGI in the next couple of decades, but ASI could take much longer, if it’s achievable at all".
One would hope, however, that any pathway is tempered by the cautionary development of ethical safeguards and regulatory measures in parallel. Granted, history has shown that legislation tends to lag years behind the technology itself; take information privacy laws, for example. But perhaps we have learnt from our mistakes and can couple appropriate regulation with technology releases. Ideally this will ensure that such systems are safe, controllable, and aligned with human values before being released into the wild.
Artificial Super Intelligence - A Final Thought: In a recent discussion at the AI Alignment Forum, Sam Altman stated, "We need to significantly ramp up our efforts in AI safety and alignment research now. The potential impacts of ASI are so profound that we must be as prepared as possible, regardless of the timeline".
Thumbnail image generated by Copilot Designer