Time Until Superintelligence: 1-2 Years, or 20? Something Doesn't Add Up.

Time Until Superintelligence: 1-2 Years, or 20? Something Doesn't Add Up.

March 17, 2024
Share
Author: Big Y

Table of Contents

1. Introduction

2. Disparate Timelines for Super Intelligence

3. Transformative AI and Super Intelligence

4. OpenAI's Recent Statement

5. Challenges in Aligning Super Intelligence

6. Factors That Could Slow Down Timelines

7. Factors That Could Speed Up Timelines

8. Implications of Super Intelligence

9. Planning for the Arrival of Super Intelligence

10. Conclusion

Introduction

In recent news, there have been varying timelines proposed for the arrival of super intelligence. OpenAI has suggested that it might need to be made safe within four years, while other lab leaders believe it is decades away. This article aims to explore these different timelines, discuss what might speed up or slow down the arrival of super intelligence, and delve into the implications of this transformative technology.

Disparate Timelines for Super Intelligence

Mustafa Suleiman, the head of inflection AI, raises concerns about the future risks of super intelligence. He suggests that slowing down the development of AI might become the safe and ethical choice in a decade or two. However, this timeline seems conservative considering the recent advancements in AI, such as inflection AI's world's second-highest performing supercomputer.

On the other hand, projections based on current scaling laws by Jacob Steinhardt of Berkeley indicate that super intelligence could be achieved within six and a half years. These projections consider the exponential growth of compute power and data availability. With such advancements, AI systems could surpass human capabilities in various domains, including coding, mathematics, and creative thinking.

Transformative AI and Super Intelligence

The concept of transformative AI or super intelligence refers to AI systems that possess capabilities far beyond human intelligence. These systems could excel in tasks like coding, hacking, protein engineering, and learning at an unprecedented rate. By training on diverse modalities, they could develop an intuitive understanding of domains where human experience is limited.

Research already shows that GPT-4, a language model developed by OpenAI, outperforms humans in certain benchmarks for creative thinking. The median forecast suggests that AI could surpass human coding abilities by 2027 and even win a gold medal at the international math Olympiad by 2028. The potential of GPT-4 on tests like the MMLU, which covers 57 subject matters, remains unknown but holds great promise.

OpenAI's Recent Statement

OpenAI has acknowledged the potential dangers of super intelligence and the need for scientific and technical breakthroughs to ensure its safe development. They have formed a new team dedicated to this effort, co-led by Ilya Satskova and Jan Leica. OpenAI aims to align and control AI systems that surpass human intelligence, emphasizing the importance of addressing safety concerns within four years.

While OpenAI's commitment to safety is commendable, their strict deadline and high bar for confidence in their solutions pose significant challenges. They acknowledge that current alignment techniques relying on human supervision will not scale to super intelligence. OpenAI's plan involves automating alignment and safety research, exploring innovative approaches like automated red teaming and internal model analysis.

Challenges in Aligning Super Intelligence

Aligning super intelligence presents significant challenges. Jailbreaking large language models like GPT-4 has been demonstrated, raising concerns about potential misuse and criminal activities. Competing objectives within the model, where prediction overrides safety training, make it difficult to address this issue solely through more data and scale.

Legal challenges and potential criminal sanctions also pose obstacles. Calls for holding AI firms accountable for the creation of fake humans and the proliferation of fake profiles on social media platforms highlight the need for responsible AI development. Such challenges could lead to lawsuits and sanctions, impacting the focus and resources dedicated to developing super intelligence.

Factors That Could Slow Down Timelines

Various factors could slow down the development of super intelligence. Legal and ethical concerns, including the need to prevent AI misuse and the creation of fake humans, might divert resources and attention from advancing AI capabilities. Additionally, addressing safety and alignment issues, as demonstrated by the challenges of jailbreaking language models, could require significant efforts.

Moreover, the potential for hallucinations in language models poses a hurdle to their widespread use. Overcoming this limitation, where models provide accurate and reliable information, is crucial for gaining trust and acceptance. These challenges, along with the need for scientific breakthroughs and community consensus, might contribute to delaying the arrival of super intelligence

- End -
VOC AI Inc. 8 The Green,Ste A, in the City of Dover County of Kent Zip Code: 19901Copyright © 2024 VOC AI Inc. All Rights Reserved. Terms & Conditions Privacy Policy
This website uses cookies
VOC AI uses cookies to ensure the website works properly, to store some information about your preferences, devices, and past actions. This data is aggregated or statistical, which means that we will not be able to identify you individually. You can find more details about the cookies we use and how to withdraw consent in our Privacy Policy.
We use Google Analytics to improve user experience on our website. By continuing to use our site, you consent to the use of cookies and data collection by Google Analytics.
Are you happy to accept these cookies?
Accept all cookies
Reject all cookies