When will singularity happen? 1,700 expert opinions on AGI [2024]

We analyzed 1,700 AI scientists’ opinions for quick answers:

Will AGI / singularity ever happen? According to most AI experts, yes.

When will the singularity / AGI happen? Before the end of the century. In the 2010s, the consensus view was that it would take around 50 years. After the advances in Large Language Models (LLMs), some leading AI researchers updated their views. For example, in 2023 Hinton said that it could take 5-20 years.1

What is our current status? While there are narrow AI solutions that exceed humans at many tasks, a generally intelligent machine does not yet exist, even though some researchers believe that large language models exhibit emergent capabilities that are more general than those of other existing AI models.2

The more nuanced answers are below. See the summary of surveys asking AI scientists about AGI.

Understand what singularity is & why we fear it

Artificial intelligence scares and intrigues us. Almost every week, there’s a new AI scare in the news, such as developers being afraid of what they’ve created or shutting down bots because they got too intelligent. Most of these AI myths result from research misinterpreted by those outside the AI and GenAI fields.

The greatest fear about AI is singularity (also called Artificial General Intelligence or AGI), a system that combines human-level thinking with rapidly accessible near-perfect memory. According to some experts, singularity also implies machine consciousness.

Regardless of whether it is conscious or not, such a machine could continuously improve itself and reach far beyond our capabilities. Even before artificial intelligence was a computer science research topic, science fiction writers like Asimov were concerned about this. They devised mechanisms (e.g. Asimov’s Laws of Robotics) to ensure the benevolence of intelligent machines, an effort that is more commonly called alignment research today.

Understand the results of major surveys of AI researchers in 2 minutes

We looked at the results of 5 surveys with around 1,700 participants in which researchers estimated when the singularity would happen. In all cases, the majority of participants expected the singularity before 2060.

In the 2022 Expert Survey on Progress in AI, conducted with 738 experts who published at the 2021 NeurIPS and ICML conferences, AI experts estimated that there is a 50% chance that high-level machine intelligence will occur by 2059.

Older surveys had similar conclusions.

In 2009, 21 AI experts participating in the AGI-09 conference were surveyed. Experts believed AGI would occur around 2050, and plausibly sooner. They also estimated when specific AI achievements would arrive: passing the Turing test, passing third grade, accomplishing Nobel-worthy scientific breakthroughs, and achieving superhuman intelligence.

In 2012/2013, Vincent C. Müller, the president of the European Association for Cognitive Systems, and Nick Bostrom of the University of Oxford, who has published over 200 articles on superintelligence and artificial general intelligence (AGI), conducted a survey of AI researchers. 550 participants answered the question: “When is AGI likely to happen?” The answers were distributed as follows:

  • 10% of participants thought AGI was likely to happen by 2022
  • 50% thought it was likely by 2040
  • 90% thought it was likely by 2075.

In May 2017, 352 AI experts who published at the 2015 NIPS and ICML conferences were surveyed. Based on survey results, experts estimated that there is a 50% chance that AGI will occur by 2060. However, there is a significant difference of opinion based on geography: Asian respondents expect AGI in 30 years, whereas North Americans expect it in 74 years. Significant job functions that respondents expected to be automated by 2030 include call center representatives, truck driving, and retail sales.

In 2019, 32 AI experts participated in a survey on AGI timing:

  • 45% of respondents predicted a date before 2060
  • 34% of all participants predicted a date after 2060
  • 21% of participants predicted that the singularity will never occur.

AI entrepreneurs are also making estimates on when we will reach the singularity, and they are more optimistic than researchers. This is expected, as they benefit from increased interest in AI:

  • Elon Musk in 2024: He expects the development of an artificial intelligence smarter than the smartest humans by 2026.3
  • Louis Rosenberg, computer scientist, entrepreneur, and writer: 2030
  • Patrick Winston, MIT professor and director of the MIT Artificial Intelligence Laboratory from 1972 to 1997: He mentioned 2040 while stressing that, although it would take place, the date is very hard to estimate.
  • Ray Kurzweil, computer scientist, entrepreneur, and writer of 5 national best sellers including The Singularity Is Near: previously 2045; in 2024, he updated his estimate to 2032.4
  • Jürgen Schmidhuber, co-founder of the AI company NNAISENSE and director of the Swiss AI lab IDSIA: ~2050

Keep in mind that AI researchers were over-optimistic before

Examples include:

  • AI pioneer Herbert A. Simon in 1965: “machines will be capable, within twenty years, of doing any work a man can do.”
  • Japan’s Fifth Generation Computer project in the 1980s had a ten-year timeline with goals like “carrying on casual conversations.”

This historical experience has led most current scientists to shy away from predicting AGI within bold time frames like 10-20 years. However, this has changed with the rise of generative AI, as outlined below:

Understand why reaching AGI seems inevitable to most experts

Reaching AGI may sound like a wild prediction, but it seems quite a reasonable goal when you consider these facts:

  • Human intelligence is fixed unless we somehow merge our cognitive capabilities with machines. Elon Musk’s neural lace startup aims to do this, but research on brain-computer interfaces is still in its early stages.
  • Machine intelligence depends on algorithms, processing power, and memory. Processing power and memory have been growing at an exponential rate. As for algorithms, until now we have been good at supplying machines with the necessary algorithms to use their processing power and memory effectively.

Considering that our intelligence is fixed and machine intelligence is growing, it is only a matter of time before machines surpass us unless there’s some hard limit to their intelligence. We haven’t encountered such a limit yet.

Exponential growth is easy to underestimate: while machines can seem dumb right now, they can grow quite smart, quite soon.
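
As an illustrative sketch of this argument, the toy calculation below assumes a fixed “human” capability level and a machine capability that starts far lower but doubles every two years; the starting values and the doubling period are arbitrary assumptions, not measurements:

```python
# Toy model: a fixed "human" capability level vs. a machine capability that
# doubles every two years. All numbers are arbitrary, for illustration only.

HUMAN_LEVEL = 100.0          # fixed baseline (arbitrary units)
machine_level = 1.0          # assumed starting point: 1% of the baseline
DOUBLING_PERIOD_YEARS = 2    # assumed doubling period

years = 0
while machine_level < HUMAN_LEVEL:
    machine_level *= 2
    years += DOUBLING_PERIOD_YEARS

# With these assumptions the machine crosses the baseline after 14 years.
print(f"Machine level passes the fixed baseline after {years} years "
      f"({machine_level:.0f} vs {HUMAN_LEVEL:.0f}).")
```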

If classical computing slows its growth, quantum computing could complement it

Classical computing has taken us quite far. AI algorithms on classical computers can exceed human performance in specific tasks like playing chess or Go. For example, AlphaGo Zero beat AlphaGo 100-0, and AlphaGo had beaten the best players on earth. However, we are approaching the limits of how fast classical computers can be.

Moore’s law, based on the observation that the number of transistors in a dense integrated circuit doubles about every two years, implies that the cost of computing halves approximately every 2 years. However, most experts believe that Moore’s law is coming to an end during this decade. Though there are efforts to keep improving application performance, it will be challenging to maintain the same rates of growth.
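
To make the halving concrete, here is a small worked example: if the cost of a unit of computing halves every two years, the cost after n years is (1/2)^(n/2) of the original. The initial cost below is an arbitrary placeholder:

```python
# Worked example of "cost halves roughly every two years":
# cost(n_years) = initial_cost * 0.5 ** (n_years / 2)

initial_cost = 1.0  # arbitrary placeholder for the cost of a unit of computing

for years in (2, 4, 10, 20):
    cost = initial_cost * 0.5 ** (years / 2)
    print(f"After {years:2d} years: {cost:.6f} of the original cost")

# After 10 years the cost falls to 1/32 of the original; after 20 years, 1/1024.
```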

Quantum computing, which is still an emerging technology, could contribute to reducing computing costs after Moore’s law comes to an end. Quantum computers can evaluate many states at the same time, whereas classical computers process one state at a time. This property of quantum computing could be used to efficiently train neural networks, currently the most popular AI architecture in commercial applications. AI algorithms running on stable quantum computers have a chance to unlock the singularity.
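
One simple way to see why this matters is to count states: a classical n-bit register holds exactly one of its 2^n possible values at any moment, while the state of an n-qubit register is described by 2^n complex amplitudes. The register sizes below are arbitrary examples:

```python
# State-space sizes: a classical n-bit register stores one of 2**n values at a
# time, while an n-qubit register's state is described by 2**n amplitudes.

for n in (8, 32, 64, 300):   # arbitrary example register sizes
    print(f"n = {n:3d}: 2**n = {2**n:.3e} basis states")

# At n = 300, 2**n already exceeds the estimated number of atoms in the
# observable universe (roughly 10**80).
```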

For more information about quantum computers, feel free to read our articles on quantum computing.

Understand why some believe that we will not reach AGI

There are 3 major arguments against the importance or existence of AGI. We examined them along with their common rebuttals:

1- Intelligence is multi-dimensional

Therefore, AGI will be different from, not superior to, human intelligence.

  • This is true, and human intelligence is also different from animal intelligence. Some animals are capable of amazing mental feats, like squirrels remembering where they hid hundreds of nuts for months.
  • Yann LeCun, one of the pioneers of deep learning, believes that we should retire the term AGI and focus on achieving “human-level AI”.5 He argues that the human mind is specialized and that intelligence is a collection of skills plus the ability to learn new skills. Each human can only accomplish a subset of human intelligence tasks.6 It is also hard for us, as humans, to gauge how specialized the human mind is, since we don’t know and can’t experience the entire spectrum of intelligence.
  • In areas where machines exhibited super-human intelligence, humans were able to beat them by leveraging machine-specific weaknesses. For example, in 2023 an amateur beat a Go program that is on par with the Go programs that beat world champions, by studying and exploiting the program’s weaknesses.

However, the multi-dimensional nature of intelligence did not stop humans from achieving far more than other species on many typical measures of success for a species. For example, Homo sapiens contributes the most to the globe’s mammalian biomass.

2- Intelligence is not the solution to all problems

For example, even the best machine analyzing existing data may not be able to find a cure for cancer. It will need to run experiments and analyze results to discover new knowledge in most areas.

This is true with some caveats. More intelligence can lead to better-designed and better-managed experiments, enabling more discovery per experiment. The history of research productivity should demonstrate this, but the data is quite noisy and there are diminishing returns on research: we encounter harder problems like quantum physics as we solve simpler problems like Newtonian motion.

3- AGI is not possible because it is not possible to model the human brain

Theoretically, it is possible to model any computational machine, including the human brain, with a relatively simple machine that can perform basic computations and has access to unlimited memory and time. This is the widely accepted Church-Turing thesis, formulated in the 1930s. However, as stated, it relies on conditions that are hard to meet: unlimited time and memory.

Most computer scientists believe that it will take less than infinite time and memory to model the human brain. However, there is no mathematically sound way to prove this belief, as we do not understand the brain well enough to know its exact computational power. We will just have to build such a machine!

And we haven’t been successful yet. For example, the ChatGPT large language model launched in November 2022 caused significant excitement with its fluency and quickly reached a million users. However, its lack of logical understanding makes its output error-prone.

For a more dramatic example, there is a video of what happens when machines play soccer. It is a bit dated (from 2017) but makes regular players feel like soccer legends in comparison.

To learn more about Artificial General Intelligence

Ray Kurzweil’s lecture:

Joshua Brett Tenenbaum, a Professor of Cognitive Science and Computation at MIT, explains how we might achieve AGI:

Hope this clarifies some of the major points regarding AGI. For more on how AI is changing the world, you can check out articles on AI, AI technologies and AI applications in marketing, sales, customer service, IT, data or analytics.


Sources

The arguments against AGI are based partially on Wired’s summary of arguments against AGI and on Wikipedia.

