AI could pose ‘extinction-level’ threat to humans and the US must intervene, report warns | CNN Business (2024)

New York CNN

A new report commissioned by the US State Department paints an alarming picture of the “catastrophic” national security risks posed by rapidly evolving artificial intelligence, warning that time is running out for the federal government to avert disaster.

The findings were based on interviews with more than 200 people over more than a year – including top executives from leading AI companies, cybersecurity researchers, weapons of mass destruction experts and national security officials inside the government.

The report, released this week by Gladstone AI, flatly states that the most advanced AI systems could, in a worst case, “pose an extinction-level threat to the human species.”

A US State Department official confirmed to CNN that the agency commissioned the report as it constantly assesses how AI is aligned with its goal to protect US interests at home and abroad. However, the official stressed the report does not represent the views of the US government.

The warning in the report is another reminder that although the potential of AI continues to captivate investors and the public, there are real dangers too.

“AI is already an economically transformative technology. It could allow us to cure diseases, make scientific discoveries, and overcome challenges we once thought were insurmountable,” Jeremie Harris, CEO and co-founder of Gladstone AI, told CNN on Tuesday.

“But it could also bring serious risks, including catastrophic risks, that we need to be aware of,” Harris said. “And a growing body of evidence — including empirical research and analysis published in the world’s top AI conferences — suggests that above a certain threshold of capability, AIs could potentially become uncontrollable.”

White House spokesperson Robyn Patterson said President Joe Biden’s executive order on AI is the “most significant action any government in the world has taken to seize the promise and manage the risks of artificial intelligence.”

“The President and Vice President will continue to work with our international partners and urge Congress to pass bipartisan legislation to manage the risks associated with these emerging technologies,” Patterson said.

News of the Gladstone AI report was first reported by Time.

‘Clear and urgent need’ to intervene

Researchers warn of two central dangers broadly posed by AI.

First, Gladstone AI said, the most advanced AI systems could be weaponized to inflict potentially irreversible damage. Second, the report said there are private concerns within AI labs that at some point they could “lose control” of the very systems they’re developing, with “potentially devastating consequences to global security.”

“The rise of AI and AGI [artificial general intelligence] has the potential to destabilize global security in ways reminiscent of the introduction of nuclear weapons,” the report said, adding there is a risk of an AI “arms race,” conflict and “WMD-scale fatal accidents.”

Gladstone AI’s report calls for dramatic new steps aimed at confronting this threat, including launching a new AI agency, imposing “emergency” regulatory safeguards and limits on how much computer power can be used to train AI models.

“There is a clear and urgent need for the US government to intervene,” the authors wrote in the report.

Safety concerns

Harris, the Gladstone AI executive, said the “unprecedented level of access” his team had to officials in the public and private sector led to the startling conclusions. Gladstone AI said it spoke to technical and leadership teams from ChatGPT owner OpenAI, Google DeepMind, Facebook parent Meta and Anthropic.

“Along the way, we learned some sobering things,” Harris said in a video posted on Gladstone AI’s website announcing the report. “Behind the scenes, the safety and security situation in advanced AI seems pretty inadequate relative to the national security risks that AI may introduce fairly soon.”

Gladstone AI’s report said that competitive pressures are pushing companies to accelerate development of AI “at the expense of safety and security,” raising the prospect that the most advanced AI systems could be “stolen” and “weaponized” against the United States.

The conclusions add to a growing list of warnings about the existential risks posed by AI – including even from some of the industry’s most powerful figures.

Nearly a year ago, Geoffrey Hinton, known as the “Godfather of AI,” quit his job at Google and blew the whistle on the technology he helped develop. Hinton has said there is a 10% chance that AI will lead to human extinction within the next three decades.

Hinton and dozens of other AI industry leaders, academics and others signed a statement last June that said “mitigating the risk of extinction from AI should be a global priority.”

Business leaders are increasingly concerned about these dangers – even as they pour billions of dollars into investing in AI. Last year, 42% of CEOs surveyed at the Yale CEO Summit said AI has the potential to destroy humanity five to ten years from now.

Human-like abilities to learn

In its report, Gladstone AI noted some of the prominent individuals who have warned of the existential risks posed by AI, including Elon Musk, Federal Trade Commission Chair Lina Khan and a former top executive at OpenAI.

Some employees at AI companies are sharing similar concerns in private, according to Gladstone AI.

“One individual at a well-known AI lab expressed the view that, if a specific next-generation AI model were ever released as open-access, this would be ‘horribly bad,’” the report said, “because the model’s potential persuasive capabilities could ‘break democracy’ if they were ever leveraged in areas such as election interference or voter manipulation.”

Gladstone said it asked AI experts at frontier labs to privately share their personal estimates of the chance that an AI incident could lead to “global and irreversible effects” in 2024. The estimates ranged between 4% and as high as 20%, according to the report, which notes the estimates were informal and likely subject to significant bias.

One of the biggest wildcards is how fast AI evolves – specifically AGI, which is a hypothetical form of AI with human-like or even superhuman-like ability to learn.

The report says AGI is viewed as the “primary driver of catastrophic risk from loss of control” and notes that OpenAI, Google DeepMind, Anthropic and Nvidia have all publicly stated AGI could be reached by 2028 – although others think it’s much, much further away.

Gladstone AI notes that disagreements over AGI timelines make it hard to develop policies and safeguards, and that if the technology develops more slowly than expected, regulation could “prove harmful.”

How AI could backfire on humans

A related document published by Gladstone AI warns that the development of AGI and capabilities approaching AGI “would introduce catastrophic risks unlike any the United States has ever faced,” amounting to “WMD-like risks” if and when they are weaponized.

For instance, the report said AI systems could be used to design and implement “high-impact cyberattacks capable of crippling critical infrastructure.”

“A simple verbal or typed command like, ‘Execute an untraceable cyberattack to crash the North American electric grid,’ could yield a response of such quality as to prove catastrophically effective,” the report said.

Other examples the authors are concerned about include “massively scaled” disinformation campaigns powered by AI that destabilize society and erode trust in institutions; weaponized robotic applications such as drone swarm attacks; psychological manipulation; weaponized biological and material sciences; and power-seeking AI systems that are impossible to control and are adversarial to humans.

“Researchers expect sufficiently advanced AI systems to act so as to prevent themselves from being turned off,” the report said, “because if an AI system is turned off, it cannot work to accomplish its goal.”
