14 Dangers of Artificial Intelligence (AI) | Built In (2024)

As AI grows more sophisticated and widespread, the voices warning against the potential dangers of artificial intelligence grow louder.

“These things could get more intelligent than us and could decide to take over, and we need to worry now about how we prevent that happening,” said Geoffrey Hinton, known as the “Godfather of AI” for his foundational work on machine learning and neural network algorithms. In 2023, Hinton left his position at Google so that he could “talk about the dangers of AI,” noting a part of him even regrets his life’s work.

The renowned computer scientist isn’t alone in his concerns.

Tesla and SpaceX founder Elon Musk, along with over 1,000 other tech leaders, urged in a 2023 open letter to put a pause on large AI experiments, citing that the technology can “pose profound risks to society and humanity.”

Dangers of Artificial Intelligence

  • Automation-spurred job loss
  • Deepfakes
  • Privacy violations
  • Algorithmic bias caused by bad data
  • Socioeconomic inequality
  • Market volatility
  • Weapons automation
  • Uncontrollable self-aware AI

Whether it’s the increasing automation of certain jobs, gender- and racially biased algorithms or autonomous weapons that operate without human oversight (to name just a few), unease abounds on a number of fronts. And we’re still in the very early stages of what AI is really capable of.

14 Dangers of AI

Questions about who’s developing AI and for what purposes make it all the more essential to understand its potential downsides. Below we take a closer look at the possible dangers of artificial intelligence and explore how to manage its risks.

Is AI Dangerous?

The tech community has long debated the threats posed by artificial intelligence. Automation of jobs, the spread of fake news and a dangerous arms race of AI-powered weaponry have been mentioned as some of the biggest dangers posed by AI.

1. Lack of AI Transparency and Explainability

AI and deep learning models can be difficult to understand, even for those who work directly with the technology. This leads to a lack of transparency about how and why AI comes to its conclusions: it is often unclear what data an algorithm used, or why it made a biased or unsafe decision. These concerns have given rise to the use of explainable AI, but there’s still a long way to go before transparent AI systems become common practice.
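
To make the contrast concrete, here is a minimal sketch (pure Python; all feature names, weights and the threshold are hypothetical) of what an “explainable” decision can look like: a scoring model whose output decomposes into per-feature contributions, so a reviewer can see exactly why a decision was made. Opaque deep learning models produce the same kind of verdict without any such audit trail.

```python
# Minimal sketch of an "explainable" decision: a linear scoring model whose
# output decomposes into per-feature contributions that a reviewer can audit.
# All feature names, weights and the threshold here are hypothetical.
WEIGHTS = {"income": 0.5, "debt": -0.8, "years_employed": 0.3}
THRESHOLD = 1.0

def score_with_explanation(applicant):
    """Return (approved, per-feature contributions) instead of a bare verdict."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    approved = sum(contributions.values()) >= THRESHOLD
    return approved, contributions

approved, why = score_with_explanation(
    {"income": 4.0, "debt": 2.0, "years_employed": 3.0}
)
print(approved)               # True: total score 1.3 clears the threshold of 1.0
print(max(why, key=why.get))  # "income": the feature that helped most
```

Explainable-AI techniques aim to recover this kind of decomposition for models where it doesn’t fall out of the math so directly.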

To make matters worse, AI companies continue to remain tight-lipped about their products. Former employees of OpenAI and Google DeepMind have accused both companies of concealing the potential dangers of their AI tools. This secrecy leaves the general public unaware of possible threats and makes it difficult for lawmakers to take proactive measures ensuring AI is developed responsibly.

2. Job Losses Due to AI Automation

AI-powered job automation is a pressing concern as the technology is adopted in industries like marketing, manufacturing and healthcare. By 2030, tasks that account for up to 30 percent of hours currently being worked in the U.S. economy could be automated — with Black and Hispanic employees left especially vulnerable to the change — according to McKinsey. Goldman Sachs even states 300 million full-time jobs could be lost to AI automation.

“The reason we have a low unemployment rate, which doesn’t actually capture people that aren’t looking for work, is largely that lower-wage service sector jobs have been pretty robustly created by this economy,” futurist Martin Ford told Built In. With AI on the rise, though, “I don’t think that’s going to continue.”

As AI robots become smarter and more dexterous, the same tasks will require fewer humans. And while AI is estimated to create 97 million new jobs by 2025, many employees won’t have the skills needed for these technical roles and could get left behind if companies don’t upskill their workforces.

“If you’re flipping burgers at McDonald’s and more automation comes in, is one of these new jobs going to be a good match for you?” Ford said. “Or is it likely that the new job requires lots of education or training or maybe even intrinsic talents — really strong interpersonal skills or creativity — that you might not have? Because those are the things that, at least so far, computers are not very good at.”

As technology strategist Chris Messina has pointed out, fields like law and accounting are primed for an AI takeover as well; some of them, he said, may well be decimated. AI is already having a significant impact on medicine. Law and accounting are next, Messina said, with the former poised for “a massive shakeup.”

“It’s a lot of attorneys reading through a lot of information — hundreds or thousands of pages of data and documents. It’s really easy to miss things,” Messina said. “So AI that has the ability to comb through and comprehensively deliver the best possible contract for the outcome you’re trying to achieve is probably going to replace a lot of corporate attorneys.”

3. Social Manipulation Through AI Algorithms

Social manipulation also stands as a danger of artificial intelligence. This fear has become a reality as politicians rely on platforms to promote their viewpoints, with one example being Ferdinand Marcos, Jr., wielding a TikTok troll army to capture the votes of younger Filipinos during the Philippines’ 2022 election.

TikTok, which is just one example of a social media platform that relies on AI algorithms, fills a user’s feed with content related to previous media they’ve viewed on the platform. Criticism of the app targets this process and the algorithm’s failure to filter out harmful and inaccurate content, raising concerns over TikTok’s ability to protect its users from misleading information.
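
The narrowing effect of engagement-driven feeds can be sketched with a toy loop (pure Python; the topics, probabilities and counts are hypothetical): if a recommender mostly serves the user’s most-viewed topic, one topic quickly crowds out the rest of the feed.

```python
import random
from collections import Counter

random.seed(0)  # deterministic run for illustration

TOPICS = ["news", "sports", "music", "politics"]  # hypothetical catalog

def recommend(history, explore_prob=0.1):
    """Mostly repeat the user's most-viewed topic; occasionally explore."""
    if not history or random.random() < explore_prob:
        return random.choice(TOPICS)
    return Counter(history).most_common(1)[0][0]

history = []
for _ in range(200):
    history.append(recommend(history))

top_share = Counter(history).most_common(1)[0][1] / len(history)
print(round(top_share, 2))  # the bulk of the feed collapses onto a single topic
```

Real recommendation systems are vastly more sophisticated, but the reinforcement dynamic — past views steering future recommendations — is the same one critics point to.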

Online media and news have become even murkier in light of AI-generated images and videos, AI voice changers as well as deepfakes infiltrating political and social spheres. These technologies make it easy to create realistic photos, videos and audio clips, or to replace the image of one figure with another in an existing picture or video. As a result, bad actors have another avenue for sharing misinformation and war propaganda, creating a nightmare scenario where it can be nearly impossible to distinguish between credible and faulty news.

“No one knows what’s real and what’s not,” Ford said. “You literally cannot believe your own eyes and ears; you can’t rely on what, historically, we’ve considered to be the best possible evidence ... That’s going to be a huge issue.”

4. Social Surveillance With AI Technology

In addition to its more existential threat, Ford is focused on the way AI will adversely affect privacy and security. A prime example is China’s use of facial recognition technology in offices, schools and other venues. Besides tracking a person’s movements, the Chinese government may be able to gather enough data to monitor a person’s activities, relationships and political views.

Another example is U.S. police departments embracing predictive policing algorithms to anticipate where crimes will occur. The problem is that these algorithms are influenced by arrest rates, which disproportionately impact Black communities. Police departments then double down on these communities, leading to over-policing and questions over whether self-proclaimed democracies can resist turning AI into an authoritarian weapon.

“Authoritarian regimes use or are going to use it,” Ford said. “The question is, ‘How much does it invade Western countries, democracies, and what constraints do we put on it?’”

5. Lack of Data Privacy Using AI Tools

A 2024 AvePoint survey found that the top concern among companies is data privacy and security. And businesses may have good reason to be hesitant, considering the large amounts of data concentrated in AI tools and the lack of regulation regarding this information.

AI systems often collect personal data to customize user experiences or to help train the AI models you’re using (especially if the AI tool is free). Data may not even be considered secure from other users when given to an AI system, as one bug incident that occurred with ChatGPT in 2023 “allowed some users to see titles from another active user’s chat history.” While laws exist to protect personal information in some cases in the United States, there is no explicit federal law that protects citizens from data privacy harm caused by AI.

6. Biases Due to AI

Various forms of AI bias are detrimental too. Speaking to the New York Times, Princeton computer science professor Olga Russakovsky said AI bias goes well beyond gender and race. In addition to data and algorithmic bias (the latter of which can “amplify” the former), AI is developed by humans — and humans are inherently biased.

“A.I. researchers are primarily people who are male, who come from certain racial demographics, who grew up in high socioeconomic areas, primarily people without disabilities,” Russakovsky said. “We’re a fairly homogeneous population, so it’s a challenge to think broadly about world issues.”

The narrow views of individuals have culminated in an AI industry that leaves out a range of perspectives. According to UNESCO, only 100 of the world’s 7,000 natural languages have been used to train top chatbots. It doesn’t help that 90 percent of online higher education materials are already produced by European Union and North American countries, further restricting AI’s training data to mostly Western sources.

The limited experiences of AI creators may explain why speech-recognition AI often fails to understand certain dialects and accents, or why companies fail to consider the consequences of a chatbot impersonating historical figures. If businesses and legislators don’t exercise greater care to avoid recreating powerful prejudices, AI biases could spread beyond corporate contexts and exacerbate societal issues like housing discrimination.

7. Socioeconomic Inequality as a Result of AI

If companies refuse to acknowledge the inherent biases baked into AI algorithms, they may compromise their DEI initiatives through AI-powered recruiting. The idea that AI can measure the traits of a candidate through facial and voice analyses is still tainted by racial biases, reproducing the same discriminatory hiring practices businesses claim to be eliminating.

Widening socioeconomic inequality sparked by AI-driven job loss is another cause for concern, revealing the class biases of how AI is applied. Workers who perform more manual, repetitive tasks have experienced wage declines as high as 70 percent because of automation, with office and desk workers remaining largely untouched in AI’s early stages. However, the increase in generative AI use is already affecting office jobs, making for a wide range of roles that may be more vulnerable to wage or job loss than others.

8. Weakening Ethics and Goodwill Because of AI

Along with technologists, journalists and political figures, even religious leaders are sounding the alarm on AI’s potential pitfalls. In a 2023 Vatican meeting and in his message for the 2024 World Day of Peace, Pope Francis called for nations to create and adopt a binding international treaty that regulates the development and use of AI.

Pope Francis warned against AI’s ability to be misused, and “create statements that at first glance appear plausible but are unfounded or betray biases.” He stressed how this could bolster campaigns of disinformation, distrust in communications media, interference in elections and more — ultimately increasing the risk of “fueling conflicts and hindering peace.”

The rapid rise of generative AI tools gives these concerns more substance. Many users have applied the technology to get out of writing assignments, threatening academic integrity and creativity. Plus, biased AI could be used to determine whether an individual is suitable for a job, mortgage, social assistance or political asylum, producing possible injustices and discrimination, noted Pope Francis.

9. Autonomous Weapons Powered By AI

As is too often the case, technological advancements have been harnessed for warfare. When it comes to AI, some are keen to do something about it before it’s too late: In a 2016 open letter, over 30,000 individuals, including AI and robotics researchers, pushed back against the investment in AI-fueled autonomous weapons.

“The key question for humanity today is whether to start a global AI arms race or to prevent it from starting,” they wrote. “If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow.”

This prediction has come to fruition in the form of Lethal Autonomous Weapon Systems, which locate and destroy targets on their own while abiding by few regulations. Because of the proliferation of potent and complex weapons, some of the world’s most powerful nations have given in to anxieties and contributed to a tech cold war.

Many of these new weapons pose major risks to civilians on the ground, but the danger becomes amplified when autonomous weapons fall into the wrong hands. Hackers have mastered various types of cyber attacks, so it’s not hard to imagine a malicious actor infiltrating autonomous weapons and instigating absolute armageddon.

If political rivalries and warmongering tendencies are not kept in check, artificial intelligence could end up being applied with the worst intentions. Some fear that, no matter how many powerful figures point out the dangers of artificial intelligence, we’re going to keep pushing the envelope with it if there’s money to be made.

“The mentality is, ‘If we can do it, we should try it; let’s see what happens. And if we can make money off it, we’ll do a whole bunch of it,’” Messina said. “But that’s not unique to technology. That’s been happening forever.”

10. Financial Crises Brought About By AI Algorithms

The financial industry has become more receptive to AI technology’s involvement in everyday finance and trading processes. As a result, algorithmic trading could be responsible for our next major financial crisis.

While AI algorithms aren’t clouded by human judgment or emotions, they also don’t take into account contexts, the interconnectedness of markets and factors like human trust and fear. These algorithms then make thousands of trades at a blistering pace, with the goal of selling a few seconds later for small profits. A mass sell-off of thousands of trades could scare investors into doing the same thing, leading to sudden crashes and extreme market volatility.

Instances like the 2010 Flash Crash and the Knight Capital Flash Crash serve as reminders of what could happen when trade-happy algorithms go berserk, regardless of whether rapid and massive trading is intentional.
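
The cascade mechanism behind such crashes can be sketched with a toy stop-loss simulation (pure Python; prices, thresholds and impact sizes are all hypothetical): each algorithm sells when the price falls below its threshold, each sale pushes the price lower, and a small dip snowballs into a rout.

```python
# Toy flash-crash cascade: each bot has a stop-loss threshold; selling
# knocks the price down, which can trip the next bot's threshold in turn.
# All prices, thresholds and impact sizes are hypothetical.
price = 100.0
IMPACT_PER_SALE = 3.0                      # price drop caused by one sale
thresholds = [99, 97, 95, 92, 90, 88, 85]  # one stop-loss level per bot
triggered = set()

price -= 2.0  # a small initial dip: price is now 98.0

changed = True
while changed:  # keep sweeping until no new stop-loss fires
    changed = False
    for i, level in enumerate(thresholds):
        if i not in triggered and price < level:
            triggered.add(i)          # bot i dumps its position...
            price -= IMPACT_PER_SALE  # ...pushing the price down further
            changed = True

print(len(triggered), price)  # all 7 bots fire: a 2-point dip becomes a 23-point drop
```

Here the first bot’s sale drops the price through the second bot’s threshold, and so on down the chain, which is the basic dynamic behind algorithm-driven flash crashes.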

This isn’t to say that AI has nothing to offer to the finance world. In fact, AI algorithms can help investors make smarter and more informed decisions on the market. But finance organizations need to make sure they understand their AI algorithms and how those algorithms make decisions. Companies should consider whether AI raises or lowers their confidence before introducing the technology to avoid stoking fears among investors and creating financial chaos.

11. Loss of Human Influence

An overreliance on AI technology could result in the loss of human influence — and a decline in human functioning — in some parts of society. Using AI in healthcare could result in reduced human empathy and reasoning, for instance. And applying generative AI for creative endeavors could diminish human creativity and emotional expression. Interacting with AI systems too much could even cause reduced peer communication and social skills. So while AI can be very helpful for automating daily tasks, some question whether it might hold back overall human intelligence, abilities and the need for community.

12. Uncontrollable Self-Aware AI

There is also a worry that AI will progress in intelligence so rapidly that it will become sentient and act beyond humans’ control — possibly in a malicious manner. Alleged reports of this sentience have already occurred, with one popular account being from a former Google engineer who stated the AI chatbot LaMDA was sentient and speaking to him just as a person would. As AI’s next big milestones involve making systems with artificial general intelligence, and eventually artificial superintelligence, cries to completely stop these developments continue to rise.

13. Increased Criminal Activity

As AI technology has become more accessible, the number of people using it for criminal activity has risen. Online predators can now generate images of children, making it difficult for law enforcement to determine actual cases of child abuse. And even in cases where children aren’t physically harmed, the use of children’s faces in AI-generated images presents new challenges for protecting children’s online privacy and digital safety.

Voice cloning has also become an issue, with criminals leveraging AI-generated voices to impersonate other people and commit phone scams. These examples merely scratch the surface of AI’s capabilities, so it will only become harder for local and national government agencies to adjust and keep the public informed of the latest AI-driven threats.

14. Broader Economic and Political Instability

Overinvesting in a specific material or sector can put economies in a precarious position. Like steel, AI could run the risk of drawing so much attention and financial resources that governments fail to develop other technologies and industries. Plus, overproducing AI technology could result in dumping the excess materials, which could potentially fall into the hands of hackers and other malicious actors.

How to Mitigate the Risks of AI

AI still has numerous benefits, like organizing health data and powering self-driving cars. To get the most out of this promising technology, though, some argue that plenty of regulation is necessary.

“There’s a serious danger that we’ll get [AI systems] smarter than us fairly soon and that these things might get bad motives and take control,” Hinton told NPR. “This isn’t just a science fiction problem. This is a serious problem that’s probably going to arrive fairly soon, and politicians need to be thinking about what to do about it now.”

Develop Legal Regulations

AI regulation has been a main focus for dozens of countries, and now the U.S. and European Union are creating more clear-cut measures to manage the rising sophistication of artificial intelligence. In fact, the White House Office of Science and Technology Policy (OSTP) published the AI Bill of Rights in 2022, a document outlining principles to help responsibly guide AI use and development. Additionally, President Joe Biden issued an executive order in 2023 requiring federal agencies to develop new rules and guidelines for AI safety and security.

Although legal regulations mean certain AI technologies could eventually be banned, they don’t prevent societies from exploring the field.

Ford argues that AI is essential for countries looking to innovate and keep up with the rest of the world.

“You regulate the way AI is used, but you don’t hold back progress in basic technology. I think that would be wrong-headed and potentially dangerous,” Ford said. “We decide where we want AI and where we don’t; where it’s acceptable and where it’s not. And different countries are going to make different choices.”

Establish Organizational AI Standards and Discussions

On a company level, there are many steps businesses can take when integrating AI into their operations. Organizations can develop processes for monitoring algorithms, compiling high-quality data and explaining the findings of AI algorithms. Leaders could even make AI a part of their company culture and routine business discussions, establishing standards to determine acceptable AI technologies.

Guide Tech With Humanities Perspectives

When it comes to society as a whole, though, there should be a greater push for tech to embrace the diverse perspectives of the humanities. Stanford University AI researchers Fei-Fei Li and John Etchemendy make this argument in a 2019 blog post that calls for national and global leadership in regulating artificial intelligence:

“The creators of AI must seek the insights, experiences and concerns of people across ethnicities, genders, cultures and socio-economic groups, as well as those from other fields, such as economics, law, medicine, philosophy, history, sociology, communications, human-computer-interaction, psychology, and Science and Technology Studies (STS).”

Balancing high-tech innovation with human-centered thinking is an ideal method for producing responsible AI technology and ensuring the future of AI remains hopeful for the next generation. The dangers of artificial intelligence should always be a topic of discussion, so leaders can figure out ways to wield the technology for noble purposes.

“I think we can talk about all these risks, and they’re very real,” Ford said. “But AI is also going to be the most important tool in our toolbox for solving the biggest challenges we face.”

Frequently Asked Questions

AI (artificial intelligence) describes a machine’s ability to perform tasks and mimic intelligence at a level similar to humans.

AI has the potential to be dangerous, but these dangers may be mitigated by implementing legal regulations and by guiding AI development with human-centered thinking.

If AI algorithms are biased or used in a malicious manner — such as in the form of deliberate disinformation campaigns or autonomous lethal weapons — they could cause significant harm to humans. Though as of right now, it is unknown whether AI is capable of causing human extinction.

Self-aware AI has yet to be created, so it is not fully known what will happen if or when this development occurs.

Some suggest self-aware AI may become a helpful counterpart to humans in everyday living, while others suggest that it may act beyond human control and purposely harm humans.

AI is already disrupting jobs, posing security challenges and raising ethical questions. If left unregulated, it could be used for more nefarious purposes. But it remains to be seen how the technology will continue to develop and what measures governments may take, if any, to exercise more control over AI production and usage.
