Responsible AI at PwC (2024)

The potential for AI-based technologies to fundamentally alter how we live and work is limitless, but harnessing and preserving the value they create requires attention to, and management of, the attendant risks.

When you infuse AI into business processes, productivity tools, and critical decisions with the purpose of driving incremental value, you need to be sure that you understand what AI is doing and why. Is it making accurate, bias-aware decisions? Is it violating anyone’s privacy? Can you govern and monitor this powerful technology in a way that doesn’t slow growth or innovation? Globally, organisations recognise the need for Responsible AI but are at different stages of the journey.

Responsible AI (RAI) is an approach to managing risks associated with an AI-based solution. Now is the time to evaluate and augment existing practices or create new ones to help you responsibly harness AI and be prepared for coming regulation. Investing in Responsible AI at the outset can give you an edge that competitors may not be able to overtake.

Risks can originate from many different sources as AI solutions are being implemented. A standardized AI risk taxonomy and toolkit can help assess potential risks and guide necessary mitigation strategies, creating the foundation for an effective and efficient AI governance framework.

Build trust with stakeholders and society

Your stakeholders, including board members, customers, and regulators, will have many questions about your organisation's use of AI and data, from how it’s developed and deployed to how it’s monitored and governed and whether it’s providing the value they expect. You not only need to be ready to provide the answers, but you must also demonstrate ongoing legal and regulatory compliance.

Our Responsible AI Toolkit is a suite of customizable frameworks, tools and processes designed to help you harness the power of AI in a manner that engenders trust in the solution from strategy through execution. With the Responsible AI Toolkit, we’ll tailor our solutions to address your organisation’s unique business requirements and AI maturity.

Potential AI Risks

AI risks are shaped by factors that vary over time and across stakeholders, sectors, use cases, and technologies. Below are the six major risk categories for the application of AI technology; a minimal illustration of how such a taxonomy might be encoded follows the list.

  1. Performance
  2. Security
  3. Control
  4. Economic
  5. Societal
  6. Enterprise
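
As a purely illustrative aside (not part of PwC's toolkit or taxonomy), a standardised taxonomy like the one above can be encoded as a simple risk-register entry so that assessments are recorded consistently; all names, fields, and scoring scales below are assumptions.

```python
# Purely illustrative sketch (not part of PwC's toolkit): encoding the six
# risk categories above in a simple risk-register entry so assessments are
# recorded consistently. All names and scoring scales here are assumptions.
from dataclasses import dataclass
from enum import Enum


class RiskCategory(Enum):
    PERFORMANCE = "performance"
    SECURITY = "security"
    CONTROL = "control"
    ECONOMIC = "economic"
    SOCIETAL = "societal"
    ENTERPRISE = "enterprise"


@dataclass
class RiskEntry:
    use_case: str            # the AI solution being assessed
    category: RiskCategory   # one of the six categories above
    description: str         # what could go wrong
    likelihood: int          # e.g. 1 (rare) to 5 (almost certain)
    impact: int              # e.g. 1 (negligible) to 5 (severe)
    mitigation: str          # planned control or mitigation

    @property
    def score(self) -> int:
        """Simple likelihood x impact score used to prioritise mitigations."""
        return self.likelihood * self.impact


entry = RiskEntry(
    use_case="CV screening model",
    category=RiskCategory.PERFORMANCE,
    description="Model reproduces historical hiring bias",
    likelihood=4,
    impact=5,
    mitigation="Fairness testing and human review of rejections",
)
print(entry.category.value, entry.score)  # performance 20
```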

Performance

AI algorithms that ingest real-world data and preferences as inputs run the risk of learning and imitating the biases and prejudices present in those inputs.

Performance risks include:

  • Risk of errors
  • Risk of bias and discrimination
  • Risk of opaqueness and lack of interpretability
  • Risk of performance instability

Security

For as long as automated systems have existed, humans have tried to circumvent them. This is no different with AI.

Security risks include:

  • Adversarial attacks
  • Cyber intrusion and privacy risks
  • Open source software risks

Control

Similar to any other technology, AI should have organisation-wide oversight with clearly identified risks and controls.

Control risks include:

  • Lack of human agency
  • Detecting rogue AI and unintended consequences
  • Lack of clear accountability

Economic

The widespread adoption of automation across all areas of the economy may impact jobs and shift demand to different skills.

Economic risks include:

  • Risk of job displacement
  • Enhancing inequality
  • Risk of power concentration within one or a few companies

Societal

The widespread adoption of complex and autonomous AI systems could result in “echo chambers” developing between machines, and could have broader impacts on human-to-human interaction.

Societal risks include:

  • Risk of misinformation and manipulation
  • Risk of an intelligence divide
  • Risk of surveillance and warfare

Enterprise

AI solutions are designed with specific objectives in mind, which may compete with the overarching organisational and societal values within which they operate. Communities have long agreed, often informally, on a core set of values for society to operate by. There is a movement to identify sets of values, and thereby the ethics, to guide AI systems, but there remains disagreement about what those ethics mean in practice and how they should be governed. The above risk categories are therefore inherently ethical risks as well.

Enterprise risks include:

  • Risk to reputation
  • Risk to financial performance
  • Legal and compliance risks
  • Risk of discrimination
  • Risk of values misalignment

PwC’s Responsible AI Toolkit

Your stakeholders, including board members, customers, and regulators, will have many questions about your organisation's use of AI and data, from how it’s developed to how it’s governed. You not only need to be ready to provide the answers, you must also demonstrate ongoing governance and regulatory compliance.

Our Responsible AI Toolkit is a suite of customizable frameworks, tools and processes designed to help you harness the power of AI in an ethical and responsible manner - from strategy through execution. With the Responsible AI toolkit, we’ll tailor our solutions to address your organisation’s unique business requirements and AI maturity.

Responsible AI at PwC (1)

Our Responsible AI Toolkit addresses the following dimensions of Responsible AI:

  • Governance
  • Compliance
  • Risk Management
  • Privacy
  • Security
  • Robustness
  • Safety

Who is accountable for your AI system? The foundation for Responsible AI is an end-to-end enterprise governance framework, focusing on the risks and controls along your organization’s AI journey—from top to bottom. PwC developed robust governance models that can be tailored to your organisation. The framework enables oversight with clear roles and responsibilities, articulated requirements across three lines of defense, and mechanisms for traceability and ongoing assessment.

Responsible AI at PwC (2)

Are you anticipating future compliance? Complying with current data protection and privacy regulation and industry standards is just the beginning. We monitor the changing regulatory landscape and identify new compliance needs your organization should be aware of, and support the change management needed to create tailored organizational policies and prepare for future compliance.

How are you identifying risk? You need expansive risk detection and mitigation practices to assess development and deployment at every step of the journey, and address existing and newly identified risks and harms. PwC’s approach to Responsible AI works with existing risk management structures in your organization to identify new capabilities, and supports the development of any necessary operating models.

Is your AI unbiased? Is it fair? An AI system that is exposed to inherent biases of a particular data source is at risk of making decisions that could lead to unfair outcomes for a particular individual or group. Fairness is a social construct with many different and—at times—conflicting definitions. Responsible AI helps your organisation to become more aware of bias and potential bias, and take corrective action to help systems improve in their decision-making.
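
To make the kind of bias check involved concrete, here is a minimal sketch (an illustration, not PwC's own method) that measures demographic parity difference, one common fairness metric, on a toy set of model decisions; the data, function name, and alert threshold are all assumptions.

```python
# Illustrative sketch only (not PwC's method): a demographic parity check,
# one common bias measure. The data, names, and threshold are assumptions.
import numpy as np

def demographic_parity_difference(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Absolute gap in favourable-outcome rates between two groups."""
    rate_a = y_pred[group == 0].mean()
    rate_b = y_pred[group == 1].mean()
    return abs(rate_a - rate_b)

# Toy data: 1 = favourable decision; `group` is a protected attribute (0/1)
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
group = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

gap = demographic_parity_difference(y_pred, group)
print(f"Demographic parity difference: {gap:.2f}")  # 0.20 on this toy data
# A gap above an agreed threshold (e.g. 0.1, an assumption) would trigger
# review and corrective action such as re-weighting or threshold tuning.
```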

How was that decision made? An AI system that human users are unable to understand can lead to a “black box” effect, where organisations are limited in their ability to explain and defend business-critical decisions. Our Responsible AI approach can help. We provide services and processes to help you explain both overall decision-making and also individual choices and predictions, and we can tailor to the perspectives of different stakeholders based on their needs and uses.
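
As a concrete illustration of explaining overall decision-making (not a description of the toolkit itself), the sketch below ranks features by permutation importance using scikit-learn; the synthetic data and model are assumptions, and per-prediction explanations would typically use additional methods such as SHAP or LIME.

```python
# Minimal sketch of global explainability using permutation importance from
# scikit-learn; an illustration, not the toolkit's own tooling. The synthetic
# dataset and model are assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic stand-in for a business dataset (in practice, use held-out data)
X, y = make_classification(n_samples=500, n_features=6, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# How much does shuffling each feature degrade model performance?
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: {importance:.3f}")
```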

How will your AI system protect and manage privacy? With PwC’s Responsible AI toolkit, you can identify strategies to lead with privacy considerations and to respond to consumers’ evolving expectations.


What are the security risks and implications that should be managed? Detecting and mitigating system vulnerabilities is critical to maintaining integrity of algorithms and underlying data while preventing the possibility of malicious attacks. The great possibilities of AI come with the need for great protection and risk management. PwC’s approach to Responsible AI includes essential cybersecurity assessments to help you manage effectively.

Will your AI behave as intended? An AI system that does not demonstrate stability and consistently meet performance requirements is at increased risk of producing errors and making the wrong decisions. To help make your systems more robust, Responsible AI includes services to help you identify potential weaknesses in models and monitor long-term performance. PwC has developed specific technical tools to support this area.
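
One widely used way to monitor long-term performance is a data-drift check such as the Population Stability Index; the sketch below illustrates that general technique (it is not PwC's tooling), with simulated data and an assumed alert threshold.

```python
# Hedged sketch of long-term monitoring via the Population Stability Index
# (PSI), a common drift check; not PwC's proprietary tooling. The data and
# the alert threshold in the comment are assumptions.
import numpy as np

def population_stability_index(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Compare a feature's production distribution against its training baseline."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Floor the proportions to avoid division by zero / log(0)
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
training = rng.normal(0.0, 1.0, 10_000)    # baseline feature values at build time
production = rng.normal(0.3, 1.2, 10_000)  # drifted values observed in production

psi = population_stability_index(training, production)
print(f"PSI = {psi:.3f}")  # a PSI above ~0.2 is a commonly used alert level
```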

Responsible AI at PwC (3)

Is your AI safe for society? AI system safety should be evaluated in terms of potential impact to users, ability to generate reliable and trustworthy outputs, and ability to prevent unintended or harmful actions. PwC’s Responsible AI services enable you to assess safety and societal impact to support this dimension.

Is your data use and AI ethical? Our Ethical data and AI Framework provides guidance and a practical approach to help your organisation with the development and governance of AI and data solutions that are ethical and moral.

As part of this dimension, our framework includes a unique approach to contextualising and applying ethical principles, while identifying and addressing key ethical risks.

Are you positioning your AI toward future compliance? As the regulatory landscape continues to evolve, maintaining compliance and responding to regulatory change will be critical. Leveraging PwC’s approach to Responsible AI can help you identify and evaluate relevant policy, industry standards and regulations that may impact your AI solutions, and operationalize regulatory compliance while factoring in localized differences.

Responsible AI at PwC (4)

Trust is core to our purpose at PwC

Our human-led, tech-powered team can help you build AI responsibly and bring big ideas to life across all stages of AI adoption.

  • AI, Data & Data Use Governance
  • AI and Data Ethics
  • AI Expertise: Machine Learning, Model Operations & Data Science
  • Privacy
  • Cybersecurity
  • Risk Management
  • Change Management
  • Compliance and legal
  • Sustainability and Climate Change
  • Diversity and Inclusion

We support all phases of the RAI journey

The foundation for responsible AI is an end-to-end enterprise governance framework, focusing on the risks and controls along your organization’s AI journey—from top to bottom.

  • Assess: Technical and qualitative assessments of models and processes to identify gaps
  • Build: Development and design of new models and processes, given a specific need and opportunity
  • Validate + Scale: Technical model validation and deployment services; governance and ethics change management
  • Evaluate + Monitor: Readiness for AI, including confirmation of the controls framework design and internal audit training

Responsible AI - Maturing from Theory to Practice: Download (PDF, 3.59 MB)

Responsible AI placement: Download the brochure (PDF, 394.42 KB)

Innovate responsibly

Whether you're just getting started or are getting ready to scale, Responsible AI can help. Drawing on our proven capability in AI innovation and deep global business expertise, we'll assess your end-to-end needs, and design a solution to help you address your unique risks and challenges.


Recognition & Awards

  • 2020 World Changing Idea, Responsible AI Toolkit, FastCompany
  • Ranked Leader for AI Consulting, Forrester
  • Outstanding achievement in enterprise adoption of AI and AI Ethics, CogX
  • Ranked Leader for Data & Analytics Services, Gartner
  • 100 Brilliant Women in AI Ethics


Sean Joyce

Partner, Global Cybersecurity and Privacy Leader, PwC United States


Sudipta Ghosh

Data & Analytics Leader, PwC India

Tel: +91 9987434327


Chris Oxborough

Global Assurance Artificial Intelligence Leader, PwC United Kingdom

Tel: +44 (0) 78 1851 0537


Hendrik Reese

Partner, PwC Germany

Tel: +49 1517 0423-201


FAQs

What is Responsible AI at PwC?

Our Responsible AI Toolkit is a suite of customizable frameworks, tools and processes designed to help you harness the power of AI in an ethical and responsible manner - from strategy through execution.

What are the 5 pillars of responsible AI?

These are explainability, bias and fairness, reproducibility, sustainability, and transparency. Explainability involves developing tools that help ML systems gain context and improve the explanations offered in outcomes.

What are the 6 principles of responsible AI?

Microsoft outlines six key principles for responsible AI: fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability.

What is PwC doing with AI?

At PwC we are working with clients and technology alliance partners to unlock value with generative AI, from delivering efficiency and productivity gains to powering business model transformations across multiple industries.

What does responsible AI do?

Responsible Artificial Intelligence (Responsible AI) is an approach to developing, assessing, and deploying AI systems in a safe, trustworthy, and ethical way.

What is the roadmap for responsible AI?

A responsible AI framework operationalises guidelines, oversight, and technical safeguards to optimise outcomes. Enterprises should adopt a three-pronged approach that includes monitoring risks, enhancing defences and development, and governing control and risk management.

Is responsible AI the same as ethical AI?

And how is it different from responsible AI? Many people use the two terms interchangeably, but there's a big difference between the two. Ethical AI is about doing the right thing and has to do with values and social economics. Responsible AI is more tactical.

How do you practice responsible AI?

Collect and handle data responsibly
  1. Identify whether your ML model can be trained without the use of sensitive data, e.g., by utilizing non-sensitive data collection or an existing public data source.
  2. If it is essential to process sensitive training data, strive to minimize the use of such data (a minimal sketch follows below).
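
As a minimal sketch of the second point (illustrative only, not part of the quoted guidance), sensitive columns can be stripped from a dataset before training; the column names below are hypothetical.

```python
# Purely illustrative sketch of point 2 above (not guidance from the quoted
# source): drop columns flagged as sensitive before model training.
# All column names here are hypothetical.
import pandas as pd

SENSITIVE_COLUMNS = {"name", "email", "date_of_birth", "gender", "ethnicity"}

def minimise(df: pd.DataFrame) -> pd.DataFrame:
    """Return a copy of the dataset with flagged sensitive columns removed."""
    return df.drop(columns=[c for c in df.columns if c in SENSITIVE_COLUMNS])

raw = pd.DataFrame({
    "name": ["A. Example"],
    "email": ["a@example.com"],
    "tenure_months": [18],
    "monthly_spend": [42.0],
})
training_data = minimise(raw)
print(list(training_data.columns))  # ['tenure_months', 'monthly_spend']
```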

Which is one of four key principles of responsible AI?

Focusing on those four foundations of responsible AI — empathy, fairness, transparency, and accountability — will not only benefit customers, it will differentiate any organization from its competitors and help generate a significant financial return.

What are the three golden rules of AI?

A robot may not injure a human being or, through inaction, allow a human being to come to harm. A robot must obey orders given it by human beings except where such orders would conflict with the First Law. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

What is the responsible AI lifecycle?

Our responsible AI principles

Our AI principles are designed to foster trust and respect for people and the environment throughout the entire AI lifecycle—from acquisition, design and development to use, monitoring and decommissioning of AI systems.

What are the ethical considerations of responsible AI?

Transparency: AI systems should be designed in a way that allows users to understand how the algorithms work. Non-maleficence: AI systems should avoid harming individuals, society or the environment. Accountability: Developers, organizations and policymakers must ensure AI is developed and used responsibly.

Why not to work at PwC?

People at PwC complain that the pay is not necessarily as good as it could be, particularly given the long hours, and that it doesn't compare particularly well to rival Big Four firms.

Does PwC use ChatGPT?

Building upon this commitment, our US and UK firms have signed an agreement with OpenAI making PwC OpenAI's first reseller for ChatGPT Enterprise and the largest user of the product.

Is PwC having layoffs?

Layoffs are never a secret.

PwC conducted layoffs at the end of 2023. The company's employees know this.

What is the difference between responsible AI and generative AI?

Transparency: Making AI systems' decision-making processes understandable to users. Accountability: Ensuring mechanisms are in place to hold AI systems and their creators responsible for outcomes. While Generative AI focuses on the 'creation' aspect of AI, Responsible AI is concerned with the 'consequence' aspect.

What is one of four key principles of responsible artificial intelligence in TQ?

The four key principles of Responsible AI – Fairness, Transparency, Accountability, and Security & Privacy – serve as a roadmap for developing and deploying AI in a way that benefits everyone.

What is responsible AI for business?

Responsible AI encourages collaboration between stakeholders to implement strategies and policies that prioritize and promote effective risk management, responsible practices and AI systems aligned with the organization's values and objectives.
