The potential for AI-based technologies to fundamentally alter how we live and work is vast, but harnessing and preserving the value they create requires attention to, and active management of, the attendant risks.
When you infuse AI into business processes, productivity tools, and critical decisions to drive incremental value, you need to be sure you understand what the AI is doing and why. Is it making accurate, bias-aware decisions? Is it violating anyone’s privacy? Can you govern and monitor this powerful technology in a way that doesn’t slow growth or innovation? Globally, organisations recognise the need for Responsible AI but are at different stages of the journey.
Responsible AI (RAI) is an approach to managing risks associated with an AI-based solution. Now is the time to evaluate and augment existing practices or create new ones to help you responsibly harness AI and be prepared for coming regulation. Investing in Responsible AI at the outset can give you an edge that competitors may not be able to overtake.
Risks can originate from many different sources as AI solutions are being implemented. A standardized AI risk taxonomy and toolkit can help assess potential risks and guide necessary mitigation strategies, creating the foundation for an effective and efficient AI governance framework.
Build trust with stakeholders and society
Your stakeholders, including board members, customers, and regulators, will have many questions about your organisation's use of AI and data, from how it’s developed and deployed to how it’s monitored and governed and whether it’s providing the value they expect. You not only need to be ready to provide the answers, but you must also demonstrate ongoing legal and regulatory compliance.
Our Responsible AI Toolkit is a suite of customizable frameworks, tools and processes designed to help you harness the power of AI in a manner that engenders trust in the solution from strategy through execution. With the Responsible AI Toolkit, we’ll tailor our solutions to address your organisation’s unique business requirements and AI maturity.
Potential AI Risks
AI risks vary with time, stakeholders, sector, use case, and technology. Below are six major risk categories for applications of AI technology.
- Performance
- Security
- Control
- Economic
- Societal
- Enterprise
Performance
AI algorithms that ingest real-world data and preferences as inputs risk learning and imitating the biases and prejudices embedded in that data.
Performance risks include:
- Risk of errors
- Risk of bias and discrimination
- Risk of opaqueness and lack of interpretability
- Risk of performance instability
Security
For as long as automated systems have existed, humans have tried to circumvent them. This is no different with AI.
Security risks include:
- Adversarial attacks
- Cyber intrusion and privacy risks
- Open source software risks
Control
Similar to any other technology, AI should have organisation-wide oversight with clearly identified risks and controls.
Control risks include:
- Lack of human agency
- Detecting rogue AI and unintended consequences
- Lack of clear accountability
Economic
The widespread adoption of automation across all areas of the economy may impact jobs and shift demand to different skills.
Economic risks include:
- Risk of job displacement
- Enhancing inequality
- Risk of power concentration within one or a few companies
Societal
The widespread adoption of complex and autonomous AI systems could result in “echo chambers” developing between machines and could have broader impacts on human-to-human interaction.
Societal risks include:
- Risk of misinformation and manipulation
- Risk of an intelligence divide
- Risk of surveillance and warfare
Enterprise
AI solutions are designed with specific objectives in mind, which may compete with the overarching organisational and societal values within which they operate. Communities have long agreed, if only informally, on a core set of values for society to operate by. There is a movement to identify the sets of values, and thereby the ethics, that should drive AI systems, but disagreement remains about what those ethics mean in practice and how they should be governed. The above risk categories are therefore inherently ethical risks as well.
Enterprise risks include:
- Risk to reputation
- Risk to financial performance
- Legal and compliance risks
- Risk of discrimination
- Risk of values misalignment
PwC’s Responsible AI Toolkit
Our Responsible AI Toolkit addresses the core dimensions of Responsible AI:
- Governance
- Compliance
- Risk Management
- Privacy
- Security
- Robustness
- Safety
Who is accountable for your AI system? The foundation for Responsible AI is an end-to-end enterprise governance framework, focusing on the risks and controls along your organisation’s AI journey—from top to bottom. PwC has developed robust governance models that can be tailored to your organisation. The framework enables oversight with clear roles and responsibilities, articulated requirements across three lines of defense, and mechanisms for traceability and ongoing assessment.
Are you anticipating future compliance? Complying with current data protection and privacy regulation and industry standards is just the beginning. We monitor the changing regulatory landscape and identify new compliance needs your organization should be aware of, and support the change management needed to create tailored organizational policies and prepare for future compliance.
How are you identifying risk? You need expansive risk detection and mitigation practices to assess development and deployment at every step of the journey, and address existing and newly identified risks and harms. PwC’s approach to Responsible AI works with existing risk management structures in your organization to identify new capabilities, and supports the development of any necessary operating models.
- Model risk management of AI and machine learning systems
- Managing the risks of machine learning and artificial intelligence models in the financial services industry
Is your AI unbiased? Is it fair? An AI system that is exposed to inherent biases of a particular data source is at risk of making decisions that could lead to unfair outcomes for a particular individual or group. Fairness is a social construct with many different and—at times—conflicting definitions. Responsible AI helps your organisation to become more aware of bias and potential bias, and take corrective action to help systems improve in their decision-making.
- Bias Analyzer: Track and mitigate bias in AI models
- What is fair when it comes to AI bias
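One simple fairness check of the kind described above is the demographic parity gap: the difference in positive-outcome rates across groups. The sketch below is illustrative only; the toy data, group labels, and 0.10 tolerance are assumptions for this example, not part of PwC's Bias Analyzer.

```python
# Minimal sketch of one bias metric: demographic parity difference.
# All data and the 0.10 tolerance below are invented for illustration.

def demographic_parity_difference(predictions, groups):
    """Gap between the highest and lowest positive-outcome rate per group."""
    rates = {}
    for g in set(groups):
        members = [p for p, gr in zip(predictions, groups) if gr == g]
        rates[g] = sum(members) / len(members)
    return max(rates.values()) - min(rates.values())

# Toy loan-approval predictions (1 = approved) for two demographic groups.
preds  = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = demographic_parity_difference(preds, groups)
print(f"demographic parity gap: {gap:.2f}")
if gap > 0.10:  # illustrative tolerance, not a universal standard
    print("warning: approval rates differ materially across groups")
```

In practice, which fairness metric applies (and what gap is acceptable) depends on context, since different fairness definitions can conflict.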
How was that decision made? An AI system that human users are unable to understand can lead to a “black box” effect, where organisations are limited in their ability to explain and defend business-critical decisions. Our Responsible AI approach can help. We provide services and processes to help you explain both overall decision-making and also individual choices and predictions, and we can tailor to the perspectives of different stakeholders based on their needs and uses.
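The kind of per-decision explanation described above can be illustrated with a deliberately transparent model: for a linear scorer, each feature's contribution to one prediction is simply its weight times its value. The feature names and weights below are invented for illustration; genuinely black-box models require dedicated explainability tooling rather than this shortcut.

```python
# Minimal sketch: explaining one prediction of an assumed linear scoring
# model by listing each feature's signed contribution (weight * value).
# Weights and feature names are hypothetical, for illustration only.

WEIGHTS = {"income": 0.4, "debt": -0.6, "years_employed": 0.2}

def explain(features):
    """Return the overall score and each feature's contribution to it."""
    contributions = {name: WEIGHTS[name] * value
                     for name, value in features.items()}
    return sum(contributions.values()), contributions

score, parts = explain({"income": 0.8, "debt": 0.5, "years_employed": 1.0})
print(f"score = {score:.2f}")
# Report contributions largest-magnitude first, as a stakeholder would read them.
for name, c in sorted(parts.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {name:>15}: {c:+.2f}")
```

The same idea—attributing a single prediction to individual inputs—is what dedicated explainability methods approximate for models too complex to decompose directly.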
How will your AI system protect and manage privacy? With PwC’s Responsible AI toolkit, you can identify strategies to lead with privacy considerations and to respond to consumers’ evolving expectations.
2021 Global Digital Trust Insights
What are the security risks and implications that should be managed? Detecting and mitigating system vulnerabilities is critical to maintaining integrity of algorithms and underlying data while preventing the possibility of malicious attacks. The great possibilities of AI come with the need for great protection and risk management. PwC’s approach to Responsible AI includes essential cybersecurity assessments to help you manage effectively.
Will your AI behave as intended? An AI system that does not demonstrate stability and does not consistently meet performance requirements is at increased risk of producing errors and making the wrong decisions. To help make your systems more robust, Responsible AI includes services to help you identify potential weaknesses in models and monitor long-term performance. PwC has developed specific technical tools to support this area.
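Long-term performance monitoring of the kind described above can be sketched as comparing live metrics against a validation-time baseline and alerting when degradation exceeds a tolerance. The accuracy figures and the 0.05 tolerance below are illustrative assumptions, not a recommended threshold.

```python
# Minimal sketch of a long-term model performance monitor: flag the model
# when live accuracy drops below its baseline by more than a tolerance.
# All numbers below are invented for illustration.

def degradation(baseline_scores, live_scores, tolerance=0.05):
    """Return (drop, alert): baseline mean minus live mean, and whether
    that drop exceeds the tolerance."""
    base = sum(baseline_scores) / len(baseline_scores)
    live = sum(live_scores) / len(live_scores)
    drop = base - live
    return drop, drop > tolerance

baseline = [0.91, 0.90, 0.92, 0.89]   # accuracy measured at validation time
live     = [0.84, 0.82, 0.85, 0.83]   # accuracy observed in production

drop, alert = degradation(baseline, live)
print(f"accuracy drop: {drop:.3f}, alert: {alert}")
```

A production monitor would also track input drift and fairness metrics over time, but the alert-on-degradation pattern is the same.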
Is your AI safe for society? AI system safety should be evaluated in terms of potential impact to users, ability to generate reliable and trustworthy outputs, and ability to prevent unintended or harmful actions. PwC’s Responsible AI services enable you to assess safety and societal impact to support this dimension.
Is your data use and AI ethical? Our Ethical data and AI Framework provides guidance and a practical approach to help your organisation with the development and governance of AI and data solutions that are ethical and moral.
As part of this dimension, our framework includes a unique approach to contextualising and applying ethical principles, while identifying and addressing key ethical risks.
Are you positioning your AI toward future compliance? As the regulatory landscape continues to evolve, maintaining compliance and responding to regulatory change will be critical. Leveraging PwC’s approach to Responsible AI can help you identify and evaluate relevant policy, industry standards and regulations that may impact your AI solutions, and operationalize regulatory compliance while factoring in localized differences.
Trust is core to our purpose at PwC
Our human-led, tech-powered team can help you build AI responsibly and bring big ideas to life across all stages of AI adoption.
- AI, Data & Data Use Governance
- AI and Data Ethics
- AI Expertise: Machine Learning, Model Operations & Data Science
- Privacy
- Cybersecurity
- Risk Management
- Change Management
- Compliance and legal
- Sustainability and Climate Change
- Diversity and Inclusion
We support all phases of the RAI journey
- Assess: Technical and qualitative assessments of models and processes to identify gaps
- Build: Development and design of new models and processes, given a specific need and opportunity
- Validate + Scale: Technical model validation and deployment services; governance and ethics change management
- Evaluate + Monitor: AI readiness reviews, including confirming controls framework design and internal audit training
Responsible AI - Maturing from Theory to Practice
Download (PDF, 3.59 MB)
Responsible AI placement
Download the brochure (PDF, 394.42 KB)
Innovate responsibly
Whether you're just getting started or are getting ready to scale, Responsible AI can help. Drawing on our proven capability in AI innovation and deep global business expertise, we'll assess your end-to-end needs, and design a solution to help you address your unique risks and challenges.
Recognition & Awards
- 2020 World Changing Idea, Responsible AI Toolkit, FastCompany
- Ranked Leader for AI Consulting, Forrester
- Outstanding Achievement in Enterprise Adoption of AI and AI Ethics, CogX
- Ranked Leader for Data & Analytics Services, Gartner
- 100 Brilliant Women in AI Ethics
Chris Oxborough
Global Assurance Artificial Intelligence Leader, PwC United Kingdom
Tel: +44 (0) 78 1851 0537