Understanding AI governance in 2024: The stakeholder landscape

  • July 17, 2024

Artificial intelligence (AI) and generative AI (GenAI) are rapidly evolving. But AI implementation carries risks, both known and unknown. GenAI’s capability to create images, text and other content, for example, raises the risks of bias, plagiarism, false information and copyright infringement. Deepfakes, which convincingly imitate real people and authentic content, can spread misinformation and damage a person’s or company’s reputation.

From algorithmic bias to existential hazards, AI governance is necessary to mitigate risks. It also promotes trust, ensures accountability, fosters innovation and protects human rights. These rising AI risks underline the importance of strong governance structures that value openness, fairness, accountability and human rights. They also highlight the significance of proactive, collaborative action to address the ethical, legal and societal ramifications of AI technologies.

What is AI governance?

AI governance builds trust in the data and decisions that AI systems produce and keeps those systems compliant. It involves policies, regulations, ethical frameworks, strategies, operating models, and data and technology infrastructure. Together, these elements guide the appropriate development, implementation and use of AI systems.

The evolving role of stakeholders in AI governance

Governments and regulatory bodies, industry stakeholders, researchers and academics, and civil society and international organizations all have varying interests and obligations when it comes to AI. This multilayered set of internal and external stakeholders adds to the complexity of AI governance, and stakeholders worldwide are struggling to keep pace. However, efforts such as identifying and resolving ethical issues and building legal frameworks are guiding the development, deployment and application of responsible and constructive AI.

Navigating the changing landscape of AI regulations

Several nations are developing or updating AI-specific regulations. These policies cover various issues, including data privacy, algorithmic accountability, safety standards for autonomous systems and liability frameworks for AI-related mishaps. In the U.S., the NIST AI Risk Management Framework (NIST AI RMF) and the Federal Reserve’s SR 11-7 guidance on model risk management serve as the leading reference frameworks while policymakers continue to develop a comprehensive AI regulatory environment.
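To make the framework concrete: the NIST AI RMF organizes risk-management activities into four core functions (Govern, Map, Measure, Manage). The sketch below shows one hypothetical way a team might structure an AI risk register around those functions; the field names and sample entries are illustrative assumptions, not anything prescribed by the framework itself.

```python
from dataclasses import dataclass
from enum import Enum

# The four core functions defined in NIST AI RMF 1.0.
class RMFFunction(Enum):
    GOVERN = "Govern"
    MAP = "Map"
    MEASURE = "Measure"
    MANAGE = "Manage"

@dataclass
class AIRiskEntry:
    """One row in a hypothetical AI risk register (illustrative only)."""
    risk_id: str
    description: str
    rmf_function: RMFFunction  # which RMF function the control activity falls under
    owner: str                 # accountable stakeholder
    mitigation: str

register = [
    AIRiskEntry("R-001", "Training data may encode demographic bias",
                RMFFunction.MEASURE, "Model Risk Team",
                "Run fairness metrics before each release"),
    AIRiskEntry("R-002", "No documented approval path for new GenAI use cases",
                RMFFunction.GOVERN, "AI Governance Board",
                "Adopt a formal intake and review process"),
]

for entry in register:
    print(f"{entry.risk_id} [{entry.rmf_function.value}] {entry.description} -> {entry.owner}")
```

Mapping each identified risk to an RMF function in this way makes ownership and accountability explicit across stakeholders, which is the point of the framework.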

Efforts are also underway to promote international cooperation and harmonization of AI rules. The Organization for Economic Cooperation and Development (OECD) and the European Union, through the EU AI Act, are spearheading efforts to build common principles and regulations for AI governance across borders. Regulatory bodies are establishing specialized agencies or task groups to monitor compliance with these AI rules and implement accountability measures. Once in place, these agencies may conduct audits and investigations and take enforcement action against violations of AI-related laws and regulations.

Building ethical frameworks and setting technical standards for AI

Academic institutions and research groups are building ethical frameworks and best practices through interdisciplinary research collaborations and AI ethics workshops. Both groups will play a pivotal role in advising regulators as they develop AI governance standards and regulations. Leading tech companies and industry organizations have also created ethical standards and principles for AI development and implementation. For example, the Partnership on AI and the Institute of Electrical and Electronics Engineers (IEEE) have issued recommendations that emphasize transparency, fairness, accountability and privacy in AI systems.

Organizations like IEEE and the International Organization for Standardization (ISO) are building AI governance standards and protocols to address numerous AI elements. These include interoperability, dependability, safety, security and ethical considerations. Open-source communities are creating tools and frameworks to encourage openness, accountability and fairness in AI algorithms. Projects such as AI Fairness 360 offer tools to assess and mitigate bias in AI models.
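As an illustration of what such tooling looks like in practice, here is a minimal sketch using the open-source AI Fairness 360 (aif360) Python toolkit to compute two common group-fairness metrics. The toy hiring data and the choice of "sex" as the protected attribute are assumptions made for this example.

```python
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Toy hiring outcomes (illustrative only): label=1 means "hired";
# sex=1 is treated as the privileged group for this example.
df = pd.DataFrame({
    "sex":   [1, 1, 1, 1, 0, 0, 0, 0],
    "label": [1, 1, 1, 0, 1, 0, 0, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["label"],
    protected_attribute_names=["sex"],
    favorable_label=1,
    unfavorable_label=0,
)

metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{"sex": 1}],
    unprivileged_groups=[{"sex": 0}],
)

# Disparate impact: ratio of favorable-outcome rates between groups (1.0 is parity).
print("Disparate impact:", metric.disparate_impact())
# Statistical parity difference: difference in favorable-outcome rates (0.0 is parity).
print("Statistical parity difference:", metric.statistical_parity_difference())
```

On this toy data, the disparate impact comes out to roughly 0.33 (well below 1.0), flagging that the unprivileged group receives favorable outcomes far less often and prompting mitigation steps such as reweighing the training data before modeling.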

Engaging society in AI governance

Governments, industry stakeholders and civil society organizations regularly engage with the public to raise awareness about AI governance challenges and gather feedback on regulatory and policy decisions. Public consultations, citizen panels, and industry and global forums enable participatory decision-making while taking varied opinions into consideration. Comprehensive impact evaluations will help all stakeholders better understand the potential societal repercussions of AI technologies. These studies explore the economic, social, cultural and ethical implications of AI deployment and advise policymakers.

Non-profit and commercial educational institutions and training providers are creating programs to develop AI governance expertise. These programs cover ethics, policy analysis, regulatory compliance and stakeholder engagement, giving policymakers, industry professionals and researchers the knowledge and skills they need to handle AI governance concerns effectively.

Collaborative strategies for effective AI governance

As AI technologies continue to evolve, the need for an adaptive, inclusive and forward-thinking governance framework is increasingly apparent. Current AI governance efforts show a rising realization of the importance of comprehensive, multi-stakeholder methods to address the ethical, legal and societal ramifications of AI technologies. By encouraging collaboration and communication among all stakeholders, such efforts will promote responsible AI development and deployment while protecting the interests and rights of individuals, communities, businesses and governments.

As the AI governance environment evolves, regulatory and industry conferences, workshops and online forums will serve as platforms for knowledge sharing, collaboration, and the exchange of best practices, case studies and lessons learned. These efforts will encourage cross-sector collaboration and peer learning to accelerate progress in tackling AI governance issues.

And by encouraging collaboration among governments, industry leaders, academia and civil society, we can ensure that AI advances in a manner that is ethical, transparent and beneficial to all segments of society. Together, we can harness the transformative power of AI while safeguarding human rights, promoting a sustainable future and minimizing risks. Let’s commit to continuous dialogue, collective learning and joint action to shape an AI-enhanced world that truly reflects our shared values and aspirations.

Watch our recent on-demand webinar — Are you playing roulette with your TPRM strategy? — to learn more about how leaders are navigating third-party risk in the age of AI.


Karan Dave
Karan is a Director with NTT DATA’s Risk and Compliance Practice. He is a value generation and compliance-focused risk professional with over 15 years of experience partnering with banks globally, providing risk advisory and digital transformation services. He has helped start-ups and mid-size and large banks across Asia, Europe and North America in all Retail and Commercial banking areas, including Deposits, Lending and Leasing, Trade Finance, Treasury, and Branch banking.
