GenAI is becoming a business priority – helping to optimize costs, improve quality, increase speed and achieve sustainability goals.
In a rapidly evolving field, organizations must balance these opportunities with risk mitigation.
Agile organizations are developing enterprise-level governance frameworks for deploying genAI, factoring in a risk-assessment matrix.
Generative AI (genAI) is predicted to add $4.4 trillion annually to the global economy, according to the ITU. As boards and the C-suite race to understand the landscape of opportunities for deploying genAI across their business operations, determined not to miss a “once in a generation” technological evolution, risk oversight becomes a key consideration.
Risks and related mitigation approaches
We pooled insights from the World Economic Forum’s Chief Legal Officers (CLO) community on how leading enterprises are navigating genAI risks and opportunities. As the most senior executives responsible for legal strategy and corporate governance, CLOs are uniquely positioned to advise their boards on AI-related risks through risk-assessment frameworks and to develop enterprise-wide mitigation policies while strategically meeting business priorities.
‘Regulation’
While the current regulatory landscape surrounding AI remains fragmented around the world, it also offers a unique opportunity for organizations to actively shape best practices. Companies that embrace responsible AI compliance programmes early on will be positioned to lead with integrity. Due diligence, compliance programmes and documentation, coupled with testing and learning, are necessary efforts on this AI journey. They should be viewed as catalysts for scale and growth and will be integral to ensuring the trust, quality and safety of these solutions.
‘Building trust’
Realizing generative AI’s remarkable economic and humanitarian potential goes beyond technological innovation: it is, above all, about trust. Within their own enterprises and industries, the private sector can model a trust-first approach to maximizing AI’s benefits. This means taking a “both-and” approach that prioritizes the sustained success of internal and external customers and stakeholders: in anticipating and managing risk, we also require AI to be an enabler of smart growth, increased productivity and wise decision-making, and embrace the upskilling, reskilling and talent-expansion potential ahead of us.
But harnessing the power of AI in a trusted way will also require regulators, businesses, and civil society to work together and abide by guidelines and guardrails:
Prioritizing transparency: People should know when they’re interacting with AI systems and have access to information about how AI-driven decisions are made.
Protecting privacy: Since AI is based on data, promoting and protecting the quality, integrity and proper collection and use of that data is critical to building trust. Industry standards, harmonized regulation and common-sense legislation should support privacy and customers’ control of their own data.
Developing risk-based frameworks that address the entire value chain: AI is not one-size-fits-all, and effective frameworks should protect citizens while encouraging inclusive innovation.
‘Enterprise-level policies’
Companies are developing mitigation strategies to ensure a checks-and-balances approach. Some innovative companies have put in place an AI oversight or ethics committee to develop and monitor their AI governance strategy. Such a committee pools the expertise of senior executives across the enterprise, such as the Chief Technology Officer and Chief Data Officer, along with the Chief Legal Officer and others, and its remit can range from data strategy to people strategy, in line with the company’s core values and priorities.