AI Regulations to Drive Responsible AI Initiatives

The rapid expansion and deployment of artificial intelligence (AI) throughout organizations has resulted in a broad global push to regulate AI.
By 2026, Gartner predicts 50% of governments worldwide will enforce use of responsible AI through regulations, policies and the need for data privacy.
We spoke with a Gartner Director Analyst to discuss the impact of AI regulations on CIOs’ future plans and what they can do to implement responsible AI in their organizations.
Q: Gartner predicts that by 2026, 50% of governments worldwide will enforce use of responsible AI through regulations, policies and the need for data privacy. What are the implications of these types of regulations?
A: Responsible AI regulations will erect geographic borders in the digital world and create a web of competing regulations from different governments to protect nations and their populations from unethical or otherwise undesirable applications of AI and GenAI. This will constrain IT leaders’ ability to maximize the use of foreign AI and GenAI products throughout their organizations. These regulations will require AI developers to focus more on AI ethics, transparency and privacy through responsible AI usage across organizations.
“Responsible AI” is an umbrella term for the many aspects of making appropriate business and ethical choices when an organization adopts AI. Examples include being transparent about the use of AI, mitigating bias in algorithms, securing models against subversion and abuse, protecting the privacy of customer information and ensuring regulatory compliance. Responsible AI operationalizes the organizational responsibilities and practices that ensure positive and accountable AI development and utilization.
The development and use of responsible AI will be crucial not only for developers of AI products and services, but also for organizations that use AI tools. Failure to comply will expose organizations to ethical scrutiny from the public, leading to significant financial, reputational and legal risks.
Q: When will responsible AI become mainstream?
A: Responsible AI is just three years from reaching early majority adoption due to accelerated AI adoption, particularly GenAI, and growing attention to associated regulatory implications.
Responsible AI will impact virtually all applications of AI across industries. In the near term, more regulated industries, such as financial services, healthcare, technology and government, will remain the early adopters of responsible AI. However, responsible AI will also play an important role in less-regulated industries by helping build consumer trust and foster adoption, as well as mitigate financial and legal risks.
Q: What actions should organizations take to future-proof their GenAI projects with both current and any possible future government regulations in mind?
A: There are several actions organizations can consider when it comes to future-proofing their GenAI projects:
Monitor and incorporate the evolving compliance requirements of responsible AI from different governments by developing a framework that maps the organization’s GenAI portfolio of products and services to each nation’s AI regulatory requirements (a minimal sketch of such a mapping follows this list).
Understand, implement and utilize responsible AI practices contextualized to the organization. This can be done by determining a curriculum for responsible AI and then establishing a structured approach to educate and create visibility across the organization, engage stakeholders and identify the appropriate use cases and solutions for implementation.
Operationalize AI trust, risk and security management (AI TRiSM) in user-centric solutions by integrating responsible AI to accelerate adoption and improve user experience.
Ensure service provider accountability for responsible AI governance by enforcing contractual obligations and mitigating the impact of risks arising from unethical or noncompliant behaviors, or from outcomes driven by uncontrolled and unexplainable biases in AI solutions.
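The compliance-mapping step mentioned above lends itself to a simple data structure. The sketch below is only an illustration of the idea, not a prescribed framework: it assumes hypothetical product names, jurisdictions and requirement labels, and shows one way an organization might record which requirements each GenAI product already satisfies and surface the remaining gaps per jurisdiction.

```python
# A minimal, illustrative sketch of a portfolio-to-regulation mapping.
# All product names, jurisdictions and requirement labels are hypothetical.
from dataclasses import dataclass, field


@dataclass
class GenAIProduct:
    name: str
    jurisdictions: list[str]          # where the product is offered or hosted
    controls_in_place: set[str] = field(default_factory=set)


# Hypothetical catalog of regulatory requirements, keyed by jurisdiction.
REQUIREMENTS: dict[str, set[str]] = {
    "EU": {"transparency_notice", "bias_assessment", "privacy_impact_assessment"},
    "US": {"transparency_notice", "model_risk_review"},
}


def compliance_gaps(product: GenAIProduct) -> dict[str, set[str]]:
    """Return, per jurisdiction, the requirements the product does not yet cover."""
    gaps: dict[str, set[str]] = {}
    for region in product.jurisdictions:
        missing = REQUIREMENTS.get(region, set()) - product.controls_in_place
        if missing:
            gaps[region] = missing
    return gaps


if __name__ == "__main__":
    assistant = GenAIProduct(
        name="customer-support-assistant",
        jurisdictions=["EU", "US"],
        controls_in_place={"transparency_notice"},
    )
    for region, missing in compliance_gaps(assistant).items():
        print(f"{region}: missing {sorted(missing)}")
```

Keeping the requirement catalog as data rather than hard-coded logic makes it straightforward to update the mapping as individual governments publish or revise their AI rules.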
Mar 4, 2024