

Speakers

  • Richard Benjamins

    Chief Responsible AI Officer and Head of AI for Society & Environment at Telefónica; Co-founder and Vice President at OdiseIA

  • Eske Montoya Martinez van Egerschot

    Chief AI Governance and Ethics at DigiDiplomacy; Associate Partner at Meines Holla & Partners


What Leaders Need to Know About Implementing AI Responsibly

March 2024
Webinar Preview

Summary

Responsible AI is about balancing AI's potential to enhance efficiency against the risks it poses. Eske Montoya Martinez van Egerschot and Richard Benjamins discussed privacy, discrimination, and employment as the primary concerns in AI use. The webinar spotlighted cases of AI misuse, such as false facial recognition matches leading to wrongful arrests and discriminatory outcomes in the Netherlands' social security system. The conversation also covered AI accuracy, regulations such as GDPR and the CCPA, and the need for businesses to conduct thorough risk and impact assessments. A key point was the necessity of involving a broad range of stakeholders in AI governance to ensure ethical AI use across all business departments and to encourage responsible innovation.

Key Takeaways:

  • Privacy and discrimination are major concerns in AI deployment.
  • Responsible AI involves understanding and mitigating risks and impacts.
  • AI regulations like GDPR and the European AI Act are shaping global standards.
  • Businesses should adopt AI principles and involve diverse stakeholders in AI governance.
  • AI accuracy and its implications vary significantly across different use cases.

Deep Dives

Privacy and Discrimination Concerns

Eske Montoya Martinez van Egerschot highlighted privacy as the primary concern when deploying AI, alongside issues of bias and discrimination, especially in employment. Richard Benjamins brought to light the risks of AI misuse, such as false facial recognition matches leading to wrongful arrests. He cited the OECD database documenting over 8,000 AI incidents, spotlighting the potential for AI to negatively influence democratic processes and societal outcomes.

Eske added that AI's integration in systems like the Netherlands' social security exacerbates existing biases, demanding more regulatory attention and greater corporate responsibility in AI governance.

AI Regulations and Global Standards

AI regulations, such as GDPR and the upcoming European AI Act, are key to establishing global standards for AI deployment. Eske discussed the complexities of adhering to these regulations across different jurisdictions, highlighting the need for continuous education and sector-specific compliance strategies. Richard shared how Telefónica manages AI regulation globally by adopting a thorough internal AI policy aligned with the strictest jurisdictions, ensuring compliance across markets. He emphasized the role of audits and the necessity of a global approach to AI governance to avoid stifling innovation.

Accuracy and Impact of AI Predictions

Accuracy in AI predictions is vital, but its importance varies across applications. Richard explained that while errors in movie recommendations are negligible, inaccuracies in medical diagnoses can have severe consequences. He advocated for assessing the severity, scale, and probability of the risks each AI system poses, which involves understanding potential human rights impacts and employing frameworks like those from NIST and the OECD. Richard stressed that addressing these issues by design is cost-effective and encourages a culture of responsible innovation within businesses.
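To make the severity, scale, and probability framing concrete, below is a minimal, purely illustrative Python sketch of how a team might score and triage AI use cases. The numeric scales, thresholds, and review tiers are assumptions chosen for illustration; they are not drawn from the NIST or OECD frameworks Richard referenced, nor from Telefónica's internal policy.

    # Toy risk-triage sketch: the scales and thresholds below are hypothetical examples.
    from dataclasses import dataclass

    @dataclass
    class AIUseCase:
        name: str
        severity: int     # 1 = negligible harm .. 5 = severe harm (e.g. wrongful arrest)
        scale: int        # 1 = few people affected .. 5 = population-wide
        probability: int  # 1 = rare .. 5 = frequent

    def risk_score(uc: AIUseCase) -> int:
        """Combine the three dimensions into a single score (1-125)."""
        return uc.severity * uc.scale * uc.probability

    def triage(uc: AIUseCase) -> str:
        """Map the score onto a review tier; the cut-offs are arbitrary."""
        score = risk_score(uc)
        if score >= 60:
            return "full impact assessment before deployment"
        if score >= 20:
            return "documented risk assessment and periodic audit"
        return "standard review"

    movie_recs = AIUseCase("movie recommendations", severity=1, scale=4, probability=3)
    diagnosis = AIUseCase("medical diagnosis support", severity=5, scale=3, probability=2)
    for uc in (movie_recs, diagnosis):
        print(f"{uc.name}: score={risk_score(uc)} -> {triage(uc)}")

In this toy example, the recommendation system lands in the lowest review tier while the diagnostic system triggers a documented assessment, mirroring Richard's point that the same error rate carries very different consequences across use cases.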

Organizational Involvement and Ethical AI Use

Implementing responsible AI requires a combined top-down and bottom-up approach involving all organizational levels. Eske argued that leadership should set clear AI usage guidelines and involve diverse stakeholders across business departments, an approach that minimizes reputational risks and maximizes AI's potential benefits. Richard shared Telefónica's strategy of defining AI principles and training employees on AI ethics. He emphasized the role of AI champions in business departments to facilitate ethical AI discussions and ensure compliance with established guidelines, creating a culture of responsible innovation.


Related

webinar

Building Trust in AI: Scaling Responsible AI Within Your Organization

Explore actionable strategies for embedding responsible AI principles across your organization's AI initiatives.

webinar

Leading with AI: Leadership Insights on Driving Successful AI Transformation

C-level leaders from industry and government will explore how they're harnessing AI to propel their organizations forward.

webinar

Is Artificial Super Intelligence Dangerous?

Richie Cotton, a Senior Data Evangelist at DataCamp, will demystify the current state of AI systems, the different possible levels of AI, and what we know about the risks of powerful AI.

webinar

Data Literacy for Responsible AI

The role of data literacy as the basis for scalable, trustworthy AI governance.

webinar

Designing An Effective AI Literacy Strategy: A How-to Guide for Leaders

Alex Jaimes, CAIO at Dataminr, and Doug Laney, Innovation Fellow at West Monroe, teach you how to develop a strategy to enable all your employees to become AI literate.

webinar

Getting ROI from AI

In this webinar, Cal shares lessons learned from real-world examples about how to safely implement AI in your organization.
