

Building Trust in AI: Scaling Responsible AI Within Your Organization

July 2024

AI holds great promise for improving productivity, automating tasks, and enhancing customer experiences. Done wrong, however, it can lead to bias and discrimination, privacy and security violations, regulatory compliance problems, and erosion of customer trust. In this session, Haniyeh Mahmoudian, Chief AI Ethicist at DataRobot, Eske Montoya Martinez van Egerschot, Chief AI Governance and Ethics at DigiDiplomacy & Associate Partner at Meines Holla & Partners, and Alexandra Ebert, Chief Trust Officer at MOSTLY AI, will explore actionable strategies for embedding responsible AI principles across your organization's AI initiatives. Emphasizing transparency, fairness, and accountability, they will highlight practical steps for building and maintaining trust in AI systems among stakeholders.

Summary

Discussing the immediate need for ethical AI usage, the webinar examines the complex challenges and responsibilities linked to artificial intelligence. Experts Eske Montoya and Alexandra Ebert explore the potential dangers of AI, ranging from privacy concerns and fairness issues to regulatory hurdles and ethical problems. They highlight the risks of accelerating AI development under pressure, which can result in poorly designed products and societal harm. The conversation also delves into the role of synthetic data in reducing privacy concerns and emphasizes the importance of AI literacy at all levels of an organization. The experts stress the need for clear governance structures and multidisciplinary approaches to AI ethics, advocating for collaboration between data scientists, management, and policymakers.

Key Takeaways:

  • The rapid development of AI technologies presents considerable risks if not handled ethically.
  • Fairness in AI is intricate, with multiple conflicting definitions that require careful thought and collaboration.
  • AI governance is vital, with emerging regulations like the EU AI Act setting the pace for compliance.
  • Synthetic data can help close the gap between privacy and bias detection in AI systems.
  • AI literacy is necessary at all organizational levels to ensure ethical AI implementation.

Deep Dives

Privacy Concerns in AI

Privacy continues to be a major concern in AI development, mainly due to the vast amount of data needed to train AI systems. Alexandra Ebert points out the tension between maintaining privacy and ensuring that AI systems do not discriminate. Access to sensitive attributes such as gender and ethnicity is often required to detect and reduce bias, yet these are protected classes under privacy laws. Synthetic data emerges as a feasible solution, allowing developers to use artificial data that retains the necessary attributes without compromising individual privacy. Companies must interpret existing privacy laws while considering whether all collected data is necessary for their AI applications. The case of Samsung employees accidentally training an AI model on confidential information highlights the complexities and risks associated with AI privacy.
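The core idea behind synthetic data, as Ebert describes it, is that artificial records can preserve the statistical properties needed for bias detection while no row corresponds to a real individual. The sketch below is a deliberately minimal illustration of that idea, not a production generator and not the approach any speaker endorsed: it resamples each attribute independently from its empirical distribution, so per-column frequencies survive but row-level links to real people do not. The data and the `synthesize` helper are hypothetical; real tools such as those MOSTLY AI builds also model cross-column correlations.

```python
import random
from collections import Counter

# Toy "real" records. The sensitive attribute (gender) is retained so
# bias checks remain possible on the synthetic output.
real_rows = [
    {"gender": "F", "approved": True},
    {"gender": "F", "approved": False},
    {"gender": "M", "approved": True},
    {"gender": "M", "approved": True},
]

def synthesize(rows, n, seed=0):
    """Draw each attribute independently from its empirical marginal.

    Preserves per-column frequencies but intentionally breaks any link
    between an output row and a real person. Real generators also
    preserve correlations between columns; this sketch does not.
    """
    rng = random.Random(seed)
    marginals = {key: Counter(r[key] for r in rows) for key in rows[0]}
    out = []
    for _ in range(n):
        row = {}
        for key, counts in marginals.items():
            values, weights = zip(*counts.items())
            row[key] = rng.choices(values, weights=weights)[0]
        out.append(row)
    return out

synthetic = synthesize(real_rows, n=1000)
share_f = sum(r["gender"] == "F" for r in synthetic) / len(synthetic)
print(round(share_f, 2))  # close to the real 0.5 share of "F"
```

Because the gender distribution is preserved, a fairness audit (e.g. comparing approval rates across groups) can still run on the synthetic set without exposing any real individual's record.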

Understanding Fairness in AI

Fairness in AI is an intricate topic with no single definition. Alexandra Ebert illustrates this with an analogy about her imaginary niece and nephew, highlighting that fairness can be interpreted in various ways. For AI, a mathematical fairness definition is necessary but challenging, as different definitions can contradict one another. Drawing on the ProPublica case, the discussion highlights the importance of understanding systemic biases in data collection and interpretation. The speakers emphasize that fairness should not be the responsibility of data scientists alone but requires a multidisciplinary approach, including guidance from regulators. Collaborative efforts to define and implement fairness in AI systems are vital to prevent discrimination and societal harm.
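The claim that fairness definitions can contradict one another is easy to demonstrate numerically. The toy example below, with entirely made-up predictions (not data from the webinar or the ProPublica case), constructs a classifier that satisfies demographic parity (equal positive-prediction rates across groups) while violating equal opportunity (equal true-positive rates across groups), so optimizing for one metric does not guarantee the other.

```python
# Each tuple is (group, true_label, predicted_label); all values illustrative.
data = [
    ("A", 1, 1), ("A", 1, 1), ("A", 0, 0), ("A", 0, 0),
    ("B", 1, 1), ("B", 1, 0), ("B", 0, 1), ("B", 0, 0),
]

def positive_rate(group):
    """Share of positive predictions in the group (demographic parity)."""
    rows = [r for r in data if r[0] == group]
    return sum(pred for _, _, pred in rows) / len(rows)

def true_positive_rate(group):
    """Share of actual positives correctly predicted (equal opportunity)."""
    rows = [r for r in data if r[0] == group and r[1] == 1]
    return sum(pred for _, _, pred in rows) / len(rows)

# Demographic parity holds: both groups get positives at the same rate.
print(positive_rate("A"), positive_rate("B"))            # 0.5 0.5
# Equal opportunity is violated: qualified A's fare better than qualified B's.
print(true_positive_rate("A"), true_positive_rate("B"))  # 1.0 0.5
```

Which metric matters depends on context and values, which is exactly why the speakers argue the choice cannot be left to data scientists alone.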

AI Governance and Regulatory Standards

AI governance is becoming increasingly significant as regulations evolve. The EU AI Act, expected to be implemented soon, represents a substantial step in regulating AI use. Eske Montoya highlights the necessity for organizations to understand the jurisdictional laws applicable to their AI systems, especially when operating across borders. The process of tracking AI use within a company is vital to determine which regulations apply. Organizations must integrate AI governance into existing compliance structures rather than creating new ones. Montoya stresses that a lack of understanding and preparation can expose companies to significant risks, urging leadership to prioritize AI understanding and ethical AI principles.

The Role of AI Literacy

AI literacy is a recurring theme in the discussion, with both speakers advocating its importance across all organizational levels. As Eske Montoya notes, many executives claim to be too old to understand AI, yet the responsibility for oversight cannot be delegated entirely to data protection officers. AI literacy initiatives are vital to dispel misconceptions and fears surrounding AI, ensuring that employees understand its capabilities and limitations. Alexandra Ebert emphasizes the need for basic AI education for all professionals, as this foundational knowledge supports ethical AI usage and reduces the risks associated with AI deployment. Closing the skills gap in AI and data literacy is vital to bridging the digital divide and promoting equitable technology development.

