The EU AI Act for Developers and Technical Teams
.MP4, AVC, 1280x720, 30 fps | English, AAC, 2 Ch | 53m | 107 MB
Instructor: Merve Hickok
In this course, Merve Hickok—a globally renowned expert on AI policy, ethics, and governance—dives into the EU AI Act, the world's first comprehensive, legally binding regulatory framework for AI. Discover the objectives behind the regulation, including market harmonization and the protection of fundamental rights.
Learn how to determine if your AI system falls under the Act and explore ways to comply with technical requirements. Gain insights into risk assessment, data governance, documentation, and quality management. Understand the implications of the Act for various AI applications and learn about prohibited practices. Use detailed guidance to navigate the complex landscape of AI regulation, making informed decisions that foster innovation and safeguard user interests. When you complete this course, you'll have the knowledge you need to implement best practices and proactively contribute to a trustworthy AI ecosystem.
Learning objectives
- Articulate the scope, purpose, and significance of the EU AI Act for AI development.
- Define the risk-based classification of AI systems and the corresponding compliance obligations.
- Explain the principles that underpin the EU AI Act, such as transparency, data governance, and human oversight.
- Describe ex-ante and ongoing risk management requirements.
- Differentiate between general-purpose AI models and those with systemic risk.