Workshop AI Safety & Governance for Enterprise Applications
Online or in-house trainings within Europe
All workshops in English or German
Full days or half days
Content and schedule adapted to your needs
Build responsible AI systems that are safe, ethical, and compliant with emerging regulations while maintaining business value.
Why you should participate in the workshop (for developers)
- Avoid months of learning regulatory requirements and safety frameworks through trial and error when building AI systems
- Learning on your own is not for everyone; instead, join an interactive session with an expert trainer and a group of fellow participants
- Boost your career and stay up-to-date with critical AI safety practices that are becoming mandatory for enterprise deployments
Why you should participate in the workshop (for decision makers)
- Thinking about deploying AI in your organization? Understand regulatory requirements and risk management frameworks before costly compliance issues arise
- Protect your company from reputational damage, legal liability, and security breaches with proven safety and governance practices
- Bring your team and your company up to date on the latest safety standards to build trustworthy AI systems and stay competitive
Schedule
3 full days or 6 half days
Description
In this course, you’ll learn how to design, implement, and operate AI systems that meet emerging safety and governance requirements while delivering business value. We’ll explore the complete landscape of AI risks, from technical failures and adversarial attacks to bias, fairness issues, and unintended societal impacts. Rather than treating safety as an afterthought or compliance checkbox, you’ll learn to integrate responsible AI practices throughout the entire development lifecycle, from initial design to deployment and monitoring.
We will examine practical implementation of AI governance frameworks including the EU AI Act, NIST AI Risk Management Framework, and ISO/IEC standards for AI systems. You’ll learn how to conduct AI impact assessments, implement model cards and documentation practices, and establish accountability structures for AI systems in your organization. We’ll cover essential technical safeguards including input validation, output filtering, adversarial robustness, and techniques to detect and prevent prompt injection, jailbreaking, and other emerging attack vectors.
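To give a flavor of the technical safeguards covered, here is a minimal sketch of heuristic input screening against prompt injection. The pattern list and function name are illustrative assumptions, not part of the course material; in the workshop we discuss how such heuristics are combined with classifier-based detection and output filtering.

```python
import re

# Illustrative deny-list of phrases often seen in injection attempts.
# Real systems pair such heuristics with ML-based classifiers.
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"reveal (the )?system prompt",
    r"you are now (in )?developer mode",
]

def looks_like_injection(user_text: str) -> bool:
    """Return True if the input matches a known injection pattern."""
    lowered = user_text.lower()
    return any(re.search(p, lowered) for p in INJECTION_PATTERNS)
```

A screened input would then be rejected, logged, or routed to human review depending on your governance policy.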
Throughout the workshop, we’ll work with real-world scenarios covering fairness testing and bias mitigation, explainability and interpretability requirements, privacy-preserving techniques for AI systems, and security best practices for LLM applications. You’ll implement monitoring systems for detecting model drift, performance degradation, and potential safety issues in production. We’ll explore how to establish proper testing protocols for AI systems, including adversarial testing, red teaming exercises, and safety validation procedures.
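As one example of the monitoring techniques above, drift in an input feature can be quantified with the Population Stability Index (PSI). This is a self-contained sketch under simplifying assumptions (equal-width bins, additive smoothing); a PSI above roughly 0.2 is commonly treated as significant drift.

```python
import math

def population_stability_index(expected, actual, bins=10):
    """Compute PSI between a reference sample and a live sample.

    Values near 0 mean the distributions match; > 0.2 is a
    common rule-of-thumb threshold for significant drift.
    """
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0  # guard against zero-width range

    def proportions(xs):
        counts = [0] * bins
        for x in xs:
            i = min(int((x - lo) / width), bins - 1)
            counts[i] += 1
        # small epsilon avoids log(0) for empty bins
        return [(c + 1e-6) / len(xs) for c in counts]

    e, a = proportions(expected), proportions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))
```

In production, such a metric would run on a schedule against recent traffic and feed an alerting pipeline rather than being called ad hoc.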
A significant focus will be on building organizational capabilities for responsible AI deployment. You’ll learn how to create internal governance structures, establish review processes for high-risk AI applications, and implement incident response procedures for AI failures. We will examine case studies of AI system failures and their consequences, analyzing what went wrong and how proper safety measures could have prevented issues. Ethics and societal impact will be integrated throughout, helping you build AI systems that are not only technically sound but also aligned with human values and societal expectations.
I have worked for more than 20 years as a developer, product manager, and AI lead with language technologies. Starting with speech recognition and machine translation, I now focus on education in AI, LLMs, and semantic technologies.
Check out my AI trainings.
Contact me and book your training.