Abstract
Ensuring the reliability and trustworthiness of AI systems is crucial in the era of LLMs, particularly in the realm of agentic AI. This talk explores strategies to mitigate risks in AI-powered systems by enhancing transparency, fostering integrity, and applying watermarking for better traceability.
Topics To Be Covered
Strategies to enhance AI reliability & trustworthiness
Risk mitigation techniques for AI-powered systems
How transparency, integrity, and watermarking improve AI traceability
Who Is This For?
AI Governance & Compliance Leaders
Enterprise AI & Risk Management Teams
Machine Learning & AI Engineers
Ethics & Policy Experts in AI
Investors & Innovators in AI Security
Meet Your Speaker
Professor of Cybersecurity and Artificial Intelligence, Freie Universität Berlin
Prof. Dr.-Ing. habil. Gerhard Wunder is a Professor of Cybersecurity and Artificial Intelligence at Freie Universität Berlin, leading the Cybersecurity and AI (C-AI) Group.
Supported by Bundesdruckerei GmbH, his group focuses on both cybersecurity for AI and AI for cybersecurity, and hosts the Center for Trustworthy AI at the university. Prof. Wunder's research encompasses generative AI, resilient networks, privacy-preserving synthetic data, explainability and fairness in AI, quantum cryptanalysis, AI engines for anomaly detection, cybersecurity architectures for IoT, AI-assisted device biometrics, and low-resource federated learning with blockchains.
His work bridges the gap between advanced AI technologies and practical cybersecurity solutions, driving the creation of secure, intelligent systems for real-world applications. Prof. Wunder is dedicated to promoting innovation in AI and cybersecurity, contributing to both academic research and industry practice.
ADDITIONAL INFORMATION
Time & Place
Wed, March 25
14:00 - 14:15
Grand Ballroom II
Roundtables & Theatre Seating
Max. Capacity: 256 Seats
Secure your seat – registration required.
Notes
Agenda for this session
15-minute presentation