In this talk, Professor Zeynep Akata delves into the transformative impacts of representation learning, foundation models, and explainable AI on machine learning technologies.
She highlights how these approaches enhance the adaptability, transparency, and ethical alignment of AI systems across various applications.
She will also address the synergy between these technologies and their crucial role in advancing AI, aiming to make these complex systems more accessible and understandable.
Zeynep Akata is the Director of the Institute for Explainable Machine Learning and a Professor of Computer Science at the Technical University of Munich. Her research focuses on making AI-based systems more transparent and accountable, particularly through explainable, multi-modal, and low-shot learning in computer vision. She has held positions at the University of Tübingen and the Max Planck Institutes, and her notable recognitions include the Lise Meitner Award for Excellent Women in Computer Science, the ERC Starting Grant, and the German Pattern Recognition Award. For more details, you can visit her profile.
Please help us plan ahead by registering for the event via Meetup: Event Registration.
(No need to register with the Helmholtz Imaging Conference.)
What? | Interpretable Vision and Language Models |
---|---|
Who? | Zeynep Akata, Director of the Institute for Explainable Machine Learning, Professor of Computer Science at the Technical University of Munich |
When? | May 14th 2024 @ 11:15am |
Where? | Frauenbad Heidelberg (Event Location) |
Registration | This event is part of the Helmholtz Imaging Conference. Participants of this Heidelberg AI talk do not need to be registered with the conference! However, please do register via the Meetup event page. |