Trustworthy and Responsible AI


Khon Kaen University (KKU)

About this course

Our society relies extensively on artificial intelligence, in many cases built on neural networks. Artificial intelligence brings great opportunities, but also certain safety challenges: for example, it is important that these neural networks are robust against so-called adversarial attacks. In this course, Dr. Jan N. van Rijn (Leiden University) explains various AI safety concepts that are important to consider when deploying a neural network in practice. In particular, you will learn which methods can be used to verify whether a network is robust against adversarial attacks, and how Automated Machine Learning (AutoML) can be used to further improve these methods.
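To give a concrete flavour of what an adversarial attack is, the sketch below (illustrative only, not part of the course materials) implements the Fast Gradient Sign Method in PyTorch: it nudges every input value by a small amount epsilon in the direction that increases the classifier's loss, which is often enough to change the prediction of a non-robust network. The model, x, y, and epsilon names are assumptions made for this example.

    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, x, y, epsilon):
        # Fast Gradient Sign Method (Goodfellow et al., 2015):
        # perturb the input x by +/- epsilon along the sign of the
        # loss gradient to try to flip the model's prediction.
        x_adv = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        loss.backward()
        x_adv = x_adv + epsilon * x_adv.grad.sign()
        # Keep the perturbed input inside the valid range [0, 1].
        return x_adv.clamp(0.0, 1.0).detach()

Robustness verification, one of the course topics, asks the converse question: proving that no perturbation within the epsilon ball (not just the one this attack finds) can change the network's output.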

Instructors

Jan N. van Rijn

Assistant Professor, Ph.D., Leiden University

Jan N. van Rijn holds a tenured position as assistant professor at Leiden University, where he works in the computer science department (LIACS) in the Automated Design of Algorithms (ADA) cluster. His research interests include trustworthy artificial intelligence, automated machine learning (AutoML), and metalearning. He obtained his PhD in Computer Science in 2016 at the Leiden Institute of Advanced Computer Science (LIACS), Leiden University (the Netherlands).

During his PhD, he developed OpenML.org, an open science platform for machine learning that enables the sharing of machine learning results. He made several funded research visits to the University of Waikato (New Zealand) and the University of Porto (Portugal).

After obtaining his PhD, he worked as a postdoctoral researcher in the Machine Learning lab at the University of Freiburg (Germany), headed by Prof. Dr. Frank Hutter, after which he moved to work as a postdoctoral researcher at Columbia University in the City of New York (USA).

In 2023, he visited the College of Computing, Khon Kaen University, Thailand. His research aim is to democratize access to machine learning and artificial intelligence across societal institutions, by developing knowledge and tools that support domain experts and by making AI experts more aware of safety risks. He is one of the authors of the book ‘Metalearning: Applications to Automated Machine Learning and Data Mining’ (freely accessible, published by Springer).


Learning outcomes

  • To understand the basics of AI safety and robustness verification.
  • To know the state of the art in neural network robustness verification.

Target Learners

  • Anyone who is interested in AutoML for neural network robustness verification.

Level

Upper-intermediate

Length

3 hours

Learning Methods Media

45 minutes
