Topic outline


    The course consists of three modules, held in the second semester.

    • "Security and privacy of machine learning"
      by Prof. Stjepan Picek (Radboud University, Netherlands)
      Mar 29 - Apr 4

    • "Formal verification of security protocols"
      by Prof. Alessandro Bruni (IT-University of Copenhagen, Denmark)
      In the weeks Apr 17 - 28

    • "Trustworthy AI: Technology, Regulation, Implementation"
      by Dr. Tarek R. Besold (Eindhoven University of Technology, Netherlands)
      In the weeks May 22 - Jun 1

    For each module, a dedicated section below describes its content and examination format.

  • "Security and privacy of machine learning" - March 29 - Apr 4

    Prof. Stjepan Picek, Radboud University

    Machine learning has made significant advances and produced a broad spectrum of approaches used across diverse application domains. However, its use also brings negative effects and new challenges. Deploying machine learning in real-world systems requires complementary technologies to ensure that it meets security and privacy goals. Numerous works show how machine learning can fail under various attacks. This is not surprising, as machine learning is commonly not designed to be secure against threats such as poisoning attacks, backdoor attacks, model stealing, membership inference attacks, and perturbation attacks.

    This course covers the security and privacy aspects of machine learning, the types of compromise, and attack and defense techniques.

    • Day 1: Introduction to the security and privacy of machine learning
    • Day 2: Evasion attacks
    • Day 3: Poisoning attacks
    • Day 4: Model stealing
    • Day 5: Inference attacks
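
    As a flavor of the attack techniques listed above, here is a minimal sketch of an evasion (perturbation) attack, using the fast gradient sign method (FGSM) against a toy linear classifier. The classifier weights and the input are made-up illustrative values, not course material:

    ```python
    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def fgsm(x, y, w, b, eps):
        """Fast Gradient Sign Method: nudge x in the direction that
        increases the logistic loss for its true label y (y in {-1, +1})."""
        margin = y * (w @ x + b)
        grad_x = -y * sigmoid(-margin) * w   # gradient of the loss w.r.t. x
        return x + eps * np.sign(grad_x)

    # Toy linear classifier: predicts +1 when w.x + b > 0 (illustrative values)
    w = np.array([1.0, -2.0])
    b = 0.0
    x = np.array([0.5, -0.5])                # clean input with true label +1
    y = 1

    pred_clean = int(np.sign(w @ x + b))     # correctly classified as +1
    x_adv = fgsm(x, y, w, b, eps=2.0)        # adversarially perturbed input
    pred_adv = int(np.sign(w @ x_adv + b))   # prediction flips to -1
    ```

    The same idea, with gradients taken through a deep network instead of a linear model, underlies the evasion attacks covered on Day 2.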