Topic outline

  • SCP6076301 - ADVANCED TOPICS IN COMPUTER SCIENCE 2022-2023

    The course consists of three separate modules that will be held in the second semester.

    • "Security and privacy of machine learning"
      by Prof. Stjepan Picek (Radboud University, Netherlands)
      March 29 - Apr 4
      (Local contact: prof. Mauro Conti)

    • "Formal verification of security protocols"
      by Prof. Alessandro Bruni (IT-University of Copenhagen, Denmark)
      In the weeks Apr 17 - 28
      (Local contact: prof. Paolo Baldan)

    • "Trustworthy AI: Technology, Regulation, Implementation"
      by Dr. Tarek R. Besold (Eindhoven University of Technology, Netherlands and Sony AI, Barcelona)
      In the weeks May 22 - Jun 1
      (Local contact: prof. Roberto Confalonieri)


    For each module, a dedicated section below describes the content of the module and the exact schedule. Each module will last 10-12 hours. Lessons will be scheduled within the intersection of the periods indicated above and the official timetable (Mon-Tue-Wed 16:30-18:00, Thu 8:30-10:30, Fri 16:30-18:00).

  • "Security and privacy of machine learning" - March 29 - Apr 4

    Prof. Stjepan Picek, Radboud University

    Machine learning has made significant advances and produced a broad spectrum of approaches for diverse application domains. However, its use also has negative effects and brings new challenges. Deploying machine learning in real-world systems requires complementary technologies to ensure that it meets its security and privacy goals. Numerous works show how machine learning can fail under various attacks. This is not surprising, as machine learning is commonly not designed to be secure against threats like poisoning attacks, backdoor attacks, model stealing, membership inference attacks, and perturbation attacks.

    This course covers the security and privacy aspects of machine learning, the types of compromise, and attack and defense techniques.
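
    As a concrete taste of the perturbation (evasion) attacks mentioned above, here is a minimal sketch of the fast gradient sign method (FGSM) against a logistic-regression classifier. The model, weights, and epsilon budget are invented for illustration only; the course material is the authoritative reference.

        # Minimal FGSM sketch (illustrative, NumPy only).
        import numpy as np

        def sigmoid(z):
            return 1.0 / (1.0 + np.exp(-z))

        def fgsm(x, y, w, b, eps):
            # For logistic regression, the gradient of the cross-entropy
            # loss w.r.t. the input x is (p - y) * w, with p = sigmoid(w.x + b).
            p = sigmoid(np.dot(w, x) + b)
            grad_x = (p - y) * w
            # Step of size eps in the direction that increases the loss.
            return x + eps * np.sign(grad_x)

        # Toy data: a 4-feature input with true label y = 1.
        rng = np.random.default_rng(0)
        w, b = rng.normal(size=4), 0.1
        x, y = rng.normal(size=4), 1.0

        x_adv = fgsm(x, y, w, b, eps=0.3)
        print("clean score:      ", sigmoid(np.dot(w, x) + b))
        print("adversarial score:", sigmoid(np.dot(w, x_adv) + b))

    Against deep networks the same idea applies, with the gradient obtained by backpropagation rather than in closed form.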

    • Day 1 (Wed 29/03: 16:30-18:30): Introduction to the security and privacy of machine learning
    • Day 2 (Thu 30/03:  8:30-10:30): Evasion attacks
    • Day 3 (Fri 31/03: 16:30-18:30): Poisoning attacks
    • Day 4 (Mon 03/04: 16:30-18:30): Model stealing
    • Day 5 (Tue 04/04: 16:30-18:30): Inference attacks


    EXAM
    Select one paper from the list of papers. Write a report (suggested length: not shorter than 2 pages, one column, 10 pt font). The report should include the basic idea and motivation of the paper, the threat model discussed, the experiments, a discussion of the results, and the relevance of the results. Finally, you must provide your own perspective on the paper: its strengths and weaknesses. Also discuss what you would consider possible extensions of the paper.
    The deadline for submission is May 7.
    Send the report to stjepan.picek@ru.nl.

    Please include information about which master's program you are in.
    The assignments are individual.


    Evasion attacks

    Poisoning/backdoor attacks

    Model stealing

    Privacy of ML

  • "Automated Analysis of security protocols" - In the Weeks Apr 17 -28

    Prof. Alessandro Bruni (IT-University of Copenhagen, Denmark)

    Many issues with security protocols such as TLS are due to the logical design of the protocol itself, not to the choice of cryptographic algorithms. Analysing the protocol design is therefore an important aspect of checking the security of a protocol, and the standard computational proofs of security used in cryptography do not scale well and are brittle to changes in the design. In recent years, even standardisation bodies like the IETF require a formal cryptographic analysis of the protocols considered for standardisation. In this mini-course we explore the symbolic analysis of cryptographic protocols, an approach that has been very successful over the past 20 years at detecting flaws. We will study both the theory and the typical abstractions used to deal with cryptography, as well as do practical exercises and look at concrete case studies.
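
    As a first flavour of the symbolic approach (cf. Day 1 on message deduction), here is a toy Python sketch of how an attacker's knowledge can be saturated under Dolev-Yao analysis rules. The term encoding and the example terms are invented for illustration; the Cortier-Kremer notes below give the formal deduction system.

        # Toy symbolic message deduction: close a knowledge set under
        # projection of pairs and decryption with known keys.
        # Terms: atoms are strings; ("pair", t1, t2) and ("enc", msg, key).

        def deducible(knowledge):
            known = set(knowledge)
            changed = True
            while changed:
                changed = False
                for t in list(known):
                    if isinstance(t, tuple) and t[0] == "pair":
                        new = {t[1], t[2]}            # project both components
                    elif isinstance(t, tuple) and t[0] == "enc" and t[2] in known:
                        new = {t[1]}                  # decrypt with known key
                    else:
                        continue
                    if not new <= known:
                        known |= new
                        changed = True
            return known

        # The attacker observes enc(<secret, nonce>, k) and later learns k.
        init = {("enc", ("pair", "secret", "nonce"), "k"), "k"}
        print("secret" in deducible(init))  # True: decrypt, then project

    Only the analysis (decomposition) rules are applied here, which always terminate because they only produce subterms; synthesis rules (pairing, encrypting) are handled differently in real tools.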

    • Day 1 (Tue 18/04 16:30-18:30): Message deduction
    • Day 2 (Wed 19/04 16:30-18:30): Static equivalence of frames
    • Day 3 (Thu 27/04  8:30-10:30): The Applied Pi-calculus
    • Day 4 (Thu 04/05 8:30-10:30): Exercise session
    • Day 5 (Thu 04/05 8:30-10:30): Case studies


    The course is based on the lecture notes:
         Formal Models and Techniques for Analyzing Security Protocols: A Tutorial
         Véronique Cortier, Steve Kremer
         [PDF]

    You can follow via Zoom at the following [link].

    A GitHub repository with some resources is available at the following [link].


    EXAM

    The instructions for the exam are below.

  • "Trustworthy AI: Technology, Regulation, Implementation" - In the weeks May 22 - Jun 1

    Dr. Tarek R. Besold (Eindhoven University of Technology, Netherlands and Sony AI, Barcelona)

    The term “trustworthy AI” has become increasingly popular over the last few years, from academic research and publications, to lawmakers and regulators, to the press and the general public. In this course we look at which dimensions are involved in making an AI system trustworthy, what the scientific underpinnings and the current state of the art are in the corresponding fields of academic and industry R&D, and which regulatory and market mechanisms aim to make sure that end products indeed meet a certain (minimum) level of trustworthiness. We will see that the concept of “trust” is multifaceted and spans several criteria, some of which may well be in conflict with each other, requiring (ideally conscious) trade-offs between these not fully compatible aspects.

    We will visit the foundations of the explainability and interpretability of AI systems, of privacy-preservation, of fairness/bias mitigation, of security, and of safety, and discuss these from technological, regulatory, and societal perspectives. After successful completion of the course, participants will be able to make conscious decisions about trade-offs between different sub-dimensions of trustworthiness of AI systems, contextualize the question of trustworthy AI in the relevant European regulatory discourse, and understand the mechanisms being put in place to assure that consumers do not have to worry about the (un)trustworthiness of AI systems.
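
    To make one of these dimensions concrete, here is a hedged sketch of a simple fairness metric of the kind discussed in the fairness/bias sessions: the demographic parity difference, i.e., the gap in positive-prediction rates between two groups. The data below is fabricated for illustration.

        # Demographic parity difference (illustrative, NumPy only).
        import numpy as np

        def demographic_parity_difference(y_pred, group):
            # Absolute gap in P(y_hat = 1) between group 0 and group 1.
            y_pred, group = np.asarray(y_pred), np.asarray(group)
            return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

        # Toy predictions for 8 individuals with a binary protected attribute.
        y_pred = [1, 0, 1, 1, 0, 0, 0, 1]
        group  = [0, 0, 0, 0, 1, 1, 1, 1]
        print(demographic_parity_difference(y_pred, group))  # 0.5: strongly disparate

    A value of 0 would mean both groups receive positive predictions at the same rate; enforcing this can trade off against accuracy or other criteria, which is exactly the kind of tension between trustworthiness dimensions the course examines.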


    Week 1 (3 sessions) 

    Tuesday, 23/05, 16:30-18:30

    • Introduction to Trustworthy AI – Mapping the Landscape
      • Explainability/Interpretability
      • Privacy-Preservation
      • Fairness/Bias
      • Security
      • Safety


    Wednesday, 24/05, 16:30-18:30

    • Explainable/Interpretable AI

    Thursday, 25/05, 08:30-10:30

    • Privacy-Preservation
    • Fairness/Bias

    Week 2 (2 sessions)
    Tuesday, 30/05, 16:30-18:30

    • Fairness/Bias (continued)
    • Security
    • Safety (Functional Safety, Human Oversight)

    Wednesday, 31/05, 16:30-18:30

    • Regulation & Testing/Certification
      • The EU AI Act
      • The Trustworthy AI Framework of the EU HLEG
      • AI Standardization
      • Testing/Certification



    EXAM

    In the final evaluation we look at the intersection between explainable AI and privacy-preservation, or between explainable AI and cyber security.

    Select one paper from the list of papers by putting your student ID in one of the free fields to the right of the paper title and category. 

    https://docs.google.com/spreadsheets/d/1WFPvPaGQrXQanb8O8T1WPF_vpjyiAyicyreDc9t1pY4/edit?usp=sharing

    Please note: each paper can be chosen by at most four people; once there are four IDs, that paper is "completely occupied" and you have to pick one of the remaining papers with open fields.

    Write a report about your chosen paper (suggested length: not shorter than 3 pages, one column, single-line spacing, 10 pt font).

    The report should summarize the key technical contributions of the paper in such a way that any computer scientist (i.e., even one who is not an expert in XAI and privacy-preservation/cyber security) can follow the description.

    Additionally, you should give your view on the paper, discussing the potential relevance of the results, advantages/disadvantages, etc.

    Finally, the report should feature a section in which you put the paper and its contribution into the wider context of trustworthy AI, discussing the potential trade-offs the presented technique(s) require with regard to different dimensions of trustworthiness (e.g., the HLEG Trustworthy AI criteria and/or the trade-offs between different aspects of trustworthiness mentioned in the lecture).

    The assignment is individual, i.e., whilst you can, of course, discuss with fellow students, your report must be written by you alone.

    Please send your reports to roberto.confalonieri@unipd.it and to tarek.besold@gmail.com.

    The deadline for submissions is Wednesday, 28/06, 23:59 CEST.

    At the top of your submission, please include your name, student ID and what master's programme you are in, as well as whether you are taking this module to gain credits for other training activities (i.e., OTHER) or for the whole ATCS exam (i.e., EXAM).

    Recommended table of contents:

    1. Overview of the paper
    2. Trustworthy AI context
    3. Personal evaluation (advantages/disadvantages, etc.)

    The following papers are recommended readings for everyone:

    • S. Ali, T. Abuhmed, S. El-Sappagh, K. Muhammad, J. M. Alonso-Moral, R. Confalonieri, R. Guidotti, J. Del Ser, N. Díaz-Rodríguez, and F. Herrera. Explainable Artificial Intelligence (XAI): What we know and what is left to attain Trustworthy Artificial Intelligence. Information Fusion, 101805, 2023. https://www.sciencedirect.com/science/article/pii/S1566253523001148
    • A. Barredo Arrieta, N. Díaz-Rodríguez, J. Del Ser, A. Bennetot, S. Tabik, A. Barbado, S. García, S. Gil-López, D. Molina, R. Benjamins, et al. Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion, 58:82-115, 2020. https://www.sciencedirect.com/science/article/pii/S1566253519308103