
A School for all Seasons on Trustworthy Machine Learning

List curated by Reza Shokri (National University of Singapore) and Nicolas Papernot (University of Toronto and Vector Institute)

Machine learning algorithms are trained on potentially sensitive data, and are increasingly being used in critical decision-making processes. Can we trust machine learning frameworks to have access to personal data? Can we trust the models not to reveal personal information or sensitive decision rules? In settings where the training data is noisy or adversarially crafted, can we trust the algorithms to learn robust decision rules? Can we trust them to make correct predictions on adversarial or noisy inputs? Bias against some groups in the population underlying a dataset can arise both from a lack of representation in the data and from poor choices of learning algorithms. Can we build trustworthy algorithms that remove such disparities and provide fair predictions for all groups? To identify these issues with machine learning algorithms and establish trust, can we provide informative interpretations of machine learning decisions? These are the major questions that the emerging research field of trustworthy machine learning aims to answer.

We have selected sub-topics and key related research papers (as starting points) to help a student learn about this research area. Many good papers are being published in this domain, and this list is by no means comprehensive. Papers are selected with the intention of maximizing coverage of the techniques introduced in the literature in as few papers as possible. Students are encouraged to dive deeper by reading the follow-up research papers.

Privacy and Confidentiality

Data Inference Attacks

Memorization

Model Inference Attacks

Privacy-Preserving Learning

Confidential Computing

Machine Unlearning

Decentralized (Collaborative, Federated) Learning

Law and Policy

Tools and Libraries

Related Courses and Schools

Robustness

Training Phase

  • Background

  • Battista Biggio, Blaine Nelson, Pavel Laskov. "Poisoning Attacks against Support Vector Machines." In International Conference on Machine Learning, 2012. [paper]

  • Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. "Understanding deep learning requires rethinking generalization." In International Conference on Learning Representations, 2017. [paper] [conference talk] [citations]

  • Jacob Steinhardt, Pang Wei Koh, and Percy Liang. "Certified defenses for data poisoning attacks." In Advances in Neural Information Processing Systems, 2017. [paper] [citations]

  • Matthew Jagielski, Alina Oprea, Battista Biggio, Chang Liu, Cristina Nita-Rotaru, Bo Li. "Manipulating Machine Learning: Poisoning Attacks and Countermeasures for Regression Learning." In IEEE Symposium on Security and Privacy, 2018. [paper]

  • Ilias Diakonikolas, Gautam Kamath, Daniel M. Kane, Jerry Li, Jacob Steinhardt, Alistair Stewart. "Sever: A Robust Meta-Algorithm for Stochastic Optimization." In International Conference on Machine Learning, 2019. [paper]
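
To make the training-phase threat model above concrete, here is a minimal sketch of a label-flipping poisoning attack against a linear SVM. This heuristic is a crude stand-in for the optimization-based attacks studied in the papers above, not any of their actual methods; the synthetic dataset, 10% poisoning budget, and confidence-based point selection are illustrative assumptions.

```python
# Toy label-flipping poisoning attack (hypothetical setup): train a clean SVM,
# flip the labels of the 10% of training points the clean model classifies
# most confidently (a crude proxy for "influential" points), retrain, and
# compare test accuracy.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clean = SVC(kernel="linear").fit(X_tr, y_tr)
print("clean test accuracy:   ", clean.score(X_te, y_te))

# Select the most confidently classified training points and flip their labels.
n_poison = int(0.10 * len(y_tr))
idx = np.argsort(-np.abs(clean.decision_function(X_tr)))[:n_poison]
y_poisoned = y_tr.copy()
y_poisoned[idx] = 1 - y_poisoned[idx]

poisoned = SVC(kernel="linear").fit(X_tr, y_poisoned)
print("poisoned test accuracy:", poisoned.score(X_te, y_te))
```

A defense in the spirit of Sever would iterate the opposite loop: fit the model, filter out the training points whose loss gradients are outliers, and refit.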

Inference Phase

Integrity

Availability

Testing and Verification

Tools and Libraries

Law and Policy

  • Ram Shankar Siva Kumar, Jonathon Penney, Bruce Schneier, Kendra Albert. "Legal Risks of Adversarial Machine Learning Research." In ICML 2020 Workshop on Law & Machine Learning. [paper]

Related Courses and Schools

Algorithmic Fairness

Measures

Mechanisms

Analysis

Robustness

  • Avrim Blum, and Kevin Stangl. "Recovering from biased data: Can fairness constraints improve accuracy?" In 1st Symposium on Foundations of Responsible Computing (FORC), 2020. [paper] [conference talk]

  • Heinrich Jiang, and Ofir Nachum. "Identifying and correcting label bias in machine learning." In International Conference on Artificial Intelligence and Statistics, 2020. [paper]

  • Hongyan Chang, Ta Duy Nguyen, Sasi Kumar Murakonda, Ehsan Kazemi, and Reza Shokri. "On Adversarial Bias and the Robustness of Fair Machine Learning." arXiv preprint arXiv:2006.08669, 2020. [paper]
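
As a concrete illustration of the label-bias theme above, here is a minimal sketch of the classic reweighing baseline: give each (group, label) cell equal total weight before fitting, so the learned model is driven less by biased labels. This is much simpler than the iterative weight-learning procedure of Jiang and Nachum or the adversarial setting of Chang et al.; the synthetic data, injected flip rate, and weighting rule are illustrative assumptions.

```python
# Toy reweighing baseline (hypothetical setup): synthetic data in which 30% of
# group 0's positive labels are flipped to negative, then a weighted fit where
# every (group, label) cell carries equal total weight.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
group = rng.integers(0, 2, n)                      # hypothetical sensitive attribute
X = rng.normal(size=(n, 5)) + 0.5 * group[:, None]
true_y = (X[:, 0] + X[:, 1] > 0.5).astype(int)
# Inject label bias against group 0.
y = np.where((group == 0) & (true_y == 1) & (rng.random(n) < 0.3), 0, true_y)

# Weight each example inversely to the size of its (group, label) cell.
w = np.ones(n)
for g in (0, 1):
    for lbl in (0, 1):
        cell = (group == g) & (y == lbl)
        w[cell] = n / (4 * cell.sum())

features = np.c_[X, group]
for name, weights in [("unweighted", None), ("reweighted", w)]:
    model = LogisticRegression(max_iter=1000).fit(features, y, sample_weight=weights)
    pred = model.predict(features)
    print(name, "positive rate per group:",
          [round(pred[group == g].mean(), 3) for g in (0, 1)])
```

Comparing per-group positive rates before and after reweighting gives a quick demographic-parity check of how much the biased labels were driving the unweighted model.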

Related Courses and Schools

Tools and Libraries

Algorithmic Transparency

Model Explanation

Interpretability

  • Cynthia Rudin. "Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead." Nature Machine Intelligence 1, 2019. [paper]
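
Here is a minimal sketch of the alternative Rudin advocates: rather than explaining a black box post hoc, fit a model whose complete decision logic can be printed and audited directly. The dataset and tree depth below are illustrative choices, not from the paper.

```python
# Toy inherently interpretable model (hypothetical setup): a depth-3 decision
# tree whose full decision rules are human-readable, in contrast to post-hoc
# explanations of a black-box model.
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(data.data, data.target)

# The entire model: every prediction follows one of the printed paths.
print(export_text(tree, feature_names=list(data.feature_names)))
print("training accuracy:", tree.score(data.data, data.target))
```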

Recourse

Robustness

Privacy and Confidentiality

Analysis

Law and Policy

Related Courses and Schools