Foundations of Safe Learning
As deep learning moves from the lab into real-world applications, ensuring the correctness and robustness of deep neural networks becomes a paramount concern. Specifying what it means for a neural network to behave correctly is itself a difficult problem, especially for classifiers and generative models, and verifying that a network meets a given specification is computationally hard. Recent demonstrations of brittleness in deep learning – including adversarial examples and RL agents that learn pathological control policies – have motivated new, computationally tractable approaches to both specifying and verifying salient properties of neural networks. This workshop will bring together academics, industry researchers, and practitioners to examine the current state of safe learning. Relevant topics include robustness evaluation and verification, safe reinforcement learning, fairness, robustness under model compression, interpretability, and deep learning with invariance and equivariance constraints.
This workshop is part of IBM AI Research Week.
Registration
Registration is free but required for attendance. You may register at any time, including the day of the event.
Schedule
8:45 am – 9:00 am    Registration
9:00 am – 9:05 am    Opening Remarks
9:05 am – 9:35 am    Towards AI You Can Rely On
9:35 am – 10:05 am   Yanzhi Wang
10:05 am – 10:35 am  Secure Learning in Adversarial Environments
10:35 am – 10:50 am  Coffee and light refreshments
10:50 am – 11:20 am  Stefanie Jegelka
11:20 am – 11:50 am  A Formal Methods Perspective on AI Safety: Promises and Challenges
11:50 am – 12:20 pm  Can Deep Learning Models Be Trusted?
12:20 pm – 12:25 pm  Closing Remarks
The workshop will take place in the Samberg Conference Center, room DR 3.
Questions should be directed to firstname.lastname@example.org.