April 21, 2020

12:00 pm – 1:00 pm

Venue

Zoom (meeting information to follow)

Abstract: Graph Neural Networks (GNNs) have become a popular tool for learning representations of graph-structured inputs, with applications in computational chemistry, recommendation, pharmacy, reasoning, and many other areas.

After a brief introduction to Graph Neural Networks, this talk will present recent results on representational power and learning in GNNs. First, we study the discriminative power of message-passing networks as a function of architecture choices and, in the process, find that some popular architectures cannot learn to distinguish certain simple graph structures. Second, while many network architectures can represent a task, some learn it better than others. Using reasoning tasks as an example, we formalize the interaction between the network architecture and the structure of the task, and probe its effect on learning. Third, we analyze learning via new generalization bounds for GNNs.
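The architecture choices discussed above can be made concrete with a toy sketch. The snippet below is illustrative only (it is not the speaker's code, and the function names are assumptions): it performs one round of message passing in which each node combines its own feature with an aggregate of its neighbors' features. The choice of aggregator matters for discriminative power, since sum aggregation can distinguish neighbor multisets that mean or max aggregation conflate.

```python
# Illustrative sketch (assumed names, not the speaker's implementation):
# one synchronous round of message passing over a graph.

def message_passing_step(adjacency, features, aggregate=sum):
    """Update every node by combining its own feature with the
    aggregate of its neighbors' features.

    adjacency: dict mapping node -> list of neighbor nodes
    features:  dict mapping node -> scalar feature
    aggregate: function reducing an iterable of neighbor features
    """
    updated = {}
    for node, neighbors in adjacency.items():
        neighbor_message = aggregate(features[n] for n in neighbors)
        updated[node] = features[node] + neighbor_message
    return updated

# Why the aggregator matters: neighborhoods with features [1, 1] and [1]
# both have mean 1 and max 1, but sums 2 and 1 -- only sum aggregation
# tells these two local structures apart.
```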

This talk is based on joint work with Keyulu Xu, Jingling Li, Mozhi Zhang, Simon S. Du, Ken-ichi Kawarabayashi, Weihua Hu, Jure Leskovec, Vikas Garg and Tommi Jaakkola.

Bio: Stefanie Jegelka is an X-Window Consortium Career Development Associate Professor in the Department of EECS at MIT, and a member of the Computer Science and AI Lab (CSAIL). Before joining MIT, she was a postdoctoral researcher at UC Berkeley, and obtained her PhD from ETH Zurich and the Max Planck Institute for Intelligent Systems. Stefanie has received a Sloan Research Fellowship, an NSF CAREER Award, a DARPA Young Faculty Award, the German Pattern Recognition Award, and a Best Paper Award at ICML. Her research interests span the theory and practice of algorithmic machine learning, including discrete and continuous optimization, discrete probability, and learning with structured data.