September 20, 2022

12:00 pm – 1:00 pm

Venue

Clark Hall, Room 110


Join Zoom Meeting

https://wse.zoom.us/j/98624413365

Meeting ID: 986 2441 3365

Sammy Khalife, PhD

Post-Doctoral Fellow

Johns Hopkins University

"Neural networks with linear threshold activations: structure and algorithms"

Abstract: In this talk I will present new results on neural networks with linear threshold activation functions. The class of functions representable by such neural networks can be precisely described, and two hidden layers are necessary and sufficient to represent any function in the class. This is a surprising result in light of recent exact-representability investigations for neural networks using other popular activation functions such as rectified linear units (ReLU). I will also discuss upper and lower bounds on the sizes of the neural networks required to represent any function in the class. We also designed an algorithm that solves the empirical risk minimization (ERM) problem to global optimality for these neural networks with a fixed architecture. The algorithm's running time is polynomial in the size of the data sample when the input dimension and the size of the network architecture are considered fixed constants. The algorithm is unique in the sense that it works for any architecture with any number of layers, whereas previous polynomial-time globally optimal algorithms work only for restricted classes of architectures. Finally, I will present a new class of neural networks called shortcut linear threshold networks. To the best of our knowledge, this way of designing neural networks has not been explored before in the literature. We show that these neural networks have several desirable theoretical properties.
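For readers unfamiliar with the activation function discussed in the abstract, the following is a minimal illustrative sketch of a network with linear threshold (Heaviside-style) activations. The layer widths, weights, and two-hidden-layer shape here are illustrative choices, not taken from the talk.

```python
import numpy as np

def linear_threshold(x):
    # Linear threshold (Heaviside) activation: 1 where x > 0, else 0.
    return (x > 0).astype(float)

def forward(x, layers):
    # layers: list of (W, b) pairs; threshold activation on hidden layers,
    # linear (affine) map on the final output layer.
    for W, b in layers[:-1]:
        x = linear_threshold(W @ x + b)
    W, b = layers[-1]
    return W @ x + b

# A toy two-hidden-layer network (the abstract states two hidden layers
# suffice to represent any function in this class); shapes are arbitrary.
rng = np.random.default_rng(0)
layers = [
    (rng.standard_normal((4, 3)), rng.standard_normal(4)),  # hidden layer 1
    (rng.standard_normal((4, 4)), rng.standard_normal(4)),  # hidden layer 2
    (rng.standard_normal((1, 4)), rng.standard_normal(1)),  # linear output
]
y = forward(rng.standard_normal(3), layers)
print(y.shape)  # (1,)
```

Because each hidden unit outputs only 0 or 1, such networks compute piecewise-constant functions, which is why their representable class and ERM problem behave quite differently from ReLU networks.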

Biography: Dr. Khalife joined the Department of Applied Mathematics and Statistics at Johns Hopkins University in Fall 2021 as a postdoctoral fellow. His work is related to discrete optimization and theoretical deep learning, and he is interested in the formal expressivity and complexity of neural networks. Dr. Khalife holds an MSc from ENSTA Paristech with a specialization in Mathematical Optimization, jointly with Paris 1 Sorbonne, and an MSc from École Normale Supérieure Paris-Saclay in the Mathematics of Vision and Learning. Dr. Khalife received his PhD in Computer Science from École Polytechnique in 2020.

 For more information, visit the MINDS website.