Dr Marco Gallieri, NNAISENSE, Switzerland

Deep Learning meets Control Theory: Research at NNAISENSE
When: Feb 19, 2019, 03:30 PM to 04:30 PM
Where: LR1, Thom Building
Contact Phone: 01865-273030

NNAISENSE inherits IDSIA’s long-lasting track record of ground-breaking results in artificial intelligence (AI). From perception to reinforcement learning, the company’s legacy of super-human performance places it in the right position to take AI technology into everyday control systems. AI approaches control problems from an information-theoretic and statistical perspective, while control theory addresses issues concerning the physical world, with a strong focus on safety, hard constraints and theoretical guarantees. Control approaches can be very robust, but they can suffer from the conservativeness of their assumptions. This is believed not to be the case for AI, where non-conservative results can be achieved; AI performance, however, depends mainly on the quality and the amount of data, and no unified framework exists for the analysis of stability and robustness. For this reason, while deep learning is becoming the industry standard for perception, its use in control is mostly limited to simulated or non-critical tasks. Combining the fields of control and AI has the potential to retain the best of both worlds. The first part of the talk will briefly introduce NNAISENSE’s research in this direction.


In the second part, we will introduce the Non-Autonomous Input-Output Stable Network (NAIS-Net): a very deep architecture where each stacked processing block is derived from a time-invariant non-autonomous dynamical system. Non-autonomy is implemented by skip connections from the block input to each of the unrolled processing stages, and allows stability to be enforced so that blocks can be unrolled adaptively to a pattern-dependent processing depth. NAIS-Net induces non-trivial, Lipschitz input-output maps, even for an infinite unroll length. We prove that the network is globally asymptotically stable, so that for every initial condition there is exactly one input-dependent equilibrium assuming tanh units, and multiple stable equilibria for ReLU units. An efficient implementation that enforces stability under the derived conditions, for both fully-connected and convolutional layers, is also presented. Experimental results show how NAIS-Net exhibits stability in practice, yielding a significant reduction in generalization gap compared to ResNets.
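To make the construction concrete, below is a minimal NumPy sketch of one unrolled NAIS-Net-style block for the fully-connected tanh case: the block input is re-injected at every unrolled stage (the non-autonomous skip connection), the state matrix is reparameterized to keep the unrolled dynamics stable, and the unroll stops once the state has approximately reached its input-dependent equilibrium. The parameter names (R, B, b, delta) and the stopping tolerance are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def stable_A(R, eps=1e-2):
    """Reparameterize the state matrix as A = -(R^T R + eps*I).

    This makes A negative definite so that, for a small enough step size,
    the unrolled block dynamics are stable. The exact parameterization here
    is an illustrative assumption for this sketch.
    """
    n = R.shape[1]
    return -(R.T @ R + eps * np.eye(n))

def nais_net_block(u, R, B, b, delta=0.1, max_unroll=50, tol=1e-4):
    """Unroll one non-autonomous block to a pattern-dependent depth.

    u       : block input, re-injected at every stage (the skip connection
              that makes the system non-autonomous).
    R, B, b : trainable parameters (hypothetical names for this sketch).
    The loop stops early once the state update is smaller than `tol`,
    i.e. the state has approximately reached its input-dependent equilibrium.
    """
    A = stable_A(R)
    x = np.zeros(A.shape[0])              # block state, initialized at zero
    for _ in range(max_unroll):
        dx = delta * np.tanh(A @ x + B @ u + b)
        x = x + dx                        # residual (Euler-like) update
        if np.linalg.norm(dx) < tol:      # adaptive, pattern-dependent depth
            break
    return x

# Toy usage: a 4-dimensional state driven by an 8-dimensional input.
rng = np.random.default_rng(0)
R = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 8))
b = rng.standard_normal(4)
u = rng.standard_normal(8)
print(nais_net_block(u, R, B, b).shape)   # -> (4,)
```

Because the same input u drives every unrolled stage, the equilibrium the state converges to depends on the input pattern, which is what allows the processing depth to adapt per example while the input-output map remains Lipschitz.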