About

First Max Planck ETH Workshop on Learning Control
11-13 November 2015, Tübingen, Germany


We are pleased to announce the First Max Planck ETH Workshop on Learning Control within the Max Planck ETH Center for Learning Systems. The workshop will take place 11-13 November 2015 at the Max Planck Institute for Intelligent Systems (MPI-IS) in Tübingen. We cordially invite all researchers from ETH Zürich and MPI-IS interested in the area of Learning Control to participate and actively contribute to this workshop.



Aims and Scope

The workshop aims to bring together researchers from ETH Zürich and MPI-IS working in the area of Learning Control in order to create a community of interest within the Max Planck ETH Center for Learning Systems. The workshop will provide a platform to exchange ideas, present current research, discuss challenges for Learning Control, and initiate future research collaborations within the Center.

The topic and scope of this workshop is Learning Control. Although not uniquely defined, we understand Learning Control as the broad research area that lies at the intersection of Machine Learning and Automatic Control. This includes, but is not limited to, data-driven approaches for control design, adaptive control, dual control, machine learning for control, online learning, active learning for control, reinforcement learning, and applications of learning control.

The workshop is open to all researchers from MPI-IS and ETH. We seek to create an informal atmosphere to foster open discussions and exchange of ideas.


Submissions

All participants are asked to submit an abstract of at most one page detailing their research interests. In addition to presenting research results in the area of Learning Control, we encourage participants to also include open research questions, early ideas, or any topic that can lead to interesting and fruitful discussions in this exciting area.

Submissions will be briefly reviewed to ensure they fit the scope of the workshop. A few submissions will be selected for short plenary presentations; all other accepted submissions will be presented as interactive posters.


Format

In addition to interactive presentations and short talks, there will be invited keynote talks, a panel discussion, as well as social events with ample room for discussions and informal interactions.

We are excited to have the following invited speakers:

  • Pieter Abbeel (UC Berkeley)
  • Andreas Krause (ETH Zürich)
  • Stefan Schaal (MPI-IS)
  • Bernhard Schölkopf (MPI-IS)
  • Melanie Zeilinger (ETH Zürich)


Costs & logistics

Meals (lunch, dinner, coffee breaks) and accommodation for participants from ETH are covered by the Max Planck ETH Center for Learning Systems. However, participants will have to organize their own travel and should contact Sabrina Nepozitek (Sabrina.Nepozitek@inf.ethz.ch) regarding reimbursements.

Accommodation is taken care of by Andrea Odermatt. The following hotels have been booked for ETH participants:

You will shortly receive an email with more information, including the hotel you have been assigned to. If you have any questions in the meantime regarding accommodation or other administrative issues, please contact Andrea Odermatt (andrea.odermatt@tuebingen.mpg.de).


Workshop location

All workshop activities, including registration on the first day, take place at the Max Planck Haus on the MPI Tübingen campus.
Max Planck Haus
Spemannstr. 36
72076 Tübingen
Germany
+49 (0)7071 601-765
+49 (0)7071 601-790
max-planck-haus@tuebingen.mpg.de
The dinner on Thursday will be in town (information will be provided during the workshop).

Due to the construction site, drivers can no longer access the campus from Spemannstr. Drivers should instead use the new central parking lot of the campus, accessible via Paul-Ehrlich-Str. (see plan).

The walk from the parking lot to the Max Planck Haus is marked on the plan with a dashed green line.
More information about arriving at the campus can be found here.


Dates

  • August 11th - Registration opens (registration through website below)
  • September 18th (extended from September 11th) - Registration and abstract submission deadline
  • October 2nd (extended from September 25th) - Notification of acceptance/presentation format
  • November 11th (late afternoon) - Workshop starts
  • November 13th (early afternoon) - Workshop ends

Program

Schedule (talks) & poster sessions


Schedule (overview)

November 11th (Wednesday)

16:00-17:00  Registration (@Max Planck Haus, lobby)
17:00-...    Opening and invited talk: Melanie Zeilinger, "Towards Safe Learning in Control - Leveraging Online Data for Performance" (@Max Planck Haus, lecture hall)

Abstract: Demanding performance requirements combined with increasing complexity, uncertainty and human interaction in many emerging application problems, e.g. in robotic, transportation, or power systems, are pushing traditional control methods to their limits. A new opportunity to address these challenges is offered by sensor technologies with the ability to collect large amounts of data online. While machine learning provides powerful techniques to analyze and utilize such large-scale data, safety concerns when integrating them in a closed-loop, automated decision-making process represent a key limitation for leveraging their potential.

In this talk, we will discuss some of our recent work towards an automatic controller synthesis that utilizes online data to enhance system performance, while ensuring satisfaction of safety conditions at all times. We show how a predictive controller can be systematically tailored to the particular system at hand by improving predictions, quantifying uncertainties and/or tailoring the objective function online based on data, providing a high performance controller with reduced development times. Then, a safety wrapper is introduced that exploits reachability analysis to ensure satisfaction of constraints for any online control scheme. The key novelty is the learning capability of the wrapper itself, utilizing data to find the largest region of safe operation where a performance-maximizing controller can be employed. Finally, experimental results are shown for a quad-rotor safely learning to fly.
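To make the general idea of a "safety wrapper" around a learning-based controller concrete, here is a minimal sketch: it is a generic illustration only, not the approach presented in the talk, and the double-integrator system, the controllers, and the box-constraint safe-set check are all assumed placeholders (an actual implementation would use reachability analysis rather than a simple box check).

```python
# Generic sketch of a safety wrapper: let a learning-based controller act freely
# as long as the predicted next state stays in a certified safe set; otherwise
# fall back to a known safe backup controller. All quantities below are assumptions.
import numpy as np

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # simple double-integrator dynamics (assumed)
B = np.array([[0.0], [0.1]])
X_SAFE = 1.0                              # states with |position|, |velocity| <= X_SAFE are treated as safe

def learned_controller(x):
    # stand-in for any performance-oriented, data-driven controller
    return np.array([5.0 * np.random.randn()])

def backup_controller(x):
    # conservative stabilizing feedback, e.g. from a robust design
    K = np.array([[2.0, 3.0]])
    return -K @ x

def in_safe_set(x_next):
    # placeholder for a reachability-based safety check: here just a box constraint
    return np.all(np.abs(x_next) <= X_SAFE)

x = np.array([0.2, 0.0])
for t in range(50):
    u = learned_controller(x)
    if not in_safe_set(A @ x + B @ u):    # would the learned input leave the safe set?
        u = backup_controller(x)          # if so, override with the safe backup
    x = A @ x + B @ u
print("final state:", x)
```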

...-21:00    Dinner (@Max Planck Haus, conference room)

November 12th (Thursday)

09:00-10:00  Invited talk: Stefan Schaal (@Max Planck Haus, lecture hall)
10:00-11:30  Poster session / coffee - see poster sessions below (@Max Planck Haus, lobby)
11:30-12:30  Participant talks (@Max Planck Haus, lecture hall)
  • 11:30-11:50  Farbod Farshidian: Path Integral Stochastic Optimal Control for Reinforcement Learning
  • 11:50-12:10  Manuel Wuethrich: A New Perspective and Extension of the Gaussian Filter (co-authors: Sebastian Trimpe, Daniel Kappler, Stefan Schaal)
  • 12:10-12:30  Nicolas Gerig: Outcome prediction to assist therapists in selecting exercises in patient-tailored, robot-assisted neurorehabilitation (co-authors: Georg Rauter, Roland Sigrist, Robert Riener, Peter Wolf)
12:30-14:00  Lunch (@Max Planck Haus, conference room)
14:00-15:00  Invited talk: Andreas Krause, "From Proteins to Robots: Learning to Optimize with Confidence" (@Max Planck Haus, lecture hall)

Abstract: With the success of machine learning, we increasingly see learning algorithms make decisions in the real world. Often, however, this is in stark contrast to the classical train-test paradigm, since the learning algorithm affects the very data it must operate on. I will explain how statistical confidence bounds can guide data acquisition in a principled way to make effective decisions in a variety of complex settings. I will discuss several applications, ranging from autonomously guiding wetlab experiments in protein structure optimization, to safe automatic parameter tuning on a robotic platform.
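To illustrate the kind of confidence-bound-guided data acquisition described above, here is a minimal GP-UCB-style sketch: a generic illustration, not the speaker's implementation, with the objective function, kernel, and all parameter values chosen purely for demonstration.

```python
# Generic sketch: use a Gaussian process upper confidence bound to decide where to
# evaluate an expensive unknown function next. All names and values are assumptions.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def objective(x):
    # unknown, expensive-to-evaluate function we want to maximize (assumed for illustration)
    return float(np.sin(3 * x) + 0.5 * x)

candidates = np.linspace(0.0, 2.0, 200).reshape(-1, 1)   # candidate query points
X = np.array([[0.5], [1.5]])                              # initial evaluations
y = np.array([objective(x[0]) for x in X])
beta = 2.0                                                 # confidence-bound weight

for _ in range(10):
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.3), alpha=1e-4).fit(X, y)
    mean, std = gp.predict(candidates, return_std=True)
    ucb = mean + beta * std                                # upper confidence bound
    x_next = candidates[np.argmax(ucb)]                    # query where the bound is largest
    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next[0]))

print("best observed:", X[np.argmax(y)], y.max())
```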

15:00-16:20  Participant talks (@Max Planck Haus, lecture hall)
  • 15:00-15:20  Alexander Herzog: Optimization-based whole-body planning and control under multi-contact interaction (co-authors: Brahayam Ponton, Stefan Schaal, Ludovic Righetti)
  • 15:20-15:40  Anja Zai: Can we infer the microstructure of reinforcement learning from behavioral data? (co-authors: Alessandro Canopoli, Anna E. Stepien, Richard H.R. Hahnloser)
  • 15:40-16:00  Janis Edelmann: Learning Magnetic Control (co-authors: Andrew Petruska, Ayoung Hong, Samuel Charreyron)
  • 16:00-16:20  Edgar Klenske: Dual Control for Approximate Bayesian Reinforcement Learning (co-author: Philipp Hennig)
16:20-17:00  Coffee break (@Max Planck Haus, lobby)
17:00-18:00  Panel discussion (@Max Planck Haus, conference room)
19:30-22:00  Conference dinner & focus discussions (@Casino am Neckar, Wöhrdstrasse 25, 72070 Tübingen)

November 13th (Friday)

09:00-10:00  Invited talk: Bernhard Schölkopf (@Max Planck Haus, lecture hall)
10:00-11:30  Poster session / coffee - see poster sessions below (@Max Planck Haus, lobby)
11:30-12:30  Invited talk: Pieter Abbeel, "Making Robots Learn" (@Max Planck Haus, lecture hall)

Abstract: Programming robots remains notoriously difficult. Equipping robots with the ability to learn would bypass the need for what often ends up being time-consuming, task-specific programming. In this talk I will describe the ideas behind two promising types of robot learning: First I will discuss apprenticeship learning, in which robots learn from human demonstrations, and which has enabled autonomous helicopter aerobatics, knot tying, basic suturing, and cloth manipulation. Then I will discuss deep reinforcement learning, in which robots learn through their own trial and error, and which has enabled learning locomotion as well as a range of assembly and manipulation tasks.

Bio: Pieter Abbeel (Associate Professor, UC Berkeley EECS) works in machine learning and robotics, in particular his research is on making robots learn from people (apprenticeship learning) and how to make robots learn through their own trial and error (reinforcement learning). His robots have learned: advanced helicopter aerobatics, knot-tying, basic assembly, and organizing laundry. He has won various awards, including best paper awards at ICML and ICRA, the Sloan Fellowship, the Air Force Office of Scientific Research Young Investigator Program (AFOSR-YIP) award, the Office of Naval Research Young Investigator Program (ONR-YIP) award, the DARPA Young Faculty Award (DARPA-YFA), the National Science Foundation Faculty Early Career Development Program Award (NSF-CAREER), the MIT TR35, the IEEE Robotics and Automation Society (RAS) Early Career Award, and the Dick Volz Best U.S. Ph.D. Thesis in Robotics and Automation Award.

12:30-14:00  Lunch (@Max Planck Haus, conference room)
14:00-15:00  Participant talks (@Max Planck Haus, lecture hall)
  • 14:00-14:20  Felix Berkenkamp: Learning-based Robust Control: Guaranteeing Stability while Improving Performance (co-author: Angela P. Schoellig)
  • 14:20-14:40  Alonso Marco: Automatic LQR Tuning Based on Gaussian Process Optimization (co-authors: Philipp Hennig, Jeannette Bohg, Stefan Schaal, Sebastian Trimpe)
  • 14:40-15:00  Tobias Sutter: Decision making under uncertainty: Performance bounds for the scenario approach (co-authors: Peyman Mohajerin Esfahani, John Lygeros)
15:00-15:15  Closing remarks (@Max Planck Haus, lecture hall)
15:30-17:00  Lab tours (meeting point: @Max Planck Haus, lobby)
17:00-...    MPI Friday beer (@Spemannstr. 41, PS dept.)



Poster Sessions

November 12th (Thursday)

  • Daniel Kappler: Data-Driven Online Decision Making for Autonomous Manipulation (co-authors: Peter Pastor, Manuel Wuethrich, Jeannette Bohg, Stefan Schaal)
  • Chengzhi Hu: Real Time Control and Monitoring of Plant Cell Micro-Indention via Cellular Force Microscope (co-author: Jan Burri)
  • Jim Mainprice: Using Inverse Optimal Control To Learn Collaborative Human Reaching Motion Policies (co-authors: Rafi Hayne, Dmitry Berenson)
  • Andreas Doerr: Adaptive and Learning Concepts in Hydraulic Force Control (co-authors: Cédric de Crousaz, Ludovic Righetti, Sebastian Trimpe)
  • Fabian Just: Enhancing robotic arm rehabilitation through intelligent interaction between the patient, the therapist, and the rehabilitation robot (co-authors: Robert Riener, Georg Rauter)
  • Robin Oswald: Velocity Control of Trapped Ions for Transport Quantum Logic Gates
  • Burak Zeydan: Reinforcement learning for particle manipulation with the RodBot (co-authors: Roel Pieters, Bradley J. Nelson)
  • Dieter Büchler: Using Pneumatic Artificial Muscles to Facilitate Robot Learning Performance (co-authors: Yanlong Huang, Jan Peters)
  • Stefano Palagi: Towards bioinspired self-adaptive soft microrobots (co-author: Peer Fischer)
  • Michael Neunert: Integrating Optimal Control and Learning (co-authors: Farbod Farshidian, Jonas Buchli)
  • Tobias Sutter: Asymptotic Capacity of a Random Channel (co-authors: David Sutter, John Lygeros)
  • Nitish Kumar: Agile Digital Fabrication: Robotics for manufacturing at the large scale (co-authors: Timothy Sandy, Markus Giftthaler)

November 13th (Friday)

  • Simon Ebner: Adaptive Communication for Control (co-author: Sebastian Trimpe)
  • Johannes Pfleging: Learning tool use to infer prehistoric human behaviour (co-author: Jonas Buchli)
  • Miroslav Bogdanovic: Imitation learning for games with convolutional networks
  • Jemin Hwangbo: Foothold Selection Using Direct State-to-Action Mapping (co-author: Marco Hutter)
  • Thiago Boaventura: Learning transparency controllers for exoskeleton robots (co-author: Jonas Buchli)
  • Mazen Al Borno: Domain of Attraction Expansion for Physics-Based Characters (co-author: Javier Romero)
  • Okan Koc: Cautious Learning Control with Total Least Squares (co-authors: Guilherme Maeda, Jan Peters)
  • Franziska Meier: Drifting Gaussian Processes for Online Model Learning (co-author: Stefan Schaal)
  • Jakob Buhmann: Communication in a distributed and hierarchical control system (co-author: Matthew Cook)
  • Alexander Winkler: Tracking optimized and learned whole body motions on real robots (co-author: Jonas Buchli)
  • Amin Rezaeizadeh: Iterative Learning Control Application in the SwissFEL (co-author: Roy S. Smith)
  • Diego Pardo: Learning Rigid Body Dynamics of Constrained Multibody Systems (co-author: Jonas Buchli)

Information for presenters

Poster presentations:

Please bring an A0 poster (portrait or landscape).

Participant talks:

We have scheduled 20 min for each talk. Please prepare a 15 min presentation and allow 5 min for questions and discussions afterward.

Organizers

Jonas Buchli
ETH Zürich, Agile and Dexterous Robotics Lab

Ludovic Righetti
MPI-IS, Autonomous Motion Department

Sebastian Trimpe
MPI-IS, Autonomous Motion Department

Melanie Zeilinger
ETH Zürich, Institute for Dynamic Systems and Control

Administration & Support

For questions regarding travel, accommodation and administrative matters, please contact Andrea Odermatt:

Andrea Odermatt
MPI-IS, Personal Assistant / Conference Planning

Impressions

Impressions from the workshop

We thank all participants and speakers for their contributions and for making this a very interesting and exciting workshop.

Photo Credits: Jan Issac (group photo) and Claudia Däfler (small photos)