Here is a list of other associated projects that are relevant to the research questions under the frame of the ONE MUNICH project but are funded from different sources.
1. Limitations of Deep Neural Networks
The success of Deep Learning in many practical application fields, ranging from image classification and protein folding prediction to natural language processing, has led to the (ongoing) development of a rich mathematical theory. Although great strides have been made to deepen our understanding of the field, many questions remain open. One of the most important and most fundamental issues concerns the capabilities and limitations of Deep Learning: simply put, which problems can we reasonably expect Deep Learning to solve, and where can we, with great certainty, predict failures of Deep Learning methods? An often neglected aspect of this discussion is the set of restrictions imposed by the hardware the systems run on. Deep Learning methods cannot exceed the fundamental barriers of their computation platforms. It is therefore crucial to link the capabilities of Deep Learning to (actual or theoretical) computation devices. The aim of this project is to characterize the possibilities and boundaries of Deep Learning imposed by computational limits.
2. AI.D – Artificial Intelligence for Neuro Deficits
Limb loss, spinal cord injury (SCI), stroke, neuromusculoskeletal disorders (NMD), multiple sclerosis (MS), and cerebral palsy (CP), whose combined prevalence is approximately 3.8%, are all conditions that affect individuals' ability to use their limbs without assistance. Prostheses, exoskeletons, and robotic assistive systems are promising means for chronic patients to regain their autonomy. In this context, there is a strong need to design systems that recognize human intent and mimic the intended behavior as closely as possible to that of the natural limb, in order to maximize the therapeutic effect and patient benefit.
The project AI.D aims to restore lost function of neurologically and neuromuscularly impaired patients through a new generation of human-model-informed and learning-enabled neuro machines, designed and controlled according to the principles of human neuromechanics and motor control, using a Brain/Body-Machine Interface (BMI). The mission of this translational research project is to develop novel methods and technologies that substantially improve the state of the art and provide support to such patients in the following three central phases:
- Receiving assistance and mobilization care
- Regaining mobility and independence
- Reinclusion into society
In order to achieve this ambitious goal the strategic approach will be based on three pillars:
- Brain-Machine Interface: Human model-informed causal signal processing
- Human-informed AI: Intelligent control and learning algorithms
- Clinical studies: From technology to clinical validation
Based on our pioneering work in robotics, neural control, neuromechanics modeling, and physics-informed machine learning, the innovative special feature of the AI.D project is the development of a new generation of AI-enhanced, human-model-informed neuro machines, designed and controlled according to the principles of the human body, to restore autonomy and mobility to the physically challenged – amputees, paralyzed patients, and stroke patients.
3. Interactive learning of explainable, situation-adapted decision models
This project investigates a novel approach through which the space of possible models explaining a certain decision can be explored interactively by a user until a model is found that satisfies the user's needs in terms of the trade-off between accuracy and model complexity. The project defines and explores a refinement relation that induces a lattice as an explanation space from which explanations can be selected.
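To illustrate the idea of a refinement relation inducing a lattice of explanations, here is a minimal, hypothetical sketch (not the project's actual formalism): explanations are modeled as subsets of a fixed feature set, "refinement" is superset inclusion, and meet/join are intersection/union, so the explanation space is the Boolean lattice of subsets. The feature names and the acceptance criterion are invented for illustration.

```python
# Hedged sketch: explanations as feature subsets, refinement as inclusion.
FEATURES = frozenset({"age", "income", "tenure"})  # hypothetical features

def refines(fine: frozenset, coarse: frozenset) -> bool:
    # A finer explanation mentions every feature of the coarser one.
    return coarse <= fine <= FEATURES

def meet(e1: frozenset, e2: frozenset) -> frozenset:
    # Greatest common coarsening: features shared by both explanations.
    return e1 & e2

def join(e1: frozenset, e2: frozenset) -> frozenset:
    # Least common refinement: all features of either explanation.
    return e1 | e2

def refinements(e: frozenset):
    # Immediate refinements: add exactly one missing feature.
    return [e | {f} for f in FEATURES - e]

def explore(accepts) -> frozenset:
    # Walk upward from the coarsest explanation until the user accepts one,
    # trading model complexity (subset size) against fidelity.
    current = frozenset()
    while not accepts(current):
        candidates = refinements(current)
        if not candidates:
            break
        current = candidates[0]  # a real UI would let the user choose here
    return current
```

In this toy version the user's judgment is a callback; for example, `explore(lambda e: len(e) >= 2)` stops as soon as an explanation uses two features.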
- Prof. Dr. Eyke Hüllermeier (LMU – IN)
- Prof. Dr. Kirsten Thommes (UPB)
4. Remaining Useful Lifetime for new and used Technical Systems under Non-Stationary Conditions
Condition-based maintenance and predictive maintenance are increasingly applied in industry due to their ability to ensure optimum utilization of the monitored system. These maintenance strategies allow for diagnosing and predicting the health states of the system under stationary operating conditions. However, technical systems mostly operate under non-stationary conditions, e.g. a wind turbine affected by different loads and speeds due to stochastic wind excitation. Non-stationary conditions alter the sensor data and thereby mask changes caused by either faults or degradation of the system. Therefore, condition monitoring methods need to be extended and adapted for systems operating under non-stationary conditions. The proposed project aims to develop methods for remaining useful lifetime prediction for systems operated under non-stationary conditions. To this end, classical data-driven and model-based approaches from engineering are combined with approaches from the field of artificial intelligence. By a hybrid combination of clustering and classification with knowledge-based approaches, operating conditions are categorized and failure modes are identified. Based on uncertainty quantification and analyzed relationships between the operating conditions, the sensor data, and the degradation evolution, suitable features for enabling the prediction of the remaining useful lifetime are developed and evaluated. Embedding non-stationary future operating conditions is realized by the use of different machine learning methods such as learning on data streams. These methods enable incremental learning and adaptation to changes such as variation of operating conditions.
Moreover, hybrid methods are developed to allow a prediction of the remaining useful lifetime for used systems that are retrofitted with suitable sensors but lack sensor data of their past operation. For validation of the methods for remaining useful lifetime prediction, three application examples have been selected from various thematic fields. To generate data for the first example, a suitable ball bearing test rig needs to be developed and constructed. The test rig should allow varying operating conditions regarding speed and bearing load. Run-to-failure data is acquired by different sensors such as acceleration sensors, temperature sensors, and strain gauges. For the second example, a laboratory experiment based on piezoelectric transducers is also implemented, whose failure is characterized by cracks and should be monitored. The third example is based on simulated data of a turbofan engine whose degradation under six conditions has been detected by various sensors.
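The pipeline described above can be sketched on synthetic data. The following is an illustrative toy version only (the regime names, baseline values, degradation rate, and failure threshold are all invented): operating conditions are categorized into regimes, the regime's influence is removed from the monitored feature so that only the degradation trend remains, and that trend is extrapolated to a failure level to estimate the remaining useful lifetime.

```python
# Hedged sketch of regime-aware remaining-useful-lifetime estimation.
BASELINES = {"high_load": 0.5, "low_load": 0.2}  # invented regime offsets

def regime(speed_rpm):
    # Stand-in for clustering/classifying the operating conditions.
    return "high_load" if speed_rpm > 1500 else "low_load"

def normalized_feature(vibration, r):
    # Subtract the regime baseline so only degradation remains.
    return vibration - BASELINES[r]

def predict_rul(times, healths, failure_level=1.0):
    # Least-squares fit of the degradation trend, extrapolated to failure.
    n = len(times)
    mt, mh = sum(times) / n, sum(healths) / n
    slope = (sum((t - mt) * (h - mh) for t, h in zip(times, healths))
             / sum((t - mt) ** 2 for t in times))
    intercept = mh - slope * mt
    t_fail = (failure_level - intercept) / slope
    return t_fail - times[-1]  # time remaining after the last observation

# Synthetic run-to-failure history under alternating load regimes.
times = list(range(0, 51, 5))
healths = []
for t in times:
    speed = 2000 if (t // 5) % 2 else 1000
    r = regime(speed)
    vibration = BASELINES[r] + 0.01 * t  # hidden degradation: 0.01 per step
    healths.append(normalized_feature(vibration, r))

rul = predict_rul(times, healths)  # roughly 50 time units until failure
```

Without the normalization step, the alternating regime offsets (0.5 vs. 0.2) would dominate the vibration signal and mask the slow degradation trend, which is exactly the masking effect the project description points to.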
- Prof. Dr. Eyke Hüllermeier (LMU – IN)
- Prof. Dr.-Ing. Walter Sextro (UPB)
5. 6G-life – Digital transformation and sovereignty of future communication networks
TUM and the Technical University of Dresden have joined forces to form the 6G-life research hub to drive cutting-edge research for future 6G communication networks with a focus on human-machine collaboration. The merger of the two universities of excellence combines their world-leading preliminary work in the field of the Tactile Internet in the Cluster of Excellence CeTI, 5G communication networks, quantum communication, Post-Shannon theory, artificial intelligence methods, and adaptive and flexible hardware and software platforms.
6G-life will significantly stimulate industry and the startup landscape in Germany through positive showcase projects and thus sustainably strengthen digital sovereignty in Germany. Test fields for two use cases will drive research and economic stimulation. The goal is to create at least 10 new startups through 6G-life in the first four years and to involve at least 30 startups. 6G-life will significantly contribute to the creation of a skilled workforce. In addition, 6G-life has set itself the task of accompanying the population in the digital transformation and thus making a contribution to society.
Supervising PIs (with ONE MUNICH):
6. PerforM – Personalities for Machinery in Personal Pervasive Smart Spaces
The “Smart Home” concept promises an intelligent, helpful environment in which technology makes life easier, simpler, or safer for its inhabitants. On a technical level, this is currently achieved by many networked devices interacting with each other, working on shared protocols and standards. From the perspective of user experience (UX), however, the configuration of and interaction with such a collection of devices has become so complex that it currently stands in the way of widespread adoption and use. Thus, instead of many singular but interacting intelligent devices, the project “PerforM – Personalities for Machinery in Personal Pervasive Smart Spaces” proposes an overarching interaction concept for the environment as a whole, addressing the mental model of a central, omnipresent “room intelligence”. This room intelligence will control existing UI-less smart home devices, but will also be able to deal with “legacy”, i.e. non-smart machinery or generally any physical object, by using a robotic manipulator (for example, a mobile robotic arm). Besides exploring an innovative way to address the current challenges of pervasive computing environments (PCEs), our research program also addresses fundamental questions and gaps in previous research about how different design cues are integrated into an overall perception of “system intelligence”, “entity”, and “personality”.
- Prof. Dr. Andreas Butz (LMU – IN)
- Prof. Dr. Sarah Diefenbach (LMU)
7. The Curious Robot: An Unsupervised Human-in-the-Loop Action-Learning Approach
Domestic robots can bring the next step in human-computer collaboration, envisioned to enable many shared tasks such as cooking and cleaning. However, understanding the many ways humans perform actions is an unsolved problem. We propose a curiosity-driven robot that learns user behavior from videos. We use human pose estimation in combination with dimensionality reduction to understand the pose space. Using unsupervised clustering, we will find new, unknown actions. As soon as a new action cluster emerges, the robot asks for a label for this cluster and thus extends its knowledge graph through this human-in-the-loop approach.
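The human-in-the-loop loop described above can be sketched in a few lines. This is a deliberately minimal illustration (the distance threshold, the 2-D "reduced pose" space, and the labels are invented, and real clustering over pose streams would be far richer): each incoming reduced pose either joins a known action cluster or, if it is far from all of them, triggers a query to the human for a new label.

```python
import math

THRESHOLD = 1.0  # invented: max distance to join an existing action cluster

class ActionLearner:
    """Incremental, human-in-the-loop clustering of reduced pose features."""

    def __init__(self, ask_human):
        self.ask_human = ask_human  # callback: pose -> label from the human
        self.clusters = []          # list of (representative pose, label)

    def observe(self, pose):
        # Reuse the label of a sufficiently close known cluster...
        for centroid, label in self.clusters:
            if math.dist(centroid, pose) < THRESHOLD:
                return label
        # ...otherwise a new action emerged: query the human for a label.
        label = self.ask_human(pose)
        self.clusters.append((pose, label))
        return label

# Toy "human" that labels by region of the reduced pose space.
asks = []
def human(pose):
    asks.append(pose)
    return "stirring" if pose[0] > 0 else "wiping"

learner = ActionLearner(human)
learner.observe((2.0, 0.0))  # far from everything -> human is asked once
learner.observe((2.1, 0.1))  # near the known cluster -> label is reused
```

After these two observations the human has been queried exactly once; only a pose far from the known cluster (say, `(-3.0, 0.0)`) would trigger a second query.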
- Prof. Dr. Albrecht Schmidt (LMU – IN)
- Prof. Dr. Sven Mayer (LMU)
- Chao Wang, Michael Gienger (Honda Research Institute Europe, Germany)