Control Synthesis and Embodied Intelligence

Autonomous systems use sensory data to decide how to act in their environments. Often the system itself, or the environment it operates in, makes control synthesis challenging, particularly under real-time constraints when the environment is changing. Embodied intelligence, where the physical design of a robot implicitly or explicitly encodes part or all of a control policy, can make control synthesis easier or even unnecessary. Embodied intelligence can be realized by designing the physical system and synthesizing its control simultaneously.

Feedback Synthesis for Controllable Underactuated Systems

Many robotic agents are underactuated, either by design (to reduce actuator weight or cost) or as a result of technical failures. Although underactuated, many of these agents remain controllable through the net effect of combined actions. Using sequential second-order single actions, we derive a feedback controller with formal descent guarantees for certain controllable systems.

Our synthesis method finds control solutions over a wider region of the state space than first-order single actions and outperforms them. In simulations, we track a moving target in real time using the dynamics of an underactuated, fish-like 3D rigid body in the presence of fluid flow.
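The paper's second-order controller is derived analytically; as a loose illustration only, the sketch below shows the general shape of descent-based feedback with a finite action set, where each candidate action is rolled out with a second-order (midpoint) integration step and the action that most decreases the tracking error is applied. The function names, action set, and dynamics here are hypothetical, not the paper's method.

```python
import numpy as np

def descent_action(x, target, f, actions, dt=0.01):
    """Pick, from a finite candidate set, the control whose short-horizon
    effect (integrated with a second-order midpoint step) most decreases
    the squared tracking error. Illustrative sketch only.

    x, target : current state and target (np.ndarray)
    f         : dynamics, f(x, u) -> xdot
    actions   : iterable of candidate controls u
    """
    def error(s):
        return 0.5 * np.sum((s - target) ** 2)

    best_u, best_e = None, np.inf
    for u in actions:
        x1 = x + dt * f(x, u)                 # Euler predictor
        x2 = x + dt * f(0.5 * (x + x1), u)    # midpoint corrector
        e = error(x2)
        if e < best_e:
            best_u, best_e = u, e
    return best_u
```

Applied in closed loop, this kind of rule descends the tracking error whenever some candidate action can decrease it, which is the role the derived second-order feedback controller plays for the underactuated fish dynamics.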

For the underwater fish, the control actions synthesized by the derived second-order feedback controller successfully track the target and are substantially more accurate than those of the first-order controller.

For more information, read: 

https://arxiv.org/abs/1709.01947 


Embodied Intelligence Using Finite State Machines

In this project, both micro- and macro-scale robots are being designed to physically encode part or all of a control policy. Such robots can exploit principles of embodied intelligence to reduce or eliminate the need for real-time computation, whether to decrease reaction time in a walking robot or to reduce the memory (and therefore size) requirements of a microscopic electronic device.

Control policies are being constructed for these systems to achieve tasks (e.g., demonstrating a periodic hopping gait or locomoting toward a desired point) by mapping sensory inputs directly to control outputs, so no real-time computation is necessary. These control policies can be simplified down to their essential components, so that very little memory is required to execute them. Experiments are being conducted to test whether these robots can achieve the desired tasks autonomously.
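A policy of this kind can be stored as a finite state machine whose execution is a pure table lookup: each (state, sensor reading) pair maps directly to a control output and a next state. The states, sensor labels, and controls below are hypothetical placeholders, not the project's actual encoding.

```python
# Transition table: (state, sensor_reading) -> (next_state, control).
# Executing the policy is a single dictionary lookup per tick, so no
# online computation is needed; only the table must be stored.
FSM = {
    ("flight", "airborne"):  ("flight", "retract_leg"),
    ("flight", "touchdown"): ("stance", "thrust"),
    ("stance", "loaded"):    ("stance", "thrust"),
    ("stance", "liftoff"):   ("flight", "retract_leg"),
}

def step(state, sensor):
    """One controller tick: look up the next state and control output."""
    return FSM[(state, sensor)]
```

Because the policy is just a table, its memory footprint is the number of entries, which is exactly what simplifying the policy down to its essential components reduces.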

Walking Robot Example: The figures on the left show the spring-loaded inverted pendulum (SLIP) configuration and the desired hopping dynamics. The middle figure illustrates the finite state machine. The figure on the right shows the control policy, where one of four possible controls (illustrated by different colors) is applied depending on the y and zm states.
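A policy like the one in the right-hand figure, which partitions the (y, zm) plane into colored regions, can be stored as a few threshold tests rather than a dense grid. The thresholds and control names below are hypothetical, chosen only to illustrate a four-control partition.

```python
def slip_control(y, zm, y_thresh=1.0, zm_thresh=0.0):
    """Return one of four controls based on which region of the
    (y, zm) plane the state falls in. Thresholds are illustrative."""
    if y < y_thresh:
        return "thrust_high" if zm < zm_thresh else "thrust_low"
    return "retract" if zm < zm_thresh else "hold"
```

Two comparisons suffice to select the control, which is why such region-based policies suit hardware with little or no computation available.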


Microscopic Electronic Device Example: The top row illustrates control policies where one of six possible controls (indicated by different colors) is applied depending on the location (x, y) in the state space. The middle row shows the initial and final positions of simulations with random initial conditions, all trying to reach the same desired point. As the control policy is simplified (from right to left), the trajectory simulations achieve similar results. The bottom row shows the reduction in complexity of the finite state machines that result from each control policy.
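One simple way to see how simplifying a gridded policy shrinks its finite state machine is to merge adjacent grid cells that share the same control, e.g., by run-length encoding each row. This is a generic illustration of policy compression, not the project's actual reduction procedure; the grid and labels are hypothetical.

```python
from itertools import groupby

def compress_rows(policy_grid):
    """Run-length encode each row of a control-policy grid: adjacent
    cells with the same control label merge into one (label, width)
    entry, so fewer entries (less memory) are needed to store the row.

    policy_grid: list of rows, each a list of control labels.
    """
    return [[(label, len(list(run))) for label, run in groupby(row)]
            for row in policy_grid]
```

The coarser the policy's regions, the fewer runs survive compression, mirroring the bottom row of the figure, where simpler policies yield finite state machines with fewer states.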

Related publications