What If Robots Could Learn Skills from Scratch?

Any machine can learn to move with enough engineering, according to Karen Liu, but imagine what could happen if machines were able to evolve and learn new motions over time with very little instruction, just as a human child does.

Liu, an associate professor in the School of Interactive Computing and a member of the Machine Learning Center at Georgia Tech, conducts research on simulating and controlling human and animal movements, using virtual “agents” in the digital world and actual robots in the lab.

Researchers have been creating moving agents in digital landscapes for many years, but Liu and her team are teaching agents to move using artificial intelligence.

Until now, robots and agents have typically been taught using reinforcement learning (RL), which requires extensive coding and algorithmic development for each movement, no matter how big or small.

In contrast to the common approach of mimicking motion trajectories, Liu’s lab wanted to create a virtual agent that learns how to walk on its own.

Recent advances in deep RL, which combines RL with deep learning, have demonstrated that locomotion can be learned with a “minimalist” approach, but the resulting motion appears unnatural.
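To make the idea concrete, here is a minimal sketch of what such a “minimalist” objective can look like, assuming a hypothetical simulator that reports the agent’s forward position and whether it is still upright (the function name and parameters are illustrative, not the team’s exact formulation):

    def minimalist_reward(prev_x, curr_x, upright, dt=0.01):
        # Illustrative "minimalist" locomotion reward: the agent is only
        # told to move forward and stay upright -- nothing about style.
        forward_velocity = (curr_x - prev_x) / dt  # progress along the x-axis
        alive_bonus = 1.0 if upright else 0.0      # discourage falling over
        return forward_velocity + alive_bonus

Because the objective says nothing about style, the learned gait is whatever physics permits, which is why the resulting motion can look so unnatural.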

Liu’s team proposed training the agent with curriculum learning and adjustable physical assistance, producing more natural animal locomotion while retaining the minimalist learning approach.

Curriculum learning is, as it sounds, much like a person’s education. The agent is given a simpler task at the beginning of the learning process, and once it masters that skill, it moves on to the next lesson.

One of the challenges researchers face is making sure the agent’s motion looks natural.

“Without motion trajectory to mimic, most locomotion produced by deep RL methods is too energetic or asymmetrical,” said Liu.
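As the paper’s title suggests, two natural remedies are to penalize wasted energy and to reward left-right symmetry. Here is a sketch of how such terms might be folded into the reward; the weights, names, and the exact form of the symmetry term are assumptions for illustration, not the paper’s formulation:

    import numpy as np

    def shaped_reward(forward_velocity, joint_torques, pose, mirrored_pose,
                      w_energy=0.005, w_symmetry=0.5):
        # Penalize actuation effort so the gait stays low-energy.
        energy_penalty = w_energy * np.sum(np.square(joint_torques))
        # Penalize the gap between the pose and its left-right mirror
        # image, nudging the agent toward a symmetric gait.
        symmetry_penalty = w_symmetry * np.sum(np.square(pose - mirrored_pose))
        return forward_velocity - energy_penalty - symmetry_penalty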

To help combat these issues, Liu and her team introduced a virtual spring that provides physical assistance to the agent during the training process.

For instance, if the agent needs to walk forward, the spring helps propel it forward; if it is about to fall, the spring pushes it back up. Because the spring is a simulated force, its stiffness can easily be adjusted, making the lesson more or less difficult. As the agent learns the skill, the spring is gradually weakened before eventually being removed completely.
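A minimal sketch of how such an adjustable aid and its curriculum might be implemented, assuming hypothetical names for the character’s state and lesson counter (an illustration of the idea, not the team’s code):

    def assistive_spring_force(com_pos, com_vel, target_pos,
                               stiffness, damping=10.0):
        # Spring-damper that nudges the character's center of mass
        # toward a target (e.g., upright and moving forward).
        return stiffness * (target_pos - com_pos) - damping * com_vel

    def stiffness_schedule(lesson, initial_stiffness=400.0, decay=0.5):
        # Curriculum: each mastered lesson halves the spring stiffness,
        # until the assistance is removed entirely.
        k = initial_stiffness * decay ** lesson
        return k if k > 1e-2 else 0.0

Early lessons use a stiff spring, so the agent can hardly fall; later lessons weaken it until the agent balances and propels itself with no help at all.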

For Liu, creating generative models for natural animal motion has always been a fascinating research area. “We have been trying to mimic the kinematics and the dynamic characteristics of real animal movements. Thanks to recent developments in deep reinforcement learning, for the first time we are able to also mimic ‘how’ real animals acquire motion skills.”

Karen Liu and co-authors Wenhao Yu and Greg Turk recently presented their paper, “Learning Symmetric and Low Energy Locomotion,” at SIGGRAPH 2018 in Vancouver, BC, Canada.

 


For More Information Contact

Allie McFadden

Communications Officer

allie.mcfadden@cc.gatech.edu