Emotions show up in artificial intelligence for robotics in two places. One is in human-robot interaction: we expect social robots to understand, or at least play along with, our emotions. The other is embedding real emotion into robots to simplify control and learning. Since the 1980s, AI researchers have explored inserting emotions for self-regulation into basic robotics software architectures.

Emotions in biological systems are simple mechanisms for regulating control without having to think about it. If a behavior is not working or achieving its goal, an animal generates what we'd call frustration, or a negative valence. As the frustration builds, the animal generally tries the same thing harder; it might adjust the gains and parameters on the behavior and on other behaviors, like growling at nearby relatives who could be distracting it. If the frustration continues, the animal may abort the unsuccessful behavior and try something else: an alternative or redundant behavior. If the alternative doesn't work, then the animal either has to abandon the goal (e.g., it times out) or reason about what to do.

From a robotics perspective, emotions are an extremely simple and robust approach to control and adaptation, with very little coding required (see the sketch below). However, emotions have to be embedded in the lowest levels of the architecture, so they are not something that gets added on at the tail end of robot development, as in Enthiran. The Robots and Romance slideshow is a good start for learning about emotions in robots for control and learning.
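To make "very little coding required" concrete, here is a minimal Python sketch of that frustration loop: accumulate frustration while a behavior makes no progress, try harder past one threshold, switch behaviors past another, and finally abandon the goal. All of the names, thresholds, and rates (FrustrationRegulator, intensify_at, abort_at, give_up_at, the per-step accumulation of 0.05) are hypothetical illustrations, not values from any particular architecture.

```python
class Behavior:
    """A stand-in for a robot behavior with a tunable gain."""
    def __init__(self, name, gain=1.0):
        self.name = name
        self.gain = gain


class FrustrationRegulator:
    """Emotion-style self-regulation: frustration (negative valence)
    builds while the active behavior fails to make progress, and the
    frustration level, not deliberation, picks the response."""

    def __init__(self, intensify_at=0.3, abort_at=0.7, give_up_at=1.0):
        self.frustration = 0.0
        self.intensify_at = intensify_at  # start trying the same thing harder
        self.abort_at = abort_at          # abort; switch to an alternative behavior
        self.give_up_at = give_up_at      # abandon the goal (time out)

    def update(self, progress):
        """progress: how much this step moved the behavior toward its goal."""
        if progress > 0:
            # Success relieves frustration.
            self.frustration = max(0.0, self.frustration - 0.1 * progress)
        else:
            # No progress: frustration accumulates (assumed rate of 0.05/step).
            self.frustration = min(self.give_up_at, self.frustration + 0.05)

    def regulate(self, behavior):
        """Map the current frustration level to a control decision."""
        if self.frustration >= self.give_up_at:
            return "abandon_goal"
        if self.frustration >= self.abort_at:
            return "switch_to_alternative"
        if self.frustration >= self.intensify_at:
            behavior.gain *= 1.1          # "try the same thing harder"
            return "intensify"
        return "continue"


# Hypothetical usage: a push behavior that never makes progress, so the
# regulator escalates from continue -> intensify -> switch -> abandon.
push = Behavior("push_object")
regulator = FrustrationRegulator()
for step in range(25):
    regulator.update(progress=0.0)
    action = regulator.regulate(push)
    print(step, round(regulator.frustration, 2), action)
```

Note that the regulator never reasons about why the behavior is failing; it only maps accumulated valence to a response, which is exactly what makes the mechanism cheap enough to bury in the lowest levels of the architecture.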