
Robot ethics is currently being debated in the robotics community. The discussions often focus on the robot as an Artificial General Intelligence that is expected to be a full moral agent: trustworthy, aware of ethically charged situations (like killing people), and explicitly reasoning about "good" decisions. Well, in theory we would have programmed or trained the AGIs to be moral; that worked with Vision but not so much with Ultron in Avengers: Age of Ultron. But most robots aren't AGIs, so this kind of Vision-like reasoning about how to help people isn't really possible. Instead, the dumber, more realistic robots may be used in situations that require ethical sensitivity. Warfighting is one example; others are medical decision aids and systems with autonomy, such as an autopilot. In these functional morality situations, the ethical requirement is often just to work correctly. That requirement is also a key component of operational morality, the expectation that the designers and users of a robot are deploying it responsibly. In short, robot ethics starts with designers being responsible for the design and correctness of their implementation (operational morality), especially in safety-critical or ethically sensitive situations (functional morality), and may someday extend to situations where an AGI robot reasons and makes its own decisions about what is ethical (full moral agency).

