Prevalent Challenges for Robotics Engineers

“Google admits that one of the hardest problems for their programming is how an automated car should behave at a four-way stop,” states Colin Allen, a professor of cognitive science and the philosophy of science at Indiana University. Allen co-wrote Moral Machines: Teaching Robots Right from Wrong.

“In this kind of scenario,” he continues, “it’s a matter of being attuned to local norms, rather than following the highway code, which no humans follow strictly.”

Allen is outlining a core issue that all robotics engineers must face: trying to teach robots how to act in a way that is philosophically and ethically attuned to our society, so that their development helps rather than hinders mankind. The entire process is, of course, riddled with problems.

“We acquire an intuitive sense of what’s ethically acceptable by watching how others behave and react to situations,” Allen explains. “In other words, we learn what is and isn’t acceptable, ethically speaking, from others, with the danger that we may learn bad behaviors when presented with the wrong role models. Either machines will have to have similar learning capacities or they will have to have very tightly constrained spheres of action, remaining bolted to the factory floor, so to speak.”

It is not only difficult to program a computer to reason out the most morally admirable choice in an unexpected situation; it is difficult for humans to reason it out for themselves before the hypothetical situation ever arises. Human values also vary across national borders, cultures, and eras. How does one account for that when a computer must be specifically programmed to do a particular thing in a particular situation?

“Imagine if the US founders had frozen their values to permit slavery, the restricted rights of women and so forth,” muses Gary Marcus, a cognitive scientist at NYU and the founder and CEO of Geometric Intelligence.

“Ultimately, we would probably like a machine with a very sound basis to be able to learn for itself, and maybe even exceed our abilities to reason morally.”

For the time being, however, exactly how to get to that point remains obscure:

“There are multiple approaches to trying to develop ethical machines, and many challenges,” Marcus continues. “We could try to pre-program everything in advance, but that’s not trivial – how, for example, do you program in a notion like ‘fairness’ or ‘harm’?”
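Marcus’s point becomes concrete as soon as you try to write such a rule down. The Python sketch below is a deliberately naive, hypothetical illustration of the “pre-program everything in advance” approach; the action names and harm scores are assumptions invented for the example, not anything drawn from a real system. Each action carries a fixed harm number and the machine picks the minimum, which is exactly where the trouble starts: the score for swerving onto a sidewalk cannot depend on whether the sidewalk is empty, and a notion like “fairness” never appears at all, because no one knows what number to attach to it.

```python
# Naive, hypothetical sketch of "pre-programming everything in advance":
# every candidate action gets a fixed, hand-assigned harm score, and the
# agent simply picks whichever score is lowest. The action names and the
# numbers are invented for illustration only.

HARM_SCORES = {
    "brake_hard": 0.2,            # assumes following traffic reacts in time
    "swerve_onto_sidewalk": 0.9,  # assumes the sidewalk is occupied (but is it?)
    "continue_straight": 0.7,     # assumes an obstacle really is ahead
}

def least_harmful_action(available_actions):
    """Return the pre-scored action with the lowest hard-coded harm value."""
    scored = [(HARM_SCORES[a], a) for a in available_actions if a in HARM_SCORES]
    if not scored:
        # The table has no opinion: exactly the gap Marcus is pointing at.
        raise ValueError("no pre-programmed rule covers these actions")
    return min(scored)[1]

if __name__ == "__main__":
    print(least_harmful_action(["brake_hard", "swerve_onto_sidewalk"]))
    # Prints 'brake_hard', but only because someone guessed the numbers above;
    # a static table cannot express context, let alone 'fairness'.
```

Any serious system would need those scores to be functions of the perceived situation rather than constants, which is precisely where the hard part begins.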

Anders Sandberg, a senior researcher at the Future of Humanity Institute at the Oxford Martin School, says that the very systems humans create to let robots learn ethics for themselves could be the systems that lead robots to engage in erratic behavior:

“A truly self-learning system could learn different values and theories of what actions are appropriate, and if it could reflect on itself it might become a real moral agent in the philosophical sense,” Sandberg explains. “The problem is that it might learn seemingly crazy or alien values even if it starts from commonly held human views.”
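Sandberg’s worry can be caricatured in a few lines of code. The sketch below (again Python, and again entirely hypothetical; the learning rule and every number in it are assumptions made for illustration, not a model of any real system) starts an agent at a widely shared value and has it repeatedly re-learn that value from noisy observations of its own behavior. Because each copying error feeds into the next round of learning, the errors accumulate rather than cancel, and different random histories wander to quite different end points even though all of them began from the same “commonly held human view.”

```python
import random

# Toy, hypothetical model of value drift in a self-learning agent. The agent
# starts from a widely shared value (1.0) and, at each step, re-learns it from
# its own current behavior as observed with a small error, so copying mistakes
# accumulate instead of averaging out. Every quantity here is invented.

def learned_value(initial_value=1.0, steps=1000, observation_noise=0.05, seed=0):
    rng = random.Random(seed)
    value = initial_value
    for _ in range(steps):
        # Imitate the value as currently expressed, plus a small random error.
        value = value + rng.gauss(0, observation_noise)
    return value

if __name__ == "__main__":
    for seed in range(3):
        print(f"run {seed}: value after learning = {learned_value(seed=seed):+.2f}")
    # The runs all start from the same value, yet after enough self-referential
    # updates they typically end well apart from one another and from 1.0.
```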

If, for example, a machine were to conclude that killing a person is morally acceptable, a real robot takeover might become possible. Hopefully we will never see such a terrible outcome.
