Could we learn how to compromise better from machines? Two Brigham Young University (BYU) computer science professors, Jacob Crandall and Michael Goodrich, seem to think so, thanks to a new algorithm they developed with colleagues at MIT and other international universities.
The algorithm was developed to teach machines not only to compete at and win games but also to cooperate and compromise, according to a study published in Nature Communications.
“The end goal is that we understand the mathematics behind cooperation with people and what attributes artificial intelligence needs to develop social skills,” said Crandall. “AI needs to be able to respond to us and articulate what it’s doing. It has to be able to interact with other people.”
Researchers programmed machines with the algorithm, called S#, and then had the machines play a variety of two-player games to see how well they could cooperate in different relationships. The researchers tested machine-machine, human-machine, and human-human interactions. In most instances, machines programmed with S# outperformed humans at finding compromises that benefited both parties.
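The repeated two-player setting described above can be pictured with a classic example. The sketch below is NOT the S# algorithm; it is a toy repeated prisoner's dilemma with an assumed payoff matrix and a simple tit-for-tat strategy, meant only to illustrate why sustained cooperation can beat short-term exploitation in games like those the researchers used.

```python
# Toy repeated prisoner's dilemma. Payoffs and strategies here are
# illustrative assumptions, not the study's actual games or algorithm.

# (row player's payoff, column player's payoff) for each move pair.
PAYOFFS = {
    ("C", "C"): (3, 3),  # mutual cooperation
    ("C", "D"): (0, 5),  # cooperator exploited by defector
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # mutual defection
}

def tit_for_tat(opponent_history):
    """Cooperate first, then mirror the opponent's previous move."""
    return opponent_history[-1] if opponent_history else "C"

def always_defect(opponent_history):
    """Defect unconditionally."""
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    """Play a repeated game and return both players' total scores."""
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_b)
        move_b = strategy_b(history_a)
        pa, pb = PAYOFFS[(move_a, move_b)]
        score_a += pa
        score_b += pb
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

# Two cooperative agents sustain mutual cooperation for all 10 rounds...
print(play(tit_for_tat, tit_for_tat))    # (30, 30)
# ...while a defector gains early but both finish far worse off.
print(play(tit_for_tat, always_defect))  # (9, 14)
```

In this toy model, mutual cooperation yields both players 30 points, while defection caps both players well below that, which is the basic logic behind an algorithm that learns to establish and then maintain cooperation.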
“Two humans, if they were honest with each other and loyal, would have done as well as two machines,” Crandall said. “As it is, about half of the humans lied at some point. So essentially, this particular algorithm is learning that moral characteristics are good. It’s programmed to not lie, and it also learns to maintain cooperation once it emerges.”