On Building a Moral A.I.

Much debate has erupted recently over fears that an “evil A.I.” will soon emerge and threaten humanity.  Although it’s hard to assess the likelihood of this risk, it’s undeniable that the vast majority of A.I. researchers are pushing for the most powerful intelligence they can build, while giving little attention to also producing moral intelligence.

Personally, this worries me.  In fact, it’s one of the reasons why Unanimous A.I. has been pioneering a different path. As explained in a prior post (“The Case for Collaborative A.I.”), there are many advantages to building artificial intelligence using “social swarms” as the underlying engine. Most importantly, people are an intrinsic ingredient in the Collaborative A.I. approach, ensuring that human creativity, sensibility, and morality are embedded in the emergent intellect.

Of course, there’s one fact we can’t ignore – people aren’t always moral.  Don’t get me wrong, individuals are highly moral in the vast majority of situations, but groups of people have a long history of behaving far worse than any of their members would alone. We call this a “mob mentality,” and it emerges when the members of a group relinquish their personal sense of right and wrong, instead adopting the sentiments of those around them.  Put simply, if everyone pockets their moral compass, assuming that others are calling the moral shots, a group can quickly lose its ethical way.1

Such “mob mentalities” need to be avoided when designing a Collaborative A.I.  One way to do this is to architect the system so that all participants feel consequential throughout the process.  When all of the participants are consciously engaged, nobody is tempted to put away their moral compass.  This was a major motivation when developing the social swarming paradigm, for it employs dynamic feedback loops in which each user’s actions have direct and immediate impacts on everyone else in the system. But does it work the way we expect?

During recent testing of social swarms in the UNU platform, we asked participants if they felt they had a significant personal impact on the collaborative decisions reached. The feedback was usually a firm yes.  This is compelling considering we’ve tested swarms of up to 32 users, all collaborating in synchrony. Even more compelling is that so many users report feeling like they were the ones “leading the group” during the interactions. We believe that building social swarms in which the majority of users view themselves as leaders rather than followers is a critical first step towards promoting morality in an emergent collective intelligence.

Next we considered each individual’s sense of personal responsibility.  According to recent research at MIT and Carnegie Mellon, when people lose themselves in large groups, they sometimes “lose touch” with their personal morals, becoming more likely to take actions or make decisions they might otherwise think are wrong.2 The researchers believe this happens when individuals stop reflecting on their own personal beliefs during the group experience, instead focusing on the other people around them. Of course, if everyone is looking left and right and nobody is reflecting on their own beliefs, a group can quickly devolve into a mob.

Thus one of our interface challenges has been to get users to collaborate flexibly, while still encouraging everyone to focus on their own input rather than the views of those around them.  To achieve this, we created a unique interface paradigm in which networked users jointly control a graphical puck as it synchronously glides across all their screens, every user contributing a pull on the puck with a graphical magnet.  To ensure that users don’t lose touch with their own sense of self during the collaboration, each individual sees only their own magnet as it pulls, having no direct knowledge of what the other magnets are doing.  Users can sense the crowd through the real-time feedback loops, but each participant still feels like a distinct individual.  We believe this swarming methodology makes it far less likely for a mob mentality to emerge.
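To make the mechanic concrete, here is a minimal sketch of how a shared puck could be driven by many individual magnet pulls. It is an illustration only: the names (Vec, MagnetPull, stepPuck, renderForUser) are hypothetical, the physics is deliberately simplified, and none of this is the UNU platform’s actual implementation.

```typescript
// Minimal sketch of a shared-puck update, assuming each user's magnet exerts
// a bounded pull toward the target that user is dragging it toward.
// These names are illustrative, not the UNU platform's actual API.

interface Vec { x: number; y: number; }

interface MagnetPull {
  userId: string;
  target: Vec;        // where this user is pulling the puck toward
  strength: number;   // 0..1, how hard this user is pulling
}

const MAX_FORCE = 1.0; // cap any single participant's influence per tick

// Advance the puck one tick: sum every user's pull (each clamped so no one
// participant can dominate), then move the puck by the averaged force.
function stepPuck(puck: Vec, pulls: MagnetPull[], dt: number): Vec {
  if (pulls.length === 0) return puck;
  let fx = 0;
  let fy = 0;
  for (const p of pulls) {
    const dx = p.target.x - puck.x;
    const dy = p.target.y - puck.y;
    const dist = Math.hypot(dx, dy) || 1;       // avoid divide-by-zero
    const force = Math.min(p.strength, MAX_FORCE);
    fx += (dx / dist) * force;                  // unit direction * force
    fy += (dy / dist) * force;
  }
  // Averaging keeps the puck's motion a reflection of net intent, not group size.
  return {
    x: puck.x + (fx / pulls.length) * dt,
    y: puck.y + (fy / pulls.length) * dt,
  };
}

// Each client renders only the shared puck and that user's own magnet,
// so participants sense the crowd through the puck's motion alone.
function renderForUser(userId: string, puck: Vec, pulls: MagnetPull[]): void {
  const mine = pulls.find((p) => p.userId === userId);
  console.log(
    `puck at (${puck.x.toFixed(2)}, ${puck.y.toFixed(2)})` +
      (mine ? ` | my magnet -> (${mine.target.x}, ${mine.target.y})` : "")
  );
}
```

The key point is in renderForUser: every pull influences the shared puck, but each client draws only its own magnet, so the crowd is felt through the feedback loop rather than watched directly.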

At the end of each group decision, users can view an animated REPLAY of all the magnets acting in unison. By viewing the full swarm in this way, users can easily appreciate how the group pushed and pulled on the boundaries of the decision-space and found collaborative consensus. During these replays users can also see how their own magnet moved with respect to the others in the swarm, allowing for further reflection on their personal contribution to the overall collaborative process.
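Continuing the sketch above (and reusing its hypothetical Vec, MagnetPull, and stepPuck), one simple way to support such a replay is to snapshot every tick’s full set of pulls during the live decision, even though only the puck and the user’s own magnet are drawn at the time, and then play all the magnets back afterward. Again, this is an assumed design, not a description of the actual UNU internals.

```typescript
// Minimal sketch of the replay idea, reusing the hypothetical Vec, MagnetPull,
// and stepPuck from the previous example.

interface ReplayFrame {
  puck: Vec;
  pulls: MagnetPull[];
}

const replayLog: ReplayFrame[] = [];

// Called once per tick on the server: advance the puck, then snapshot the
// full swarm state (all magnets) for later playback.
function recordTick(puck: Vec, pulls: MagnetPull[], dt: number): Vec {
  const next = stepPuck(puck, pulls, dt);
  replayLog.push({ puck: next, pulls: pulls.map((p) => ({ ...p })) });
  return next;
}

// After the decision, play every frame back with all magnets visible,
// letting each user compare their own pull against the rest of the swarm.
function playReplay(frames: ReplayFrame[]): void {
  for (const frame of frames) {
    console.log(
      `puck (${frame.puck.x.toFixed(2)}, ${frame.puck.y.toFixed(2)}) ` +
        `with ${frame.pulls.length} magnets pulling`
    );
  }
}
```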

But what about speed? The real-time nature of Collaborative A.I. forces groups to make decisions under significant time pressure.  Does this hinder the ability of individuals to make moral contributions? It turns out that time pressure has the opposite effect.  Recent research shows that being forced to decide rapidly actually promotes moral decisions. In a fascinating study by Harvard researchers, it was shown that when driven to make decisions quickly, people are more cooperative, empathetic, and altruistic.3 As the researchers put it, greedy self-interest takes mental effort and therefore extra time, while decisions that support the well-being of others come more naturally as fast gut reactions.  In other words, because human morality is innate and reflexive, the real-time input collected by Collaborative A.I. is less likely to lead to a “mob mentality” than slower-paced online environments that collect input from groups.

The importance of morality cannot be overlooked when designing A.I. technologies.  By ensuring that humans are an intrinsic part of computer-moderated intelligences, the emerging technology of Collaborative A.I. takes a big first step in this direction.  Furthermore, by encouraging all participants in a Collaborative A.I. to view themselves as consequential, and by pushing the swarm to reach decisions under time pressure, we believe the technology of Collaborative A.I. will help drive moral decisions.  Over the months and years to follow, we plan to further study this issue, looking for additional ways to encourage the emergence of Collaborative Intelligences that are not just creative and insightful, but genuinely empathetic and moral.

To participate as part of a Collaborative Intelligence, click here.

SOURCES

1: Anne Trafton, “When Good People Do Bad Things,” MIT News, June 12, 2014.

2: Cikara M, Jenkins A, Dufour N, Saxe R, “Reduced self-referential neural response during intergroup competition predicts competitor harm,” NeuroImage, 2014.

3: Rand, Greene, and Nowak, “Spontaneous Giving and Calculated Greed,” Nature, vol. 489, September 2012, pp. 427–430.

ABOUT the AUTHOR:  Dr. Louis Rosenberg received his PhD from Stanford University, specializing in robotics and human-computer interaction.  He was previously the founder and CEO of the public VR company Immersion Corporation, as well as the 3D digitizer company Microscribe.  Rosenberg is currently CEO of the A.I. company Unanimous A.I., which is focused on building human values, sensibilities, and morality into intelligent systems.