How Enhanced AI Could Be Achieved Through Crowdsourcing Morality

Written by Peter Buttler

Nov 20, 2017


With the rapid advancement of artificial intelligence (AI), qualms about whether robots could act unethically, or might soon choose to harm humans, have also been raised. Some people are calling for bans on robotics research, while others are calling for more research into how AI might be controlled. But how can robots learn ethical and moral behavior if there is no “user manual” for being human?

This question of robotic ethics makes many people apprehensive. We are concerned about the lack of understanding and empathy in machines: how are these so-called ‘calculating machines’ going to know what is wrong and how to do the right thing, and how would we even judge and penalize beings of steel and silicon?

As we cede more and more control to artificial intelligence, it is increasingly foreseeable that these machines will need to make choices, hopefully grounded in human morality and integrity. But where would an AI get its ethics? From its own logic? Rules set by programmers? Company management? Or could they be crowdsourced, with decisions essentially chosen by everyone?

Tesla CEO Elon Musk has even said that artificial intelligence could be more hazardous than North Korea. If only AIs could acquire the same kinds of ethical principles that humans live by, perhaps these worries could be allayed. Some researchers suggest that crowdsourcing morality could be the answer.

Researchers from MIT launched the Moral Machine to test their hypothesis that a democratic process of instilling morality in machines could work. It is essentially a website where visitors answer questions about the difficult choices that autonomous vehicles might have to make on the road.


According to a report in Futurism, the questions are relatable and contextual, including examples of choosing between two groups of people of different ages, sizes, and genders. The response from visitors was so massive that the researchers were able to gather an enormous amount of data.

The information collected was then used in a paper recently published by Carnegie Mellon University’s Ariel Procaccia and one of the researchers behind the Moral Machine, Iyad Rahwan. In the paper, the researchers explain how they used the collected data to train an AI to imitate the crowd’s ethical judgments.
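The paper itself builds a more sophisticated voting-based model of the crowd’s preferences, but the core idea of letting aggregated responses drive a decision can be illustrated with a simple majority tally. As a loose sketch only (the dilemma names and responses below are invented for illustration, not taken from the study):

```python
from collections import Counter

def aggregate_votes(responses):
    """Tally crowd responses per dilemma and return the majority choice for each."""
    tallies = {}
    for dilemma_id, choice in responses:
        tallies.setdefault(dilemma_id, Counter())[choice] += 1
    # For each dilemma, pick the most common (i.e., most popular) choice.
    return {d: counts.most_common(1)[0][0] for d, counts in tallies.items()}

# Hypothetical crowd responses: (dilemma, chosen outcome)
responses = [
    ("swerve_vs_stay", "swerve"),
    ("swerve_vs_stay", "stay"),
    ("swerve_vs_stay", "swerve"),
    ("young_vs_old", "spare_young"),
    ("young_vs_old", "spare_young"),
]

print(aggregate_votes(responses))
# {'swerve_vs_stay': 'swerve', 'young_vs_old': 'spare_young'}
```

Note that this simple scheme also makes the critics’ point concrete: the output is whatever is most popular, not necessarily what is most ethical.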

It is worth noting that the outcomes of the research are by no means ready to be deployed. It is still a proof of concept, intended to find out whether moral positions reached through a democratic process could truly serve as a basis for AIs.

In an interview with The Outline, Procaccia explained: “The research was supposed to address one of the fundamental problems in developing AI. Democracy has its flaws, but I am a big believer in it. Even though people can make decisions we don’t agree with, overall democracy works.”

Critics also point out a few ways crowdsourcing morality and ethics could go wrong. For one thing, it involves people. Just because a moral choice tops the poll does not mean it is truly ethical; it may only be popular.

As Cornell Law professor James Grimmelmann tells The Outline, “It makes the AI ethical or unethical in the same way that large numbers of people are ethical or unethical.” We have all found ourselves on the wrong side of a vote, still sure we were the ones who were right.

There is also the question of who the crowd is. There is a real chance of sample bias, depending on whose opinions a sample reflects.

Finally, there is potential bias in the people who turn raw crowdsourced data into decision-making algorithms, with the clear possibility that different analysts and programmers could arrive at different outcomes from the same data.

Procaccia, for his part, tells The Outline:

“We are not saying that the system is ready for operation. But it is a proof of concept, showing that democracy can help address the grand challenge of ethical decision making in AI.” Maybe it can. Who knows?

About Author

Peter Buttler

Peter Buttler is an infosecurity journalist and tech reporter, and a member of the IDG Network. In 2011, he completed a master’s degree in cybersecurity and technology, and he has worked as a staff writer for leading security and tech firms. He currently contributes to a number of online publications, including The Next Web, CSOOnline, Infosecurity Magazine, SC Magazine, Tripwire, GlobalSign, and CSO Australia.

