Ethical AI Application

Posted by Xiaozhe Yao on May 27, 2020

Recently I read several posts about ethical artificial intelligence, most notably a list of scary AI usage compiled by David Dao. Since then, I have been wondering: what should count as an awful use of AI? Is there a set of principles that can help us identify bad uses of AI? So I decided to write down a list of ethical principles for the application of artificial intelligence. Please be aware that I am open to any kind of AI research, and I don't think any research topic should be regarded as unethical in itself. Having said that, I believe people should think more thoroughly before turning their research into applications.

Note that it is really hard for me to say what is ethical, so I will only try to list things that are UNethical in my view.

  • Targeting a specific group or individual.

For example, some algorithms have a built-in tendency for or against a specific group, such as people of color, people of a particular religion, or LGBTQ people. Usually, people are not aware of this until they actually deploy the algorithm, and then it shows its preferences in situations where group membership should be irrelevant. One simple pre-deployment check is to compare the model's behavior across groups, as in the sketch below.
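
Here is a minimal sketch, in Python, of one way such a comparison might look. The helper name, the example predictions, and the group labels are all hypothetical and not taken from any specific system.

```python
import numpy as np

def positive_rate_by_group(predictions, groups):
    """Return the fraction of positive predictions for each group label."""
    rates = {}
    for g in np.unique(groups):
        mask = groups == g
        rates[str(g)] = float(predictions[mask].mean())
    return rates

# Hypothetical usage: 0/1 predictions from some model, plus a group label per row.
predictions = np.array([1, 0, 1, 1, 0, 0, 1, 0])
groups = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

print(positive_rate_by_group(predictions, groups))
# e.g. {'A': 0.75, 'B': 0.25} -- a large gap like this hints at a hidden preference
```

A gap by itself does not prove the model is unfair, but it is a cheap signal that the model's preferences deserve a closer look before the system goes live.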

  • Misleading.

  • Systems without human intervention.

Some people believe that machine learning models are more neutral than human beings, and that developers should avoid intervening in their process. Unfortunately, I do not think this is true, since the input data might be biased. Even big companies and top-tier research institutes produce "bad" models. For example, when Google used machine learning to identify malware on the Chrome Web Store, it accidentally removed some legitimate extensions. Without human intervention, the developers of those Chrome extensions would not be able to restore their extensions on the Chrome Web Store.

Adopting machine learning as part of customer service may reduce costs for companies, but please make sure there is always a real human available to handle outliers, as in the sketch below.
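
As an illustration of what such a fallback might look like, here is a minimal sketch of a human-in-the-loop decision path. The stand-in model, the confidence threshold, and the review queue are all hypothetical, not the system from the Chrome Web Store example.

```python
CONFIDENCE_THRESHOLD = 0.9  # assumed cutoff; a real system would tune this

human_review_queue = []

def model_predict(features):
    """Stand-in for a real classifier: returns (label, confidence)."""
    score = sum(features) / len(features)           # toy scoring rule
    label = "malware" if score > 0.5 else "benign"
    confidence = abs(score - 0.5) * 2               # 0 = unsure, 1 = certain
    return label, confidence

def handle_case(features):
    """Act automatically only when the model is confident; otherwise defer to a person."""
    label, confidence = model_predict(features)
    if confidence >= CONFIDENCE_THRESHOLD:
        return {"decision": label, "decided_by": "model"}
    # Outlier or ambiguous case: a real human makes the final call.
    human_review_queue.append({"features": features, "model_guess": label})
    return {"decision": None, "decided_by": "pending_human_review"}

print(handle_case([0.99, 0.98]))  # high confidence -> handled automatically
print(handle_case([0.55, 0.48]))  # low confidence -> queued for a human reviewer
```

The exact threshold matters less than the principle: whenever the model is unsure, or someone disputes its decision, the case should end up in front of a person who can overrule it.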