Developing algorithms for machines to solve specific problems is the easy part of AI; teaching them manners is much harder. At the end of the day, it is all about machines doing their jobs in a way that is never biased, racist, mean, or harmful. The issue here lies more with the humans developing the machine than with the machine itself, since they are the ones feeding it the data that shapes its behavior. A mistake in the development process, whether intentional or not, can range from failing to tell a scooter apart from a bicycle to mislabeling Black people as gorillas, as happened with Google Photos.
There may be no concrete definition of fairness and morality, of course, especially in dilemmas like the self-driving car forced to choose between an old lady and a baby. But feeding a machine crime data in which 95% of offenses are attributed to people of a similar race and religion will surely make the machine biased and racist.
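To make this concrete, here is a minimal sketch of how skewed data produces a skewed model. The dataset, group names, and outcome counts below are entirely made up for illustration; the "model" simply memorizes per-group outcome frequencies, which is exactly how statistical patterns in historical data become predictions.

```python
from collections import Counter

# Hypothetical, deliberately skewed training data: each record pairs a
# demographic group with a historical outcome. The 95/5 vs 10/90 split
# is an invented assumption for illustration, not a real dataset.
training_data = (
    [("group_a", "deny")] * 95 + [("group_a", "approve")] * 5
    + [("group_b", "deny")] * 10 + [("group_b", "approve")] * 90
)

def fit_base_rates(data):
    """A naive 'model' that just counts outcomes per group."""
    counts = {}
    for group, outcome in data:
        counts.setdefault(group, Counter())[outcome] += 1
    return counts

def predict(model, group):
    """Predict the most frequent historical outcome for this group."""
    return model[group].most_common(1)[0][0]

model = fit_base_rates(training_data)
print(predict(model, "group_a"))  # prints "deny": the skew drives the prediction
print(predict(model, "group_b"))  # prints "approve"
```

Nothing in the code is malicious; the bias lives entirely in the data the model was given, which is the point.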
Isn’t it sexist to make both Siri and Alexa female assistants? Isn’t it racist to feed a machine data suggesting that Latinos are less likely to repay their debts than Americans?
“The problem with bias is that it comes from humans. We’re all horribly biased. We all have our biases, all our existing data sources are built on everyone before us and their historical biases.” Jen Gennai, Google’s head of ethical machine learning.
On the other hand, it is all about diversity. Machines should be fed the most diverse data possible and developed by diverse teams spanning different races, cultures, and religions. The more diversity, the better the machine performs its tasks.
How AI is developed and used will have a significant impact on society for many years to come. What could be worse than a machine deciding that the best way to end cancer in the world is to kill every human who has it?
Computer Engineer • Entrepreneur • Blogger