The Golden Rule for Artificial Intelligence
If Microsoft’s “Tay-bot” artificial intelligence (AI) experiment can turn an innocent chatbot into a racist in less than 24 hours on Twitter, how long can we expect a soft, illogical humanoid to last?
In his article, “Artificial intelligence doesn’t have to be evil. We just have to teach it to be good,” Ryan Holmes, the founder and CEO of Hootsuite, raises questions about the philosophies and basic values that are (or, more accurately, are NOT) being learned by the bots that now permeate every aspect of our lives, from customer service phone queues to self-driving cars.
But since bots learn through their interactions with humans, isn't the real question whether we are falling short in teaching basic humanities to our children, and at what cost?
AI offers a kind of laboratory that provides interesting parallels for studying the outcomes of value-free (value-light?) learning. It takes 16 to 18 years to teach a child, but AI learns at the speed of a computer. So, in the blink of an eye, we can see what happens when a bot is exposed to the hardness of society without any background for understanding or relating to that condition.
Happily, there is a response to such a bleak scenario. It requires a vigilant offense of the kind found when parents open their children's minds by reading together, when teachers invite philosophy and ethics into daily classroom discussions, and when society stands up for the less fortunate among us.
AI is not, by nature, biased toward either good or evil, and neither are children. Our challenge, then, is not so much teaching the robots as it is teaching the humans.