Assignments are keeping me very busy, so this post is going to be a little more unstructured and conversational than my previous ones. I just wanted to start a discussion about one aspect of AI ethics that I've been finding incredibly fascinating as I've been reading.
When we talk about the future of AI, the prevailing view tends to be dark and dystopian. We picture a world in which anthropomorphized "robots" take over, making the human race redundant. However, as Floridi (2019) laments in the piece we looked at in class, the future might not be quite that dramatic. Judging from the examples of AI we have currently, it is hard to imagine that kind of technology becoming widespread. Looking at the current uses we have for AI, it is much more likely that we will see more machine learning technologies being introduced; systems that can create and catalogue could end up being fundamental to the LIS profession. These systems raise their own ethical issues, albeit slightly less dramatic ones. We might not be completely wiped out by robots, but we are already seeing supermarket workers being replaced by machines, concerns around privacy and data collection, and the problem of implicit bias, which is what I wanted to discuss today.
Because AI is built by us, our implicit biases can be programmed into it. Through the choice of training data, there is growing evidence of biased assumptions being built into AI systems (Cox, Pinfield and Rutter, 2019). Recently, the media has reported on several instances of AIs committing "moral violations" (Shank and DeSanti, 2018).
The first example is the website Beauty.AI, which hosted the first international beauty contest judged by an AI. The results of this contest saw women with lighter skin tones placing higher in the rankings, despite women of all skin tones entering the competition.
A second example we've seen in the media is the Microsoft-built Twitter bot Tay. Within 24 hours of its launch, Microsoft had to shut Tay's account down, as its tweeting had gradually become racist, homophobic and anti-Semitic.
For both of these examples it is important to note that the AIs are not committing moral violations because they were built to be 'evil' or 'bad', but because they are learning from the data they are given. This is why I find the topic so interesting: it is largely a sociological issue. It suggests that we can use AI as a reflection of society. Tay was only able to access offensive and immoral 'data' because that kind of data is so rampant on social media; the AI is simply mimicking the behaviours it sees from humans, filtered through the algorithm programmed into it. As Floridi and Taddeo (2016) put it nicely, "it is not the hardware that causes ethical problems, it is what the hardware does with the software and the data that represents the source of our new difficulties".
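To make that point concrete, here is a deliberately toy sketch (not based on any real system, and the numbers are invented) of how bias enters through training data alone. The "model" below is just a counter of historical positive rates per group; nothing in the code singles out either group, yet because the hypothetical training labels are skewed, the learned rule is skewed too:

```python
# Toy illustration: a model trained on biased labels reproduces the bias,
# even though the code itself contains no prejudiced rule.
from collections import Counter

# Hypothetical, invented training data: (group, positive_label).
# "light" entries were labelled positive far more often, mirroring
# biased human judgements in the historical data.
training = (
    [("light", 1)] * 80 + [("light", 0)] * 20
    + [("dark", 1)] * 30 + [("dark", 0)] * 70
)

# "Training" here is simply counting positives per group.
positives = Counter()
totals = Counter()
for group, label in training:
    totals[group] += 1
    positives[group] += label

def predict(group):
    """Predict 1 if the group's historical positive rate exceeds 0.5."""
    return int(positives[group] / totals[group] > 0.5)

print(predict("light"))  # 1 -- the group favoured in the training labels
print(predict("dark"))   # 0 -- the skew in the data becomes the model's rule
```

The point is that the "moral violation" lives entirely in the `training` list: swap in balanced labels and the same code behaves fairly.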
I guess my question is this: is the AI itself immoral if it is simply a reflection of society, or is it society itself that is presenting us with the moral dilemmas? Is there a place for Twitter bots like Tay, or do they simply add fuel to the fire they're feeding off of?
References:
Cox, A.M., Pinfield, S. & Rutter, S. (2019) 'The intelligent library: Thought leaders' views on the impact of artificial intelligence on academic libraries.' Library Hi Tech, 37(3), pp. 418-435.
Floridi, L. & Taddeo, M. (2016) ‘What is data ethics?’ Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 374(2083), pp. 2-5.
Floridi, L. (2019) ‘What the near future of artificial intelligence could be’ Philosophy and Technology, 32(1), pp. 1-15.
Shank, D.B. & DeSanti, A. (2018) 'Attributions of morality and mind to artificial intelligence after real-world moral violations.' Computers in Human Behavior, 86, pp. 401-411.