AI as a reflection of society

Assignments are keeping me very busy, so this post is going to be a little more unstructured and conversational than my previous ones. I just wanted to start a discussion about one aspect of AI ethics that I’ve been finding incredibly fascinating as I’ve been reading.

When we talk about the future of AI, the overall view tends to be dark and dystopian. We picture a world in which anthropomorphized “robots” take over, making the human race redundant. However, as Floridi (2019) laments in the piece we looked at in class, the future might not be quite that dramatic. Judging from the examples of AI we currently have, it is hard to imagine that kind of technology becoming widespread. Looking at the current uses we have for AI, it is much more likely that we will see more machine learning technologies being introduced – systems that can create and catalogue could end up being fundamental to the LIS profession. These systems raise their own ethical issues, albeit slightly less dramatic ones. We might not be completely wiped out by robots, but we are already seeing supermarket workers replaced by machines, problems around privacy and data collection, and the issue of implicit bias, which is what I wanted to discuss today.

Because AI is built by us, our implicit biases can be programmed into it. Through the choice of training data, there is growing evidence of biased assumptions being built into AI systems (Cox, Pinfield and Rutter, 2019). Recently, the media has reported on several instances of AIs committing “moral violations” (Shank and DeSanti, 2018).

The first example is the website beauty.ai, which hosted the first international beauty contest judged by an AI. The results saw women with lighter skin tones placing higher in the rankings, despite women of all skin tones entering the competition.

A second example we’ve seen in the media is Tay, the Microsoft-built Twitter bot. Within 24 hours of launch, Microsoft had to shut Tay’s Twitter account down, as its tweets had gradually become racist, homophobic and anti-Semitic.

For both of these examples it is important to note that these AIs are not committing moral violations because they were built to be ‘evil’ or ‘bad’, but because they are learning from the data they are given. This is why I find the topic so interesting: it is largely a sociological issue. It suggests that we can use AI as a reflection of society – Tay was only able to access offensive and immoral ‘data’ because that kind of data is so rampant on social media. The AI is simply mimicking the behaviours it sees from humans, according to the algorithm programmed into it. As Floridi and Taddeo (2016) put it nicely, “it is not the hardware that causes ethical problems, it is what the hardware does with the software and the data that represents the source of our new difficulties”.
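To make the “learning bias from data” point a bit more concrete, here is a minimal, hypothetical sketch in Python (using NumPy and scikit-learn). It has nothing to do with Tay or beauty.ai – the data and numbers are invented purely for illustration – but it shows how a perfectly ordinary model, trained on skewed historical decisions, reproduces that skew.

```python
# Hypothetical sketch: a toy model that "learns" bias purely from its training data.
# The historical labels below are deliberately skewed against one group; nothing in
# the algorithm is built to be "evil" -- it simply reproduces the pattern it is shown.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1_000

merit = rng.normal(size=n)          # feature 0: an invented "merit" score
group = rng.integers(0, 2, size=n)  # feature 1: group membership (0 or 1)

# Skewed historical decisions: group 1 was approved far less often at the same merit level.
label = (merit + np.where(group == 1, -1.0, 0.0) + rng.normal(scale=0.3, size=n) > 0).astype(int)

model = LogisticRegression().fit(np.column_stack([merit, group]), label)

# Two identical candidates who differ only in group membership:
probs = model.predict_proba([[0.5, 0], [0.5, 1]])[:, 1]
print(probs)  # the second probability is noticeably lower -- bias from the data, not the code
```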

I guess my question is this: is the AI itself immoral if it is simply a reflection of society, or is it society that is presenting us with the moral dilemmas? Is there a place for Twitter bots like Tay, or do they simply add fuel to the fire they’re feeding off?


References:

Cox, A.M., Pinfield, S. & Rutter, S. (2019) ‘The intelligent library: Thought leaders’ views on the impact of artificial intelligence on academic libraries.’ Library Hi Tech, 37(3), pp. 418-435.

Floridi, L. & Taddeo, M. (2016) ‘What is data ethics?’ Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 374(2083), pp. 2-5.

Floridi, L. (2019) ‘What the near future of artificial intelligence could be.’ Philosophy and Technology, 32(1), pp. 1-15.

Shank, D.B. & DeSanti, A. (2018) ‘Attributions of morality and mind to artificial intelligence after real-world moral violations.’ Computers in Human Behavior, 86, pp. 401-411.

How public libraries are using makerspace technology to allow their users to create and innovate

Makerspace (definition):

a place in which people with shared interests, especially in computing or technology, can gather to work on projects while sharing ideas, equipment, and knowledge

Lexico Dictionaries

For my second blog post of the term, I have looked back on my notes from the previous sessions and realised that I leave every lecture with several questions and ideas written down to follow up, some of them big and complex, some small. The first few DITA lectures left me with a lot of questions around the idea of information literacy – how can it be taught, and what are we doing to teach it? I have also found that many of our class discussions around information literacy end up at one question specifically: what are we doing now to help young people and adults alike gain fluency?

For those who aren’t yet familiar with the concept: a makerspace (also known as a hackerspace or Fab Lab) is a service run by schools, universities and, perhaps most notably, libraries. These programs give patrons access to the space and resources to create and learn. Sometimes these resources are high-tech (kits for coding, robotics and circuitry), but many libraries also offer kits for activities such as crochet, Lego, and even learning to play the ukulele. For reference, here is just one example of the types of kits provided by Duxbury Free Library in Massachusetts.

Sadly, although makerspaces are extremely common in US libraries, there are very few in UK libraries – although there are a handful of examples, mostly programs involving 3D printing technology (see here). However, I predict that these will be cropping up a lot more in the future, as the US programs tend to be incredibly popular. Introducing more makerspace programs into public libraries would also be incredibly beneficial for the institutions themselves. As librarians, we must now look beyond the concept of a public library as a home for books – we need to move past “physical item storage”, because libraries are far more than that: they are study spaces, meeting places, and spaces for computing and printing. However, as Burke (2014) puts it, this “trend” of reshaping library spaces still has one fundamental turn to take – “one that tilts the work of libraries from information consumers and providers to information creators”. The work being achieved by these makerspace programs is helping that turn come into play.

If you are interested in knowing more about the topic, Williams and Willett (2019) offer an interesting perspective in their research paper, which examines the role of the librarian in a landscape where they are required to take on the kind of role[s] that makerspaces and similar programs demand. Through interviews with library staff, they seek to understand how librarians are adjusting as their role shifts towards that of an information professional, or even a ‘teacher’.

The Forge at Ela Public Library, https://www.demcointeriors.com/

Many of these makerspace programs give patrons access to technologies such as 3D printing, coding software, and AI technology. Access is the first barrier to gaining information literacy, and these programs eliminate that barrier straight away. Much of the research into the impact of these programs shows an incredibly positive effect on their users. Bers, Strawhacker and Vizner’s (2018) study found that, in the environments studied, children had shown “collaboration”, “communication”, “competence”, “innovation”, and “[gained] confidence using digital tools” after taking part in a makerspace program. Keune, Peppler and Wohlwend (2019) offer an incredibly interesting perspective by looking in depth at a young female engineering major who felt she had overcome the limitations of a STEM field that “acknowledges women’s expertise less than men’s” with the support of her makerspace program. Keune et al. note how the makerspace allowed this young woman to build her portfolio, as well as share her work and build relationships and connections with other learners.

Lastly, Finley (2019) looked at Frisco Public Library’s artificial intelligence maker kits – their “most complicated [by far]” – which combine Google’s AIY Voice Kit, an entry-level computer, and a small speaker into “a stripped-down version of an Amazon Echo”. With a small amount of Python coding expertise (which the library also offers makerspace classes to teach), these kits enable “mass participation” in artificial intelligence. The introduction of maker kits at this advanced level to public libraries would be revolutionary – and much needed in order to meet the demands of a world where employers are desperately seeking information- and technology-minded employees. Finley also notes that the feedback on these kits has been extremely positive, with patrons stating they are grateful to have access to these “power[ful] tools” and pleased with the results and the range of things they have been able to create.
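To give a rough sense of what a “stripped-down Echo” involves, below is a minimal, hypothetical sketch of that listen–transcribe–respond loop. It is not the Frisco kits’ actual code and does not use Google’s AIY libraries; instead it stands in for the same idea using two common Python packages (speech_recognition and pyttsx3), which would need to be installed alongside a working microphone.

```python
# Hypothetical sketch of a maker-kit style voice assistant: listen, transcribe, reply.
# Not the Frisco kits' actual code -- speech_recognition and pyttsx3 stand in for the
# Google AIY workflow. Requires: pip install SpeechRecognition pyttsx3 pyaudio
import speech_recognition as sr
import pyttsx3

recognizer = sr.Recognizer()
voice = pyttsx3.init()

def reply_to(text: str) -> str:
    # A real kit would hand this off to a cloud assistant; here we answer very simply.
    if "hello" in text.lower():
        return "Hello! I am a library maker-kit assistant."
    return f"You said: {text}"

with sr.Microphone() as source:
    recognizer.adjust_for_ambient_noise(source)
    print("Listening...")
    audio = recognizer.listen(source)

try:
    heard = recognizer.recognize_google(audio)  # Google's free web speech API
    answer = reply_to(heard)
    print(answer)
    voice.say(answer)
    voice.runAndWait()
except sr.UnknownValueError:
    print("Sorry, I didn't catch that.")
```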

The feedback from these makerspace programs seems to be overwhelmingly positive. However, as examined in Williams and Willett’s (2019) work, the introduction of makerspaces may lead librarians to question their role. Historically, we have been known as the ‘keepers of knowledge’; the introduction of makerspaces and similar programs flips that idea on its head – are we now expected to be teachers as well as ‘knowledge keepers’? Whatever the case, for the patrons of public libraries, I believe that makerspaces can only be a positive thing. They are a much-needed way of facilitating creativity and innovation, and a great first building block in learning key skills – whether that’s programming or playing the ukulele.


References:

Bers, M.U., Strawhacker, A. & Vizner, M. (2018) ‘The design of early childhood makerspaces to support positive technological development: Two case studies.’ Library Hi Tech, 36(1), pp. 75-96.

Burke, J. J. (2014) Makerspaces: A Practical Guide for Librarians. US: Rowman & Littlefield Publishers.

Keune, A., Peppler, K. A. & Wohlwend, K. E. (2019) ‘Recognition in makerspaces: Supporting opportunities for women to “make” a STEM career.’ Computers in Human Behavior, 99, pp. 368-380.

Williams, R. D. & Willett, R. (2019) ‘Makerspaces and boundary work: The role of librarians as educators in public library makerspaces.’ Journal of Librarianship and Information Science, 51(3), pp. 801-813.


Further reading:

Halbinger, M. A. (2018) The role of makerspaces in supporting consumer innovation and diffusion: An empirical analysis. Research Policy, 45(10), pp. 2028-2036

Halverson, E. R. & Sheridan, K. (2014) The maker movement in education. Harvard Educational Review, 84(4), pp. 495-504

Luce, D. L. (2018) The Makerspace Librarian’s Sourcebook. Journal of Web Librarianship, 12(4), p. 262

Yes, we do need a data revolution. But maybe we shouldn’t throw our computers away just yet.

Apple may have you believing your data is stored away in the cloud[s], but it’s a lot closer to home than you think – and it could be seriously damaging our planet…

eBay data server hall, usa.skanska.com

For my first DITA-related blog post, I wanted to follow up and expand on some of the ideas raised in Ben Tarnoff’s (2019) To decarbonize we must decomputerize. In his piece, Tarnoff brings to light both the ecological and the more privacy- and security-based moral issues that arise from the growing need for data centres. In this post, I have chosen to focus on the ecological side of his argument because, like most of us who have watched Greta Thunberg’s speech at the UN or read Extinction Rebellion’s This is not a drill, climate change is at the forefront of my consciousness.

Tarnoff describes the demanding processes that take place inside the ‘cloud’ and explains that much of the electricity needed to power these data centres is generated by burning fossil fuels. It is predicted that by 2020 there will be 40 trillion gigabytes of data, a third of it ‘big data’ such as surveillance footage and consumer images used to feed sophisticated algorithms (EMC, 2012). There is no doubt that this is a worrying statistic. However, I think it is important to note that the kind of computing Tarnoff refers to – machine learning (ML) – requires an exorbitant amount of data, as he explains in the article.

What I think Tarnoff leaves out of his piece is that there are examples of more ‘everyday’ data that are, in fact, more ecological than the alternative. For example, let’s compare the journey of an eBook with that of a physical book. An eBook is uploaded onto a website (let’s say Amazon), is downloaded, and takes up, on average, 2.6MB of storage (Lau, 2015). In comparison, for a physical book from Amazon you must consider the tree used to produce it – Pastore (2009) claims that one tree yields enough paper for 62.5 books – as well as the energy used to pack and deliver the book, and the energy needed to dispose of or recycle it once it is no longer wanted. This is just one example of how digital data can be more efficient; we should also consider paper bank statements vs. electronic statements, physical receipts vs. e-receipts, letters vs. emails, and so on. In defence of Tarnoff’s piece, he does not take aim at these kinds of technologies specifically; however, I believe this does, albeit very slightly, derail his argument for a complete “luddite revolution”.
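As a back-of-envelope illustration of that comparison, here is the arithmetic in Python using only the figures cited above. The 1,000-title library is an invented example, and the calculation deliberately ignores the energy used to store and serve the files from data centres – it is an illustration, not evidence.

```python
# Rough comparison using the figures cited above (Lau, 2015; Pastore, 2009).
EBOOK_SIZE_MB = 2.6    # average Kindle eBook size (Lau, 2015)
BOOKS_PER_TREE = 62.5  # paper yield of one tree (Pastore, 2009)

library_size = 1_000   # hypothetical personal library of 1,000 titles

storage_gb = library_size * EBOOK_SIZE_MB / 1024  # total storage needed for the eBooks
trees_used = library_size / BOOKS_PER_TREE        # trees pulped for the paper copies

print(f"{library_size} eBooks ≈ {storage_gb:.1f} GB of storage")
print(f"{library_size} paper books ≈ {trees_used:.0f} trees of paper")
# 1,000 eBooks ≈ 2.5 GB of storage; 1,000 paper books ≈ 16 trees
```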

The main problem I have with Tarnoff’s argument is that he puts the responsibility and onus on the consumer to ditch technology in favour of his ‘revolution’. The issue that really needs to be targeted is corporations’ irresponsible use of data for financial gain. I believe it is incredibly important to note that, above all, this is very much a capitalist issue. As long as it provides companies with a financial incentive, data will continue to be used in excess and these problems will only worsen. We can of course vote with our feet and say no to digitization by embracing “luddism”, as Tarnoff suggests; however, considering how ingrained technology is in our everyday lives, this philosophy seems a difficult one to embrace.

Instead, we should be putting pressure on big companies to be more responsible with data. For example, the article mentions Greenpeace’s efforts in convincing big companies to embrace renewable energy. Of course, there are things we, as individuals, can do as well. I do think consumers should think more carefully before buying unnecessary products – do you really need an Amazon Echo? However, completely ditching technology just isn’t a realistic option in 2019. The biggest and most significant thing any of us can do is what Greenpeace are doing – lobbying companies (and governments) to make the really big changes.

References:

EMC (2012) The Digital Universe in 2020: Big Data, Bigger Digital Shadows, and Biggest Growth in the Far East. [Online] Accessed at: https://www.emc.com/leadership/digital-universe/2012iview/big-data-2020.htm [03/10/2019]

Lau, C. (2015) What is the File Size of Kindle Books? [Online] Accessed at: http://www.whichtogo.com/file-size-of-kindle-books [04/10/2019]

Pastore, M. (2009) Ebooks save millions of trees: 10 ideas for sustainable publishing. [Online] Accessed at: http://epublishersweekly.blogspot.com/2009/09/ebooks-save-millions-of-trees-10-ideas.html [04/10/2019]

Tarnoff, B. (2019) To decarbonize we must decomputerize: Why we need a Luddite revolution. The Guardian. [Online] Accessed at: https://www.theguardian.com/technology/2019/sep/17/tech-climate-change-luddites-data [02/10/2019]