Based on my background in Philology, I have personal and professional views on why artificial intelligence (AI) needs more Humanities. When people asked me what I was studying back when I was living in Seville (Spain), I said ‘Philology’ and many people didn’t know what it was, let alone what you could do with that subject professionally. When I said it was about linguistics and literature, they still didn’t get what it was or what you could do with it. A few people knew that if you studied English Philology, you would become an English teacher; if it was German Philology, then a German teacher. Those were the jobs you were supposed to do.
A few years later, I was given the opportunity to work in a software company. I was proud of getting a job in a company that, in theory, had nothing to do with Humanities. I was in charge of helping to improve knowledge sharing within the company by administering one of the available knowledge management systems. Although I use the words ‘administer’ and ‘system’, my main task was not technical. It was more of a ‘human’ task. I was trying to get colleagues from all over the world to share their project experiences, the good and the not-so-good ones. This required social and pragmatic skills such as empathy, conversational skills, using humor, asking questions, or offering help.
Years later, we are experiencing huge technological developments. Everybody is talking about AI, and we all use it, directly or indirectly. I have noticed that when you talk about AI, many people think only about the technology, the data, the developers, the tools.
Cristina Aranda Gutiérrez, Ph.D. in Theoretical and Applied Linguistics, founder and CEO of Big Onion and co-founder of MujeresTech, says in a very inspiring chat that we now need more experts from the academic disciplines that study human society to cover important non-technological aspects of AI. She mentions aspects such as ethics and the training of language processing systems. We need more philosophy, sociology, psychology, and linguistics experts to improve the results we get from the tools we use on a daily basis.
Cristina also mentions how important it is that the data are not biased and gives the example of Joy Adowaa Buolamwini, a Ghanaian-American-Canadian computer scientist and MIT Media Lab researcher. Joy discovered that her face was unrecognizable to many facial recognition systems. She worked to find out why these systems failed and found that the facial recognition programs only worked when she wore a white mask; they did not recognize dark-skinned faces accurately. In 2016, to promote equitable and accountable AI, Joy founded the Algorithmic Justice League (AJL). The AJL combines art and research to point out the potential societal implications and harms of AI. It intends to raise public awareness of the impacts of AI and promote further research in the area (Wikipedia). The documentary Coded Bias (2020) investigates Joy’s experience and explores the “journey to push for the first-ever legislation in the U.S. to govern against bias in the algorithms that impact us all”.
How else can artificial intelligence affect minorities? I highly recommend Gloria Miller’s article AI Accountability — The Rite Aid Case – maxmetrics.
Getting back to the importance of the Humanities in this area: technical developers should work very closely with these experts, as the Humanities disciplines provide frameworks for understanding ethical implications and for addressing privacy, accountability, and bias in AI systems. They also offer valuable insights into human language and communication, including the understanding of context, ambiguity, irony, and metaphor, to name only a few.
Cristina Aranda also cites François Chollet, a French software engineer and AI researcher currently working at Google, who tweeted in 2017: “I used to think that ML [machine learning] people didn’t know enough math, that this was limiting the field. Now I think the field needs more humanities”.
And I fully agree.