Artificial Intelligence in Debating (Part 2)

By Mike Weltevrede

In the previous article, we discussed what artificial intelligence (AI) actually is. In short, AI is a method that uses input data to complete a certain task by imitating human thinking. In this article, I discuss applications of AI in the real world, such as deep fakes. Moreover, I will talk about black-box algorithms: why are they used and what are the developments around “explainable AI”?

I focus on two subfields of AI, namely image recognition and natural language processing (NLP). For each of these, I will briefly explain what it is and give some examples of how it is applied.

Image recognition

Let’s start with image recognition. The name says it all: the goal is to analyze images and extract useful information from them. These can be static images, such as photos, but also videos. Because more and more information enters our world in the form of images and videos, the applications are very broad. For example, image recognition is used in healthcare to determine whether a tumour can be found in a scan, self-driving cars use camera images to analyze the situation on the road, and your phone uses facial recognition to keep it secure.
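To make this a bit more concrete, here is a minimal sketch of what “recognizing an image” can look like in code, using a network that was pretrained on everyday photos. The file name photo.jpg and the choice of model are placeholders for illustration; this is not the software behind any of the systems mentioned above.

```python
# A minimal sketch of image recognition with a pretrained classifier.
# Assumes torch/torchvision are installed and "photo.jpg" is a local image;
# both the file name and the model choice are illustrative only.
import torch
from torchvision import models, transforms
from PIL import Image

# Load a network pretrained on ImageNet (1000 everyday object classes)
weights = models.ResNet18_Weights.DEFAULT
model = models.resnet18(weights=weights)
model.eval()

# Standard ImageNet preprocessing: resize, crop, normalize
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

image = Image.open("photo.jpg").convert("RGB")
batch = preprocess(image).unsqueeze(0)  # add a batch dimension

with torch.no_grad():
    probabilities = torch.softmax(model(batch)[0], dim=0)

# Report the most likely class label
top_prob, top_class = probabilities.max(dim=0)
labels = weights.meta["categories"]
print(f"Prediction: {labels[int(top_class)]} ({top_prob.item():.1%} confidence)")
```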

Personally, Wild Me really appeals to me. This is an effort supported by Microsoft whose goal is to recognize and track individual animals in order to inform conservation decisions. This is done, among other things, through photos uploaded by travellers. If I take a picture of a cheetah and upload it, I may get the message: “This cheetah is already known. This is Harry!”
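To give a feel for that matching step (“is this animal already known?”), here is a toy sketch: each photo is turned into a numerical “fingerprint” (an embedding) and a new photo is compared against the fingerprints of known animals. The embeddings, names, and threshold below are invented; Wild Me’s actual pipeline is of course far more sophisticated.

```python
# A toy sketch of individual-animal re-identification: compare an embedding of
# a new photo against embeddings of known animals. The example vectors and the
# threshold are purely hypothetical.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two embedding vectors (1.0 = identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Pretend these embeddings came from an image model applied to reference photos
known_cheetahs = {
    "Harry": np.array([0.9, 0.1, 0.3]),
    "Sally": np.array([0.2, 0.8, 0.5]),
}

def identify(new_embedding: np.ndarray, threshold: float = 0.9) -> str:
    """Return the best-matching known individual, or report a new animal."""
    best_name, best_score = max(
        ((name, cosine_similarity(new_embedding, emb))
         for name, emb in known_cheetahs.items()),
        key=lambda pair: pair[1],
    )
    if best_score >= threshold:
        return f"This cheetah is already known. This is {best_name}!"
    return "This looks like a new individual."

print(identify(np.array([0.88, 0.12, 0.28])))  # close to Harry's embedding
```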

Natural language processing

Natural language processing is also a very broad field. It deals with all forms of natural language, such as written and spoken text. For example, I once wrote a deep learning algorithm for a boot camp that had to assign an ESG (Environmental, Social, Governance) score to a company based on its annual reports. Another development that I personally find very interesting is the application of AI within mental healthcare. For example, a patient’s voice recordings can be analyzed to automatically track their mental health and to notify and advise their psychologist (Forbes, 2019).
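As an illustration of the idea of scoring documents, here is a very small sketch that predicts a score from text. It deliberately uses a simple word-frequency model rather than the deep learning approach mentioned above, and the example “reports” and scores are made up for illustration only.

```python
# A minimal sketch of scoring text documents, as a stand-in for assigning an
# ESG score to annual reports. Uses TF-IDF features and a linear model; the
# tiny example "reports" and their scores are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

# Toy training data: snippets of "annual reports" with hypothetical ESG scores
reports = [
    "We reduced emissions and improved board diversity this year.",
    "Record profits were achieved by expanding coal operations.",
    "New governance policies and employee wellbeing programmes were introduced.",
    "Cost cutting led to layoffs and several environmental fines.",
]
esg_scores = [8.5, 3.0, 7.8, 2.5]

# Turn text into word-frequency features, then fit a linear regression
model = make_pipeline(TfidfVectorizer(), Ridge())
model.fit(reports, esg_scores)

new_report = "We invested heavily in renewable energy and transparent reporting."
print(f"Predicted ESG score: {model.predict([new_report])[0]:.1f}")
```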

Deep fakes

Danger lurks for both of these fields, however: where there are technological advancements that can do really cool things, there are also malicious people who use these techniques to commit crimes. For example, Generative Adversarial Networks (GANs) can be used to construct fake images and videos, called deep fakes. This means, for example, that a person can record a video in which they say anything and everything and make it appear as if a prominent politician said it, like this video of Barack Obama (Buzzfeed, 2018). On the website www.thispersondoesnotexist.com/, every time you refresh the page you will see a photo of a person who does not exist, fully generated by such a GAN (Karras et al., 2019).
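For the curious, this is roughly what the adversarial setup of a GAN looks like in code: a generator produces fake samples and a discriminator learns to separate them from real ones, and the two are trained against each other. This toy version works on meaningless random vectors; real deep-fake models such as StyleGAN (Karras et al., 2019) are enormously larger and more elaborate.

```python
# A bare-bones sketch of the adversarial setup behind a GAN, on flat vectors.
# The dimensions and training data here are arbitrary stand-ins.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64  # sizes chosen arbitrarily for illustration

generator = nn.Sequential(
    nn.Linear(latent_dim, 128), nn.ReLU(),
    nn.Linear(128, data_dim), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(data_dim, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1), nn.Sigmoid(),
)

g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCELoss()

real_batch = torch.randn(32, data_dim)  # stand-in for real training data

for step in range(100):
    # 1) Train the discriminator to separate real from generated samples
    fake_batch = generator(torch.randn(32, latent_dim)).detach()
    d_loss = (loss_fn(discriminator(real_batch), torch.ones(32, 1)) +
              loss_fn(discriminator(fake_batch), torch.zeros(32, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # 2) Train the generator to fool the discriminator
    fake_batch = generator(torch.randn(32, latent_dim))
    g_loss = loss_fn(discriminator(fake_batch), torch.ones(32, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

print("Finished a toy adversarial training loop.")
```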

Black-box algorithms and explainable AI

Finally, let’s talk about black-box algorithms. In the previous article, I explained that these are systems for which we do not really know how they came to a certain decision, often because the mathematics behind them is too complex. The most prominent example is the field of deep learning (very complex neural networks). Neural networks (and therefore also deep learning algorithms) need huge amounts of data to find patterns. Although they are not transparent, these models are applied very often in the business world, simply because in this age of big data they provide very accurate results. For many companies it is much more important to predict very accurately, for example, which customers they will soon lose (and therefore which ones they should contact to prevent this) than to know exactly why. However, we are seeing strong progress in the field of “explainable AI” (XAI), where the goal is to open up these black boxes and clarify how such a model came to a decision. These developments are accelerating now that legislation gives people a right of access to such decisions (for example the GDPR, Article 15). For more information about XAI, see the Wikipedia page.
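To show that “opening up the black box” does not have to be magic, here is a sketch of one simple XAI technique, permutation importance: shuffle one input feature at a time and measure how much the model’s performance drops. The customer-churn features and data below are invented for illustration; in practice there are many more refined techniques.

```python
# A small sketch of one simple "explainable AI" technique: permutation
# importance on an otherwise opaque model. Dataset and feature names invented.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 500
# Hypothetical customer features: monthly spend, support tickets, tenure (years)
X = np.column_stack([
    rng.normal(50, 15, n),   # monthly_spend
    rng.poisson(2, n),       # support_tickets
    rng.uniform(0, 10, n),   # tenure_years
])
# In this toy setup, churn is driven by many support tickets and short tenure
y = ((X[:, 1] > 3) & (X[:, 2] < 4)).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and see how much accuracy drops
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in zip(["monthly_spend", "support_tickets", "tenure_years"],
                       result.importances_mean):
    print(f"{name}: importance {score:.3f}")
```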

What’s next?

I hope that this two-part guide gave you insight into what AI actually is, how it is applied, and what the dangers may be. If you have any questions, additions, or other remarks, please put them in a comment or drop me a message. Next week, Gigi Gil will discuss a motion about the application of artificial intelligence in criminal justice. This has since been published. Read it here!

Source cover image: https://medium.com/datadriveninvestor/what-is-artificial-intelligence-eeb863bf3fee

References

Karras, T., Laine, S., & Aila, T. (2019). A style-based generator architecture for generative adversarial networks. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 4401-4410).

About the author

Mike Weltevrede

Mike is an alumnus of the Tilburg Debating Society Cicero and has served as the secretary of the Nederlandse Debatbond (2019-2021). He was vice-chair of Cicero (2015-2016) and in that function oversaw the newly set-up international branch. He also organized the Dutch Debating Winter School, a debating training week that attracted participants of over 20 nationalities.