The biggest AI risk: dehumanization

In a nutshell, this talk discusses what I think is the greatest risk associated with AI: potentially dehumanizing effects through suboptimal deployment and reception, and what can be done about them.

Does the effect AI is currently having on sports make it a real-world laboratory for upcoming societal change induced by AI?

Yesterday, I thought through this hypothesis in a keynote talk at the QUT Centre for Data Science.

So what is happening in sports? The game has changed, as one might say: diverse AI methods, from data analysis on wearables’ data streams to computer vision for electronic line calling in tennis, are improving many aspects (e.g., athletes’ health and the fairness of decisions). But there are downsides, too: for example, the VAR (video assistant referee) in soccer has not been well received by fans. It unnaturally interrupts the game in a seemingly random way, negatively affecting everyone involved: the authority of the referee on the field, the experience of the audience, and the motivation and joy of the players. In a word: it has dehumanizing effects.

Additionally, seeing people compete against AI systems reveals another dehumanizing effect: the loss of hope on the side of the human competitor. In a similar vein, as Nate Silver tells the story, Garry Kasparov first lost hope (his confidence in understanding a seemingly absurd move by Deep Blue) before being thrown off track and ultimately losing the famous 1997 match.

As sports is a mirror of society – people from all walks of life are engaged – it is reasonable to assume that such effects could also play out on the much larger playing field of society as AI is increasingly deployed. How can we prevent these potentially dehumanizing and hope-corroding effects?

My three points on how to have technology with hope:

/1: Regard AI as a tool, not a personal counterpart.

/2: Improving AI demands a clearer view of what the human is, since this has traditionally been the source of human value (think of our liberal democratic constitutions or the Universal Declaration of Human Rights). Hence, technomorphizing humans diminishes human value and perceived self-efficacy just as much as anthropomorphizing AI contributes to fear.

/3: Hope is the most needed commodity today. We need to give people back a hopeful outlook on their future by showing them that they have agency in designing it. But we can only build what we can imagine.

Therefore, we need to find positive narratives about the future, e.g., by thinking about use cases of AI that would strengthen (instead of diminish) what makes us distinctly human. For example, one characteristic of being human is being limited. This is not just a weakness to be mitigated – it profoundly shapes our existence (e.g., we value things whose availability is limited). Given this, AI applications that give us the illusion of unlimitedness (e.g., being able to write even more emails) do not support the atomic human (see Neil Lawrence’s book) – while the spam filter, shielding us from being overwhelmed, appears to be perhaps the most ethical AI application to date.