Hypothetical and real risks of AI

This week I spoke about hypothetical and real risks of AI at Westminster College, Cambridge.
I would divide AI risks into two categories. First, manageable risks: among these I count many of the current issues, e.g., bias in AI systems or the problem of fake content, for which emerging technical and/or organizational solutions exist (I have published and spoken about examples elsewhere).
Second, there are what one may call “higher-order risks”: emerging risks to society that are not inherent in any single application or shortcoming. Existential risk would be one, though a large part of the AI community (including me) is convinced it is purely hypothetical. More real, to me, is the following:
Humans, intimidated by perceived machine competence and unable to resist the convenience offered by AI systems, may stop exercising agency over their own lives, society, and future. They may stop voluntarily embracing the “pain” necessary for us humans to grow as persons (consider all learning as some sort of pain), because the convenient, easy solution offered by automation is so near. Example: you could do an exercise yourself, spend some hours of effort, and learn; or cheat by handing in the solution an AI system created for you (saving the time and effort, but not learning). Our track record as a species at exercising this kind of self-control is not good when convenience is easily available. We need to think and debate more about how to deal with this in the presence of powerful AI systems, which are in some sense the ultimate “convenience tools”.
I discussed some ways in which technology itself could help here, and also how faith-based traditions can be instrumental in strengthening human worth and value, leading to the necessary character growth.