Why public servants should assess AI risks based on evidence, not science fiction.

Don’t assess AI risks based on science-fiction scenarios – especially as a public servant or manager! Following my recent TEDx talk, the journal PMM: Public Money & Management asked me to make the case for a more evidence-based AI risk assessment, one not shaped by science-fiction narratives. The piece offers specific recommendations for public policy:

- Invest in education on what AI is and isn’t. This is the number-one government intervention against the major risks of AI dependence and widespread anxiety, with their politically and economically destabilizing effects.

- Regulate AI business models with the goal of creating tech sovereignty, including through public procurement and open source. These are the most necessary interventions to sustain a fair market and equitable societies in which free choice persists.

- Fund innovation toward AI that is less energy- and data-hungry, more common-sensical, and fundamentally pro-human – directions found off the currently beaten path of scaling LLMs. This is the greatest service to greater transparency, fewer ethical dilemmas, and stronger privacy, and it should include incentives for the private sector to invest and build.

Don’t fear hypothetical risks; act on the plentiful evidence instead.