We believe that unregulated generative AI is a clear and present danger to democratic
sustainability. The imminent problem is not super intelligent robots taking over the world, but the
threats to human individual and political freedoms posed by the deployment of simultaneously
exciting and yet potentially dangerous new technologies. We need to address the full range of AI
challenges, and in so doing, the public’s voice must be at the table, not only those of the already
powerful. (Statement of the Digital Humanism Initiative 2023)
The last few months have been a bit of a whirlwind in terms of travel, meeting interesting people, exploring ideas and discovering insights.
In my previous post I talked about our Brussels Brave Conversations and some of the thoughts that came to me as I wandered around Brussels and began to explore the world that is the European Parliament. As a complement to this I went to the Digital Humanism Summit 2023 in Vienna at the invitation of George Metakides and Hannes Werthner, where many of the Computer Science and Artificial Intelligence luminaries from Europe and the United States came together to talk about Generative Artificial Intelligence and the sustainability of democratic societies.
The explosion of Large Language Models onto the world in 2022–2023 has suddenly propelled the conversations around these technologies into the public domain, and with this has come a sort of mild panic about existential risk, the decimation of communities and the irrelevance of human beings (Harari 2023).
The question is this: we now have within our grasp the most powerful technologies humankind has ever developed, so how can we ensure that they are used for good (the benefit of humankind and the planet) rather than evil, and how can people feel secure about the development of technologies that are far beyond most people's ability to understand?
It is paramount that AI developers and regulators ask themselves the right questions about the potential impact of AI. Joanna Bryson (at ANMC23) suggests a greater focus on ensuring people feel secure in a world with AI, rather than trying to convince them to trust it.
As these conversations around AI unfold I am often bemused that it has taken so long for the proverbial penny to drop. These technologies have been around for a very long time, but as always it is the human condition not to really focus on things until they are right in front of us – we often seem to have little imagination about things that aren't already around us, which is also why Science Fiction is such an important genre for people to engage with. It is also why we get distracted by the next bright shiny thing that emerges and then lose some of our common sense and perspective. As the Gartner® Hype Cycle™ so brilliantly illustrates, we get excited, then we get disillusioned, then things start to calm down and we begin to look at them from a more realistic perspective. See the Gartner AI Hype Cycle 2023.
Creative Commons CC BY-NC-SA: This license allows reusers to distribute, remix, adapt, and build upon the material in any medium or format for noncommercial purposes only, and only so long as attribution is given to the creator. If you remix, adapt, or build upon the material, you must license the modified material under identical terms.
CC BY-NC-SA includes the following elements:
BY – Credit must be given to the creator
NC – Only noncommercial uses of the work are permitted
SA – Adaptations must be shared under the same terms