Analogue leadership in a digital world

Museum of the Future

On my way back to London in May I decided to stop over in Dubai to catch up with Intersticia Fellow Osheen Arora.  Having some time to spare, I popped in to visit the Museum of the Future.

Dubbed the most beautiful building in the world by its creators, the building seeks

to confidently straddle the past and the future, applying advanced technology to traditional artforms. The building ‘speaks Arabic’: its facade is a canvas for the poetry of His Highness Sheikh Mohammed Bin Rashid Al Maktoum rendered in the calligraphy of Mattar bin Lahej.

Its form is ‘futuristic’ and stands in stark contrast to the Dubai skyline with its geometric towers and multi-lane highways.  It aims

to provide light in dark times: in an age of anxiety and cynicism about the future, we are showing that things can and must progress.  Our imagined futures are fundamentally hopeful, but honest about the dangers of the present.

These are noble aims, and as I wandered around the pristine and beautifully presented immersive experience I was reminded of conversations we had had with the young people who came to our Future Worlds Challenge Brave Conversations in Sharjah in September 2022.

One of the exercises we did with these young people was to ask them about Science Fiction and the role it had played in their lives. The response was that, apart from the relatively modern content available to most young people in the digital age, the main stories they had heard were those of fantasy such as The Arabian Nights and Sinbad.  According to this observer the genre of Science Fiction in Arabic cultures is relatively new and is linked to the post-colonial era and particularly the English language.  Arabic cultures have a long heritage of curiosity and knowledge building, which can be experienced in a visit to the House of Wisdom in Sharjah, but it seems that true Science Fiction (based on science and technology and depicting scenarios that could one day be true) is relatively recent.  As with Chinese Science Fiction, translations of Western science sparked an interest in the ideas explored, but this came in waves as local cultures adapted to their own changes and moved towards becoming increasingly industrialised.

The House of Wisdom, Sharjah

As a child, Science Fiction stories and both British and American television shows and movies were all around me, and I grew up on a cultural diet of Lost in Space, My Favourite Martian, The Outer Limits, The Twilight Zone, Star Trek, Dr Who, Thunderbirds, Superman, and anything else I was able to watch.  I would be glued to the television as I came home from school and had afternoon tea, and at any other time I was allowed.  I read Asimov, H. G. Wells, John Wyndham, William Gibson, Douglas Adams, Doris Lessing, Margaret Atwood, Kurt Vonnegut, Arthur C. Clarke, Aldous Huxley, Jules Verne, Philip K. Dick and later Cixin Liu from cover to cover.

What I have found as I’ve run our Brave Conversations workshops over the years is that this upbringing is not common, but I believe that the thinking processes around this type of storytelling and imagination are now crucial to how we face the rapid socio-technical developments which are all around us.

In this interview historian Yuval Noah Harari describes his thoughts on the importance of Science Fiction:

It shapes the understanding of the public on things … which are likely to change our lives and society more than anything else in the coming decades.

This is the essence of what I sensed at the Museum of the Future: a desire and ambition to educate and to demonstrate new ways of thinking about the future from a non-Western point of view.  As I wandered around there was nothing particularly outstanding or mind-blowing in the exhibitions themselves, but what was new and different for me was a fresh approach to technological development coming from the perspective of a young developing nation.

Dubai’s history is fascinating.  Not that long ago it was a fishing and pearling village on Dubai Creek which developed into a major shipping port. In 1966 everything changed with the discovery of oil and over the past fifty years Dubai has undergone nothing less than a radical transformation of which the Emirati are, and should be, rightly proud.

In the Museum of the Future a vision of the world in 2071 is presented, largely based around Hope.  From the extraplanetary sky-lift taking visitors to the orbiting Space Station O.S.S. Hope, to the Lunar Equatorial Solar Belt providing shared energy back to the Earth, the HEAL Institute with its Digital Amazon and DNA Library, and the Alwaha Wellness Centre, finishing in Tomorrow Today, the visitor experience around the Museum is one of hope in the power of human ingenuity to increasingly understand and manage our planetary environment and beyond.

I sensed a freshness about how everything is presented as the Emirati culture experiences the transition of Science Fiction into Science Fact.  But I also felt a certain naivety and an almost childlike approach, something positive and expansionary with a limitless zeal for bright shiny new things.  This made me conscious of my own biases and cultural conditioning, which is much more suspicious and hesitant, rehearsing scenarios in my mind about what could go wrong instead of working towards what could go right.

The breadth of the ambitions of people in this part of the world is staggering, as can be evidenced by the Saudi Arabian-led NEOM (Neo-Mustaqbal, New Future), an entirely new model for sustainable living, innovation, advanced manufacturing and eco-tourism.  Whilst NEOM has its own share of challenges there is something about the sheer audacity of the project which echoes the courage that drove other great construction projects throughout history, or the Moon landings more recently.

The Museum of the Future is a bold statement by the Emirati government that they are thinking long term and want to be a player in the modern world we are all co-creating.  This doesn’t come just from throwing money at things and hoping for the best; it is something that evolves through taking responsible steps, each building on the previous one, and understanding the deeply intertwingled relationship between technical tools, social systems and human nature.

I am very excited by the emergence of these different and diverse ways of seeing, alternative ways of working and fresh approaches to the challenges of being human as we co-evolve with socio-technologies.  We need these diverse approaches and different ways of thinking.

I just hope that non-Western cultures don’t feel that they have to copy and emulate what we in the West have done in order to succeed.  We need to learn from them as much as the other way around; the world we are building is for all of humanity, not just those of us who happen to live in the more developed parts of the planet which have so successfully exploited natural resources for our own benefit.

The way forward is together: celebrating the richness of human cultures and the hope that, together, we can create something positive for future generations.

Brave Conversations Stuttgart 2024

Students from The School for Talents, University of Stuttgart.

We are the Web, and the Web is Us/ing Us.  (Professor Michael Wesch, 2007).

When I first saw this video in 2007 I found it totally captivating.

Michael Wesch presents the transition from Web 1.0 (the read-only Web, once referred to as printing on the screen) to Web 2.0 (the Read/Write Web), in which we have witnessed the emergence of Toffler’s Prosumer: humanity purposefully creating the online world rather than just passively consuming it.

This was turbo-charged by the iPhone in 2007 and, as they say, the rest is history.

When Hannah Stewart and I were musing on what and how to present at our 2024 Brave Conversations, for both attendees of the 16th ACM Web Science Conference and the students of the Stuttgart University School for Talents, we felt that in order to cut through the noise about AI and Large Language Models it would be useful to go back to basics.

Where did all of this come from?

Whilst we always do some of this at Brave Conversations, the more embedded digital interaction technologies become in our everyday lives the more important I believe it is to teach and explain the history of their development, particularly in order to remember how things have changed and to challenge what we may see as the status quo.

In his video Michael Wesch begins with the Wayback Machine, which has been archiving the Web since 1996. It is fascinating to look back on our own www.braveconversations.org website and our first events in Metadata and see that what we were saying in 2017 we are still saying now.  A decade or more ago it was all too easy to ignore the hard questions and just let the technology take its path; with the emergence of much smarter machines we can no longer afford to remain ignorant and naïve.

www.braveconversations.org in January 2017
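For anyone who wants to do the same digging, the Internet Archive exposes a small public “availability” endpoint for the Wayback Machine.  The sketch below is my own illustration (not part of the original post); the URL and date are simply the example discussed above, and it returns the address of the archived snapshot closest to January 2017.

```python
# A minimal sketch (illustrative only) of looking up an archived page via the
# Internet Archive's public Wayback Machine "availability" API.
import json
import urllib.parse
import urllib.request


def closest_snapshot(url: str, timestamp: str):
    """Return the Wayback Machine URL of the snapshot closest to `timestamp` (YYYYMMDD)."""
    query = urllib.parse.urlencode({"url": url, "timestamp": timestamp})
    with urllib.request.urlopen(f"https://archive.org/wayback/available?{query}") as response:
        data = json.load(response)
    closest = data.get("archived_snapshots", {}).get("closest")
    return closest["url"] if closest and closest.get("available") else None


if __name__ == "__main__":
    # How did braveconversations.org look in January 2017?
    print(closest_snapshot("www.braveconversations.org", "20170101"))
```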

All of this is built on the concept of Hypertext, itself inspired by the marginal gloss – the simple act of annotation or commentary written on a page which, when collated, becomes the Glossary.  Humans have been annotating documents (information within specific boundaries) for millennia – the difference now is that much of our information is in digital form and thus has digital affordances.

XML + You + Me create a database-backed Web – tagging and adding metadata – we are teaching the machine.  Linking data, linking people.

We need to rethink a few things … copyright, authorship, identity, governance, privacy, commerce, love, family, ourselves.

Nothing could be more important at the minute as we rely more and more on these systems, and begin to forget the older ways of doing things.
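Going back to Wesch’s “we are teaching the machine” line, here is a small illustrative sketch of my own (the element names and tags are invented, not taken from his video): the same sentence, once wrapped in tags and metadata, becomes something a machine can store, query and link.

```python
# An illustrative sketch of "teaching the machine": plain prose carries no structure,
# but wrapping the same content in tags and metadata turns it into data that can be
# stored, queried and linked. Element and attribute names here are invented.
import xml.etree.ElementTree as ET

# The human-readable sentence on its own.
prose = "Brave Conversations brings people together to talk about our digital future."

# The same sentence marked up for the machine.
post = ET.Element("post", attrib={"id": "bc-2024", "lang": "en"})
ET.SubElement(post, "title").text = "Brave Conversations"
ET.SubElement(post, "body").text = prose
tags = ET.SubElement(post, "tags")
for keyword in ("web science", "digital literacy", "society"):
    ET.SubElement(tags, "tag").text = keyword

# Once tagged, the machine can answer questions the raw prose could not.
print([t.text for t in post.iter("tag")])   # ['web science', 'digital literacy', 'society']
print(ET.tostring(post, encoding="unicode"))
```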

This is what we focus on at Brave Conversations and it was wonderful to have people fully engaged, but most of all curious and ready to challenge and learn.  In particular the students at the School for Talents challenged us through their own explorations and the pedagogy of group projects based on the principles of the “Stuttgarter Weg”, which focuses on systematic cooperation between complementary disciplines to create unique opportunities to ask new questions and find answers.

These young people are those who will go on to work in many of the technical companies in Germany, be they automotive, sustainable energy, manufacturing or computer technologies.  Most came from a technical background, something to be expected in Stuttgart, a city known as the cradle of the automobile and of high-tech industry.  But, as the latest edition of The Economist investigates, this is an industry in need of radical reinvention.

As we increasingly bring the digital and physical worlds together, the need for those with technical expertise to be educated and schooled in the softer skills of critical thinking and emotional intelligence is paramount, and those with social expertise need to rapidly develop both digital and critical literacy.

As the companies developing AI and smart machines increasingly compete for market share and technical dominance, often at the expense of safety and ethical concerns, this combination of skills and the need to reflect and question become crucial.

A decade ago with social media, the world took a wait-and-see approach to how that technology would change society.  The results have been devastating.  With AI, we cannot afford to nod along with taglines and marketing campaigns.  What’s driving AI research, development and deployment is already clear:  a dangerous incentive to race ahead.  If we want a better outcome this time, we cannot wait another decade—or even another year—to act.  (Tristan Harris, The Economist)

In his collection of lectures and essays In Search of a Better World: Lectures and Essays from Thirty Years, Karl Popper stated that our future is not deterministic; we have to make it, and to approach it with care and optimism.

All things living are in search of a better world. Men, animals, plants, even unicellular organisms are constantly active. They are trying to improve their situation, or at least to avoid its deterioration… Every organism is constantly preoccupied with the task of solving problems. These problems arise from its own assessments of its condition and of its environment; conditions which the organism seeks to improve… We can see that life — even at the level of the unicellular organism — brings something completely new into the world, something that did not previously exist: problems and active attempts to solve them; assessments, values; trial and error.

We have made great mistakes — all living creatures make mistakes. It is indeed impossible to foresee all the unintended consequences of our actions. Here science is our greatest hope: its method is the correction of error.

Our great mistake now would be to forget this through our human arrogance and hubris, and to dismiss the lessons of our history.

This is why Brave Conversations are so necessary and why we continue to bring them to whichever audience of people will give us their time, focus and attention.

We owe it to ourselves, to each other, and to future generations to at least pause and ask the three fundamental questions posed by Aristotle and of crucial importance to us now:

Ethos – What may we do?

Logos – What can we do?

Pathos – What ought we do?

 

WebSci24 and the emerging Agent Society

Jie Tang presenting his Keynote: The ChatGLM’s Road to AGI

Last week Intersticia Fellow Hannah Stewart and I attended the 2024 ACM Web Science Conference hosted by the University of Stuttgart and IRIS (Interchange Forum for Reflecting on Intelligent Systems).

The conference programme included an interesting mix of research presentation sessions ranging from Digital Art to Hate Speech, and a diverse set of Keynotes addressing topics such as China in the Global Information Ecosystem, Digital Humanism, Older Adults being Tech-Savvy, and ChatGLM’s Road to AGI.

The conference attracted over 100 people from all parts of the world to come together and discuss the questions posed by Web Science – those which don’t necessarily fit neatly into one discipline or another but require a cross-disciplinary research focus, attitude and skills.

There were a couple of key moments in the conference that stood out to me amidst all the talk about LLMs and access to data (something that the researchers were particularly preoccupied with).

These key moments were:

Hannes Werthner’s reiteration that We Create the Web, the Web Creates Us – the focus that has always been at the heart of Web Science.  Linked to this he raised the issue of Business Models – how did human-driven initiatives and policies help technical innovations scale and reach the human market?

This point is all too often forgotten, particularly in the research community, and I feel that this neglect often leads to somewhat irresponsible and naïve technology developments and deployments which have unforeseen and significant human social consequences.  (OpenAI’s recent comment that its technologies have been used to deceptively manipulate public opinion around the world and influence geopolitics is an astounding case of both.  Once released into the ‘wild’, what did they think was going to happen?!)

The Internet, which began its life on 29th October 1969 as Arpanet and has evolved to become the TCP/IP-driven network we know today with the World Wide Web sitting atop it, began as a government-sponsored academic initiative.  In 1994 the commercial race was launched when Netscape Communications and Microsoft began the Browser Wars, which led to the creation of the first Search Engines, publicly available online communities such as America Online, online dating and the Dot Com Bubble.

All of this had massive consequences in terms of the ways that human beings interacted with information and each other, not the least of which has been the creation of a digital divide and the need to fight for digital human rights and freedom of expression.

Enabled by the Internet, the development from the Read-Only to the Read-Write Web, and the iPhone, digital Social Media platforms emerged which took human online interaction to a whole new level.

We are only now starting to acknowledge and more fully understand that, whilst this has been the greatest communications and information revolution since the printing press, the social consequences (as with the printing press and its role in the Protestant Reformation) are profound.  There is a growing awareness of some of the harmful effects such as digital addiction, a negative impact on critical thinking skills (particularly through short form content and declining attention spans) and an increase in (cyber) bullying particularly amongst young people.

Jonathan Haidt believes that there is a Youth Mental Health Crisis largely attributable to Social Media and the business models that underpin it.

Another angle from which to consider this is with regard to many in the older population, for whom online platforms have become the primary sources of news and truth.  Americans, who are now in the midst of one of the most important election campaigns in modern history, largely turn to online news sources, with many of Gen Z relying on Chinese-owned TikTok for their information.

All of this is driven by the business models that support the companies providing the services, and all sit within the socio-technical environment of their corporate headquarters governed by the values of their founders and Board.

With the rapid move towards Artificial Intelligence, as technology companies scramble to integrate machine learning and language models into their products, we are at a key inflection point.

Most of what I hear and read from the key technology players is, in my opinion, a race to the bottom – an AI Arms Race kicked off by OpenAI in a quite irresponsible manner purely to pursue dominant market share and human attention.  Since then the tech firms have put ethics on the backburner in order to capture predominantly business customers through promises of greater human productivity and a reduction in costs.  We’ve heard this all before, but as with last time:

Everyone is asking how and why.  No-one seems to be asking should.

Rarely do I hear statements about the benefits to humanity or the protection of vulnerable people, about access and equity, or about how we can use these quite incredible machines to help us cope with the myriad existential threats presenting themselves in the 21st Century.

Some, such as Elon Musk with Neuralink, talk about benefits to patients with neurological conditions, but my suspicious mind immediately links this to the race to control and dominate human thoughts for commercial gain, as Nita Farahany warns and as the Council of Europe and the European Union are beginning to recognise with their investigations into Neurotechnologies and Human Rights.

This brings me to the second key moment of the Conference: Jie Tang’s presentation and his key slide of The Web as a Linked-Agent, which is the feature image of this post.

Jie Tang described research which provides a comprehensive and systematic overview of LLM-based agents and postulates a Simulated Agent Society where

agents exchange their thoughts and beliefs with others, influencing the information flow within the environment. (Zhiheng Xi et al., 2023)
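To give a flavour of what Xi et al. are describing, here is a deliberately toy sketch of my own (not the survey’s code; respond is a stand-in for a real language-model call): each agent reads the shared ‘environment’, forms a thought, and feeds it back for the others to react to.

```python
# A toy sketch of a simulated agent society (my own illustration, not the survey's code).
# Each "agent" would normally wrap a large language model; here `respond` is a stub.
from dataclasses import dataclass, field


@dataclass
class Agent:
    name: str
    beliefs: list = field(default_factory=list)

    def respond(self, environment):
        # Stand-in for an LLM call: form a 'thought' from what others have shared.
        latest = environment[-1] if environment else "nothing yet"
        thought = f"{self.name} reflects on: {latest}"
        self.beliefs.append(thought)
        return thought


def run_society(agents, rounds):
    environment = []  # the shared information flow
    for _ in range(rounds):
        for agent in agents:
            environment.append(agent.respond(environment))
    return environment


if __name__ == "__main__":
    for message in run_society([Agent("A"), Agent("B"), Agent("C")], rounds=2):
        print(message)
```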

The mere thought of this sends chills down my spine.

Again – here is the focus on the how and the what, but where is the should?

What does an Agent Society look like for us meat-based humans?  What Agency do we retain in such a world?

Whilst we may feel that the current technologies are still in their infancy and are prone to hallucinations and making stuff up, we need to continually remember Roy Amara’s Law that

Humans tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run. (Roy Amara)

When it comes to our Tech Overlords we should finally wake up and realise that they are commercial entities, Corporate Psychopaths created to maximise profits and shareholder value – nothing wrong with this, it is their purpose.  Our mistake is to be naive enough to think that their remit includes the good of humanity or fair and equitable societies which focus on human dignity.  Given this, it seems plainly obvious to me that they should not be allowed to determine the future direction and development of artificial intelligence technologies and systems, nor should they be treated like other companies and left to govern themselves.

This is where the story of OpenAI, a firm ostensibly set up as a non-profit organisation with a public interest mission, is salutary.

OpenAI was created

“to ensure that AGI, or artificial general intelligence—AI systems that are generally smarter than humans—would benefit all of humanity”.

In a recent article, two of OpenAI’s former Board Members, who were charged with that mission and have now been ousted by the commercial forces that dominate the company, write that

Our experience is that even with every advantage, self-governance mechanisms like those employed by OpenAI will not suffice. It is, therefore, essential that the public sector be closely involved in the development of the technology. Now is the time for governmental bodies around the world to assert themselves. Only through a healthy balance of market forces and prudent regulation can we reliably ensure that AI’s evolution truly benefits all of humanity.  (Helen Toner and Tasha McCauley were on OpenAI’s board from 2021 to 2023 and from 2018 to 2023, respectively.)

This is why forums such as Web Science should be much bolder in including business- and government-driven research, which from memory was far greater in the earlier days.  The message of Web Science as a platform and community would be greatly enhanced by broadening beyond purely academic research and working to encourage greater dialogue between corporate research and government initiatives.

In addition I would like to see something like a Brave Conversations more fully integrated into the Web Science Conference programme so that all attendees, not just those who notice the event or show an interest, together with random people who turn up, are forced to focus on the thorny societal, ethical and moral questions which arise about the common technology-driven future we are all co-creating.

As Anthropology Professor Michael Wesch so rightly said in 2007:

The Web is Us/ing Us.

We need to make sure that we humans continue to remember this.

 

Simulated Agent Society, From “The Rise and Potential of Large Language Model Based Agents: A Survey”, https://arxiv.org/pdf/2309.07864

Socio Technical, Socio Digital, Techno Social …

We believe that unregulated generative AI is a clear and present danger to democratic sustainability. The imminent problem is not super intelligent robots taking over the world, but the threats to human individual and political freedoms posed by the deployment of simultaneously exciting and yet potentially dangerous new technologies. We need to address the full range of AI challenges, and in so doing, the public’s voice must be at the table, not only those of the already powerful.  (Statement of the Digital Humanism Initiative, 2023)

The last few months have been a bit of a whirlwind in terms of travel, meeting interesting people, exploring ideas and discovering insights.

In my previous post I talked about our Brussels Brave Conversations and some of the thoughts that came to me as I wandered around Brussels and began to explore the world that is the European Parliament.  As a complement to this I went to the Digital Humanism Summit 2023 in Vienna at the invitation of George Metakides and Hannes Werthner, where many of the Computer Science and Artificial Intelligence luminaries from Europe and the United States came together to talk about Generative Artificial Intelligence and the sustainability of democratic societies.

The explosion of Large Language Models onto humanity in 2022–2023 has suddenly propelled the conversations around these technologies into the public domain, and with this has come a sort of mild panic about existential risk, the decimation of communities and the irrelevance of human beings (Harari 2023).

The question is this: we now have within our grasp the most powerful technologies that humankind has ever developed, so how can we ensure that they are used for good (the benefit of humankind and the planet) rather than evil, and how can people feel secure about the development of technologies which are far beyond the ability of most people to understand?

It is paramount that AI developers and regulators are asking themselves the right questions about the potential impact of AI. She suggests a greater focus on ensuring people feel secure in a world with AI, rather than trying to convince them to trust it. (Joanna Bryson at ANMC23).

As these conversations around AI unfold I am often bemused that it has taken so long for the proverbial penny to drop.  These technologies have been around for a very long time, but as always it is the human condition not to really focus on things until they are right in front of us – we often seem to have little imagination about things that aren’t already around us, which is also why Science Fiction is such an important genre for people to engage with.  It is also why we seem to get distracted by the next bright shiny thing that emerges and then become somewhat derailed in our common sense and perspective.  As the Gartner® Hype Cycle™ so brilliantly illustrates, we get excited, then we get disillusioned, then things start to calm down and we start to look at things from a more realistic perspective.  See the Gartner AI Hype Cycle 2023.

So by the time we had our 2023 London Brave Conversations at Newspeak House people were beginning to take a more balanced approach: many had actually used the tools, and there were thoughtful and insightful conversations around the benefits of AI, whilst appreciating the need to take responsibility for how and when the tools are used, and for whose benefit.

As these conversations mature it will be wonderful to see people embrace the tools to help them and help others, and I hope people will be brave whilst also being wise.

As a species we are called Homo sapiens – the wise humans.  Now more than ever we need that to be the case.

Gartner, What’s New in Artificial Intelligence from the 2023 Gartner Hype Cycle, 17 August 2023, https://www.gartner.com/en/articles/what-s-new-in-artificial-intelligence-from-the-2023-gartner-hype-cycle. GARTNER and HYPE CYCLE are registered trademarks of Gartner, Inc. and/or its affiliates and are used herein with permission. All rights reserved.

Digitally Savvy

A few weeks ago I had the distinct pleasure to do an interview with Simon Western on his Edgy Ideas podcast.

As always in a real human-to-human conversation it enabled me to think through some ideas which have been percolating for quite a while.

Thank you Simon, and thank you Aodhan Moran for introducing us.

Listen to the “Edgy Ideas” Podcast with Simon Western.

Surfing the digital wave

The best thing we can do is build surfboards and ride the wave. (Scott David)

It seems that we, as humanity, are at an inflection point, a period in human history where quite literally anything could happen!

Some, like Yuval Noah Harari, believe that unless we regulate and control the evolving artificial intelligence it could well be the end of human history as we know it.

What would happen once a non-human intelligence becomes better than the average human at telling stories, composing melodies, drawing images, and writing laws and scriptures? The answer, he believes, casts a dark cloud over the future of human civilisation.

 

We should regulate AI before it regulates us.  (Yuval Noah Harari)

Others, like Scott David, believe that if we synthesise human and Artificial Intelligence and augment our thinking we may finally have the tools we need to cope with the other major challenges of the 21st Century.

Some, like Jaan Tallinn and those at the Future of Life Institute (FLI), believe that we need to pause the giant AI experiments in order to take time to more fully understand the risks.

Others, like Pedro Domingos, criticise this call and want to forge ahead because, as Alan Kay said

The best way to predict the future is to invent it.

Regardless of which side one takes what this all demonstrates is that it is not the ‘it’, the AGI, that we should be worried about, it is us, the humans, and how we are going to deal with whatever emerges.

The one good thing about the FLI letter is that it has been a catalyst for debate and has finally brought the issue of AGI to the public forum.  The reality is that, regardless of what the technologies can or can’t do, it is our social systems that will generate the real social change; for instance, companies like IBM pausing hiring to replace 7,800 jobs with AI, and Microsoft’s development of Co-Pilot.  Companies are not hiring graduate developers; they are caught up in the hype around the tech, and this is causing major ripples in the labour market.

Whilst it may take a decade or more for AGI to emerge (in whatever form that may be) there is no doubt that in the short term the hype around it will impact people’s lives, and this is what will create more risk than the potential of the machine.

One way of looking at the current situation is that, following the disruption of the Covid-19 Pandemic, the accompanying advances in AI and the other technologies now converging have shaken up and unfrozen much within our social and economic systems.  This can be illustrated by considering two models: Kurt Lewin’s Freeze-Unfreeze (Lewin, 1947, Frontiers in Group Dynamics) and William Bridges’ Transition model.

It is the unfrozen state (the Interstice) which provides the potential and opportunity for change and renewal before the new normal is established.  This is a time of excitement and energy, but it is also a time of fear and potential unrest, because change can be frightening as the old ways die and the new is not yet clear (see Kübler-Ross).

In facing any change we humans need to feel a sense of agency in order to craft a path forward and accept the change being presented to us, and this is what we sought to explore in our recent Brave Conversations in Brussels.

We presented our participants with three case studies, each of which posed a number of questions around personal choices in response to specific situations in which Artificial Intelligence was a key determinant.  The first was based on a challenge posed by a large language model released onto the Internet; the second related to AI and a health care issue; and the third related to the development of government policy.  In each case we armed our participants with Mark Moore’s Strategic Triangle, asking three questions to determine how they as individuals could potentially respond to each case:

  • Ethos – What should we do?  What do our values, ethics and morals guide us to do?
  • Logos – What can we do?  What resources do we have?
  • Pathos – What may we do?  What authority are we acting on?

For much of the last fifty years advances in information systems have been made by scientists, such as Geoffrey Hinton and his peers, who developed technologies because they could – they were able to do it, they could solve whatever problem they were addressing, and they charged forward.  They didn’t necessarily ask if they should – the combined outcome of the Ethos and Pathos in the triangle.

As a complement to this, investors, particularly in Silicon Valley, helped thrust these technologies into the commercial realm because they understood the value of digital information and digital disruption – Uber, Airbnb and all the companies participating in what Shoshana Zuboff has termed Surveillance Capitalism.

Funnily enough, when I met Zuboff at a signing of her book in 2019 I asked her if she had seen all of this coming when she wrote The Support Economy in 2002.  She said, “Yes we did, we just hoped it wouldn’t happen.”  Sadly it did.

The commodification of personal data for commercial gain has created a marketplace that trades exclusively in human futures (Zuboff), feeding the Social Machines we have today and exploiting our innate human need to connect with each other.  What is worth considering here is that, whilst there has been enormous focus on the issues of privacy and surveillance, what has not been much discussed is the way these platforms view the emergence of communities as a byproduct rather than the driver of their success.

The Web was created by Tim Berners-Lee as a tool to facilitate communication and information sharing between people within a community, and it was the trust within those communities that enabled the sharing to occur.  Companies then sought to commercialise the Web, which had been given to humanity for free by Tim, and as a result sought to create monopolies by closing elements of it down – the walled gardens of social media.

Just over a decade ago the Open Data Movement gained traction, particularly due to the election of the tech-savvy Barack Obama as US President in 2008 and the Parliamentary Expenses Scandal in the UK in 2009.  There was huge hope for this movement, which changed the paradigm around public sector information from being closed and hidden to being a public asset to be harnessed and exploited for public good – this resulted in the ‘open by default‘ principle.  Sadly, despite the excitement and early wins achieved by Government Digital Services around the world, the truth is that they managed to pick the low-hanging fruit but the challenges of true digital transformation have proved painstakingly difficult – governments are still talking about it in the same way they were over a decade ago.

It seems to me, having lived through and closely observed these events, that each phase of opening up emerges from within communities that are seeking to solve real problems that affect them.  Someone then has the bright idea of commercialising it, which encourages the sharks to start circling with their growth and profit mindset, and the next thing you know whatever was shared and open becomes monetised and closed, no longer focused on the needs of the community but geared to exploiting ‘consumer’ behaviour to generate advertising and retail sales revenue.

We are now witnessing this once again in the AI space as the hype drives investors to scramble.  Someone like Cathie Wood, CEO of Ark Invest, sees a massive industry emerging where currently there is virtually none, and this is what happened with the Web and with Open Data.  The digital disruptors understood the affordances of digital information, and companies like Facebook and Google hoovered up whatever they could both to ingest new technologies and to close down competition.  Because Governments had absolutely no idea of how digital information works they didn’t see what was obvious and right in front of them – the fact that there is no market now doesn’t mean that there won’t be one soon!  This marketing myopia is responsible for the mega corporations we have today which dominate the online world and stand as the most valuable companies in all of recorded human history.

There is hope that perhaps the release of ChatGPT and the ability of the general public to use these systems may wake people up to their potential, and build some sort of momentum towards regulation and/or Anti-Trust action, something that people like Zephyr Teachout are fighting for (see Break ‘Em Up!).

There is also hope that there may be global communities who can use the very technologies themselves to craft some new phase of openness in partnership with governments and the Third Sector with the objective of serving humanity rather than big corporations (I won’t hold my breath for this one as they will most likely be too slow).

Finally, starting with the European Union’s AI Act, governments may not repeat the mistake of the first two digital waves, that of leaving regulation too late, and may listen to the chorus of voices calling for it (Sam Altman, CEO of OpenAI, for one).  I say may because thus far their track record is not good.

I think that the real kicker will be when the smart devices we wear on our bodies are embedded within our bodies – smart contact lenses as an obvious example – because then they will be required to meet the standards of Medical Device regulation, although by that time it will most likely be too late.

What there is no doubt about, though, is that there is a shift happening on all fronts, and I believe it is the younger generation, the Millennials and younger, who need to take the lead now in determining how humanity proceeds.  They have grown up in the digital soup and, as I have written before, I believe this is their time.  They are the ones who are now crafting careers and bringing up families, and they are the ones who will be supporting us Elders as we age.  They are much more connected with their peers globally than we ever were and they don’t seem to be as binary in how they see the world.

As I reflect on this I am brought to consider the Strauss-Howe Generational Theory and the idea of the Fourth Turning.  My son, in his late 20s, made an interesting comment to me a few months ago: “I can feel a change coming Mum, I’m not sure what it is, but it’s big“.

One thing is for sure: when, not if, some form of artificial sentience emerges it will shake humanity to the core – we will need to reconsider everything we think is ‘normal’ in our daily lives, from how we learn and how we work through to our concepts of God.  History tells us that when humans go through major change it is often accompanied by violence and aggression as we lash out to apportion blame or seek redress.  This is not going to do any good as the machines won’t care, and this is why, above all

The lesson of AI and of formidable breakthroughs to come, such as quantum computing, is that we may now be reaching the point where something most unnatural to humans is the only thing that can save us: humility. (Howard French)

One thing that surfing teaches you is to be humble, to respect the ocean, and to realise that it and its waves can swamp you at any moment.  It teaches you to read the tides and the wind and to work with the environment, not to fight against it.  Much of this comes from experience but also from being open and having the basic skills (such as knowing how to swim).

I believe that this is where we are now, and that what we have to do is to nurture, educate and empower people to harness the good in the digital realm, to learn to craft our surfboards, to learn to ride the waves well, and to use that knowledge to help future generations always focus on and remember their humanity and their part in the greater whole on this Pale Blue Dot of a planet we all inhabit.
