May 22, 2023 | Brave Conversations, Digital Gymnasia, Imagination, Social Machine, Stewardship, Trust, Web Science
The best thing we can do is build surfboards and ride the wave. (Scott David)
It seems that we, as humanity, are at an inflection point, a period in human history where quite literally anything could happen!
Some, like Yuval Noah Harari, believe that unless we regulate and control the evolving artificial intelligence it could well be the end of human history as we know it.
“What would happen once a non-human intelligence becomes better than the average human at telling stories, composing melodies, drawing images, and writing laws and scriptures?” The answer, he believes, casts a dark cloud over the future of human civilisation.
We should regulate AI before it regulates us. (Yuval Noah Harari)
Others, like Scott David, believe that if we synthesize human and Artificial Intelligence and augment our thinking we may finally have the tools we need to cope with the other major challenges of the 21st Century.
Some, like Jaan Tallinn and those at the Future of Life Institute (FLI), believe that we need to pause the giant AI experiments in order to take time to more fully understand the risks.
Others, like Pedro Domingos, criticise this call and want to forge ahead because, as Alan Kay said,
The best way to predict the future is to invent it.
Regardless of which side one takes, what this all demonstrates is that it is not the ‘it’, the AGI, that we should be worried about; it is us, the humans, and how we are going to deal with whatever emerges.
The one good thing about the FLI letter is that it has been a catalyst for debate and has finally brought the issue of AGI to the public forum. The reality is that, regardless of what the technologies can or can’t do, it is our social systems that will generate the real social change – for instance, IBM pausing hiring for roles it believes AI could replace (around 7,800 jobs) and Microsoft’s development of Copilot. Companies are not hiring graduate developers; they are caught up in the hype around the tech, and this is causing major ripples in the labour market.
Whilst it may take a decade or more for AGI to emerge (in whatever form that may be), there is no doubt that in the short term the hype around it will impact people’s lives, and it is this that will create more risk than the potential of the machine itself.
One way of looking at the current situation is that the disruption of the Covid-19 pandemic, together with the now converging advances in AI and other technologies, has shaken up and unfrozen much within our social and economic systems. This can be illustrated by considering two models: Kurt Lewin’s Unfreeze–Change–Refreeze (Lewin 1947, Frontiers in Group Dynamics) and William Bridges’ Transition model.

It is the unfrozen state (the Interstice) which provides the potential and opportunity for change and renewal before the new normal is established. This is a time of excitement and energy, but it is also a time of fear and potential unrest, because change can be frightening as the old ways die and the new is not yet clear (see Kübler-Ross).
In facing any change we humans need to feel a sense of agency in order to craft a path forward and accept the change being presented to us, and this is what we sought to explore in our recent Brave Conversations in Brussels.
We presented our participants with three case studies, each of which posed a number of questions around personal choices in response to specific situations in which Artificial Intelligence was a key determinant. The first was based on a challenge posed by a large language model released onto the Internet; the second related to AI and a healthcare issue; and the third related to the development of government policy. In each case we armed our participants with Mark Moore’s Strategic Triangle, asking three questions to determine how they as individuals could potentially respond to each case:

- Ethos – What should we do? What do our values, ethics and morals guide us to do?
- Logos – What can we do? What resources do we have?
- Pathos – What may we do? What authority are we acting on?
For much of the last fifty years advances in information systems have been made by scientists, such as Geoffrey Hinton and his peers, who have developed technologies because they could – they were able to do it, they could solve whatever problem they were addressing, and they charged forward. They didn’t necessarily ask if they should – the combined outcome of the Ethos and Pathos in the triangle.
As a complement to this, investors, particularly in Silicon Valley, helped thrust these technologies into the commercial realm because they understood the value of digital information and digital disruption – Uber, Airbnb and all the companies participating in what Shoshana Zuboff has termed Surveillance Capitalism.
Funnily enough when I met Zuboff at a signing of her book in 2019 I asked her if she had seen all of this coming when she wrote The Support Economy in 2002 – she said, “Yes we did, we just hoped it wouldn’t happen.” Sadly it did.
The commodification of personal data for commercial gain has created a marketplace that trades exclusively in human futures (Zuboff), feeding the Social Machines we have today and exploiting our innate human need to connect with each other. What is worth considering here is that whilst there has been enormous focus on the issues of privacy and surveillance, what has not been much discussed is the way these platforms treat the emergence of communities as a byproduct rather than the driver of their success.
The Web was created by Tim Berners-Lee as a tool to facilitate communication and information sharing between people within a community, and it was the trust within those communities that enabled the sharing to occur. Companies then sought to commercialise the Web, which had been given to humanity for free by Tim, and as a result sought to create monopolies by closing elements of it down – the walled gardens of social media.
Just over a decade ago the Open Data Movement gained traction, particularly due to the election of the tech-savvy Barack Obama as US President in 2008 and the Parliamentary Expenses Scandal in the UK in 2009. There was huge hope for this movement, which changed the paradigm around public sector information from being closed and hidden to being a public asset to be harnessed and exploited for public good – this resulted in the ‘open by default‘ principle. Sadly, despite the excitement and early wins achieved by Government Digital Services around the world, the truth is that they managed to pick the low-hanging fruit but the challenges of true digital transformation have proved painfully difficult – governments are still talking about it in the same way that they were over a decade ago.
It seems to me, having lived through and closely observed these events, that each phase of opening up emerges from within communities that are seeking to solve real problems that affect them. Someone then has the bright idea of commercialising it, which encourages the sharks to start circling with their growth-and-profit mindset, and the next thing you know whatever was shared and open has become monetised and closed, no longer focused on the needs of the community but geared to exploiting ‘consumer’ behaviour to generate advertising and retail sales revenue.

We are now witnessing this once again in the AI space as the hype drives investors to scramble. Someone like Cathie Wood, CEO of ARK Invest, sees a massive industry emerging where currently there is virtually none, and this is what happened with the Web and with Open Data. The digital disruptors understood the affordances of digital information, and companies like Facebook and Google hoovered up whatever they could, both to ingest new technologies and to close down competition. Because governments had absolutely no idea of how digital information works they didn’t see what was obvious and right in front of them – just because there is no market now doesn’t mean that there won’t be one soon! This marketing myopia is responsible for the mega-corporations which dominate the online world today, the most valuable companies in all of recorded human history.
There is hope that perhaps the release of ChatGPT, and the ability of the general public to use these systems, may wake people up to their potential and build some sort of momentum towards regulation and/or antitrust action, something that people like Zephyr Teachout are fighting for (see Break ‘Em Up!).
There is also hope that there may be global communities who can use the very technologies themselves to craft some new phase of openness in partnership with governments and the Third Sector with the objective of serving humanity rather than big corporations (I won’t hold my breath for this one as they will most likely be too slow).
Finally, starting with the European Union’s AI Act, governments may avoid repeating the mistake they made in the first two digital waves of leaving regulation too late, and may listen to the chorus of voices calling for it (Sam Altman, CEO of OpenAI, for one). I say may because thus far their track record is not good.
I think that the real kicker will be when the smart devices we wear on our bodies are embedded within our bodies – smart contact lenses as an obvious example – because then they will be required to meet the standards of Medical Device regulation, although by that time it will most likely be too late.
What there is no doubt about though is that there is a shift happening on all fronts, and I believe it is the younger generation, the Millennials and younger, who need to take the lead now in determining how humanity proceeds. They have grown up in the digital soup and as I have written before I believe this is their time. They are the ones who are now crafting careers, bringing up families and they are the ones who will be supporting us Elders as we age. They are much more connected with their peers globally than we ever were and they don’t seem to be as binary in how they see the world.
As I reflect on this I am brought to consider the Strauss-Howe Generational Theory and the idea of the Fourth Turning. My son, in his late twenties, made an interesting comment to me a few months ago – he said, “I can feel a change coming Mum, I’m not sure what it is, but it’s big.”
One thing is for sure: when, not if, some form of artificial sentience emerges it will shake humanity to the core – we will need to reconsider everything we think is ‘normal’ in our daily lives, from how we learn and how we work through to our concepts of God. History tells us that when humans go through major change it is often with violence and aggression, as we lash out to apportion blame or seek redress. This is not going to do any good as the machines won’t care, and this is why, above all,
The lesson of AI and of formidable breakthroughs to come, such as quantum computing, is that we may now be reaching the point where something most unnatural to humans is the only thing that can save us: humility. (Howard French)
One thing that surfing teaches you is to be humble, to respect the ocean and to realise that it and its waves can swamp you at any moment. It teaches you to read the tides and the wind and to work with the environment, not try to fight against it. Much of this comes from experience but also from being open and having the basic skills (such as knowing how to swim).
I believe that this is where we are now, and that what we have to do is nurture, educate and empower people to harness the good in the digital realm, to learn to craft our surfboards, to learn to ride the waves well, and to use that knowledge to help future generations always focus on and remember their humanity and their part in the greater whole on this Pale Blue Dot of a planet we all inhabit.

May 20, 2023 | Brave Conversations, Communications, Digital, Imagination, Literacy, Social Machine, Stewardship, Trust, Web Science
Last week we held our 21st Brave Conversations event at Atelier 29 in Brussels and the first in partnership with the Digital Enlightenment Forum (DEF).
We began on a wet, cold Brussels morning but garnered a group of intelligent, engaged and curious individuals keen to converse with other humans in the room about our digital lives in the 21st Century.
Since our last events in 2022 much seems to have shifted within the digital landscape, particularly with the release “in the wild” of ChatGPT and other generative AI and large language models. It took ChatGPT just five days to gain 1 million users following its release in November 2022, and before long thousands of notable people had signed the Future of Life Institute Open Letter to Pause Giant AI Experiments.
By the time we got to Brussels even the Smart Humans who had invented the tools themselves (such as Geoffrey Hinton) were worried and struggling to keep up, and the major tech companies were scrambling to maintain some sort of competitive edge by rushing to integrate the tools into their mainstream offerings (for example Microsoft’s launch of Copilot).
So what is this all about? For anyone who has been watching the tech space the events of the past few months were entirely predictable, as was the human excitement / panic / reaction / confusion that followed. We’ve been here before, although not necessarily with a suite of technologies with this much power to profoundly change human society. Ever since the invention of writing people have warned of the dire consequences of new technologies – Socrates of writing; Gessner of the printing press; Carr of social media.
In all the hype swirling around at the minute, and particularly that driven by the major tech companies, we need to remember that the success of humanity as a dominant species comes from our ability to co-operate with each other, to transmit and build on the knowledge of our forebears, and to develop and utilise tools that have become increasingly sophisticated.
Human beings have a unique ability to cooperate in large, well-organized groups and employ a complex morality that relies on reputation and punishment. (Frans de Waal)
The tools we are currently developing are merely the latest in a very long line which have helped us survive and thrive, and these tools too will become necessary in order to help us meet the challenges we currently face.
But as Melvin Kranzberg observed,
Technology is neither good nor bad; nor is it neutral.
So what did all this mean for the conversations we had in Brussels on 12th May?
After the years of Covid one of the things we feel is most important with Brave Conversations is to get the humans in the room, and a number of people made a big effort to get to Brussels to be with us in person. This meant that there were human-to-human interactions, unmediated by any technology, and the ability for each person to explore their ideas within the physical confines of a human space.
We had a blend of participants which included the Board of the Digital Enlightenment Forum, academics, some people working in policy with the European Union, students, and a couple of creatives – a fabulous blend of minds and perspectives to craft interesting insights and a nuanced approach to how everyone was feeling about the current technology onslaught. Some of the comments below give a flavour of the conversation, but perhaps the most important came from one participant who told me she attended because she can’t find anywhere else to have these conversations in a safe space without judgement or a predetermined agenda.
This is what we seek to create with Brave Conversations, and what our partnership with the Digital Enlightenment Forum promises to bring.
I would like, as always, to thank Leanne Fry for her continuing partnership. It was wonderful to work with Thanassis Tiropanis yet again, and thanks to him for helping facilitate. To the Board of DEF, thank you for your support of the event, and to the inimitable Myriam de Greef an enormous thanks, because without Myriam no conversations would have been had!

May 7, 2023 | Analogue, Brave Conversations, Data, Digital, Imagination, Social Machine, Stewardship, Web Science
Title adapted from Shoshana Zuboff’s ground-breaking 1988 book
Last week I attended a Group Relations Conference in India. These events are always intense (this one even more so!) but they provide a unique opportunity to consider oneself within a human social system.
One of the things that occurred to me as we were exploring the role of the unconscious as it was playing out in the here and now (all psychobabble terms but in fact hugely important) was that there are multiple unconsciouses which operate as we live our dual analogue-digital lives. Carl Jung described what he called the collective unconscious which complements and influences all of our conscious thinking and actions as we participate within the human system. I believe that there is now in addition a digital unconscious which is emerging in the digital realm as the result of our digital interactions within the Social Machine and an even more powerful machine unconscious which is evolving in the artificial intelligences we are building. I drew the image below to try to illustrate my conjecture to the group – needless to say most didn’t understand.

In What Technology Wants, co-founder of Wired Magazine and co-chair of the Long Now Foundation Kevin Kelly talks about The Technium: A Living System of Technology, which encompasses the entire system around technology – culture, art, social institutions, through to “the extended human”. In his latest blog post Kelly states that
For a while I’ve been intensely exploring generative AI systems, creating both text and visual images almost daily, and I am increasingly struck by their similarity to dreams. The AIs seem to produce dream images and dream stories and dream answers. The technical term is “hallucinations” but I think they are close to dreams. I’ve come to suspect that this similarity between dreams and generative AI is not superficial, poetic, or coincidental. My unexpected hunch is that we’ll discover that the mechanism that generates dreams in our own heads will be the same (or very similar) to the ones that current neural net AI’s use to generate text and images.
The foundational mode of the intelligence is therefore dreaming.
Don’t get me wrong – I’m not necessarily agreeing with Kevin Kelly here, nor am I buying into the hype about machines hallucinating. What I am pointing out is that the machines are analysing human data using human-crafted algorithms, and therefore there is something of our unconscious embedded in their emanations which is now being made explicit and visible. We can only refer to concepts and ideas in human terms (hence we anthropomorphise), and describing what the machines are doing is almost like taking us into our own unconscious (this is where the concept of Asimov’s Psychohistory comes into play).
One way of accessing the collective human unconscious is through Social Dreaming, the practice of sharing, associating to and working with dreams in a matrix in order to identify social trends and social dynamics. As our machines are coming together and bringing our data with them it may well be that what we are seeing is a manifestation of the collective human unconscious expressed through the output of the machines – which may seem like hallucinations – but how can we know given the opaque nature of how they operate? And, if they have begun to go down that path then they are already moving beyond our realm of understanding.
The real challenge will come when they become able to acknowledge and recognise this unconscious as something different from a probabilistic algorithm, or are embodied, as the work of people like Rodney Brooks and so much of our Science Fiction (Humans, Blade Runner, Ex Machina) has shown us.
So what does this mean for us as humans?
Up until the recent advances brought about by large language models such as ChatGPT, talking with the average person about advancing machine intelligence was like describing an elephant: every person sees only the parts that directly relate to them, just as in the story of the Blind Men and the Elephant.
This relates as much to technologists as to everyone else, as I’ve witnessed countless times. The most obvious example to me was when
I heard a very notable “father” of the digital world speak at a conference and, when asked what he would recommend to address the rise of pornography on the Web, he responded “well, just don’t look at it!”
Many of the people I’ve met who have built the machinery of the digital world are extremely naïve, building the tools because they can, not asking whether they should. When Geoffrey Hinton resigned from Google last week he commented
I console myself with the normal excuse: If I hadn’t done it, somebody else would have.
As with all kids-in-the-candy-shop scenarios, if you give a scientist a problem and lots of funding they will develop new tools and techniques regardless of the potential consequences. Hinton and others like him saw only part of the Elephant, without considering it as a whole animal, let alone part of a herd.
Which brings us to the question of ethics. Whilst some of the big companies have created Ethics Advisory Boards, the reality is that much of the development work in the field of AI is now happening in the open source space where there is no supervision or oversight. These people still want to move fast and break things, and the very nature of ethics is designed to slow things down by asking difficult and challenging questions.
Governments and regulation are also designed to slow things down, because politics and policy operate on human time, which is analogue, messy and the very opposite of an efficient machine. Humans need time to process, and our relationships are based on what people like Anna Machin and Rachel Botsman call Trust Friction – the stickiness and the glue that underpins how human systems operate.
The whole point of human relationships is that they are not efficient, because they take time and brain power to develop and maintain. Trust needs friction. (Anna Machin)
Human systems are analogue and analogue takes time. In the analogue world:
- you can’t just fire off a letter; you need to write it and post it
- you can’t just do an online transfer; you need to go to the bank
- you can’t immediately alter a design; you need to redraw it
- you can’t just be friends with everyone; you need to build trust through shared experiences, which takes time.
Machines don’t want friction – it slows them down, makes things break and ruins their power-to-work ratio, i.e. their “productivity”. The ultimate expression of this is the Paperclip Problem, in which a smart machine instructed to make paperclips will consume all the resources in the universe (including us) just to make paperclips.
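To make the idea concrete, here is a minimal, purely illustrative sketch in Python of a single-objective optimiser of this kind; the “world” and its resources are entirely hypothetical, and the point is simply that nothing outside the objective function is valued or preserved.

```python
# A toy sketch of the Paperclip Problem: an optimiser whose only goal is
# "more paperclips" will consume every resource it can reach, because
# nothing else appears in its objective. The world below is hypothetical.

world = {"iron": 10, "forests": 5, "cities": 3}  # resources within reach

def make_paperclips(resources: dict) -> int:
    """Greedily convert every available resource into paperclips."""
    paperclips = 0
    for name in list(resources):
        paperclips += resources[name] * 1000  # every unit becomes paperclips
        resources[name] = 0                   # nothing is held back or valued
    return paperclips

print(make_paperclips(world))  # 18000
print(world)                   # {'iron': 0, 'forests': 0, 'cities': 0}
```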
With the advent of ChatGPT and its brethren the removal of friction within our human-machine interactions has now gone to the next level, and smart AI is being embedded into pretty much all of our digital processes – just think of how many conversations you have and hear which involve technology of some sort.
So now I’d like to bring in a new analogy, the frog in the pot of soup as the temperature is gradually turned up.
Our human need to process and understand means that we, as humanity, have been sitting in the digital soup for at least half a century, but in the first half of 2023 it has suddenly begun to feel uncomfortably warm.
As the soup heats up there are some who will want to jump out, some who are going to boil, and some who will adapt.
The questions now seem to me to be who will fall into each group, and what will happen in each case.
Let’s consider some options:
Firstly, those who want to leave. It may be too late but, as with the Luddites in the Industrial Revolution, there is much wisdom in what they have to say and perhaps an alternate reality has much to offer as it always has throughout the ages. There is something of this in Hari Seldon’s concept of building a Foundation on the furthest planet in order to separate itself from the chaos of the main system – an opportunity to isolate, slow down, reboot and recreate.
Secondly, those who are trapped. Sadly there is always a high cost to any radical change, and many will find the “new world” frightening and overwhelming. One example we are already seeing is the rate of suicide among teenage girls. Along with many others I have spent the past three decades of my life working to understand the transition that is upon us and to help people prepare for the change with minimal harm. Some have heeded the lessons; most have sat and enjoyed the warmer water, oblivious to the dangers. I’m not sure anything can help these people any more as I think the rate of change is going to be too fast.
I think both of these groups will struggle and push back through both fear and anger and the manifestation of this could be dangerous.
Finally, there will be those who adapt, survive and thrive.
With all the noise about the technology – how fast it’s progressing, whether it should be paused or stopped – the real point is what the humans are going to do about it. It is therefore the third group I am most interested in, and I believe it is being led by the younger generation but needs to be supported and mentored by the 21st Century Elders who have memories of the analogue world and the value of its friction and temporal nature.
Some fear the AI Apocalypse and that non-Western (non-WEIRD) cultures may gain a technological advantage. This is problematic on so many levels, particularly given that it is the minority-population WEIRD West that has created the culture of growth and the technologies themselves. Some alternative thinking might be precisely what is needed now, and some less privileged cultures may, in fact, be better prepared for what is to come.
The history of automation is that we humans have invented machines to take away the dirty, dangerous and dull jobs … now we are taking away a whole host of others. These technologies can be used to solve the very challenging problems which confront us in the 21st Century and the sooner we learn to work constructively and creatively with the machines the sooner we will harness the power that is before us for good.
The more I feel people heading in one direction as a herd the more I want to go the other way and explore what is happening there – this is where the adaptive survivors will be.

Mar 1, 2023 | Brave Conversations, Ethics, Governance, Literacy, Philosophy, Social Machine, Stewardship, Trust, Web Science
Only an enlightened society can be aware. (Aristotle)
In July of last year I had a call with Professor George Metakides, with whom I serve on the Web Science Trust Board.
I first discovered Web Science when Armin Haller, who was a founding member of our Meta-Brave Conversations community, suggested I check them out which I did by attending the 2012 Summer School in Leiden. There I met the inimitable Professor Dame Wendy Hall and, thanks to Wendy, I have been involved with the Web Science community ever since.
I started exploring the Web as a socio-technical system in the early 2000s through the development of the Printing Industries’ Action Agenda, Print21, which sought to understand the impact of digital information on the skillbase and supply chain of what was then the world’s third largest manufacturing industry. This led to my work with Fuji Xerox Australia and the Australian and New Zealand School of Government (ANZSOG) which included:
Throughout all of this my colleagues and I constantly struggled to explain to people what digital technologies really were; how they, and the broader digital ecosystem, were evolving, and what sort of world might emerge as ‘smart machines’ become a reality. It was frustrating that time and time again people told us how important the work we were doing was, but no one was prepared to support its further development or champion it beyond narrow academic circles. This was what inspired us to create Brave Conversations but it also led others to create similar organisations, one of which is the Digital Enlightenment Forum (DEF).
DEF was co-founded in 2012 by George Metakides and others of like mind working within the European Union who sought to understand
“how current and future digital technology can best be used to express our identities in the digital world, taking into account the core values we cherish, we can support the rights of the individual in society” (see DEF Mission).
I attended my first DEF event in 2015 where I was most impressed by the calibre of the people, the core premise and DEF’s aspirations, with its broad reach into education, research, policy, and the commercial sector.
The conversations and debates around digital interaction technologies have come a long way since 2015, and there is now a rising public awareness and interest, which means that people may be ready to listen (maybe!)
During our conversation George and I discussed the synergies between DEF and Brave Conversations which, of course, sent me down a few rabbit holes.
The first was to consider the two words digital and enlightenment.
The word digital seems fairly straightforward coming from:
- having digits (fingers and thumbs of which humans usually have 10) and using these to express discrete numbers (0 to 9) as values of a physical quantity;
- something being binary – either on or off (1 or 0).
The word digital is, however, becoming more complicated as we digitise information and digitalise societies. Something that is complicated is something whose components can be separated out and dealt with in a systematic and logical way based on a set of static rules or algorithms; this largely describes expert systems, which make predictions or classifications based on input data (IBM 2020), i.e. ‘artificial intelligence’.
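As a concrete illustration of what ‘a set of static rules or algorithms’ means in practice, here is a minimal sketch, in Python, of a toy rule-based expert system; the domain, rules and thresholds are entirely hypothetical, but every classification can be traced back to an explicit rule that a human wrote down, rather than to learned statistical weights.

```python
# A toy rule-based "expert system": complicated, but fully traceable.
# Every outcome follows from an explicit, static rule written by a human,
# unlike a neural network whose behaviour emerges from learned weights.
# The rules and thresholds below are purely illustrative.

def triage_loan_application(income: float, debt: float, years_employed: int) -> str:
    """Classify a loan application using hand-written rules."""
    if income <= 0:
        return "reject: no verifiable income"
    debt_ratio = debt / income
    if debt_ratio > 0.6:
        return "reject: debt-to-income ratio too high"
    if years_employed < 1:
        return "refer: insufficient employment history"
    if debt_ratio < 0.3 and years_employed >= 3:
        return "approve"
    return "refer: manual review"

# Each decision can be explained by pointing at the rule that fired.
print(triage_loan_application(income=60000, debt=15000, years_employed=5))  # approve
print(triage_loan_application(income=40000, debt=30000, years_employed=2))  # reject
```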
The word enlightenment is far more nuanced and complex because it is culturally contextual and there are no rules, algorithms, or natural laws to define it.
One definition is of the
European intellectual movement of the late 17th and 18th centuries emphasizing reason and individualism rather than tradition. It was heavily influenced by 17th-century philosophers such as Descartes, Locke, and Newton, and its prominent figures included Kant, Goethe, Voltaire, Rousseau, and Adam Smith. (Wikipedia)
Enlightenment thinking included a range of ideas centred on the value of human happiness, the pursuit of knowledge obtained by means of reason and the evidence of the senses, and ideals such as natural law, liberty, progress, toleration, fraternity, constitutional government, and separation of church and state.
At the time such ideas were dangerously radical because European thinkers were just beginning to throw off the yoke of Church authority and create the mindset of the Scientific Revolution which stressed the reliance on common sense and the power of direct observation over the unquestioning acceptance of traditional (often religious) explanations and ways of understanding the natural world. As a corollary to this European colonisation revealed the richness of other cultures and how they thought about things – consider the Islamic Golden Age and the value of the Meso-American and Indigenous cultures, something it appears we are only just beginning to rediscover (see The Dawn of Everything).
Perhaps the biggest challenge for us as WEIRD (Western Educated Industrialised Rich and Democratic) thinkers now is to realise that whilst we have been largely responsible for inventing and building the technologies which have become embedded in the lives of humans around the globe, the majority of those who interact online are neither Western nor European (see this video and Our World in Data statistics).

Thus the combination of the words digital and enlightenment becomes even more complex!
If we take just two additional perspectives of the word enlightenment:
- For Buddhists and Hindus enlightenment may be translated as either the Japanese word satori (derived from the verb satoru, “to know”), usually referring to an experience of insight into the true nature of reality, or the Sanskrit and Pali word Bodhi meaning “awakening”, but there is also reference to the middle way of living a balanced life.
- For the Aboriginal Australians (and probably for many indigenous cultures) there may be no word as the concept would be embedded in the land and landscape as crafted in the Songlines (see this article and consider The Memory Code).
The theme which consistently emerges is that of knowledge, understanding and illumination: the challenge of illuminating the path for others, drawing on the insights of those who have already travelled it, combined with some foresight as to what is to come.
I think it’s fair to say that this is what the European Enlightenment thinkers were doing as they sought to understand changing mindsets and revolutionary technology. It is also at the core of what humanity needs now as we move on from the Industrial Age and fully embrace the Age of Information (Nouriel Roubini).
Scientific method, hell! No wonder the Galaxy was going to pot! (The Foundation Series)
Whenever we run a Brave Conversations we always stress the need for participants to engage with Science Fiction, and especially Isaac Asimov’s Foundation series, which tells the story of Hari Seldon and his hopes that Psychohistory would prevent the horrors of a predictable future. He fails not because of the complications of the data and information, but because of the complex unpredictability of life and living systems.
Humanity’s desire to divine the future is as old as humanity itself – we have consulted the Delphic Oracle, Runes, Fortune Tellers and Time Machines, and now we are worshipping our nascent artificial machines, seeing them as portents of the future or as ways to increase productivity and maximise profits. These machines and systems are merely reflections of ourselves and are limited by our own frames of reference, our language, our value systems and our perspectives of the world.
This is the true challenge of 21st Century Digital Enlightenment – to bring to light our own biases and blind spots, to become more inclusive in our conversations and to embrace the diversity of humanity as we build tools to serve all of humanity and the broader planet.
The purpose of my conversation with George was his invitation to join the Board of the Digital Enlightenment Forum and help it navigate this next phase of its mission, an invitation I wholeheartedly and enthusiastically accepted.
Thank you George and all the DEF Board for this opportunity to serve.
Dec 11, 2022 | Digital Gymnasia, Doughton, Education, Founders and Coders, Gaza Sky Geeks, Leadership, Scholarships, Social Machine, Stewardship
The emerging generation is one of hope, awakened and will reboot the way we live – regenerate society as you gain voice, implicitly awakened choices (Professor Lisa Miller).
Yesterday our UK Trustee Louise Sibley and I attended the World Premiere of “Lost Histories”, a show based on the family history of Biripi and Gamillaroi musician Troy Russell, created for Musica Viva Australia’s In-Schools Program. Troy, together with Leila Hamilton and Susie Bishop, entertained a small group of families from all walks of Australian life as they visited the Art Gallery of New South Wales to celebrate the opening of its new Library and Archive as part of the Gallery’s major new development.
This project came about due to Zoë Cobden-Jewitt, now one of our Intersticia Advisors, with whom we started working on our very first Australian project with the Bell Shakespeare Company when they created their Writers’ Fellowship in 2015.
As I sat and listened to Troy, Leila and Susie explore the memory-box holding these Lost History stories I began to think about the stories within Intersticia that we have all created over the past decade. It was at the end of 2012, after Sam, Lock and I had been to the UK in the summer with my Aunt Joan Doughton (in whose memory our Doughton Scholarship was created), that I first began seriously exploring the concept of creating a family Foundation through which to undertake our philanthropic activities, and the Intersticia Foundation Australia became a reality on 23rd July 2013. Intersticia UK became a Registered Charity on 7th January 2019.
The idea of being able to support younger people as they journey through life began when I was a student at Goodenough College in 1985, but it was further stimulated by a conversation I had with John O’Neil, founder of The Good Life, in 2016. The Good Life evolved from the Aspen Institute, which was created to enable business leaders to take time out to discuss philosophy, ethics and literature as a key part of their own leadership journey. In 2006 two Aspen teachers – John O’Neil (ex AT&T) and Pete Thigpen (ex Levi Strauss) – realised that the lessons of Aspen needed to be brought to the tech leaders and community of Silicon Valley, and so they created Good Life.
When I visited John in Sausalito in 2016 we talked about how important it was for elders with resources to enable and empower emerging stewards and he challenged me to create a Fellowship – a group of people of like mind that I could support in the work that they do to make the world a better place. This is what has guided our thinking throughout the past decade and has informed the people we have chosen and the organisations we work with.
Here is an overview of our major activities in that time.
2013:
2014:
- We began working with the Web Science Trust and in partnership with them developed Brave Conversations in March 2017.
- We supported our second Rowland Scholar, Hamish Laing.
2015:
2016:
2017:
- We created our first Leadership Scholarship with Negar Tayyar as the first recipient.
- We created Brave Conversations, held at the Australian National University in Canberra. From this evolved Future Worlds Challenge.
- We continued our partnership with Bell Shakespeare supporting Teresa Jakovich in their Education Programme.
- We supported our fifth Rowland Scholar, Osheen Arora.
- Bel Campbell worked as Intersticia’s Creative Director co-creating Brave Conversations and the development of Future Worlds Challenge and became our third Leadership Fellow.
2018:
- We held our first Intersticia Fellowship Retreat at Goodenough College with ten Fellows and both the Australian and UK Boards, facilitated by Sam Crock and John Urbano.
- We began working with Founders and Coders and Gaza Sky Geeks to create the Founders Programme, which supported eight FAC Graduates and fifteen GSG Graduates to work on Tech for Better projects. Our first Founder Fellow was Joe Friel.
- We supported our sixth Rowland Scholar, Timothy Wong.
- We welcomed Nick Byrne as our second Leadership Fellow.
2019:
2020:
- The Covid-19 pandemic hit the world, severely curtailing travel and all social activities.
- As part of our adaptation forced by the Covid-19 pandemic we created our Digital Gymnasia series of workshops, initially run through Goodenough College and aimed at their Alumni and Student communities, but rolled out more broadly in 2021.
- The Digital Gymnasia material was integrated into the Founders and Coders Social Machine curriculum with the help of FAC Graduate Hannah Stewart.
- We held our second Intersticia Fellowship Retreat online with seventeen Fellows and five advisors (Sam Crock, Marianne Darre, John Urbano, Louise Sibley and Dan Sofer).
- We supported our eighth Rowland Scholar, Sean McDiarmid.
- We supported our third Doughton Scholar, Marco Valerio.
- Louise Sibley replaced Alison Irvine as a Trustee of Intersticia UK.
- We welcomed Marianne Darre and Philip Hayton as Intersticia Advisors.
- Jacquie Crock began working with Intersticia as our first Intern.
- Doughton Fellow Berivan Esen became a Trustee of Intersticia UK.
2021:
- For the first time we funded two concurrent Goodenough Scholars, Farahana Cajuste and Sergio Mutis.
- We continued our work with Yalla through creating the Yalla Apprenticeship Programme which supported two GSG Graduates to work full time with Yalla for six months. One is now a full time employee of Yalla.
- We held our first hybrid Intersticia Fellowship Retreat with eight UK Fellows and Advisors attending in person at Schumacher College, Devon, and fifteen Fellows and Advisors attending via Zoom.
- We began working with Abeer Abu Ghaith, Founder of MENA Alliances.
- Leadership Fellow Nick Byrne joined the Intersticia Foundation Board.
2022:
- We supported Gaza’s first rock band, Osprey V, with some funding towards sound and recording equipment.
- We delivered our first Brave Conversations to the Solstrand Leadership Programme, Norway.
- We supported our second Newspeak Scholar, Ardavan Afshar.
- We began supporting the development of a First Nations’ Ensemble through the Musica Viva Australia “In Schools Programme”, resulting in “Lost Histories” created by Troy Russell.
- We supported the development of a new theatrical performance “Darkness” with Five Eliza Street, Newtown, Sydney to be premiered in January 2023.
- We supported our eleventh Rowland Scholar, Yujui Li.
- Abeer Abu Ghaith became a Leadership Fellow.
- We welcomed Hannah Stewart as Founder Fellow.
- We welcomed Zoë Cobden-Jewitt as an Intersticia Advisor.
Reflecting on the last ten years it seems fitting that we close the decade by once again working with Zoë on a project in Australia whilst also exploring new initiatives globally, as we have always done.
From the beginning our aspirations for Intersticia were always global. Ten years on we have now achieved this working with a range of organisations and supporting a Fellowship of individuals from all walks of life who are all contributing to the crafting of the story of Intersticia.
I would like to thank each and every person who has been a part of this.
An individual human existence should be like a river: small at first, narrowly contained within its banks, and rushing passionately past rocks and over waterfalls. Gradually the river grows wider, the banks recede, the waters flow more quietly, and in the end, without any visible break, they become merged in the sea, and painlessly lose their individual being (Bertrand Russell).
Oct 22, 2022 | Brave Conversations, Communications, Digital, Governance, Social Machine, Web Science
In November 2021 we finally realised our Future Worlds Challenge with the assistance of the MIT App Inventor Research team and a group of wonderful young people from around the globe.
In September 2022 we had the opportunity to develop this further thanks to the invitation of the Government of Sharjah to integrate both Future Worlds Challenge and Brave Conversations into the 2022 International Government Communications Forum. The opportunity was created by Ibrahim El Badawi, who has been supporting Leanne Fry and me with Brave Conversations since our first event in 2017 and has helped craft and present numerous Brave Conversations events for an Arabic-speaking audience over the past few years.
From the outset both Leanne and I realised that Sharjah was going to be something a bit different. The events were to be integrated into a major conference within a completely different cultural context and, to be honest, we had no idea who was going to turn up or when! Uppermost in our minds was the need to be mindful of cultural values and English proficiency, as well as of participants’ familiarity with technology beyond retail use. And we had to keep our energy up for four full days, with the two events overlapping on the third day. As a bonus we were thrilled that Professor Dame Wendy Hall agreed to join us in Sharjah to help us anchor our events within the broader context of the conference and also to link them to the very important work she is doing around digital governance and Artificial Intelligence.
From the moment we arrived in a very hot and humid Dubai we were greeted with superb Emirati hospitality thanks to Ohood Al Aboodi and her team at the IGCC. In addition we had our own private tour guide in Ibrahim, driving us around in his red Mustang. This gave us some valuable insights into the Emirate, particularly with a visit to University City and the very impressive House of Wisdom, one of the most beautiful learning centres in the world. To give some context, Sharjah is the third largest city in the UAE and capital of the Emirate of Sharjah. It seeks to position itself as the centre for Islamic culture and knowledge within the UAE, and the IGCC Forum is an event which focuses on government communication as central to this.
What became clear to us was that the IGCC Forum provided a perfect opportunity to explore some of the themes of Brave Conversations within this Arabic cultural context and specifically to engage with young people through Future Worlds Challenge. In this we were ably supported by some delightful young Emirati interpreters and facilitators, but most of all by the MIT App Inventor team of Claire Tan, Maura Kelleher and Nghi Nguyen, who quite literally worked their tails off with us, reorganising the programme and innovating on the fly when it came to teaching the code.
We arrived at the venue on Monday 26th September for Day One not really knowing what to expect. Gradually the room filled, and over the four days we were joined by students from the local university, groups of school children aged between 15 and 17, a contingent from the UAE Military, and a number of Directors of Government Communications from the Government of Sharjah. Apart from the fact that we were never quite sure when people would arrive or how many of them there would be, everyone was fully engaged and enthusiastically threw themselves into the coding tasks, the Challenge and the conversations.
Both Brave Conversations and Future Worlds Challenge are designed to get participants to use their imagination and creative thinking, and one way we seek to stimulate this is to highlight the importance of Science Fiction. When the Chinese wanted to find out why the West was so far ahead in its development of technology, they discovered that the West has a deep history of Science Fiction. When we posed this question to our Arabic audience it was curious that there was so little Arabic work in this genre, despite some encouraging early shoots (Larissa Sansour’s work in particular). This is one thing we encouraged our young audience to explore further, particularly as it opens the mind to possibilities, something at the heart of Future Worlds Challenge.
The Challenge built on the work we had done in 2021 and asked one simple question – How do you build a Future World ten years hence (i.e. 2032) that you would actually want to live in and that can sustain human life on this planet?
There were three aspects on which each proposed world had to be based:
- How do we think? What do we need to change about our values and expectations?
- How do we live? How do we live sustainably within the planetary ecosystem?
- What technologies can support this? Technology needs to serve not lead.
We divided the participants into seven groups of mixed ages and genders, and each one chose to focus on one aspect of designing a better Future World. Each group was given time to prepare and then gave a five-minute presentation followed by five minutes of questions.
How did we judge these Future Worlds? We asked three judges – volunteers Prashathi Reddy and our facilitator Hussein, plus Claire Tan – to consider the worlds based on these criteria:
- Does your world make sense?
- Is it realistic?
- How would Conversational AI support your World?
- Do you believe in it?
Following on from this first round three ‘winners’ were chosen who then presented to the IGCC Judges Panel at the end of the day, and this led to a final ‘winning team’ being announced at the Closing Ceremony Dinner of the Forum.
The teams were:
- Ahlam – Your Sleeping Matters
- Bioare – Sustainability for Life
- Fast Move – Accessibility for Blind People
- FWPW – Future Without Plastic Waste
- HRPI – Healthcare, Renewable, Printing and Inequality
- MOCAP – Project Charity Becomes Human
- Sooma – Zakat Calculator

To be honest there was no winning team.
Despite the nerves and hours of waiting around, each and every person who was with us worked hard, contributed ideas and energy and helped make the event a success, and it is a huge credit to them that we were able to push the boundaries of Future Worlds Challenge and develop the programme into something that is now fully formed and a complement to Brave Conversations, which, at Sharjah, was merely the supporting act!
The most precious thing for us was being able to give these young people insights into the dual analogue-digital worlds that are emerging, and in this we were truly blessed to have the inimitable Dame Wendy Hall. Wendy, as always, gave selflessly to our groups, and they gained insights from her more intimate session with us that she then expanded on in the main conference.
There is so much talk at the minute about the Metaverse, and Wendy explored some of the challenges of these metaverses (the plural being much more accurate). She very cleverly explained the issues of privacy by focusing on digital clothes shopping and what we will be exposing as we shop online. Wendy always has this gift for bringing crucial messages home – within a largely male audience it was the women who were the most wide-eyed and concerned.

This was really brought home during our final session of Brave Conversations when I looked at one of the main stands in the exhibition hall where one company was encouraging people to ‘get scanned and create your digital twin’. How much did people think about this before they eagerly participated and what questions should they have been asking?

As is happening in so many aspects of our lives, we have absolutely no protection from companies such as this, which encourage us to give away our data with no respect for privacy or accountability back to us. This is exactly the same as companies such as Ancestry.com taking people’s DNA, which strikes me as not just fraudulent but downright exploitative.
As Mark Zuckerberg is finding out there is a risk to rushing into these new frontiers, and gradually governments are beginning to wake up to their naivety of the past two decades and finally grapple with these issues. Too slowly of course, but they are beginning. This is the message that I would have liked to see at Sharjah, and hopefully some of the attendees listened.
As all societies keenly embrace the world of digital and see it as the key to the future, events such as these, where we can bring savvy young people together with the not-so-savvy elders to really question the future world, are crucial to having some semblance of control, and we are hugely grateful to the Government of Sharjah for providing one such opportunity.
Our thanks to Ibrahim El Badawi for creating this opportunity, and especially to Ohood Al Aboodi for all the hard work she did in getting us to Sharjah and making us feel so welcome.
