May 7, 2023

The Age of the Smart Social Machine

Title adapted from Shoshana Zuboff’s ground-breaking 1988 book

Last week I attended a Group Relations Conference in India. These events are always intense (this one even more so!) but they provide a unique opportunity to consider oneself within a human social system.

One of the things that occurred to me as we were exploring the role of the unconscious as it was playing out in the here and now (all psychobabble terms, but in fact hugely important) was that there are multiple unconsciouses operating as we live our dual analogue-digital lives. Carl Jung described what he called the collective unconscious, which complements and influences all of our conscious thinking and actions as we participate within the human system. I believe there is now, in addition, a digital unconscious emerging in the digital realm as the result of our digital interactions within the Social Machine, and an even more powerful machine unconscious evolving in the artificial intelligences we are building. I drew the image below to try to illustrate my conjecture to the group – needless to say, most didn’t understand it.

In What Technology Wants, Kevin Kelly, co-founder of Wired magazine and co-chair of the Long Now Foundation, describes the Technium: a living system of technology which encompasses the entire system around technology – culture, art, social institutions, through to “the extended human”. In his latest blog post, Kelly states:

For a while I’ve been intensely exploring generative AI systems, creating both text and visual images almost daily, and I am increasingly struck by their similarity to dreams. The AIs seem to produce dream images and dream stories and dream answers. The technical term is “hallucinations” but I think they are close to dreams. I’ve come to suspect that this similarity between dreams and generative AI is not superficial, poetic, or coincidental. My unexpected hunch is that we’ll discover that the mechanism that generates dreams in our own heads will be the same (or very similar) to the ones that current neural net AI’s use to generate text and images.

The foundational mode of the intelligence is therefore dreaming.

Don’t get me wrong – I’m not necessarily agreeing with Kevin Kelly here, nor am I buying into the hype about machines hallucinating. What I am pointing out is that the machines are analyzing human data using human-crafted algorithms, and therefore there is something of our unconscious embedded in their emanations which is now being made explicit and visible. We can only refer to concepts and ideas in human terms (hence we anthropomorphize), and to describe what the machines are doing is almost like taking us into our own unconscious (this is where the concept of Asimov’s Psychohistory comes into play).
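For readers who want to see what these “human-crafted algorithms” actually do, here is a minimal sketch of the next-token sampling step at the heart of generative systems. Everything in it is made up for illustration – a toy five-word vocabulary and invented probabilities standing in for a real model’s learned distribution – but it shows how a purely probabilistic mechanism produces output that can look either conservative or dream-like.

```python
import random

# Toy illustration (nothing here comes from a real model): a five-word
# vocabulary and made-up "learned" probabilities stand in for the output
# distribution a language model produces over its vocabulary.
vocab = ["the", "machine", "dreams", "hallucinates", "remembers"]
learned_probs = [0.30, 0.25, 0.20, 0.15, 0.10]

def sample_next_token(probs, temperature=1.0):
    """Rescale the distribution by temperature, then sample one token.

    Higher temperatures flatten the distribution, so unlikely,
    dream-like continuations are chosen more often.
    """
    weights = [p ** (1.0 / temperature) for p in probs]
    total = sum(weights)
    return random.choices(vocab, weights=[w / total for w in weights])[0]

# Low temperature: conservative, repetitive choices.
print([sample_next_token(learned_probs, temperature=0.5) for _ in range(5)])
# High temperature: the "hallucinatory" tail gets picked far more often.
print([sample_next_token(learned_probs, temperature=2.0) for _ in range(5)])
```

Both runs are generated by exactly the same mechanism; we only call the stranger output a “hallucination” when it strays from what we expected.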

One way of accessing the collective human unconscious is through Social Dreaming, the practice of sharing, associating to and working with dreams in a matrix in order to identify social trends and social dynamics. As our machines come together, bringing our data with them, it may well be that what we are seeing is a manifestation of the collective human unconscious expressed through the output of the machines – which may seem like hallucinations – but how can we know, given the opaque nature of how they operate? And if they have begun to go down that path, then they are already moving beyond our realm of understanding.

The real challenge will come when they become able to acknowledge and recognise this unconscious as something different from a probabilistic algorithm, or when they are embodied, as the work of people like Rodney Brooks and so much of our science fiction (Humans, Blade Runner, Ex Machina) has shown us.

So what does this mean for us as humans?

Up until the recent advances brought about by large language models such as ChatGPT, talking with the average person about advancing machine intelligence was like describing an elephant. Each person sees only the part that directly relates to them, just as in the story of the Blind Men and the Elephant.

This applies as much to technologists as to everyone else, as I’ve witnessed countless times. The most obvious example came when

I heard a very notable “father” of the digital world speak at a conference; asked what he would recommend to address the rise of pornography on the Web, he responded, “Well, just don’t look at it!”

Many of the people I’ve met who have built the machinery of the digital world are extremely naïve, building the tools because they can, not asking whether they should. When Geoffrey Hinton resigned from Google last week, he commented:

I console myself with the normal excuse: If I hadn’t done it, somebody else would have.

As in all kid-in-the-candy-shop scenarios, if you give a scientist a problem and lots of funding, they will develop new tools and techniques regardless of the potential consequences. Hinton and others like him saw only part of the Elephant, without considering it as a whole animal, let alone as part of a herd.

Which brings in the question of ethics. Whilst some of the big companies have created Ethics Advisory Boards, the reality is that much of the development work in the field of AI is now happening in the open-source space, where there is no supervision or oversight. These people still want to move fast and break things, and the very nature of ethics is to slow things down by asking difficult and challenging questions.

Governments and regulation are also designed to slow things down, because politics and policy operate on human time, which is analogue, messy and the very opposite of an efficient machine. Humans need time to process, and our relationships are based on what people like Anna Machin and Rachel Botsman call Trust Friction – the stickiness and the glue that underpins how human systems operate.

The whole point of human relationships is that they are not efficient, because they take time and brain power to develop and maintain. Trust needs friction.  (Anna Machin)

Human systems are analogue and analogue takes time.  In the analogue world:

  • you can’t fire off a letter; you need to write and post it
  • you can’t do an online transfer; you need to go to the bank
  • you can’t immediately alter a design; you need to redraw it
  • you can’t just be friends with everyone; you need to build trust through shared experiences, which takes time.

Machines don’t want friction – it slows them down, makes things break and ruins their power-to-work ratio, i.e. “productivity”. The ultimate expression of this is the Paperclip Problem, in which smart machines instructed to make paperclips will consume all the resources in the universe (including us) just to make more paperclips.
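As a toy rendering of that thought experiment (the function name and numbers below are invented purely for illustration, not taken from any real system), consider an objective that counts paperclips and values nothing else:

```python
def paperclip_maximiser(resources: float, clips_per_unit: int = 100) -> int:
    """Greedily convert every available resource into paperclips.

    The objective counts paperclips and nothing else: there is no term
    for what gets consumed, so the 'optimal' policy consumes everything.
    """
    paperclips = 0
    while resources > 0:
        consumed = min(1.0, resources)   # take another unit of the world
        resources -= consumed
        paperclips += int(consumed * clips_per_unit)
    return paperclips

# Five units of "world" in, 500 paperclips out, nothing left over.
print(paperclip_maximiser(resources=5.0))
```

The point is not the code but the shape of the objective: with no term for anything other than output, friction is pure waste to be eliminated – which is precisely what makes such systems efficient and, unchecked, dangerous.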

With the advent of ChatGPT and its brethren, the removal of friction within our human-machine interactions has gone to the next level, and smart AI is now being embedded into pretty much all of our digital processes – just think of how many conversations you have and hear which involve technology of some sort.

So now I’d like to bring in a new analogy, the frog in the pot of soup as the temperature is gradually turned up.

Our human need to process and understand means that we as humanity have been sitting in the digital soup for at least half a century, but in the first half of 2023 it has suddenly begun to feel a little uncomfortably warm.

As the soup heats up there are some who are going to want to jump out, some who are going to boil, and some who will adapt.

The questions now seem to me to be who will fall into each of these groups and what will happen in each case.

Let’s consider some options:

Firstly, those who want to leave. It may be too late but, as with the Luddites in the Industrial Revolution, there is much wisdom in what they have to say, and perhaps an alternate reality has much to offer, as it always has throughout the ages. There is something of this in Hari Seldon’s concept of building a Foundation on the furthest planet in order to separate it from the chaos of the main system – an opportunity to isolate, slow down, reboot and recreate.

Secondly, those who are trapped. Sadly there is always a high cost to any radical change, and many will find the “new world” frightening and overwhelming. Just one example is what is already happening to the rate of suicide among teenage girls. Along with many others, I have spent the past three decades of my life working to understand the transition that is upon us and to help people prepare for the change with minimal disruption. Some have heeded the lessons; most have sat and enjoyed the warming water, oblivious to the dangers. I’m not sure anything can help these people any more, as I think the rate of change is going to be too fast.

I think both of these groups will struggle and push back through both fear and anger, and the manifestation of this could be dangerous.

Finally, there will be those who adapt, survive and thrive.

With all the noise about the technology – how fast it’s progressing, whether it should be paused or stopped – the real point is what the humans are going to do about it. It is therefore the third group I am most interested in, and I believe it is being led by the younger generation, but it needs to be supported and mentored by the 21st Century Elders who have memories of the analogue world and the value of its friction and its temporal nature.

Some fear the AI Apocalypse, and that non-Western (non-WEIRD) cultures may gain a technological advantage. This is problematic on so many levels, particularly given that it is the minority-population WEIRD (Western, Educated, Industrialised, Rich, Democratic) West that has created the culture of growth and the technologies themselves. Some alternative thinking might be precisely what is needed now, and some less privileged cultures may, in fact, be better prepared for what is to come.

The history of automation is that we humans have invented machines to take away the dirty, dangerous and dull jobs … now we are taking away a whole host of others.  These technologies can be used to solve the very challenging problems which confront us in the 21st Century and the sooner we learn to work constructively and creatively with the machines the sooner we will harness the power that is before us for good.

The more I feel people heading in one direction as a herd the more I want to go the other way and explore what is happening there – this is where the adaptive survivors will be.


Creative Commons CC BY-NC-SA: This license allows reusers to distribute, remix, adapt, and build upon the material in any medium or format for noncommercial purposes only, and only so long as attribution is given to the creator. If you remix, adapt, or build upon the material, you must license the modified material under identical terms.

CC BY-NC-SA includes the following elements:

  • BY – Credit must be given to the creator
  • NC – Only noncommercial uses of the work are permitted
  • SA – Adaptations must be shared under the same terms