Analogue leadership in a digital world

WebSci24 and the emerging Agent Society

Jie Tang presenting his Keynote: The ChatGLM’s Road to AGI

Last week Intersticia Fellow Hannah Stewart and I attended the 2024 ACM Web Science Conference hosted by the University of Stuttgart and IRIS (Interchange Forum for Reflecting on Intelligent Systems).

The conference programme included an interesting mix of research presentation sessions ranging from Digital Art to Hate Speech, and a diverse set of Keynotes addressing topics such as China in the Global Information Ecosystem, Digital Humanism, Older Adults being Tech Savvy, and ChatGLM's Road to AGI.

The conference attracted over 100 people from all parts of the world to come together and discuss the questions posed by Web Science – those which don't necessarily fit neatly into one discipline or another but require a cross-disciplinary research focus, attitude and skill set.

There were a couple of key moments in the conference that stood out to me amidst all the talk about LLMs and access to data (something the researchers were particularly preoccupied with).

These key moments were:

Hannes Werthner's reiteration that We Create the Web, the Web Creates Us – the focus that has always defined Web Science.  Linked to this he raised the issue of Business Models – how did human-driven initiatives and policies help technical innovations scale and reach the human market?

This point is all too often forgotten, particularly in the research community, and I feel that this neglect often leads to somewhat irresponsible and naïve technology developments and deployments which have unforeseen and significant human social consequences.  (OpenAI's recent admission that its technologies have been used to deceptively manipulate public opinion around the world and influence geopolitics is an astounding case of both. Once released into the 'wild', what did they think was going to happen?!)

The Internet, which began its life on 29th October 1969 as ARPANET and has evolved to become the TCP/IP-driven network we know today with the World Wide Web sitting atop it, began as a government-sponsored academic initiative.  In 1994 the commercial race was launched when Netscape Communications and Microsoft began the Browser Wars, which led to the creation of the first Search Engines, publicly available online communities such as America Online, online dating and the Dot-Com Bubble.

All of this had massive consequences in terms of the ways that human beings interacted with information and each other, not the least of which has been the creation of a digital divide and the need to fight for digital human rights and freedom of expression.

Enabled by the Internet, the evolution from the Read-Only to the Read-Write Web, and the iPhone, digital Social Media platforms emerged which took human online interaction to a whole new level.

We are only now starting to acknowledge and more fully understand that, whilst this has been the greatest communications and information revolution since the printing press, the social consequences (as with the printing press and its role in the Protestant Reformation) are profound.  There is a growing awareness of some of the harmful effects such as digital addiction, a negative impact on critical thinking skills (particularly through short form content and declining attention spans) and an increase in (cyber) bullying particularly amongst young people.

Jonathan Haidt believes that there is a Youth Mental Health Crisis largely attributable to Social Media platforms and the business models that underpin them.

Another angle from which to consider this is the older population, for many of whom online platforms have become the primary source of news and truth.  Americans, now in the midst of one of the most important election campaigns in modern history, largely turn to online news sources, with many in Gen Z relying on Chinese-owned TikTok for their information.

All of this is driven by the business models that support the companies providing the services, and all sit within the socio-technical environment of their corporate headquarters governed by the values of their founders and Board.

With the rapid move towards Artificial Intelligence, as technology companies scramble to integrate machine learning and language models into their products, we are at a key inflection point.

Most of what I hear and read from the key technology players is, in my opinion, a race to the bottom – an AI Arms Race kicked off by OpenAI in a quite irresponsible manner purely to pursue dominant market share and human attention.  Since then the tech firms have put ethics on the backburner in order to capture predominantly business customers through promises of greater human productivity and a reduction in costs.  We've heard this all before, and as with last time:

Everyone is asking how and why.  No-one seems to be asking should.

Rarely do I hear statements about the benefits to humanity or the protection of vulnerable people, access and equity or how we can use these quite incredible machines to help us cope with the myriad of existential threats presenting themselves in the 21st Century.

Some, such as Elon Musk with Neuralink, talk about benefits to patients with neurological conditions, but my suspicious mind immediately links this to the race to control and dominate human thoughts for commercial gain, as Nita Farahany warns and as the Council of Europe and the European Union are beginning to recognise with their investigations into Neurotechnologies and Human Rights.

This brings me to the second key moment of the Conference, Jie Tang’s presentation and his key slide of The Web as a Linked-Agent which is the feature image of this post.

Jie Tang described research which provides a comprehensive and systematic overview of LLM-based agents and postulates a Simulated Agent Society where

agents exchange their thoughts and beliefs with others, influencing the information flow within the environment. (Zhiheng Xi et al., 2023)

The mere thought of this sends chills down my spine.

Again – here is the focus on the how and the what, but where is the should?

What does an Agent Society look like for us meat-based humans?  What Agency do we retain in such a world?

Whilst we may feel that the current technologies are still in their infancy and are prone to hallucinations and making stuff up, we need to continually remember Roy Amara’s Law that

Humans tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run. (Roy Amara)

When it comes to our Tech Overlords we should finally wake up and realise that they are commercial entities, Corporate Psychopaths created to maximise profits and shareholder value – nothing wrong with this; it is their purpose.  Our mistake is to naïvely assume that their remit includes the good of humanity or fair and equitable societies which focus on human dignity.  Given this, it seems plainly obvious to me that they should not be allowed to determine the future direction and development of artificial intelligence technologies and systems, nor should they be treated like other companies and left to govern themselves.

This is where the story of OpenAI, a firm ostensibly set up as a non-profit organisation with a public interest mission, is salutary.

OpenAI was created

to ensure that AGI, or artificial general intelligence – AI systems that are generally smarter than humans – would benefit "all of humanity".

In a recent article, two of OpenAI's former Board Members, who were charged with that mission and have now been ousted by the commercial forces that dominate the company, write that

Our experience is that even with every advantage, self-governance mechanisms like those employed by OpenAI will not suffice. It is, therefore, essential that the public sector be closely involved in the development of the technology. Now is the time for governmental bodies around the world to assert themselves. Only through a healthy balance of market forces and prudent regulation can we reliably ensure that AI’s evolution truly benefits all of humanity.  (Helen Toner and Tasha McCauley were on OpenAI’s board from 2021 to 2023 and from 2018 to 2023, respectively.)

This is why forums such as Web Science should be much bolder in including business- and government-driven research, which, as I recall, featured far more prominently in the conference's earlier days. The message of Web Science as a platform and community would be greatly enhanced by broadening beyond purely academic research and working to encourage greater dialogue between corporate research and government initiatives.

In addition, I would like to see something like Brave Conversations more fully integrated into the Web Science Conference programme, so that all attendees – not just those who notice the event or show an interest, together with random people who turn up – are forced to focus on the thorny societal, ethical and moral questions which arise about the common technology-driven future we are all co-creating.

As Anthropology Professor Michael Wesch so rightly said in 2007:

The Web is Us/ing Us.

We need to make sure that we humans continue to remember this.


Simulated Agent Society, from "The Rise and Potential of Large Language Model Based Agents: A Survey"

May 2024