I caught up with a former member of one of The Conversation’s boards (we have quite a few) last week, and he shared some thoughts on how journalists deal with new technology. The problem, he said, was that they often tried to get technology to perform tasks that allowed them to keep doing their jobs in the way they’d always done them.

There’s something in that, I think, and it’s been a source of some contentment to me that at The Conversation we’ve taken an approach to the creation of content that is different to how other media go about things. While some new journalism outlets might have used technology in fancy ways, with whizzy graphics or new video products, the essential structure of a journalist seeking “the story” and filtering it for the audience has largely remained the same.

It is more than a decade now since we sought to do something a little different, by bringing academics together with editors to forge a product out of a bond of trust between those partners in the collaboration and with readers. And we’re still going, with that partnership at the heart of all we do.

So where will generative artificial intelligence fit in that delicately balanced three-way relationship? It probably won’t come as a surprise to you that we’ve been thinking about it a great deal in recent months. Naturally we want to do all we can, not just to protect the bond of trust at the heart of our unique form of communication, but to build upon it. That means being aware of the dramatic changes impacting the higher education and media sectors from where we derive our content, and thinking about the risks those changes present, but also the opportunities they may bring to perhaps be more creative and bountiful in the stories we tell.

In a rapidly changing world, we will constantly monitor and update our policies related to AI, consulting with editors, contributors and readers across the network and beyond. For now, we are asking contributors to let our teams know if they have used AI in any way in the production of content. In our most recent discussions it remains the collective view that an article produced by a generative AI product, for instance, would not be appropriate for publication on The Conversation. However, if an author has used such a tool to organise or truncate their thoughts early in the process, it may be deemed reasonable by the editor in question. Our editors, meanwhile, are not using generative AI tools to edit authors’ content.

So for now, we operate a flexible model, but the degree of that flexibility may shift over time. Returning to my original point, though, it seems unlikely that many sectors will be able, in the longer-term age of AI, simply to use the tools to keep doing tasks the way they always have, just more efficiently. The changes to creativity, coding and possibly even sentience appear likely to be too fundamental. These themes are explored below, among the many articles we’ve produced recently on AI.

They are also looked at in a book I was lucky enough to discover after an event I attended in London earlier this year. Now published in the UK and the US, Unsupervised: Navigating and Influencing a World Controlled by Powerful New Technologies is by Daniel Doll-Steinberg and Stuart Leaf. It’s well worth a read, and, as the title suggests, it charts a course between the extremes of excitement and deep pessimism that often colour the debate around AI and other new technology.

We’ll continue to keep you updated here on how The Conversation addresses issues of technological change and, of course, how they shape the world around us.

Stephen Khan

Global Executive Editor, The Conversation

Amid the Hollywood strikes, Tom Cruise’s latest ‘Mission: Impossible’ reveals what’s at stake with AI in movies

Sarah Bay-Cheng, York University, Canada

The movie offers both Hollywood history and Cruise’s presence as weapons of human resistance to the hazards of AI in filmmaking.

As climate change warms rivers, they are running out of breath – and so could the plants and animals they harbor

Li Li (李黎), Penn State

When water warms, it holds less oxygen, and this can harm aquatic life and degrade water quality. A new study finds that climate change is driving oxygen loss in hundreds of US and European rivers.

Machine learning can level the playing field against match fixing – helping regulators spot cheating

Dulani Jayasuriya, University of Auckland; Jacky Liu, University of Auckland; Ryan Elmore, University of Denver

A new machine learning model can pinpoint anomalies in sports results – whether from match fixing, strategic losses or poor player performance. It could be a useful tool in the fight against cheating.

Why ChatGPT isn’t conscious – but future AI systems might be

Colin Klein, Australian National University

The science of human consciousness offers new ways of gauging machine minds – and suggests there’s no obvious reason computers can’t develop awareness.

Pantanal fisherman. WALDECK SOUZA/Shutterstock

Sustainable use of natural resources: lessons from Pantanal communities

Rafael Morais Chiaravalloti, UCL

Riverside communities in the Pantanal make sustainable use of natural resources within an unpredictable system.

Protests calling for change in Iran happened all over the world after Mahsa Amini’s death. Philip Yabut/Alamy

Mahsa Amini: a year into the protest movement in Iran, this is what’s changed

Afshin Shahi, Keele University

People are gearing up for a potential resurgence of protests, while the state is preparing to suppress any sign of dissent.