
The Reporter's Notebook

Jane Barrett, Head of AI at Reuters, tells Brunswick’s Wolfgang Blau how AI is changing the media landscape.

In July of 2024, Jane Barrett was appointed the Head of Reuters AI Strategy, leading the 174-year-old company’s implementation and oversight of AI systems. Reuters, one of the world’s largest and most trusted news agencies, is part of the larger Thomson Reuters, a multinational company based in Canada that provides data and information to the Legal, Finance and News & Media industries.

Barrett graduated from Oxford University before joining Reuters as a correspondent in 1999. She later attended the Sulzberger Executive Leadership Program at Columbia University Graduate School of Journalism, becoming Global Editor of Media News Strategy in 2019.

She spoke with Brunswick’s Wolfgang Blau about the growing role of AI in newsrooms.

Let’s start with a positive: What excites you most about AI at Reuters?

What I’ve seen is that a world of possibility has opened up. I’ve been a journalist for 25 years and so often you wish that you could do something, but you’ve got to get on your company’s product roadmap first. With Generative AI, it is exciting to see journalists solving their own problems and getting things moving much, much faster.

In the early days of online journalism, we welcomed the barriers to entry into the industry getting lower. At the same time, many newspapers did not survive the disruption of their business models. What effect will AI have on today’s business models?

This is a very interesting moment. There are the obvious problems of “hallucination,” of AI conflating different stories that have nothing to do with each other, or of pretending that there’s somebody behind a piece of journalism when there isn’t. If AI news experiences satisfy people enough that they stop coming to publisher sites, that could also widen the gap between publishers and users, making it harder for publishers to know what their readers need and hitting their finances. My concern is exactly what we saw when social media lowered the barrier to entry: the more non-factual content is out there, the less trust there is, and the less cohesion there will be in society.

One of the things we’ve said at Reuters is that the more we invest in AI, the more we’ll free up the resources to invest in our news gathering.

We’re big. Thomson Reuters has 26,000 people around the world, with 2,600-plus people in the newsroom. But you can never have enough reporters. The hard core of journalism consists of going out there, knocking on doors, building sources and becoming an expert in your beat. That is something that AI can assist you with, but it can’t replace you.

Can you describe how AI will help your journalists free up time?

When we look at the newsroom, we see broadly three areas where we can apply AI: “reduce,” “augment” and “transform.” A lot of what we’re currently doing is focused on “reduce” and “augment.” In a newsroom as large as ours, there’s a lot of rote, pattern-based work that people are doing every single day: going through press releases, going through videos to find the exact moments you need, adding metadata to stories, a time-consuming but important job that journalists themselves are notoriously not the best at. That is all quite stressful work.

You can reduce that stress and that work by getting AI to do the majority of it, with the human in control, checking the AI output before it goes out. That is the “reduce” bucket.

“There’s no doubt that AI could create many exciting new and more personalized ways of interacting with news.”

Then you have the “augment” bucket, which holds all the stuff that we wish we could do but just don’t have time for. Translation into different languages is one example. We publish in 13 languages, and with Generative AI we can empower our teams to do more, in both languages and formats. For instance, we can take our English-language videos, translate them, voice the new versions with a synthetic voice and create new content, with our human experts checking the translations and the voicing, of course. This use of AI allows you to get your content out to a much broader audience, and to satisfy the needs of many more clients around the world.

At the moment, we are mostly focusing on solving today’s problems. That approach helps get people comfortable with AI. We already have about 400 people using AI tools every day. But there’s no doubt that AI could create many exciting new and more personalized ways of interacting with news. That’s the “transform” bucket.

We are asking ourselves: What is the news experience of the future? What would a much more personalized, trustworthy news product look like that caters to your specific needs? And is the audience ready for that yet? I don’t know. I’m sure that in 2025 we’re going to see a lot of the big tech companies coming out with more AI-enhanced, AI-curated news experiences. We just don’t want them to be the only companies doing this.

What advice can you give AI leads in other industries when it comes to responding to any cultural resistance against using AI at work? 

Reuters is part of Thomson Reuters, which has been investing a huge amount into AI. Our largest group of customers are lawyers, our second largest are tax and accounting professionals, and the third largest are corporates. We share a lot of information between different parts of Reuters. And the concern we have all come up against is, “Well, what if it gets it wrong? What if it misses something or just ‘hallucinates’?”

One of the pieces of advice that I always give to people is that they should start using AI tools on a project, or on a workflow, that they know well. This approach will give you a very realistic view of both the potential and the risks and shortcomings of AI tools. You’ve got to use the tools yourself. You’ve got to test them yourself.

Second, try to solve a real problem that you’ve got, one that is actually going to impact quite a few people if you can solve it. And what I find really interesting is that when people start to use the tools, they realize the power of the tool, but also that it is not going to take away their job, certainly not yet. It’s going to help them do their job better. The tools allow you to put your energy where being a human really matters.

What does good AI governance look like in this case?

We have the Reuters Trust Principles, which hold us to reliable and unbiased news. That’s the critical gold standard. Alongside that, we put in place some AI principles back in May 2023. The core ones are accountability and responsibility. As a journalist, you are accountable and responsible for what you put out there, whether that’s because you’ve got your byline on it, or because your fingerprint was on it throughout the process. You can’t say, “Oh, the AI got it wrong.” You’re responsible for it.

In practice, the way we start is by matching a very experienced journalist, who is a real expert in a particular field, together with a data scientist. Then they start to jam together. The data scientist brings a very different way of looking at a problem. And together, they start to build a prompt, essentially.

At this stage, the main governance issue is making sure we are keeping to our responsible data policies. Then we test the prompt, and we see if it works or not. And then we go through a rigorous evaluation process.

Once we’re through the proof-of-concept phase, we have a governance committee that meets on a monthly basis to look at the tools that are in development. Are there red flags that should be raised before we take a tool any further? That committee includes some very senior editors, our editor-in-chief and our executive editor, our general counsel, a senior Thomson Reuters strategy colleague, me as the Head of AI, and others as needed. And we really kick the tires on the tools to determine whether we should move forward or not.

If ever we’re going to turn anything on that’s just going to be AI direct to the clients, we will have oversight of all the models and maintain human-in-the-loop testing behind the scenes. We’ll regularly test them for accuracy and freedom from bias, and we’ll disclose that the content has been AI-generated. It is a matter of being very transparent with your users.

“Could Generative AI take an unstructured press release … and determine [its usefulness] for Reuters’ clients? Our very first prompt scored 95%.”

Do you sometimes feel like you’re trying to outrun the enemy when it comes to the growing amount of near real-time, personalized, credible-looking disinformation available from other sources?

That’s not just AI. That has been a problem for a long time; AI amplifies it. In the world of echo chambers and bubbles, we’ve seen this so much. The algorithm just keeps on feeding you exactly what’s going to get your engagement, your likes and your dislikes, whatever the topic happens to be.

In the recent US elections, our verification team was seeing a large volume of misinformation about supposed election fraud. This went on until the first two swing states were called for Trump, and then it suddenly stopped. It just went silent.

As journalists we need to keep our focus on fact-based reporting, correcting when we’re wrong, and continuing to challenge and question things even when they’re considered sacred cows. You’ve got to challenge the sacred cow as a journalist.

We have seen the World Wide Web and then social media arrive as the great democratizers, enabling everyone to be a journalist, at least in theory. The business of journalism, however, has only seen a few global winners and many more losers. Will AI accelerate that trend?

It’s an excellent question. We run a series of events called the “Future of News” where we gather publishers to talk about how we’re using AI, not just at Reuters but across the industry, and to tackle these big questions. We see it very much as a role that Reuters can play with the backing of Thomson Reuters.

In terms of size, we have 174 years of data. We have a big archive that’s digitized and continuing to grow exponentially. That allows us both to do deals with AI companies and to train our own models. We produce between 3,000 and 5,000 stories a day. We also have the financial strength to hire data scientists, who are like gold dust out there at the moment, to really help us create our own models and solutions.

That said, if you’re a small company, the barrier to entry on Generative AI is quite low. Even the enterprise version of ChatGPT is just a few hundred dollars a month, nothing like the expense of a new hire. The opportunity is there. And I find that some of my colleagues at smaller companies can be quite nimble. They can try things that aren’t encumbered by systems and processes and the stuff that big companies have to do. There are advantages and disadvantages to both. The thing is to move. Don’t stand still.

A lot of us have had that one moment we will remember, that moment where you used an AI tool and were just floored by what it can do. Let’s call it the “holy shit” moment. For me, it was using the image generator DALL-E for the first time. I asked it to carve a Tesla out of a lump of coal; the result was stunning. What was your moment?

For me, it was in the middle of 2023 when two of us started to think, “OK, well, how could we use this in the newsroom?” And we just did a first proof-of-concept: Could Generative AI take an unstructured press release and get the key facts from it and determine how they would be useful for Reuters’ clients?

Our very first prompt scored 95%. That was my “holy shit” moment. I realized then, “OK, we’ve got to really start running on this.”
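Barrett doesn’t share the prompt itself, so what follows is only a minimal, hypothetical sketch of that kind of press-release extraction call, written in Python against the OpenAI client library; the model choice, prompt wording and function name are assumptions for illustration, not Reuters’ actual implementation.

```python
# Hypothetical sketch of a press-release key-fact extraction call.
# Assumes the OpenAI Python client; prompt wording, model choice and
# function name are illustrative, not Reuters' actual implementation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT_TEMPLATE = (
    "You are a newswire assistant. From the press release below, extract "
    "the key facts (who, what, when, where, key figures) as bullet points, "
    "then state in one sentence whether the release is likely to be "
    "newsworthy for financial-news clients, and why.\n\n"
    "Press release:\n{release}"
)

def extract_key_facts(release_text: str) -> str:
    """Return extracted key facts and a newsworthiness judgment."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user",
                   "content": PROMPT_TEMPLATE.format(release=release_text)}],
        temperature=0,  # deterministic output makes evaluation easier
    )
    return response.choices[0].message.content
```

Scoring such output, as in the 95% figure above, would then mean comparing the extracted facts against a journalist-written gold standard.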

For AI’s impact in the longer term, there’s still the hurdle of the common consumer. My mum, a bright, highly intelligent and very digitally minded woman, hasn’t found a use for AI yet. So there is still a tipping point to come, of people finding AI relevant in their daily lives.

We are underestimating the long-term impact of AI at the moment. I do expect that something is going to happen, probably in 2025, maybe 2026. Those of us who have understood the very specialized value of AI today will then see it go mainstream and find use cases we can’t even imagine yet.

The Authors

Wolfgang Blau

Global Managing Partner, London

Wolfgang is the Global Managing Partner of Brunswick's Sustainable Business practice and an expert in climate communications. He is the Co-Founder of the Oxford Climate Journalism Network at Oxford University, an advisor to the UN climate division UNFCCC, and a visiting fellow of the University of Pennsylvania on issues of corporate climate strategies.