
Turning Back the Disinformation Tide

Defensive measures businesses take now can reduce their vulnerability when false information suddenly appears. Brunswick Review speaks with cybersecurity experts Lisa Kaplan and Preston Golson.

The sudden, startling rise of generative AI tools has raised new fears about the spread of false information in news and social media. Misinformation and disinformation—the latter a deliberate attempt to confuse, manipulate or destroy an opponent’s reputation—have been threats for as long as humans have trafficked in untruths. But the modern phenomenon of false narratives spreading virally on social media threatens businesses with damage not just to reputation, but to profits and operations. For executives, false narratives can quickly turn from a distraction or nuisance into a critical issue. Advances in AI threaten to increase both the scale of such attacks and the harm they can do.

The influence of false information on public opinion became too large to ignore following the 2016 presidential race. Many studies, including one published in the Journal of Economic Perspectives in 2017, found that false news about the campaign proliferated in the months leading up to the election, potentially even affecting its outcome. The same study correlated the rise of social media, where such false content and misleading narratives were most evident, with the fading influence of traditional media and a general decline in trust in the press, creating the conditions for disinformation to flourish.

Since then, businesses have frequently emerged as targets of false information—often spread unintentionally, sometimes deliberately—that can damage assets such as an organization’s reputation, customer base or stock price. In 2018, for instance, Broadcom was forced to respond when a forged Department of Defense memo circulated, undermining support for its planned acquisition of CA Technologies.

Yet as such threats have grown, so has awareness of how to counter them. To learn how attacks and defenses are changing in the disinformation landscape, we spoke with Lisa Kaplan, founder of the technology company Alethea, and Preston Golson, a former CIA communications director and now a Director at Brunswick specializing in cybersecurity, geopolitics and corporate reputation. Kaplan and Golson are friends and have previously worked together on behalf of companies. Along with a sense of realism about the threats organizations face, they share confidence in the tools available to combat them.

Lisa, how did you come to create Alethea?

Lisa Kaplan: I started the company in 2019 after serving as Digital Director on a Senate campaign. We knew at that point that we needed to figure out what to do about disinformation and misinformation, because it was going to impact every public- and private-sector entity with an online presence.

We built a technology platform that can identify misinformation, disinformation and social media manipulation at scale. A lot of organizations and companies face risks that exist outside their own systems and that can cause serious business-continuity challenges and real-world consequences. Our platform pulls in all sorts of data sources and uses machine learning and artificial intelligence to bucket the content into narratives. Then, within those narratives, we look for what we call signs of coordination: Where are actors using fake accounts or nefarious tactics to achieve their goals at the expense of companies?

We do the work of detecting and assessing what’s out there, and then we partner with everyone from communications teams to security teams to give them the insights that they need to be able to mitigate any of these efforts.
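
Kaplan’s description implies a three-stage shape: ingest posts, cluster similar content into narratives, then score each narrative for signs of coordination. The sketch below is a minimal illustration of that shape under stated assumptions—it is not Alethea’s implementation, and every name, parameter and threshold in it (Post, find_narratives, coordination_score, the DBSCAN settings) is hypothetical.

# Hypothetical sketch only: cluster posts into "narratives," then
# score a crude coordination signal. Not Alethea's actual system.
from collections import Counter
from dataclasses import dataclass

from sklearn.cluster import DBSCAN
from sklearn.feature_extraction.text import TfidfVectorizer


@dataclass
class Post:
    account: str      # author handle
    text: str         # post body
    timestamp: float  # Unix seconds


def find_narratives(posts: list[Post]) -> dict[int, list[Post]]:
    """Bucket posts into narrative clusters by textual similarity."""
    vectors = TfidfVectorizer(stop_words="english").fit_transform(
        [p.text for p in posts]
    )
    # Cosine-distance DBSCAN groups near-duplicate phrasing together;
    # label -1 means the post joined no cluster.
    labels = DBSCAN(eps=0.5, min_samples=3, metric="cosine").fit_predict(vectors)
    narratives: dict[int, list[Post]] = {}
    for post, label in zip(posts, labels):
        if label != -1:
            narratives.setdefault(label, []).append(post)
    return narratives


def coordination_score(cluster: list[Post]) -> float:
    """Crude coordination signal: many distinct accounts posting
    near-identical text in a narrow time window suggests amplification."""
    accounts = {p.account for p in cluster}
    window = max(p.timestamp for p in cluster) - min(p.timestamp for p in cluster)
    duplicate_ratio = max(Counter(p.text for p in cluster).values()) / len(cluster)
    burstiness = 1.0 / (1.0 + window / 3600.0)  # ~1.0 if all within an hour
    return duplicate_ratio * burstiness * min(len(accounts), 10) / 10.0

DBSCAN is used here because the number of narratives isn’t known in advance and clusters vary in size; the score rewards many distinct accounts posting near-identical text in a short burst—one of the “fake account” patterns Kaplan describes.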

How has the threat of disinformation changed? And how aware of those threats are the organizations you talk to?

LK: They’re definitely more at risk now. One of the things people have realized over the last several years is that this is much larger than just a political issue. It’s an issue that will impact their ability to communicate with their key stakeholders—talent, customers, shareholders and others.

Those that are proactive are the ones that end up doing the best. If you do the early detection, then you don’t necessarily need major surgery. You don’t necessarily need to let something turn into a crisis.

Preston Golson: Yes, in 2018, people thought it was about politics. Increasingly now it’s more about the bottom line. People may boycott your company as a result of a misinformation or disinformation narrative, or you could see your stock price fall, or you could even see physical threats to your personnel and facilities. The range of real-world impacts has expanded.


Lisa Kaplan, Founder and CEO of Alethea

Social media platforms have increased anti-disinformation activity. Do you think companies are trusting those platforms too much as a line of defense?

PG: My answer is yes. Meanwhile, it’s becoming increasingly difficult, both technically and politically, for social media companies to really monitor and police disinformation.

We also cannot ban our way out of disinformation, nor should we even try. Free speech is paramount. The key to countering false or misleading narratives online is better, more authentic communications, not bans, in my opinion. Organizations with clearly articulated values, backed by real-world examples, can build reservoirs of goodwill and resilience.

LK: The social media companies’ trust-and-safety teams grew for a while, and they could do more proactively. Unfortunately, there have since been a lot of layoffs, meaning the platforms have fewer people to protect against this type of threat.

Another reality is that not every false or damaging narrative necessarily violates a platform’s terms of service. So even if the social media companies were proactively reviewing every company’s mentions or posts—which, by the way, is not their business model, not scalable and never going to happen—and getting content taken down, false information could still get through.

If you see something that violates a platform’s policies, you can of course work with the platform. Even then, it has to do its own investigation and agree with your assessment, all of which takes time. The internet moves fast, and by the time any action happens, the damage may already be done. So takedowns alone aren’t as effective as a multi-pronged approach.

What would you suggest companies do to be proactive?

PG: First of all, you’ve got to know the environment. I like the analogy of a forest fire: you have to understand the landscape, the conditions and where your risks are. And when you understand that, you can start a proactive campaign to address them.

If you know there is an element of your organization or business that is beset with legitimate criticism—maybe past controversial practices—you can make changes so that if those things ever became a news item, you could point to the progress you’ve made. Get ahead of those things. Your stakeholders should know what you’re doing in areas where you could be vulnerable to criticism. Communicate so people understand. Create crisis playbooks for disinformation and misinformation scenarios. Misleading narratives are most often based on some legitimate criticism that has been distorted to seem even worse than it originally was. So understanding where you might be criticized, and how that criticism might be twisted, will give you an advantage when you have to respond quickly.

What really helps with audiences is having other people, independent of you, validating and vouching for what you stand for. That’s something you can develop now. Bring in people from the outside, show them what you’re doing, show them what you stand for, show them your values, and build that bench of third-party validators right away. Then you have them as advocates if you are ever attacked by a false or misleading narrative.

LK: I agree with all that. An organization has many touchpoints once you expand the definition of communications, and the number of risk owners, beyond just traditional press and digital. The entire organization should be prepared to fight this type of threat. Mitigation is a team sport.

Can you elaborate on that? The expanded role of communications?

LK: This is really about building bonds among all the people who care—all the risk owners. You want them to share in implementing the organization’s risk strategy.

But communications itself is also being asked to take on a larger role. It’s no longer just responsible for talking to the press and making sure advertising campaigns feature the right products and get the right ROI. Communications teams are truly the front-line risk managers for the information that exists outside a company.

Where we’ve seen organizations, including in the Fortune 50, do best is when they work with every single member of their team—the folks in the call center, the security team, the people at the front gate—making sure everyone is aware of what’s happening and has the information and support they need to recognize and counter disinformation. It’s communications’ job to make sure the entire organization is prepared to deal with this type of risk, which is diffused across the entire organization.

Preston Golson: We cannot ban our way out of disinformation, nor should we even try. Free speech is paramount. The key to countering false or misleading narratives online is better, more authentic communications.

Technology creates the ability to start fires in a thousand places at once. How do you fight that?

LK: That early-detection and early-warning piece is really key. There is an opportunity to put out the brush fires and make yourself more resilient to a forest fire. Regular stakeholder communications—building trust with your consumers, your employees and your investors, and really cultivating, maintaining and investing in those relationships—is just really important. If people trust what you say about your brand more than what somebody else says, you’re already in a better position. Often, even situations where the fires are happening all at once can be avoided with the right insights.

PG: Even now, with the generative AI possibilities out there, bad actors still need to find narratives that real people will connect with. The bots are often what get things started but they need real people to move the narrative on. You can still prepare for those narratives.

You can also increase public awareness of the disinformation tactics that are out there. Social media works to push people who are susceptible to conspiracies in a direction they’re already leaning, and disinformation exploits that. Even a vigilant person has to take a pause and say, “Hey, wait, wait. Is this true?” For instance, there was a video of the Francis Scott Key Bridge in Baltimore, after the collision, juxtaposed against a video of a truck going off a bridge followed by a massive explosion. Two completely unrelated events, but placed side by side, they made people think the explosion happened in Baltimore. The explosion was from 2022, at the Kerch Bridge in Crimea. I had seen that video when it was first released, but other people were thinking, “Wow, that explosion is just what happened in Baltimore.”

Something as simple as that—not generative AI, just two real videos placed next to each other out of context. As a result, even very well-meaning, honest people end up sharing false information. So it’s important to educate people to stop and think about what they’re looking at. We really have to work on how we examine and absorb information as a society to get ahead of increasingly visual disinformation. What happens when you cannot believe your eyes?

How do you counter a false narrative when there are people deeply invested in believing it?

PG: One of the biggest misconceptions about countering misinformation is that you can change the mind of someone who has already bought into a narrative. The science shows the opposite: the more you try to explain it to people, the more they dig in.

You have to think of the audience in segments. There are the detractors—they’ve bought into the false narrative, and it’s going to be very hard for them to come off that view. There are susceptible audiences, who are part of the communities or belief structures closest to the detractor camp. Then there’s a persuadable audience, who have not fallen to one side or the other. Lastly, there are your advocates, people who believe in what you’re doing. The job is to furnish your supporters with the information that allows them to go out and make your case to the persuadable and susceptible segments. If you can stop those segments from falling down the rabbit hole into the detractor camp, then you can conceivably disable the spread of the false narrative.

LK: Yes, I agree. That can also help you identify the issues you care about most. I always recommend a robust, traditional market-research plan to really understand the audiences you’re trying to reach.

In moments where a company has let something go too far, and now a certain segment of the population doesn’t trust your vaccine or won’t buy coffee at your establishment, the question becomes: “OK, here we are. How do we start to build back? What can we do, and how do we take smart bets to counteract the narrative?” If you have people supporting you already, that becomes easier.

Lisa, you use AI in Alethea’s work. How does that balance against AI as a threat?

LK: Behind the spread of disinformation is a set of behaviors designed to make it look as though real people are spreading it. When we look at those behaviors, we’re starting to see how LLMs, generated images and deepfakes—all of these things that have come onto the scene—are being used in the content itself. But you still need an actor who launches the campaign, and the behaviors to front it.

We’ve focused on building AI to detect this type of behavior. That’s how we’re able to help organizations understand who’s behind a narrative, how it is spreading, what laws or policies are being broken, and what options exist to do something about it.

More organizations are going to be targeted because this has become so fast, cheap and easy to do. But there are technologies you can use to detect these types of behaviors, and those are getting better too.
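
As a toy illustration of what a behavioral signal can look like—again hypothetical, not any vendor’s method—consider posting cadence. Automated accounts often post at suspiciously regular intervals, which a few lines of arithmetic can surface:

# Illustrative only: one weak behavioral signal for automated posting.
import statistics


def posting_regularity(timestamps: list[float]) -> float:
    """Score in [0, 1]: values near 1 mean suspiciously regular
    posting intervals, a common trait of automated accounts."""
    if len(timestamps) < 3:
        return 0.0
    ts = sorted(timestamps)
    gaps = [b - a for a, b in zip(ts, ts[1:])]
    mean = statistics.mean(gaps)
    if mean == 0:
        return 1.0
    # Low coefficient of variation => machine-like cadence.
    cv = statistics.pstdev(gaps) / mean
    return max(0.0, 1.0 - cv)

A score near 1 proves nothing on its own; in practice, many such weak signals would be combined before anyone concludes an account is inauthentic.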

What message do you want to leave with our readers?

LK: The big takeaway is that attacks like this may be coming and that they pose a major risk. And you can do something in the meantime; you can be proactive. At this point, it’s frankly negligent not to be thinking about this.

PG: In soccer they use the term “own goal”—scoring against yourself. If you don’t know the landscape and you don’t understand the risks, you can easily fall into a disinformation or misinformation scenario with just an offhand comment or action. Understanding and awareness of how misleading narratives originate can help you avoid being an easy target.

The Authors

Carlton Wilkinson

Director, New York

Carlton Wilkinson is a Director and the Managing Editor of the Brunswick Review.

Preston Golson

Director, Washington, DC

Preston has served in a variety of national security positions, including as an analyst at the Central Intelligence Agency (CIA) and an aide to the first two Directors of National Intelligence.