A Privacy Expert on Profits, AI and ... Dystopian Sci-Fi

A wide-ranging conversation with ZoomInfo’s Chief Compliance Officer, Simon McDougall.

Our interview with Simon McDougall started out pleasantly enough, exploring his unconventional career path and discovering that he’d founded a privacy book club. The discussion finished on a somewhat heavier note, with McDougall wondering what privacy could look like in an age of neuro-technology and artificial intelligence. It was a wide-ranging conversation, in other words.

Such a multifaceted discussion seems fitting given McDougall’s background—among other positions, he’s been a privacy consultant, a leading privacy regulator in the UK (one focused on artificial intelligence years before the rise of ChatGPT), and now serves as a C-Suite executive for a publicly traded US tech company.

We asked McDougall about ZoomInfo (unrelated to Zoom, the video meeting company) and its unique approach to privacy. How does being a “privacy-first” company work in practice? If your business model leans heavily on data—ZoomInfo is a platform that helps companies find and connect with new customers, processing more than 1.6 billion data points a day as it does so—wouldn’t that focus on privacy hurt profits?

“That’s one of the reasons I joined,” McDougall recently told Brunswick Director Christina Spellman. “I wanted to be part of a company which can be very successful and also lead on privacy.”

From studying English at Oxford to Chief Compliance Officer of a multibillion-dollar US software and data company—that’s not a common career path. How did you wind up on it? 

I’ve always believed the most enjoyable careers zig and zag. I studied literature, I ran my student union for a year, so I was a student politician. I later became a chartered accountant. Then you had the dot-com boom. I wanted to get into technology, because I’m passionate about it and there was a lot of opportunity. My focus was privacy; I was drawn to how it intersected with human rights.

Until very recently, I didn’t see much of a link between what I learned studying literature and what I do now. But with language models, generative AI, all the coding happening in English, it’s kind of come full circle.

Staying with your literary roots for a moment, when people think of novels that deal with questions of privacy, the same (bleak) titles invariably come up: 1984, Brave New World. What other books—fiction or non-fiction—would you suggest people read? 

I actually used to run a privacy book club, which was as much about the red wine and company as the books. The only rule we had was “no dystopian sci-fi.”

For future-watching, anything by Neal Stephenson is good. It’s not the most original answer, but it’s crazy how many things he imagined before they came to be [including the Metaverse].

Fiction is such a good way to help people visualize frontier technology. There are a million and one academic papers and government white papers out there about the next emerging technology and all its risks and opportunities. But then we all go watch “Minority Report” and actually see the technology, empathize with it and start to engage with it. There’s a link between good storytelling and helping people get their arms around what they should and shouldn’t be worried about.

A technology some people are clearly worrying about is AI. You were working on privacy issues related to the technology before you joined ZoomInfo and before the launch of ChatGPT—what’s different about the conversation today compared with five years ago?

It’s actually why I became a regulator in the first place. My children are now in their late teens, but when they were much younger, I was already alarmed as to where the technology was going. That’s why I joined the ICO [Information Commissioner’s Office, a regulatory authority in the UK focused on data privacy and information rights] and helped set up its first AI team, which was possibly the world’s first AI privacy regulator. We didn’t predict everything that’s happening right now, but the direction of travel was pretty clear.

In this current wave of generative AI, I actually think privacy is one of the less “scary” elements. You train AI using these massive data sets, and those data sets have people’s information. That matters. But I think it’s a different magnitude than the questions around intellectual property or copyright. If my data is being used to train a model, assuming that personal data doesn’t get surfaced at the other end, then I haven’t really suffered any kind of harm—whether that use is fair is another question. But you compare that to a coder, or a poet, or a painter, who finds that they can’t sell their work anymore because GenAI has taken their job, and that GenAI was trained on their paintings or poetry—that’s a very direct impact.

That said, when AI starts making decisions about people, you do get some very real privacy concerns. If it’s a decision as to which meme shows up on your feed, that’s not so important. But if it’s whether you’re going to get access to a country, be given a job, or be served political content, and there’s an algorithm behind that decision, then it’s relevant.

You left the regulatory world in 2022 to join ZoomInfo as its Chief Compliance Officer. What does that role entail, exactly? 

There are the pieces you’d expect: regulatory compliance, risk management, how we engage with regulators and policymakers. But there’s also a big part of it around data ethics, a sense of stewardship: what should you do with data rather than what could you do with data?

But it’s not me sitting on the sidelines and being the conscience of the company; everybody is the conscience of the company, we’re all pulling in the same direction.

As we innovate, as we grow, as you get questions from people whose data we have, we need to be able to show we’re doing the right thing. My job is to advise the company on how to do it, and then actually have all the policies and procedures to make sure we do it right.

There’s a perception that privacy hurts profits and growth—particularly for a software and data company. How can access to data for your customers and data privacy for individuals best coexist?

That is absolutely one of the reasons I joined ZoomInfo. I want to demonstrate that you can have a successful commercial company and be ethical, be privacy-first. Some people are suspicious of anybody with a profit motive. I spent 20 years as a consultant working mainly with big companies across a lot of sectors, and I saw lots of people trying to do the right thing. I was keen to be at a company that bought into the idea of protecting privacy, and also had a culture, a business model, that supported the idea.

ZoomInfo is a business-to-business data company; we help companies find new customers and connect with those customers. Think of your sales and marketing teams working together. It’s a simple business model, and one that relies heavily on data. That data has to be accurate and actionable. But we’re not trying to collect lots of data points about people, or do in-depth profiling. It’s a simple—you could call it unintrusive—data set, in some sense.

When it comes to data privacy, the technology, regulations, and societal expectations all keep changing so quickly. Is it possible to keep up—let alone get ahead?

I’ve definitely had to let go of some of the detail as I’ve gotten more senior. You can’t know every line of every piece of regulation. Over 100 countries have some form of privacy regulation, and pretty much every government is talking about new AI regulation.

As a company, we’ve kept things simple by using GDPR [the EU’s privacy law] as our benchmark. It’s a high standard. And as new regulation comes in, we’ve often already complied with much of it by applying GDPR globally. We notify everyone in our database with a detailed permission notice, regardless of whether the law requires it. And we let you opt out, whether or not we are legally obliged to do so.

The companies that say, “We’re only going to comply with whatever the applicable law is,” they’re always chasing their tails because there’s always a new law out there.

Any concerns about those new laws, particularly around AI?

We had a number of years to evolve to where we are now with privacy. GDPR was enacted in 2016. Before that you had other rules; we got there fairly gradually. It feels like with AI regulation, we’re doing all of that evolution in about 18 months rather than 15 years. That’s a massive challenge. We all appreciate the urgency, but it’s hard to keep up.

I worry that if we rush into legislation, it’ll be a bit like the Dangerous Dogs Act in the UK—in the early 1990s there was a string of awful dog attacks, and within a few weeks legislation was passed that was poorly written and did little good but quite a lot of harm.

We obviously need to have principles in place, otherwise we’re going to be too slow. I don’t think creating new regulatory bodies makes sense, because those take years to build. We have to lean on the existing apparatus, give regulators some new powers and new funding, and agree that this is uncharted territory, but also recognize that it’s technology affecting humans in many sectors that are already regulated. So if AI is discriminating against people, we have regulations for that; if it’s building weapons, or if it’s tampering with biology, likewise.

Is there a data privacy-related issue you’re surprised isn’t receiving more attention?

Years ago I would have said protecting children online. You don’t have to be a parent to feel strongly about protecting children, but when you see your kids interacting with screens all the time, that focuses the mind. And I was bewildered at how a lot of the tactics that I saw being used to captivate adult attention would be used on young people’s minds. We’ve caught up with that in the last few years—the ICO issued the Children’s Code, or Age Appropriate Design Code, while I was deputy commissioner, something I’m incredibly proud of.

Today, some areas getting a lot of focus in the privacy community are the ability to scan brainwaves without implants, and also then what you can do with implants. I think my generation might have the luxury of remaining organic; my children may have to decide if and how they are augmented.

If you look at where we’re going with neuro-technology, we’re quickly moving to a point where we can understand brainwaves and start to form views of people’s inner thoughts. Yes, this is future-watching, but we’re also in a period of rapid technological change. You can imagine the appeal if you’re a certain type of person, or if you’re a dictator …

One way to think of privacy is that it’s your right to have an inner life—if you don’t want to share your movements, emails, diary, or political views, you don’t have to. Privacy is there to make sure that’s respected. We need to get some principles in place to make sure that it continues to be respected.

Photograph: courtesy of ZoomInfo

The Authors

Christina Spellman

Director, San Francisco

Christina is a Director at Brunswick and serves as US Sector Manager for the firm’s Technology, Media, and Telecoms (TMT) group.