AI Is Exposing PII: What Security Leaders Should Know

AI has lowered the barrier for exposing sensitive executive data, putting PII within reach of anyone with a motive. In this Q&A, Wesley Flatt of 360 Privacy explains the threat and how security teams can respond.

Artificial intelligence is no longer just a productivity tool. It is quickly becoming a new attack surface. Systems like ChatGPT are now capable of surfacing executives’ personal information with a few simple prompts, making it easier than ever for malicious actors to exploit sensitive data.

To understand how this shift is changing the security landscape, we spoke with Wesley Flatt, Senior Client Service Support Specialist at 360 Privacy. He shared his perspective on the risks AI poses to executives, what kinds of PII are most exposed, and what security teams can do to stay ahead.

Below is a transcript of our conversation. It has been edited for clarity.

Can ChatGPT actually leak PII, and what’s the best way to get it removed?

Yes, and we have seen multiple examples. With the right prompts, people can get ChatGPT or Google’s built-in AI to surface personal information. Opting out of ChatGPT helps, but that is not a complete solution. The most effective approach is to go straight to the data brokers and have the information removed at the source. That is what AI systems are pulling from in the first place.

What’s the most dangerous type of PII that AI is exposing?

The biggest threat is when home addresses are linked directly to valid phone numbers. That combination is extremely risky because it creates a clear, actionable path for anyone with bad intentions.

Has AI lowered the barrier to entry for attacks?

Absolutely. In the past, a skilled threat actor needed specialized knowledge to track down PII. They had to know where to look, how to bypass protections, and how to connect scattered data points. That kind of work took time, persistence, and expertise. Now AI has removed that barrier almost completely.

All it takes is one disgruntled person with a grudge and a few clever prompts, and suddenly they can surface information that puts you and your family at risk. The danger is not just nation-state actors or professional cybercriminals anymore. The threat can come from anyone with internet access and a motive. That is what makes this such a game-changer for security teams.

Where do AI tools like ChatGPT actually pull their data from?

It is almost always the open web. These tools do not typically have direct access to private databases; providers have strict policies against that. But if something is publicly available online, even on obscure sites, AI models can pull it in and resurface it.

Can AI models show incorrect information, and how should security teams respond?

AI does not really care if the data is accurate. It is designed to provide an answer. That means it will often pull information whether it is right or wrong. If you discover your PII showing up, the first step is to trace it back to the source. Try removing it directly if you can. And if that fails, bring in experts like 360 Privacy to handle it for you.


AI is no longer a future concern. It’s reshaping executive risk right now. Our team at 360 Privacy recently tracked and dismantled two AI-built doxing platforms targeting more than 23,000 executives. The full investigation is detailed in our new technical brief: From Prompt to Platform: AI and the New Era of Executive Targeting. Download it today to see exactly how these platforms were built, why the threat is growing, and how leaders can get ahead of it.