
How AI Is Changing the Rules of Executive Protection

From doxing sites to AI-assisted phishing, the risks are growing. In this interview, Chris Wingfield reveals why AI is changing the game for security teams and how leaders can turn it to their advantage.

It no longer takes an expert attacker to compromise your digital footprint. Today, anyone with curiosity and free access to an AI model can launch a doxing campaign, build phishing pages, or uncover sensitive personal details with just a few prompts.

For executives and high-profile individuals, that shift changes the game. The barrier to entry for digital targeting has collapsed, and with it, the pool of potential threat actors has exploded.

To make sense of this new landscape, we spoke with Chris Wingfield, Senior Vice President of Innovation at 360 Privacy. In this conversation, Chris explains how AI is reshaping digital targeting, why leaders should care, and how security teams can flip the script by using these same tools defensively.

Below is a transcript of our conversation. It has been edited for clarity.

How is AI changing digital targeting?

AI is lowering the technical barrier.

Non-technical people can now use free or low-cost models to learn advanced digital targeting techniques, from setting up phishing pages to building doxing sites to following step-by-step targeting workflows. Some models will not readily present personally identifiable information when prompted. Most will, however, describe targeting tactics, generate advanced search strings, and surface exposed information. In practice, that gives amateurs a level of process knowledge that used to be reserved for trained analysts.

Why does this matter for security leaders and high-profile individuals?

Data exposures create vulnerabilities that impact overall risk.

In the past, there were gatekeepers: credentialed data providers limited who could access certain sensitive information. Today, with weak privacy laws and easy access to personal information, anyone with an internet connection can prompt an AI model for advanced techniques to identify exploitable exposures.

That means every executive, board member, and family member is more accessible to threat actors. For leaders, the key variable is access: AI has lowered the technical barrier, putting this level of digital targeting within almost anyone's reach.

How could a low-skilled individual launch a sophisticated attack using these tools?

It starts with understanding how models are trained and how they gate responses.

Each model has different strengths, guardrails, and blind spots. With the right prompts, a user can frame the request as safe, analytical guidance. The model might not execute the techniques itself, but it may suggest specific queries and OSINT methods that can surface personal phone numbers, addresses, and family information.

We have already seen doxing databases built by people with limited technical backgrounds who used AI to accelerate research and content generation.

The short version is this: If an advanced task can be decomposed into repeatable steps, then a model can guide a novice through how to perform those steps faster.

Are there any silver linings for security teams?

Absolutely! You can apply the same approach to your advantage.

Use the models to red-team your principal and capture the output URLs as a task list. This helps you understand the security posture and the specific vulnerabilities associated with your principal.

Treat each link as an identified exposure, such as a data broker surfacing your personal information in search engine results. Then work through the links as a chain, remediating or mitigating each exposure to reduce the overall vulnerability those data points create.
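To make that concrete, here is a minimal sketch of the red-team-and-capture step in Python. It assumes the openai client library and an OpenAI-compatible API key; the model name, prompt wording, and URL extraction are illustrative placeholders, not 360 Privacy tooling.

```python
# A minimal sketch: ask a model to map public exposures for an authorized
# privacy review, then capture every URL in the answer as a task list.
import re
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "You are assisting a corporate security team with an authorized privacy "
    "review. List public sources, such as data broker and people-search "
    "pages, where information about our executive could be exposed, and "
    "include a URL for each source."
)

response = client.chat.completions.create(
    model="gpt-4o",  # assumption: any capable model with a free or low-cost tier
    messages=[{"role": "user", "content": prompt}],
)

# Each captured URL becomes one link in the remediation chain.
text = response.choices[0].message.content
task_list = sorted(set(re.findall(r"https?://\S+", text)))
for url in task_list:
    print("TODO: review and request opt-out:", url)
```

Deduplicating and sorting the URLs keeps the task list stable across repeated runs, so each link can be verified, then remediated or mitigated.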

Just as AI raises the technical baseline for threat actors, it also raises the capability of physical protection teams that may not have extensive digital targeting backgrounds. Using AI models to surface exposures reduces risk: the team can request data opt-outs and shrink the overall set of vulnerabilities associated with the principal.

In this way, AI becomes a force multiplier for the protection community. 

What mistakes should security professionals avoid when they start using AI?

The biggest mistake is treating the model like a vending machine. With vague input comes vague output.

Prompt engineering truly matters. Frame the request with deep context, exhaustive objectives, and stringent constraints, the same way you would brief a junior analyst taking on a task outside their normal operating picture.

Frame the prompt with who you are, what your role is, what you are trying to accomplish, and what format you want for the final output. Then ask the model to critique and improve your prompt based on the depth and breadth of your request.

The model understands its own filters; if you create a clear operating picture in your prompt, it can suggest verbiage that produces higher-fidelity results.
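As one illustration, a briefing-style prompt can be templated; the field names and example values below are hypothetical, not a prescribed format.

```python
# A sketch of the briefing structure described above: who, role, objective,
# constraints, output format, plus a request to critique the prompt itself.
PROMPT_TEMPLATE = """\
Who I am: {who}
My role: {role}
Objective: {objective}
Constraints: {constraints}
Output format: {output_format}

Before answering, critique this prompt, suggest a tighter version, and then
answer using the improved version."""

prompt = PROMPT_TEMPLATE.format(
    who="A member of an in-house executive protection team",
    role="Digital risk analyst conducting an authorized privacy review",
    objective=("Map publicly exposed personal data for our principal and "
               "cite the source URL for each exposure"),
    constraints="Public sources only; no speculation; flag low-confidence items",
    output_format="A numbered list: exposure, source URL, suggested remediation",
)
print(prompt)
```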

If you invest in the quality of your prompts, your results will improve dramatically.

What should leaders do first if they want to reduce risk right away?

Just start!

You do not need a college course or a deep technical background to get moving. Pick a reputable model with a free tier, give it a clear and descriptive prompt, and evaluate how it maps your own footprint or your principal’s footprint. Capture the citations, log the URLs, and begin the opt-out and remediation process.
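For the logging step, a simple script or spreadsheet is enough. The sketch below records captured URLs to a CSV so each exposure's opt-out status can be tracked; the file name, columns, and sample entries are assumptions, not a required schema.

```python
# A minimal sketch of logging identified exposures for opt-out tracking.
import csv
from datetime import date

exposures = [
    # (source URL, what is exposed) -- hypothetical examples of captured citations
    ("https://databroker.example.com/profile/123", "home address"),
    ("https://peoplesearch.example.com/jane-doe", "phone number, relatives"),
]

with open("exposure_log.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["date_found", "source_url", "exposed_data", "status"])
    for url, data in exposures:
        writer.writerow([date.today().isoformat(), url, data, "opt-out pending"])
```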

Threat actors are already experimenting. The sooner your team gains hands-on experience, the sooner you can understand what exposures threat actors can see today and work to reduce your principal’s overall vulnerability.

What final thought would you leave with security leaders who are still on the fence?

AI can be used as a tool to gain visibility into, and control of, your security posture. If a non-technical person can use a model to map exploitable exposures, your team can use the same model to identify those points of exploitation and work to mitigate the risk.


AI is no longer a future concern. It’s reshaping executive risk right now. Our team at 360 Privacy recently tracked and dismantled two AI-built doxing platforms targeting more than 23,000 executives. The full investigation is detailed in our new technical brief: From Prompt to Platform: AI and the New Era of Executive Targeting. Download it today to see exactly how these platforms were built, why the threat is growing, and how leaders can get ahead of it.