By Rowena Lam, Senior Director of Product at IAB Tech Lab
Everyone is excited about AI agents that autonomously create campaigns, discover optimal placements, execute strategies, optimize performance, and report results, all while communicating with other agents to complete transactions. It’s automation at scale. It sounds great. The one word almost entirely absent from this non-stop talk about agentic advertising is “privacy”.
Whether it’s “privacy-first,” “privacy-aware,” or the one I hate the most, “privacy-compliant,” it seems the only privacy question being asked is how to work the word into marketing materials. That’s not the question we should be asking. When AI agents are autonomously transacting on media, managing audiences, and trafficking creative across the advertising ecosystem, how do we ensure consumer privacy doesn’t get ransacked by the robots as they start making decisions about personal data and controlling its flow?
The Privacy Questions Nobody’s Answering
Consent and Control
- How do we ensure agents respect consumer preferences about how their data is used?
- Can a consumer still exercise rights such as requesting deletion, access, or correction of their data?
Sensitive Inferences
- What happens when a model makes a sensitive inference about health, finances, or another protected category?
- When agents are optimizing campaigns, how do we prevent them from inadvertently creating proxy variables for protected characteristics?
Data Access and Sharing
- What data does each agent access or expose?
- Does the agent create new data, and if so, is it personal data and potentially subject to privacy laws?
- When an agent passes data to another agent, how are we checking that the appropriate contracts are in place, as some laws require?
- Do we know the lineage and chain of custody for the data so that we can effectuate consumer rights?
What We Can Do About It
These questions are challenging even without agents in the mix. Moving from manual processes to agentic systems gives us a real shot at building privacy controls that work better than what we have now, but only if we design with intention. Here’s how we can do it.
- Build Guardrails Into the Architecture. With frameworks like the Model Context Protocol (MCP), agents access specific tools and tasks through well-defined parameters, which makes purpose limitation technically enforceable. We can push privacy policies down to the protocol level, so agents only touch data they have proper consent to use for specific activities. Humans forget things, misconfigure settings, and take shortcuts; properly architected agents won’t. (A minimal sketch of what this could look like follows this list.)
- Make Everything Auditable. Build logging into these systems from day one. Track what tools and data agents access or share. If an agent does something it shouldn’t, you need to know immediately, not six months later during a compliance audit. That beats most manual processes, where nothing was logged, the logs were incomplete, or nobody bothered to check them.
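To make that concrete, here’s a minimal Python sketch combining the two ideas: a tool call gated on a declared data use and logged either way. Every name in it (consent_gated_tool, CONSENT_RECORDS, and so on) is hypothetical rather than an actual MCP SDK API; it shows the shape of the guardrail, not an implementation.

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("agent.audit")

# Consent lookup stub: in a real system this would query a consent
# management platform, keyed by user and declared data use.
CONSENT_RECORDS = {
    ("user-123", "advertising_marketing.third_party.targeted"): False,
    ("user-123", "provide.service.operations"): True,
}

def consent_gated_tool(tool_name, user_id, data_use, handler, **params):
    """Run a tool only if consent exists for the declared data use,
    writing an audit record either way."""
    allowed = CONSENT_RECORDS.get((user_id, data_use), False)
    audit_log.info(json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "tool": tool_name,
        "user": user_id,
        "data_use": data_use,
        "allowed": allowed,
    }))
    if not allowed:
        raise PermissionError(f"{tool_name}: no consent for {data_use}")
    return handler(user_id=user_id, **params)

def fetch_purchase_history(user_id):
    # Stand-in for a real data-access call.
    return ["cookbook"] * 5

try:
    consent_gated_tool(
        "fetch_purchase_history",
        "user-123",
        "advertising_marketing.third_party.targeted",
        fetch_purchase_history,
    )
except PermissionError as err:
    print(err)  # blocked up front, with the attempt already on the audit trail
```

The point of the wrapper is that the purpose check and the log entry happen in one place the agent can’t route around, which is exactly what manual processes fail to guarantee.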
The Good News: We Don’t Need New Frameworks
The Tech Lab’s Privacy Taxonomy already provides a standardized language for describing three critical dimensions of data:
- Data Elements: What type of data is being processed (e.g., user.contact.email vs. user.financial.account)
- Data Uses: The purpose for processing (e.g., provide.service.operations vs. advertising_marketing.first_party.targeted)
- Data Subjects: Who the data describes (consumer, employee, household)
This is great for agentic AI because it creates machine-readable labels that agents can check. Instead of a vague “we need user data,” an agent can specify “I need user.behavior.purchase_history for advertising_marketing.third_party.targeted,” which can be programmatically verified against consent.
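Here’s a minimal sketch of what such a machine-readable request could look like. The DataRequest structure and user_policy lookup are illustrative assumptions; only the taxonomy label pattern comes from the standard.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DataRequest:
    data_element: str  # what is processed, e.g. "user.behavior.purchase_history"
    data_use: str      # why, e.g. "advertising_marketing.third_party.targeted"
    data_subject: str  # who the data describes, e.g. "consumer"

# Per-user consent policy: the (element, use) pairs the consumer has agreed to.
user_policy = {
    ("user.behavior.purchase_history", "provide.service.operations"),
    ("user.contact.email", "advertising_marketing.first_party.targeted"),
}

def is_permitted(req: DataRequest) -> bool:
    return (req.data_element, req.data_use) in user_policy

req = DataRequest(
    data_element="user.behavior.purchase_history",
    data_use="advertising_marketing.third_party.targeted",
    data_subject="consumer",
)
print(is_permitted(req))  # False: no consent for third-party targeting
```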
The existing user choice signal standards, the Global Privacy Platform (GPP) and the Transparency & Consent Framework (TCF), then provide the actual yes/no answers from consumers about what they’ve agreed to.
Put these together and you have a complete system: agents declare what they need (Privacy Taxonomy), check whether they have permission (TCF/GPP), and operate within those already clearly defined boundaries, as sketched below.
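A rough sketch of that declare-check-operate loop, assuming the GPP/TCF signal has already been decoded into per-purpose booleans (the decoded_signal dict and run_if_permitted helper are hypothetical, and mapping signal fields to taxonomy data uses is simplified for illustration):

```python
# Stand-in for the output of decoding a GPP/TCF string with an
# IAB-provided library, keyed here by taxonomy data use.
decoded_signal = {
    "advertising_marketing.first_party.targeted": True,
    "advertising_marketing.third_party.targeted": False,
}

def run_if_permitted(data_use: str, action):
    # 1. Declare: `data_use` is the Privacy Taxonomy label for the purpose.
    # 2. Check: look the purpose up in the user's decoded choice signal.
    if not decoded_signal.get(data_use, False):
        return f"skipped: no consent for {data_use}"
    # 3. Operate: the action runs only inside the consented boundary.
    return action()

print(run_if_permitted("advertising_marketing.first_party.targeted",
                       lambda: "bid placed"))  # bid placed
print(run_if_permitted("advertising_marketing.third_party.targeted",
                       lambda: "bid placed"))  # skipped: no consent for ...
```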
The infrastructure already exists, and these standards and taxonomies are baked into the Tech Lab’s approach in its Agentic Advertising Initiative announced on Jan 28.
While We’re Optimizing for Everything Else, Let’s Get Privacy Right
Agentic AI in digital advertising isn’t only about making things faster or cheaper. It’s about whether we build systems that respect consumer privacy, or whether we just automate the same privacy problems we already have at a much larger scale.
But here’s what gets me: I constantly hear about what a heavy lift it is to implement privacy correctly, tracking consent, managing data lineage, ensuring purpose limitation. It’s complex, it’s error-prone, and it requires constant vigilance. So with all the promises of agentic AI, why aren’t we building agents to make preserving consumer privacy more efficient? If agents can optimize ad campaigns and negotiate media buys, why can’t they verify consent chains and flag privacy violations before they happen? If the technology is good enough to handle billions in ad spend, surely it’s good enough to check whether we have permission to use the fact that I bought five cookbooks last week to advertise a new kitchen gadget to me today.
We’re racing to deploy agents that need consumer data, while the agents being built to protect it aren’t getting nearly the same attention. Consider this my call to the industry: how do we bring privacy-by-design to the forefront of agentic conversations?