How to Set Account-Level Placement Exclusions Without Breaking Your Conversions


Unknown
2026-03-01
10 min read

How to implement account-level placement exclusions without blocking high-performing placements — a step-by-step 2026 playbook with testing and recovery.

Cut placements without killing conversions: why account-level exclusion is both a lifeline and a liability

You finally have one place to block junk inventory — Google Ads rolled out account-level placement exclusions in January 2026. That’s great for campaign hygiene, but it comes with a hidden risk: overzealous exclusions can silence high-performing long-tail placements and make your conversions fall off a cliff. This guide gives you a step-by-step playbook to implement exclusions safely, test the impact, and iterate so you clean low-quality inventory without accidentally blocking your revenue drivers.

What changed in 2026 and why it matters now

On January 15, 2026 Google announced account-level placement exclusions, letting advertisers block websites, apps, and YouTube placements across all eligible campaigns from one centralized list. The feature extends across Performance Max, Demand Gen, YouTube, and Display campaigns — formats that are increasingly automated and opaque.

Google Ads now lets advertisers create one exclusion list at the account level that blocks spend on listed placements across all eligible campaigns.

The big shift in 2026: advertisers are leaning harder into automation (Performance Max and Demand Gen dominate budgets), and the need for account-level guardrails rose with it. But centralizing exclusions increases the blast radius of any mistake. That’s why a measured, test-first approach is essential. Below is a practical playbook built for growth teams and SMB operators who must protect conversions while improving inventory quality.

Step 0 — Ground rules before you touch exclusions

  • Define success metrics first: Pick primary KPIs (conversions, CPA, ROAS) and secondary indicators (impressions, CTR, conversion rate, conversion latency).
  • Document current baseline: Export the last 30–90 days of placement data and conversion performance. You’ll need this for comparison.
  • Don’t do a big-bang apply: Treat the account-level list like a safety policy — roll out in stages and keep a rollback plan.
  • Expect lag and modeling: In 2026 conversion modeling and privacy-driven attribution mean short-term drops can be noisy. Wait full conversion windows before judging.

Step 1 — Audit placements and prioritize what to exclude

Before excluding anything, find the bad actors and the hidden winners. Use these signals to classify placements into action buckets.

How to discover placements to evaluate

  • Run a Placement report in Google Ads for the last 30–90 days and segment by conversions, cost, impressions, and conversion rate.
  • Cross-check with Google Analytics or BigQuery export (if you have it) to map assisted conversions and downstream revenue.
  • Filter by high spend + low conversion rate, high bounce, or suspiciously low session duration for web placements; for YouTube, flag channels with low watch time.
  • Use third-party verification (brand safety vendors, app store data, or DSP reports) for quality signals like ad fraud risk and viewability.
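The high-spend, low-conversion-rate filter above can be sketched in a few lines of pandas. This is a minimal illustration with toy data standing in for a Placement report export; the column names and thresholds are assumptions, so adapt them to your actual report headers and account economics.

```python
import pandas as pd

# Toy placement data standing in for a 90-day Placement report export.
# Column names ("placement", "cost", "clicks", "conversions") are
# assumptions -- match them to your actual Google Ads export headers.
df = pd.DataFrame({
    "placement":   ["site-a.com", "site-b.com", "app-c", "yt-channel-d"],
    "cost":        [2500.0, 900.0, 150.0, 2000.0],
    "clicks":      [800, 600, 90, 1500],
    "conversions": [2, 30, 5, 60],
})

df["conv_rate"] = df["conversions"] / df["clicks"].clip(lower=1)

# Flag high-spend, low-conversion-rate placements for manual review.
# Illustrative cutoffs: top quartile of spend, well below median CR.
spend_threshold = df["cost"].quantile(0.75)
low_cr_threshold = df["conv_rate"].median() * 0.5

candidates = df[(df["cost"] >= spend_threshold) &
                (df["conv_rate"] < low_cr_threshold)]
print(candidates["placement"].tolist())
```

Flagged placements go into the classification step below — they are review candidates, not automatic exclusions.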

Classification framework (fast and repeatable)

For each placement, tag into one of four categories:

  1. Immediate exclude — Fraud, offensive content, malware, or repeat policy violations.
  2. High-risk monitor — High spend and high CPA but ambiguous quality signals.
  3. Adjust bids / observation — Low impressions but strong conversion rate; lower bids rather than exclude.
  4. Keep — Proven conversion drivers and long-tail winners.
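The four-bucket framework can be encoded as a simple tagging function so every analyst applies the same rules. The thresholds below (spend floor, CPA multiple, impression and conversion-rate cutoffs) are illustrative assumptions, not recommendations — tune them to your own target CPA and volumes.

```python
# Placements carrying any of these quality flags are excluded outright.
POLICY_FLAGS = {"fraud", "malware", "policy_violation"}

def classify(placement: dict, target_cpa: float) -> str:
    """Tag one placement row into one of the four action buckets.
    All numeric thresholds here are illustrative assumptions."""
    if placement.get("flags", set()) & POLICY_FLAGS:
        return "immediate_exclude"
    cpa = placement["cost"] / max(placement["conversions"], 1)
    conv_rate = placement["conversions"] / max(placement["clicks"], 1)
    # High spend with CPA far above target, but no hard quality flag.
    if placement["cost"] > 500 and cpa > 2 * target_cpa:
        return "high_risk_monitor"
    # Thin volume but strong conversion rate: lower bids, don't exclude.
    if placement["impressions"] < 1000 and conv_rate > 0.05:
        return "adjust_bids"
    return "keep"

row = {"cost": 600.0, "conversions": 2, "clicks": 100, "impressions": 5000}
print(classify(row, target_cpa=50.0))
```

Running every audited placement through one function like this makes the audit repeatable month over month and keeps the bucket definitions in version control.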

Step 2 — Build your account-level exclusion list (safely)

Create a named exclusion list with a clear versioning convention and rollout strategy. Your list is living policy — treat it like code with version history and notes.

Practical checklist to create and manage the list

  • Name the list clearly (example: 'Account Exclusions 2026 Q1 — staged-1').
  • Include metadata: author, date, planned rollout (staged or full), and a rollback flag.
  • Start with a conservative seed: add only Immediate exclude items from your audit.
  • Apply the list to a subset of campaigns if your account and Google Ads allow campaign targeting for the list — keep a control group untouched.
  • Maintain a parallel staging list where you test candidates before moving into the main list.
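If you treat the list like code, a lightweight record format helps. Google Ads does not store author, rollout, or rollback metadata for you, so the structure below is a hypothetical convention you would keep in your own repo or spreadsheet alongside the list itself.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

# Hypothetical metadata record -- Google Ads does not store this for you,
# so maintain it in your own repo/spreadsheet next to the exclusion list.
@dataclass
class ExclusionList:
    name: str
    author: str
    created: date
    rollout: str                        # "staged" or "full"
    rollback_to: Optional[str] = None   # prior version name, if any
    placements: list = field(default_factory=list)

staged = ExclusionList(
    name="Account Exclusions 2026 Q1 -- staged-1",
    author="growth-team",
    created=date(2026, 1, 20),
    rollout="staged",
    placements=["badsite.example", "scam-app.example"],
)
print(staged.name)
```

Committing a record like this per version gives you the audit trail and rollback flag the checklist calls for.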

Step 3 — Design an A/B test to measure conversion impact

To avoid killing conversions, never assume. Test. Set up a controlled experiment that isolates the exclusion list's impact.

Testing approaches

  • Campaign duplication (recommended): Duplicate top-spend campaigns. Apply the account-level exclusion list to the test copies and run both in parallel for the same time period.
  • Campaign experiments: Use Google Ads experiments where available to split traffic between control and variant. This reduces noise from budget changes.
  • Geo split: If duplication isn’t possible, run the exclusion in specific geos as a holdout while keeping other regions as control.
  • Time-based holdout: Run exclusions by daypart (A/B day split) when seasonal conditions are stable.

Statistical and practical rules of thumb

  • Prefer at least 50–100 conversions per variant for reliable directionality; for precise detection of small lifts you’ll need more.
  • Run for a full conversion window plus a buffer. If your conversion window is 30 days, run the test for 45–60 days to cover lag and weekend/seasonal cycles.
  • Monitor both absolute and relative changes: % change in conversions, absolute drop in conversion count, CPA movement, and conversion value.
  • Use a two-proportion test or built-in experiment reporting for significance, but prioritize business impact — a statistically significant 2% change might not be material.
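The two-proportion test mentioned above needs nothing beyond the standard library. The sketch below compares control vs variant conversion rates and returns a two-sided p-value; the example counts are invented for illustration.

```python
from math import sqrt, erf

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-sided two-proportion z-test; returns (z, p_value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled proportion under the null hypothesis of equal rates.
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF (via erf).
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Invented example: control 120 conv / 4000 clicks, variant 95 / 4000.
z, p = two_proportion_z(120, 4000, 95, 4000)
print(round(z, 2), round(p, 3))
```

Note how a visible drop (120 vs 95 conversions) can still miss the conventional 0.05 threshold at these volumes — which is exactly why the rules of thumb above ask for 50–100+ conversions per variant and a full conversion window, and why business materiality matters more than the p-value alone.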

Step 4 — Monitor KPIs and detect problems early

Once the test starts, watch for these early-warning signals and have a playbook to respond.

Key metrics to watch daily/weekly

  • Conversion volume — sudden drop in absolute conversions is the fastest signal you blocked a high-performing placement.
  • CPA / ROAS — changes show efficiency shifts even if volume is stable.
  • Impressions & spend distribution — did spend shift to other channels/campaigns?
  • Conversion rate and CTR — can reveal placement quality change.
  • Conversion latency — if modeled conversions lag, wait longer before making a decision.

Automate checks

  • Set up a Looker Studio or internal dashboard with control-vs-variant KPIs and alerts for a >15% drop in conversions or a >20% increase in CPA week-over-week.
  • Use Google Ads scripts or an API job to log placement changes and who made them.
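The alert thresholds from the dashboard bullet (>15% conversion drop, >20% CPA increase week-over-week) are easy to encode as a guardrail check you can run from a scheduled job. The input dicts here are a hypothetical shape — wire them to whatever your reporting export produces.

```python
def check_alerts(control: dict, variant: dict) -> list:
    """Week-over-week guardrails: flag a >15% conversion drop or a
    >20% CPA increase in the variant vs the control.
    Input dict shape ({"conversions": ..., "cost": ...}) is assumed."""
    alerts = []
    conv_change = ((variant["conversions"] - control["conversions"])
                   / control["conversions"])
    if conv_change < -0.15:
        alerts.append(f"conversions down {abs(conv_change):.0%} vs control")
    cpa_control = control["cost"] / control["conversions"]
    cpa_variant = variant["cost"] / variant["conversions"]
    cpa_change = (cpa_variant - cpa_control) / cpa_control
    if cpa_change > 0.20:
        alerts.append(f"CPA up {cpa_change:.0%} vs control")
    return alerts

# Invented weekly numbers for illustration.
print(check_alerts({"conversions": 100, "cost": 5000.0},
                   {"conversions": 80, "cost": 5200.0}))
```

Hook the returned list into whatever alerting you already have (email, Slack webhook) so a blocked winner surfaces within days, not at the end of the test.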

Step 5 — Interpret results and iterate

Interpreting test outcomes is as important as running them. Here’s how to avoid false conclusions.

Outcomes and actions

  • No material change: Expand the exclusion list to the next priority tier and retest.
  • Performance improves: Roll the list into more campaigns. Keep monitoring for long-term shifts.
  • Performance worsens: Identify placements that likely drove the conversions. Use Placement reports to find which items had disproportionate conversion share and consider removing them from the exclusion list or moving them to a lower-bid strategy.

For ambiguous results

If conversions decline but CPA improves, you may have cut low-value converters — test using conversion value or downstream revenue (LTV) before deciding. If attribution or conversion modeling changed in late 2025 / early 2026 on your account, reconcile data sources (CRM, server-side conversions) to confirm real business impact.

Troubleshooting common pitfalls and how to fix them

Pitfall: You excluded a long-tail winner

Symptom: Conversions drop with no clear change in CPA. Fix: Reintroduce the affected placements to the staging list, reduce bids, and add them back to a single campaign to observe. Keep them out of the account-wide exclusion until you confirm behavior.

Pitfall: Attribution noise masks impact

Symptom: Modeled conversions change or last-click numbers diverge. Fix: Use offline conversion uploads or server-side signals to validate. Run holdout experiments longer to let modeled conversions settle.

Pitfall: Excluding YouTube channels reduces upper-funnel metrics

Symptom: Impression share and view-through conversions drop, possibly hurting funnel velocity. Fix: Measure assisted conversions and consider minimizing exclusions for upper-funnel Demand Gen while focusing on ad creative and targeting instead.

Iteration cadence and governance

  • Weekly: Check early warning KPIs and log any urgent rollbacks.
  • Monthly: Expand the list by moving one priority batch from 'High-risk monitor' to 'Staged exclude' after testing.
  • Quarterly: Do a full placement deep-dive and archive old exclusion lists. Version lists by quarter so you can roll back to prior policy if needed.
  • Governance: Require two approvals for account-level changes and maintain a change log for auditability.

Tools and templates (practical)

Dashboard metrics to include

  • Control vs variant: conversions, conversion value, CPA, spend, impressions, CTR, impression share.
  • Top placements by conversion contribution and by CPA.
  • Holdout lift calculation showing % change and absolute impact on conversions.
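The holdout lift calculation can be as simple as the sketch below. Because control and variant rarely spend identically, it normalizes to conversions per $1k of spend before computing the percentage change; the figures are invented for illustration.

```python
def holdout_lift(control_conv: float, variant_conv: float,
                 control_spend: float, variant_spend: float) -> dict:
    """Percent change (spend-normalized) and absolute conversion impact."""
    # Normalize to conversions per $1k so unequal budgets compare fairly.
    control_rate = control_conv / control_spend * 1000
    variant_rate = variant_conv / variant_spend * 1000
    return {
        "pct_change": (variant_rate - control_rate) / control_rate,
        "abs_change": variant_conv - control_conv,
    }

# Invented example: variant spent less but converted almost as well.
result = holdout_lift(control_conv=110, variant_conv=100,
                      control_spend=10_000, variant_spend=9_000)
print(result)
```

Showing both numbers on the dashboard matters: a flat or positive spend-normalized lift alongside a negative absolute change usually means efficiency improved while volume shifted — the "conversions decline but CPA improves" case discussed in Step 5.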

Exclusion rollout template

  1. Export last 90 days placement report.
  2. Classify placements into four buckets.
  3. Create 'staged' account-level exclusion list with Immediate exclude items only.
  4. Duplicate 20–30% of top-spend campaigns and apply list to duplicates for testing.
  5. Run for conversion window + buffer; analyze.
  6. Iterate and expand list monthly based on results.

Anonymized case study — staged exclusions saved conversions

Client: B2B SaaS company, $20k monthly display+YouTube spend. Objective: reduce wasted impressions from low-quality placements and lower CPA without losing sign-ups.

Action: We created a staged account-level exclusion list with policy violations and 75 app placements flagged by viewability vendors. Instead of full-apply, we duplicated the highest-spend campaigns (30% of budget) and applied the list to duplicates as a test variant. We ran the test through a 45-day window and monitored full conversion windows.

Result: Initial application showed a 9% drop in conversions for the test campaigns and a 14% improvement in CPA. Deep dive revealed two long-tail placements accounted for 22% of conversions in the control but had low attribution weight in first-click models. We removed those two placements from the exclusion list and reran a smaller variant test. Conversions returned to baseline while CPA stayed 8% better across the account. The staged rollout prevented a full-account conversion cliff and improved overall efficiency.

Future-proof your placement hygiene strategy (2026 lens)

As automated formats dominate, account-level guardrails are essential. But 2026 trends mean:

  • Automation increases opacity — more conversions will come from black-box algorithms; rely on experiments and holdouts for causal inference.
  • Privacy modeling persists — attribution will still be noisy; use offline signals and LTV to validate exclusion impact.
  • Cross-channel overlap rises — ensure you view placement changes in the context of overall funnel (search, social, direct) to avoid misattribution.

Quick checklist before you click apply

  • Have a documented baseline and success metrics.
  • Seed the list with only certified 'Immediate exclude' placements.
  • Run a controlled A/B experiment (campaign duplicates or geo split).
  • Allow full conversion windows plus buffer before final judgment.
  • Keep a rollback plan and versioned lists.
  • Automate monitoring and alerts for conversion drops.

Closing thoughts

Account-level placement exclusions are one of the most powerful tools Google released in early 2026 for scaling campaign hygiene. But with that power comes responsibility. By auditing first, testing deliberately, monitoring closely, and iterating slowly, you can eliminate low-quality inventory without choking off your best-converting placements. Treat your exclusion lists like a living policy — versioned, tested, and governed — and you’ll protect conversions while improving efficiency.

Ready to audit your account-level exclusions? If you want a practical second opinion, our team at go-to.biz runs a free 30-minute exclusion review for qualified SMBs and buyers. We’ll review your placement report, propose a staged exclusion plan, and outline a test that fits your conversion volume. Book a slot or download our exclusion checklist to get started.


Related Topics

#PPC #Google Ads #How-to

Unknown

Contributor

Senior editor and content strategist. Writing about technology, design, and the future of digital media. Follow along for deep dives into the industry's moving parts.
