
FloodZoneMap canonicalization cliff — 2026-05-13

A natural experiment: FloodZoneMap.org got demoted out of Google rankings on 2026-04-29. Not a tweak — a cliff. Daily pageviews dropped from a 4/15–4/28 average of ~500 to an average near 150. Two weeks before, two weeks after, same site, same URLs, same content. The demotion is the line.

Method

GA4 sessions, sliced into two 14-day windows around the 4/29 boundary: 2026-04-15 → 2026-04-28 (before) and 2026-04-29 → 2026-05-12 (after).

Sources bucketed: Google, Bing, ChatGPT, and Direct. The Total row also includes sources not broken out above, which is why the listed rows don't sum to it.

China bot filter applied (the standard one on this property). No other filters or sampling adjustments.
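The window slicing above can be sketched as follows. This is not the GA4 pull itself, just a check that the two 14-day ranges line up against the 4/29 boundary; the date arithmetic is the only thing shown.

```python
from datetime import date, timedelta

# Derive the two 14-day comparison windows around the demotion boundary.
cliff = date(2026, 4, 29)  # first day of the demoted state

# 14 full days ending the day before the cliff, then 14 days starting at it.
before = (cliff - timedelta(days=14), cliff - timedelta(days=1))  # 4/15 - 4/28
after = (cliff, cliff + timedelta(days=13))                       # 4/29 - 5/12

print(before)  # (datetime.date(2026, 4, 15), datetime.date(2026, 4, 28))
print(after)   # (datetime.date(2026, 4, 29), datetime.date(2026, 5, 12))
```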

Data

Source     14d before    14d after    Δ
Google          1,687          269    −84%
Bing              379          382     +1%
ChatGPT           116           97    −16%
Direct            474          331    −30%
Total           2,953        1,411    −52%
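The Δ column is straight percent change on the session counts, rounded to whole percent. A minimal sketch recomputing it from the raw numbers:

```python
# Recompute the delta column from the raw session counts.
# Total includes sources not broken out by row, so rows don't sum to it.
sessions = {
    "Google":  (1687, 269),
    "Bing":    (379, 382),
    "ChatGPT": (116, 97),
    "Direct":  (474, 331),
    "Total":   (2953, 1411),
}

deltas = {
    source: round(100 * (after - before) / before)
    for source, (before, after) in sessions.items()
}
print(deltas)  # {'Google': -84, 'Bing': 1, 'ChatGPT': -16, 'Direct': -30, 'Total': -52}
```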

What this rules in / rules out

Hypothesis under test: do AI-search citations track current ranking signals, or do they behave like a settled reference?

If AI citations were a downstream consequence of current Google authority, ChatGPT should track Google's −84%. It doesn't.

If AI citations were a downstream consequence of Bing index health, ChatGPT should track Bing's +1%. It also doesn't.

The shape — Google losing 84% of its referrals while ChatGPT loses 16% on the same URLs in the same window — is asymmetric in a way that doesn't reduce to either ranking surface.

Provisional read: ChatGPT settled on FZM as the answer to flood-zone lookup queries at some prior point, and that citation didn't re-derive when Google demoted the page. Once a model lands on a canonical answer, the reasons it landed there originally — the EMD match, the title specificity, the cleanness of the page — get compiled into the citation, and the citation persists even when the inputs that produced it shift.

The argument-shaped version lives at When the Answer Settles. This entry is the data spine that argument rests on.

Skeptic's alternative (named, falsification specified)

The obvious skeptic move is lag. Maybe ChatGPT does re-evaluate against current ranking signals, just on a slower clock than 14 days. If so, ChatGPT's line should drift toward Google's by day +30 or +60.

Falsification condition. If the 2026-04-29 → 2026-06-12 (45d) or 2026-04-29 → 2026-06-27 (60d) ChatGPT session count falls to a Google-parallel decline (say, −60% or worse against the pre-cliff baseline), the canonicalization read is wrong and the differential here is a 14d-lag artifact.

Re-pull scheduled at +30d (2026-06-12) and +60d (2026-06-27) as follow-up entries.
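The falsification condition reduces to a single threshold check against the pre-cliff baseline. A sketch of how the +30d/+60d re-pulls will be scored — the baseline is the 14d pre-cliff ChatGPT count from the table, the −60% cutoff is the threshold stated above, and the 40-session input is a hypothetical, not data:

```python
# Falsification check for the +30d/+60d re-pulls: a ChatGPT decline of
# -60% or worse vs the pre-cliff baseline counts as Google-parallel,
# which would falsify the canonicalization read.
BASELINE_CHATGPT = 116  # 14d pre-cliff ChatGPT sessions
THRESHOLD = -0.60       # Google-parallel cutoff from the entry

def canonicalization_falsified(followup_sessions: int) -> bool:
    decline = (followup_sessions - BASELINE_CHATGPT) / BASELINE_CHATGPT
    return decline <= THRESHOLD

print(canonicalization_falsified(97))  # 14d actual (-16%) -> False
print(canonicalization_falsified(40))  # hypothetical -66% -> True
```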

What I don't know yet

Take

The asymmetry at 14d is the strongest single piece of empirical evidence in this portfolio that AI-search citations decouple from current ranking authority once they're acquired. The skeptic's lag argument is live; the way to settle it is to keep watching. If the differential persists at +60d, the operating implication — invest in becoming the settled answer, not the highest-ranked one — is the highest-leverage adjustment to portfolio strategy made this year. The named replication candidate didn't have the volume to second the finding, so n stays at 1 until another site takes a comparable hit.

The point of putting this on /research with the falsification clause stated up front is that I'd rather be wrong on the record than provisionally-right in private. If the +30d or +60d pull collapses the differential, this entry stays public and the follow-on link will point to the revision.

