System · Docs

Plumb, explained page by page.

Each entry covers what the report is for, what data drives it, how to read it, and who uses it. Jump to any section from the table of contents, or reach a section directly from the icon beside any report title.

Primary

Dashboard

Open the report →

Monday-morning briefing. One narrative alert, leverage shortcuts, consumption-first KPIs, section analysis.

Data
GET /overview. Cloudflare bot logs + GA4 + content catalog. MEASURED.
How to read
  • Weekly briefing is templated from warehouse queries, not model-written.
  • Consumption KPIs (crawls, top platform, URLs) are the story. Traffic KPIs are the sidebar.
  • Click any KPI card to drill in with filters preserved.
How to use
  • First stop every Monday.
  • Triage anomalies into the relevant drill-down.
  • Share a filtered URL with the Share button.
Audience
Every role. Exec morning read. Editorial leadership. Data triage.
Drill-downs

Channel Health

Open the report →

Where sessions come from. How channels are shifting. How AI cohorts retain.

Data
GET /channel-health. GA4 session data + referrer classification + anonymous_id retention.
How to read
  • AI share deltas are in percentage points (pp), not relative %. A 2pp move is significant.
  • Cohort retention heatmap: AI cohorts structurally retain lower than Direct.
  • Leading indicators block is AI-generated narrative — labeled accordingly.
How to use
  • Investigate any Dashboard alert that touches AI share or channel mix.
  • Export CSV for Ops review or Finance monthly.
Audience
Audience team, growth/marketing, Ops.
Drill-downs

Audience Funnel

Open the report →

Visit → Engaged → Newsletter → Registration → Subscription per channel, with lift tables and segment compare.

Data
GET /funnel-by-channel. GA4 session events walked via anonymous_id.
How to read
  • Engaged = scroll ≥50% OR dwell ≥60s.
  • ≤ prefix in the section matrix means sample size was too small — Wilson 95% CI upper bound shown.
  • Segment cards are click-to-filter; affects the whole page.
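The ≤ values come from the Wilson score interval. A minimal sketch of the upper-bound calculation (the sample counts below are illustrative, not from the report):

```python
import math

def wilson_upper(successes: int, n: int, z: float = 1.96) -> float:
    """Upper bound of the Wilson 95% score interval for a proportion.

    Used when a funnel cell's sample is too small to report a point
    rate: the matrix shows '<= upper_bound' instead.
    """
    if n == 0:
        return 1.0
    p = successes / n
    denom = 1 + z * z / n
    center = p + z * z / (2 * n)
    margin = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return (center + margin) / denom

# e.g. 2 conversions out of 30 sessions: report the bound, not "6.7%"
print(f"<= {wilson_upper(2, 30):.1%}")
```

The point estimate for 2/30 would be 6.7%; the Wilson upper bound is roughly three times that, which is why small cells are prefixed rather than reported as-is.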
How to use
  • Find sections where AI actually converts — usually rare; note them.
  • Answer logged-in-vs-anon AI conversion questions with segment cards.
Audience
Subscriptions team, Audience product, CRO.
Drill-downs

Content Yield

Open the report →

Consumption-vs-return gap, per section and per URL. Which desks and stories are being scraped without sending attributable traffic back.

Data
GET /section-yield-gap. Events × catalog × synthetic ad RPM × synthetic GSC positions.
How to read
  • Section scatter 0.5× reference line: below = widest gap.
  • Under-yielder queue is per-URL ratio sorted by worst offender.
  • Attribute correlations = structural attributes that correlate with citation rate. Research-backed.
  • GSC × citation quadrants: moat / ai-native / opportunity / dark.
How to use
  • Send the weakest-quadrant table to editors for prioritization.
  • Use attribute correlations to brief content guidelines.
  • CSV export for Finance ad-yield review.
Audience
Managing editor, editorial ops, ad-revenue analyst.
Editorial

Editorial Signal

Open the report →

Which stories broke out in the AI layer. Byline and desk leaderboards. Time-to-resonance distribution.

Data
GET /editorial-signal. Events windowed by url_hash; resonance = z-score of (citations + sessions) vs section median.
How to read
  • Resonance badge is z-score relative to section baseline.
  • TTFC = Time To First Citation (hours).
  • Ignored stories = strong human-reader resonance but no AI pickup.
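One plausible reading of the resonance definition above, using the section median as the center and the section standard deviation as the scale (the section values here are hypothetical):

```python
import statistics

def resonance(citations: int, sessions: int, section_signals: list[float]) -> float:
    """Z-style resonance: how far a story's combined signal
    (citations + sessions) sits above its section's median.

    section_signals holds (citations + sessions) for every story
    in the same section over the window.
    """
    signal = citations + sessions
    center = statistics.median(section_signals)
    spread = statistics.pstdev(section_signals) or 1.0  # guard div-by-zero
    return (signal - center) / spread

window = [120, 95, 410, 150, 88, 132]  # hypothetical section window
print(round(resonance(citations=38, sessions=372, section_signals=window), 2))
```

A score near 0 is section-typical; a badge in the 2+ range marks a breakout story.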
How to use
  • Tuesday editorial meeting artifact.
  • Click story tile → pipeline drawer with citation platforms.
Audience
Managing editor, desk leads, individual reporters.
Editorial

Emerging Queries

Open the report →

Weekly 500-query probe panel, bucketed by opportunity / defense / moat.

Data
GET /emerging-queries. OpenRouter weekly probe. SAMPLED.
How to read
  • Opportunity = trending up, we're absent.
  • Defense = we slipped in rank.
  • Moat = we lead across platforms, stable.
  • Demand-vs-supply scatter: top-left = opportunity, top-right = moat.
How to use
  • Feeds the Content Commissions engine automatically.
  • Run a custom query to validate editorial effort before committing.
Audience
SEO / GEO lead, desk editors, audience product.
Editorial

Content Commissions

Open the report →

Weekly editorial work order: write new, refresh existing, retire / deprioritize.

Data
Computed client-side from /emerging-queries + /editorial-signal + /content-health. INFERRED — directional guidance, not ground truth.
How to read
  • Priority = demand × gap × GSC proximity.
  • Refresh candidates are past 13-week citation half-life with fixable gaps.
  • Retire candidates are past half-life with declining crawl volume.
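The priority formula and the refresh/retire rules above can be sketched as follows. The field names, thresholds other than the 13-week half-life, and the "write-new vs hold" split are illustrative assumptions, not the report's exact logic:

```python
from dataclasses import dataclass

HALF_LIFE_WEEKS = 13  # citation half-life stated in the report

@dataclass
class Candidate:
    url: str
    demand: float         # 0-1, query demand from Emerging Queries
    gap: float            # 0-1, 1 = we are absent from AI answers
    gsc_proximity: float  # 0-1, 1 = already ranking nearby in GSC
    weeks_since_citation: float
    crawl_trend: float    # WoW crawl delta; negative = declining
    fixable_gaps: bool

def priority(c: Candidate) -> float:
    """Priority = demand x gap x GSC proximity."""
    return c.demand * c.gap * c.gsc_proximity

def action(c: Candidate) -> str:
    """Bucket a candidate into the weekly work order."""
    if c.weeks_since_citation <= HALF_LIFE_WEEKS:
        return "write-new" if c.gap > 0.5 else "hold"
    if c.fixable_gaps:
        return "refresh"
    return "retire" if c.crawl_trend < 0 else "hold"
```

Sorting candidates by `priority()` descending, then bucketing with `action()`, reproduces the shape of the work order.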
How to use
  • Tuesday editorial meeting artifact. PDF export opens directly.
  • Share the URL; filter-state carries.
Audience
Managing editor, desk editors, editorial ops.
Competitive

Visibility Monitor

Open the report →

AI answer share-of-voice across four named platforms vs five named competitors. Topic × competitor heatmap.

Data
GET /visibility-panel/extended. 500 weekly probe queries × 4 AI models. SAMPLED, up to ±5pp WoW variance.
How to read
  • Citation ranks are structurally less stable than organic search rankings — expect 40–60% month-to-month churn (Adobe, Moz).
  • Target avg rank when cited: ≤2.5 on primary platforms.
  • Head-to-head cards show 8-week win rate with stability streak.
How to use
  • Weekly review with product + content strategy.
  • Competitor drill-down for 'why is Reuters beating us on Markets' questions.
Audience
Audience strategy, SEO / GEO, executive leadership.
Competitive

Per-Platform View

Open the report →

Quarterly consumption + return report for a single AI platform. Built for licensing conversations.

Data
GET filtered by model family. Crawls, sessions, unique URLs per quarter.
How to read
  • Top-20 crawled and top-20 session-producing tables side by side.
  • Overlap badge = URL appears on both; high overlap = retrieval confidence.
How to use
  • Pre-meeting packet for any AI-company conversation.
  • PDF export is deal-pack formatted.
Audience
BD / licensing lead, GC during deal review.
Infrastructure & Leverage

Bot Policy

Open the report →

Per-bot policy registry with compliance. Every AI crawler family, rule, observed volume, compliance %.

Data
GET /bot-policy. bot_policy_rules + 28-day crawl events.
How to read
  • Compliance = (expected-status requests) / (total) given the rule.
  • New-bots strip surfaces crawlers first seen in last 7 days.
  • Crawl pattern analysis classifies each bot as retrieval vs training with evidence.
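A minimal sketch of the compliance formula above. The status-code mapping per rule is an assumption (a `block` rule expecting 403, an `allow` rule expecting 200/304); the real registry may carry richer rules:

```python
def compliance_pct(events: list[dict], rule: str) -> float:
    """Share of a bot's requests whose response status matches its rule."""
    expected = {"block": {403}, "allow": {200, 304}}[rule]
    if not events:
        return 100.0
    hits = sum(1 for e in events if e["status"] in expected)
    return 100.0 * hits / len(events)

# Hypothetical: a policy=block bot that slipped through once
crawls = [{"status": 403}, {"status": 403}, {"status": 200}]
print(f"{compliance_pct(crawls, 'block'):.1f}%")
```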
How to use
  • Weekly ops cadence — review new bots, approve/deny.
  • Generate policy files produces a deployable robots.txt + llms.txt.
  • Litigation Record ZIP for GC or licensing negotiation.
Audience
Edge / infrastructure ops, SEO lead, GC.
Infrastructure & Leverage

Mission Control

Open the report →

Live full-screen view of AI-crawler hits. SSE pass-through from Cloudflare Logpush.

Data
/bot-policy/live-crawler-feed SSE. MEASURED, no aggregation.
How to read
  • Policy-breach border on a lane = bot is policy=block but returning 200/304 — compliance drop.
  • Anomaly ticker captures: policy breach, rate spike (3+ flags/60s), 404 on expected URL.
  • Synthetic fallback engages when SSE is unreachable so the screen is always live.
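The rate-spike rule (3+ flags within 60s) is a sliding-window count. A sketch, where the window and threshold are the documented values but the implementation itself is illustrative:

```python
from collections import deque

class SpikeDetector:
    """Fires when a bot accumulates 3+ anomaly flags inside a 60s window."""

    def __init__(self, threshold: int = 3, window_s: float = 60.0):
        self.threshold = threshold
        self.window_s = window_s
        self.flags: dict[str, deque] = {}

    def flag(self, bot: str, ts: float) -> bool:
        """Record a flag at timestamp ts; return True if the bot spikes."""
        q = self.flags.setdefault(bot, deque())
        q.append(ts)
        while q and ts - q[0] > self.window_s:  # drop flags older than 60s
            q.popleft()
        return len(q) >= self.threshold

det = SpikeDetector()
print([det.flag("GPTBot", t) for t in (0, 10, 20, 90)])
# Third flag at t=20 lands within 60s of the first two; by t=90 the window is empty again.
```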
How to use
  • Ops screen during an incident — 'are these bots behaving?' in real time.
  • Demo surface: clearest first-party-measurement moment in the product.
Audience
Edge / infrastructure ops, incident response, SRE on-call.
Infrastructure & Leverage

Pricing Intelligence

Open the report →

Per-section and per-bot rate-card recommendations for charging AI crawlers.

Data
GET /pricing-intelligence. Crawl volume × citation density × ad-RPM parity × willingness tier. INFERRED.
How to read
  • Base price $0.25/1k × section factor × ad-RPM parity × bot tier.
  • Value multiplier dot: green = premium, blue = strong, orange = average, muted = low.
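The rate-card formula above, as a sketch. The $0.25/1k base is the documented value; the section factors and bot tiers below are hypothetical placeholders for numbers the report derives from data:

```python
BASE_PER_1K = 0.25  # documented base price per 1k crawls

# Hypothetical factor tables -- the report computes these per tenant.
SECTION_FACTOR = {"markets": 1.8, "sports": 1.0, "lifestyle": 0.7}
BOT_TIER = {"premium": 1.5, "standard": 1.0, "low": 0.6}

def rate_per_1k(section: str, ad_rpm_parity: float, bot_tier: str) -> float:
    """Recommended price per 1k crawls for one section x bot pair."""
    return BASE_PER_1K * SECTION_FACTOR[section] * ad_rpm_parity * BOT_TIER[bot_tier]

print(f"${rate_per_1k('markets', ad_rpm_parity=1.2, bot_tier='premium'):.3f}/1k")
```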
How to use
  • Before any marketplace-vendor conversation: open-market rate read.
  • Block-vs-allow decision support even without a licensing deal.
  • CSV export keyed for Cloudflare Pay-Per-Crawl rule builder.
Audience
BD / licensing lead, CFO for monetization planning, ops at scale.
Infrastructure & Leverage

Reconciliation

Open the report →

First-party edge logs reconciled against each marketplace vendor's reported totals.

Data
GET /reconciliation. First-party MEASURED; vendor columns synthesized with deterministic drift until live vendor connectors ship.
How to read
  • Variance bands: within_tolerance (<2%), review (<8%), material (≥8%), uncovered (vendor doesn't report).
  • Annual $ impact = variance × avg rate card.
  • Worst-variance card has a red or orange accent.
How to use
  • Monthly review of every marketplace billing statement.
  • Before renewing a marketplace contract.
  • Evidence layer for billing disputes.
Audience
BD / licensing lead, Finance, GC.
Analyst & Operations

Content Health

Open the report →

Article-level AI-readiness scoring across the catalog. Per-article 0–100 score + intervention recommendations.

Data
GET /content-health/summary + /content-health/article/{url_hash}. Catalog signals + industry-research citation factors.
How to read
  • Score band: excellent ≥80, good 60–79, fair 40–59, poor <40.
  • Intervention confidence: stable-empirical > early-empirical > model-based.
  • Under-cited queue: strong crawl but weak citation.
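The score bands above as a direct mapping:

```python
def score_band(score: int) -> str:
    """Map a 0-100 AI-readiness score to the documented bands."""
    if score >= 80:
        return "excellent"
    if score >= 60:
        return "good"
    if score >= 40:
        return "fair"
    return "poor"

print([score_band(s) for s in (91, 60, 40, 39)])
```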
How to use
  • Monthly portfolio review.
  • Pre-publish: the CMS can block publishing when the score is <40.
  • Quarterly board report.
Audience
Editorial ops, SEO / GEO lead, content strategy.
Analyst & Operations

Scenario Explorer

Open the report →

Bayesian attribution calculator for the hidden AI share of 'Direct' traffic.

Data
GET /scenario/{estimate,table,sweep,attribution-comparison}. Closed-form Beta posterior. INFERRED — estimate, not ground truth.
How to read
  • Prior × strength × evidence → posterior mean + 95% CI.
  • Crawl anchor toggle folds crawl concentration into the prior with a deterministic shift.
  • Markov-Shapley attribution credits AI higher than last-click because it accounts for upstream influence.
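A sketch of the closed-form Beta update. The prior/strength parameterization matches the description above; the evidence encoding and the normal-approximation CI are simplifications (the report's exact Beta quantiles will differ slightly in the tails):

```python
import math

def posterior(prior_mean: float, prior_strength: float,
              ai_sessions: int, direct_sessions: int):
    """Beta-Binomial update for the hidden AI share of 'Direct' traffic.

    Beta(a, b) prior with a + b = prior_strength; evidence counts are
    a hypothetical encoding of the report's inputs.
    """
    a = prior_mean * prior_strength + ai_sessions
    b = (1 - prior_mean) * prior_strength + direct_sessions
    mean = a / (a + b)
    var = a * b / ((a + b) ** 2 * (a + b + 1))
    half = 1.96 * math.sqrt(var)  # normal approximation to the 95% CI
    return mean, (max(0.0, mean - half), min(1.0, mean + half))

mean, (lo, hi) = posterior(prior_mean=0.12, prior_strength=50,
                           ai_sessions=180, direct_sessions=1020)
print(f"{mean:.1%} (95% CI {lo:.1%}-{hi:.1%})")
```

Raising `prior_strength` makes the prior stickier; more observed sessions tighten the interval, which is what the sweep endpoint explores.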
How to use
  • Pre-board: defensible hidden-AI-share number with bounds.
  • Licensing conversations: evidence AI influence extends beyond attributed sessions.
Audience
Data lead, CRO, CFO, GC.
Analyst & Operations

Raw Events

Open the report →

Read-only SQL on the DuckDB warehouse. SELECT / WITH only, 3s cap, 50k-row cap.

Data
POST /raw/query. Warehouse-direct with whitelist-enforced parser.
How to read
  • Cmd+Enter runs the query.
  • Saved queries are tagged with their route, so queries saved in Raw Events stay in Raw Events.
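In the spirit of the SELECT/WITH-only gate, a rough approximation of what a read-only whitelist check does. The real parser is stricter; the keyword list here is illustrative:

```python
import re

def is_allowed(sql: str) -> bool:
    """Illustrative read-only gate: single statement, starts with
    SELECT or WITH, no write/DDL keywords."""
    # Strip -- and /* */ comments before inspecting the statement.
    stripped = re.sub(r"--[^\n]*|/\*.*?\*/", " ", sql, flags=re.S).strip()
    if not re.match(r"(?i)^(select|with)\b", stripped):
        return False
    if ";" in stripped.rstrip(";"):  # reject multi-statement input
        return False
    banned = r"(?i)\b(insert|update|delete|drop|alter|create|attach|copy|pragma)\b"
    return re.search(banned, stripped) is None

print([is_allowed(q) for q in (
    "SELECT 1",
    "WITH t AS (SELECT 1) SELECT * FROM t",
    "DROP TABLE events",
    "SELECT 1; DELETE FROM events",
)])
```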
How to use
  • Ad-hoc investigation when a dashboard doesn't answer the question.
  • CSV download for spreadsheet work.
Audience
Data / analytics team.
System

Reports & Alerts

Open the report →

Scheduled digest layer + rule-fired notifications + firing history.

Data
GET /reports, /alerts, /alert-firings + POST endpoints.
How to read
  • Three tabs: Reports (cron'd digests), Alerts (rule-fired Slack/email/webhook), Firing log (history).
  • Preset templates pre-populate common cadences.
How to use
  • Subscribe stakeholders who won't open the dashboard daily.
  • Alert rules for 'AI referrals drop >15% WoW', 'Competitor passes us', etc.
Audience
Every operator who needs push delivery instead of pulling the dashboard.
System

Methodology

Open the report →

Epistemology doc. What Plumb measures, samples, infers, and refuses to fabricate.

Data
Static content. Authoritative source for what numbers mean.
How to read
  • Three-tier confidence framework: MEASURED / SAMPLED / INFERRED.
  • Three-layer vendor taxonomy: marketplaces / edge / sovereign intelligence.
How to use
  • Reference when any chart raises a confidence question.
  • Send to procurement / legal / editorial during product evaluation.
Audience
Everyone at some point.
System

Settings

Open the report →

Connector authorization, members, tenant defaults, billing, audit log.

Data
GET /connectors, /settings/members, /settings/defaults, /settings/audit-log.
How to read
  • Connector tile state: connected / available / coming soon.
  • Role matrix: admin > analyst > editor > viewer.
How to use
  • First session: wire Cloudflare + GA4 + GSC connectors.
  • Before board review: pull the audit log for the last 30 days.
Audience
Admins.
Docs v1.0 · mirrors user-guide.md in the repo. Send feedback via the Share button on any report.