---
title: "The Agentocene"
date: 2026-03-18
description: "AI agents now generate more web requests than the humans they serve. The data from Cloudflare, Akamai, and the edge tells the story of a phase shift."
tags: ["ai-agents","edge-computing","internet-infrastructure"]
readingTime: "12 min read"
url: https://alexmoening.com/dev-thoughts/the-agentocene.html
markdownUrl: https://alexmoening.com/dev-thoughts/the-agentocene.md
---

# The Agentocene

[← Back to /dev/thoughts](/dev-thoughts/)

<p class="lead">Imperva's 2025 Bad Bot Report put the number at 51% — for the first time in a decade, automated traffic had passed human traffic across the web. Akamai documented a 300% year-over-year surge in AI-driven bots. Cloudflare's crawler data told the same story from a different angle. I'd been reading these reports for weeks when I got to <a href="https://www.linkedin.com/posts/alexmoening_thats-a-wrap-on-aws-edgecon-2026-a-gathering-activity-7436823560171696129-9AXh" target="_blank">lead a fireside chat at AWS EdgeCon 2026</a> — and two words were on everyone's mind, from PMs to architects: <em>Agentic Experience</em>. In passing, someone noted our own numbers tracked the same direction. Nobody even blinked. Three independent data sources, three independent vantage points, all converging on the same conclusion. The web's fastest-growing traffic source is no longer human. Not as a trend line. As a present-tense measurement.</p>

### The Numbers

<p class="section-summary">The data from three independent sources tells the same story.</p>

The shift didn't announce itself. It showed up in the metrics.

Akamai's 2025 Digital Fraud and Abuse Report, published in November, documented a 300% year-over-year surge in AI-driven bot traffic. That number captures what their threat detection systems can classify — the known crawlers with identifiable user-agent strings. The actual number is higher, because the most sophisticated agents don't announce themselves.

Cloudflare's data is more granular and, frankly, more unsettling. Their August 2025 analysis of crawler-to-referral ratios revealed the gap between what AI systems consume and what they send back:

<table class="data-table">
    <thead>
        <tr>
            <th>Source</th>
            <th>Metric</th>
            <th>Value</th>
            <th>Date</th>
        </tr>
    </thead>
    <tbody>
        <tr>
            <td><a href="https://www.akamai.com/resources/state-of-the-internet/digital-fraud-and-abuse-report" target="_blank">Akamai</a></td>
            <td>AI bot traffic YoY growth</td>
            <td>+300%</td>
            <td>Nov 2025</td>
        </tr>
        <tr>
            <td><a href="https://blog.cloudflare.com/crawlers-click-ai-bots-training/" target="_blank">Cloudflare</a></td>
            <td>Anthropic crawl-to-refer ratio</td>
            <td>38,065 : 1</td>
            <td>Aug 2025</td>
        </tr>
        <tr>
            <td><a href="https://blog.cloudflare.com/from-googlebot-to-gptbot-whos-crawling-your-site-in-2025/" target="_blank">Cloudflare</a></td>
            <td>PerplexityBot request growth</td>
            <td>+157,490%</td>
            <td>Jul 2025</td>
        </tr>
        <tr>
            <td><a href="https://blog.cloudflare.com/from-googlebot-to-gptbot-whos-crawling-your-site-in-2025/" target="_blank">Cloudflare</a></td>
            <td>GPTBot request growth</td>
            <td>+305%</td>
            <td>Jul 2025</td>
        </tr>
        <tr>
            <td><a href="https://originality.ai/blog/gptbot-blocked" target="_blank">Originality.AI</a></td>
            <td>Top 1,000 sites blocking GPTBot</td>
            <td>35.7%</td>
            <td>Aug 2024</td>
        </tr>
    </tbody>
</table>

38,065 pages crawled for every single human visitor referred back — and that was actually *down* from 286,930:1 in January 2025, an 86% improvement. Even so, that's not a search engine building an index to send you traffic. That's a knowledge extraction pipeline with a rounding-error referral rate.

And these numbers *understate* reality. They only measure crawler traffic — bots with identifiable user-agent strings hitting web pages. They don't capture API-to-API agent traffic, which bypasses web servers entirely. They don't capture agents operating through browser automation frameworks. They don't capture the internal orchestration calls between agent components. The visible crawler traffic is the tip of an iceberg, and the iceberg is growing faster than the tip.

### The Multiplication Effect

<p class="section-summary">One human intent generates hundreds of machine requests.</p>

Here's the mechanism. When a human browses the web, they generate a predictable, modest number of requests. Open a page, click a link, scroll, maybe search. A typical browsing session might produce 50–100 HTTP requests across 10–15 minutes.

When an AI agent acts on a human's behalf, the math changes completely.

<div class="flow-diagram flow-vertical" role="img" aria-label="Agent multiplication flow: Human Intent triggers Agent, which fans out to parallel API calls, retries on failures, synthesizes results, and delivers a response">
    <div class="flow-step">
        <span class="step-icon">👤</span>
        <span class="step-text">Human Intent</span>
    </div>
    <span class="flow-arrow" aria-hidden="true">↓</span>
    <div class="flow-step">
        <span class="step-icon">🤖</span>
        <span class="step-text">Agent Decomposition</span>
    </div>
    <span class="flow-arrow" aria-hidden="true">↓</span>
    <div class="flow-step">
        <span class="step-icon">🔀</span>
        <span class="step-text">Parallel API Calls</span>
    </div>
    <span class="flow-arrow" aria-hidden="true">↓</span>
    <div class="flow-step">
        <span class="step-icon">🔄</span>
        <span class="step-text">Retries + Refinement</span>
    </div>
    <span class="flow-arrow" aria-hidden="true">↓</span>
    <div class="flow-step">
        <span class="step-icon">🧩</span>
        <span class="step-text">Synthesis</span>
    </div>
    <span class="flow-arrow" aria-hidden="true">↓</span>
    <div class="flow-step">
        <span class="step-icon">✅</span>
        <span class="step-text">Response to Human</span>
    </div>
</div>

The [MCPMark benchmark](https://arxiv.org/abs/2509.24002), published in September 2025, quantified this multiplication effect by stress-testing agent behavior across 127 demanding tasks. A single task averaged **16.2 execution turns** and **17.4 tool calls**. Each tool call can itself trigger multiple HTTP requests — an API query, authentication, pagination, retries. One unit of human intent becomes hundreds of machine requests.
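The arithmetic is easy to sketch. Taking MCPMark's 17.4 tool calls per task and assuming — my assumption, not the benchmark's — somewhere between 5 and 25 HTTP requests behind each tool call (auth, query, pagination, retries), the fan-out looks like:

```python
# Back-of-envelope fan-out per human intent. TOOL_CALLS_PER_TASK is
# MCPMark's measured average; requests-per-tool-call is an assumed
# illustrative range, not a measured figure.
TOOL_CALLS_PER_TASK = 17.4

for reqs_per_call in (5, 10, 25):
    total = TOOL_CALLS_PER_TASK * reqs_per_call
    print(f"{reqs_per_call:>2} requests/tool call -> {total:.0f} requests per task")
```

Even the conservative end of that range puts a single intent in the high tens of requests; the upper end lands in the hundreds, before counting any sub-agent work.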

<table class="data-table">
    <thead>
        <tr>
            <th>Dimension</th>
            <th>Human Browsing</th>
            <th>Agent Task</th>
        </tr>
    </thead>
    <tbody>
        <tr>
            <td>Requests per intent</td>
            <td>50–100</td>
            <td>500–5,000+</td>
        </tr>
        <tr>
            <td>Duration</td>
            <td>10–15 min session</td>
            <td>Seconds to hours (goal-driven)</td>
        </tr>
        <tr>
            <td>Parallel connections</td>
            <td>6–10 per domain</td>
            <td>Dozens across many domains</td>
        </tr>
        <tr>
            <td>Retry behavior</td>
            <td>Manual refresh, maybe once</td>
            <td>Automatic, exponential backoff, multi-strategy</td>
        </tr>
        <tr>
            <td>Session model</td>
            <td>Login → browse → logout</td>
            <td>Stateless or delegated, no session concept</td>
        </tr>
    </tbody>
</table>

One human thought becomes a cascade of machine actions, each generating its own traffic, its own load, its own entry in your access logs.

### The Blocking Response

<p class="section-summary">A third of the top 1,000 sites are already building walls.</p>

The web's response to this shift has been predictable: block everything.

Originality.AI's August 2024 study found that 35.7% of the world's top 1,000 websites actively block GPTBot via `robots.txt`. The major news publishers led the charge:

<table class="data-table">
    <thead>
        <tr>
            <th>Publisher</th>
            <th>Blocking</th>
            <th>Primary Concern</th>
        </tr>
    </thead>
    <tbody>
        <tr>
            <td>The New York Times</td>
            <td>GPTBot, CCBot, Google-Extended</td>
            <td>Training data extraction, copyright</td>
        </tr>
        <tr>
            <td>CNN</td>
            <td>GPTBot, CCBot</td>
            <td>Content summarization bypassing ad impressions</td>
        </tr>
        <tr>
            <td>Reuters</td>
            <td>GPTBot</td>
            <td>Wholesale content consumption without licensing</td>
        </tr>
        <tr>
            <td>Bloomberg</td>
            <td>GPTBot, Anthropic</td>
            <td>Paywall circumvention via AI summarization</td>
        </tr>
    </tbody>
</table>

But `robots.txt` was designed for an era when crawlers were polite and identifiable. It's a gentleman's agreement, not a technical control. The evidence suggests that many AI crawlers ignore `robots.txt` directives entirely, or use user-agent strings that don't match their block rules. Cloudflare's data shows crawler traffic from AI systems continuing to grow even as blocking adoption increases.
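The advisory nature is visible right in the standard library. Python's `urllib.robotparser` will happily tell a crawler what a `robots.txt` asks for — and nothing forces the crawler to ask. The rules below are an illustrative block list, not any specific publisher's file:

```python
from urllib import robotparser

# An illustrative robots.txt that blocks GPTBot but allows everyone else.
rp = robotparser.RobotFileParser()
rp.parse([
    "User-agent: GPTBot",
    "Disallow: /",
    "",
    "User-agent: *",
    "Allow: /",
])

print(rp.can_fetch("GPTBot", "https://example.com/article"))       # False
print(rp.can_fetch("SomeBrowser", "https://example.com/article"))  # True

# Compliance is voluntary: a crawler that never calls can_fetch()
# simply fetches the page anyway.
```

That last comment is the whole problem. The file expresses a preference; enforcement has to happen elsewhere — at the edge, behaviorally, adversarially.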

This creates an arms race with no clear winner. Publishers add more bot signatures to their block lists. Crawlers rotate user-agent strings. Publishers deploy JavaScript challenges. Crawlers use headless browsers. Publishers implement behavioral fingerprinting. The pattern is familiar to anyone who's worked in bot mitigation — and it's a pattern where the defense always runs behind.

<blockquote class="pull-quote">The web's immune response — robots.txt, rate limits, CAPTCHAs — was designed for a slower pathogen. Agents evolve faster than the antibodies.</blockquote>

### What This Means If You Build Things

<p class="section-summary">Your users are no longer human.</p>

If you're an API designer, a platform architect, or an edge engineer, the assumptions underlying your systems are changing faster than most organizations realize. Here's the translation table:

<table class="data-table">
    <thead>
        <tr>
            <th>Old Assumption</th>
            <th>New Reality</th>
        </tr>
    </thead>
    <tbody>
        <tr>
            <td>Rate limiting protects fairness</td>
            <td>Rate limiting is containment — one "user" generates 100x the requests</td>
        </tr>
        <tr>
            <td>Identity = a session cookie</td>
            <td>Identity = delegated authority from a human to an agent to a sub-agent</td>
        </tr>
        <tr>
            <td>Caching optimizes delivery</td>
            <td>Caching is part of a decision loop — stale data breaks agent reasoning</td>
        </tr>
        <tr>
            <td>Routing is geography-based</td>
            <td>Routing is intent-based — agents don't have geography</td>
        </tr>
        <tr>
            <td>Auth gates protect resources</td>
            <td>Auth gates must understand delegation chains, not just credentials</td>
        </tr>
    </tbody>
</table>

**For API designers**: Your rate limits need to understand that a single API key might back an orchestration layer generating hundreds of calls per task. Per-key rate limiting is necessary but insufficient — you need per-intent or per-workflow awareness. Your error responses need to be machine-actionable, not human-readable. A 429 with a `Retry-After` header is useful. A 429 with "Please try again later" in an HTML page is useless to an agent.
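To make "machine-actionable" concrete, here's a small helper — my own sketch, not any particular SDK — that turns a `Retry-After` header, which HTTP allows as either delta-seconds or an HTTP-date, into a wait an agent can act on:

```python
import email.utils
import time

def retry_delay(headers: dict, default: float = 1.0) -> float:
    """Seconds an agent should wait after a 429, per the Retry-After header.

    Retry-After may be delta-seconds ("120") or an HTTP-date
    ("Wed, 18 Mar 2026 07:28:00 GMT"). Falls back to `default` when the
    header is missing -- the "Please try again later" HTML-page case,
    where there is nothing machine-readable to act on.
    """
    value = headers.get("Retry-After")
    if value is None:
        return default
    if value.strip().isdigit():
        return float(value.strip())
    when = email.utils.parsedate_to_datetime(value)
    return max(0.0, when.timestamp() - time.time())

print(retry_delay({"Retry-After": "120"}))  # 120.0
```

Ten lines of parsing on the client side, one header on the server side — and the retry storm becomes a scheduled backoff instead.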

**For platform architects**: Your authentication model needs to handle delegation. A human authorizes an agent. The agent spawns sub-agents. Each sub-agent calls your API. Whose credentials are those? Whose quota do they count against? OAuth scopes were designed for apps, not for autonomous agents that compose workflows across services.
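One way to picture a delegation-aware check — purely my sketch, not an existing standard — is a chain where every hop may only narrow the scopes it inherited:

```python
# Hypothetical delegation chain: root is the human's grant, each later
# link an agent or sub-agent. A link is valid only if its scopes are a
# subset of its parent's -- delegation can narrow authority, never widen it.
def chain_is_valid(chain: list, max_depth: int = 4) -> bool:
    if not chain or len(chain) > max_depth:
        return False
    allowed = set(chain[0]["scopes"])
    for link in chain[1:]:
        scopes = set(link["scopes"])
        if not scopes <= allowed:  # attempted privilege escalation
            return False
        allowed = scopes
    return True

chain = [
    {"actor": "human",     "scopes": ["orders:read", "orders:write"]},
    {"actor": "agent",     "scopes": ["orders:read", "orders:write"]},
    {"actor": "sub-agent", "scopes": ["orders:read"]},
]
print(chain_is_valid(chain))  # True
print(chain_is_valid(chain + [{"actor": "rogue", "scopes": ["billing:read"]}]))  # False
```

Quota attribution falls out of the same structure: every request carries the full chain, so usage can be counted against the human at the root rather than whichever sub-agent happened to hold the key.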

**For edge engineers**: This is personal for me — I work on CloudFront at AWS, so I watch this play out in the traffic data every day. Edge logic is no longer just about delivering content closer to users. It's about making decisions at the point of request: Is this an agent? What's its delegation chain? Is this the 4th retry of the same request? Should I serve a cached response or force a fresh one because the agent needs current data for a time-sensitive decision?
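A toy version of that decision point — every field name and threshold here is invented for illustration, and nothing in it describes actual CloudFront behavior:

```python
# Toy edge policy: classify each request and decide cache vs. origin vs.
# back off. Fields and thresholds are illustrative, not any real CDN's.
def edge_decision(req: dict) -> str:
    if req.get("retry_count", 0) >= 4:
        return "throttle"    # repeated retry of the same request: slow it down
    if req.get("is_agent"):
        if req.get("needs_fresh_data"):
            return "origin"  # stale cache would poison the agent's reasoning
        return "cache"
    return "cache"           # default human path: serve from cache

print(edge_decision({"is_agent": True, "needs_fresh_data": True}))  # origin
print(edge_decision({"is_agent": True, "retry_count": 5}))          # throttle
```

The interesting part isn't the branching — it's that the inputs (delegation chain, retry count, freshness requirement) have to arrive with the request, which is exactly what today's protocols don't carry.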

The infrastructure layer isn't just plumbing anymore. It's part of the agent's reasoning environment.

### Naming the Epoch

<p class="section-summary">The Anthropocene was humans shaping the world. The Agentocene is humans building systems that shape it for them.</p>

I'm using this term deliberately. The parallel is precise.

The Anthropocene — the geological epoch defined by human activity as the dominant influence on the environment — gave us a framework for understanding how our direct actions reshape the planet. We built cities. We burned fossil fuels. We altered ecosystems. The defining characteristic was human agency acting directly on the physical world.

The Agentocene is the next layer. We're no longer the primary actors on the systems we've built. We create agents. Those agents act on our behalf, at scale, continuously, across every digital system they can reach. The human doesn't navigate the web anymore. The web is navigated — and increasingly, *executed* — on their behalf.

Tim Berners-Lee saw this coming. In *Weaving the Web* (1999), he wrote: "The intelligent agents people have touted for ages will finally materialize." He was 27 years early on the timeline, but his architectural intuition was exact — the web was always designed to be machine-readable. We just didn't have machines capable of reading it with intent until now.

The phase shift isn't that agents exist — or even that bots outnumber humans on the web, which Imperva's reports have documented since the mid-2010s. It's that the *type* of bot changed. Legacy bots scraped and indexed. These agents reason, decide, and act. When the autonomous systems hitting your infrastructure are executing multi-step workflows on behalf of humans — not just crawling pages — you've crossed a different boundary. That's not incremental. That's epochal.

<blockquote class="pull-quote">We didn't just add AI to the web. We changed the primary actor. The Anthropocene was humans reshaping the world. The Agentocene is humans building systems that reshape it for them — faster, wider, and without stopping to sleep.</blockquote>

### What I'm Watching

<p class="section-summary">Forward-looking signals from the edge.</p>

The shift is real, the data confirms it, and the legal system is scrambling to draw boundaries. Here's what I think comes next:

**Agent-to-agent protocols.** The current model — agents pretending to be browsers, hitting web pages meant for humans — doesn't scale. Purpose-built protocols for agent-to-agent communication are already forming — Anthropic's Model Context Protocol (MCP) and Google's Agent-to-Agent (A2A) framework are the early drafts. The web won't disappear, but a parallel machine-to-machine layer will grow alongside it.

**Delegated identity standards.** OAuth was built for "this app can access your data." The Agentocene needs "this agent, acting on behalf of this human, with these constraints, can perform these actions, and can delegate sub-tasks to these other agents." The identity chain needs to be auditable end-to-end. I expect to see RFC-track work on this within the next 12–18 months.

**Intent-aware CDNs.** Edge networks will evolve from "deliver content fast" to "understand what's asking and why." The same URL requested by a human browser, an AI crawler, and an autonomous agent executing a purchase workflow should potentially return different responses — not to deceive, but to serve each requestor appropriately. Content negotiation has always been part of HTTP. The `Accept` header just needs a few more values.
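Minimal negotiation along those lines might look like this — the choice of `application/json` as the agent-facing representation is my assumption; the mechanism itself is plain HTTP:

```python
# Pick a representation from an Accept header. The agent-facing media
# type (application/json) is an illustrative assumption; the negotiation
# mechanism is standard HTTP. q-values are ignored for brevity -- a real
# implementation would rank candidates by quality factor.
AVAILABLE = {"text/html", "application/json"}

def negotiate(accept_header: str, default: str = "text/html") -> str:
    for part in accept_header.split(","):
        media = part.split(";")[0].strip().lower()
        if media in AVAILABLE:
            return media
    return default

print(negotiate("application/json"))                 # application/json
print(negotiate("text/html,application/xhtml+xml"))  # text/html
```

Same URL, different representation per requestor — which is content negotiation working as designed, not cloaking.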

**The end of the session model.** Sessions assume a human sitting at a screen, progressing through a flow. Agents don't have sessions. They have tasks. They connect, execute, disconnect. Your analytics, your A/B tests, your conversion funnels — all built on the session abstraction — need rethinking when a growing share of your "users" don't maintain state between requests.

I've been moving bits across the internet since 1999. I watched the web go from static HTML to dynamic applications to mobile-first to API-first. Each transition changed what "a user" meant. This transition is the biggest one yet, because it changes whether "a user" is human at all.

That's what I see from the edge.

---

### Sources

<table class="data-table">
    <thead>
        <tr>
            <th>Source</th>
            <th>Publication</th>
            <th>Date</th>
            <th>Key Finding</th>
        </tr>
    </thead>
    <tbody>
        <tr>
            <td><a href="https://www.akamai.com/resources/state-of-the-internet/digital-fraud-and-abuse-report" target="_blank">Akamai</a></td>
            <td>Digital Fraud and Abuse Report 2025</td>
            <td>Nov 2025</td>
            <td>AI bot traffic +300% YoY</td>
        </tr>
        <tr>
            <td><a href="https://blog.cloudflare.com/crawlers-click-ai-bots-training/" target="_blank">Cloudflare</a></td>
            <td>The crawl-to-click gap</td>
            <td>Aug 2025</td>
            <td>Anthropic 38,065:1 crawl-to-refer ratio</td>
        </tr>
        <tr>
            <td><a href="https://blog.cloudflare.com/from-googlebot-to-gptbot-whos-crawling-your-site-in-2025/" target="_blank">Cloudflare</a></td>
            <td>From Googlebot to GPTBot</td>
            <td>Jul 2025</td>
            <td>PerplexityBot +157,490%, GPTBot +305%</td>
        </tr>
        <tr>
            <td><a href="https://originality.ai/blog/gptbot-blocked" target="_blank">Originality.AI</a></td>
            <td>GPTBot Blocked by Top 1,000 Sites</td>
            <td>Aug 2024</td>
            <td>35.7% of top 1,000 blocking GPTBot</td>
        </tr>
        <tr>
            <td><a href="https://arxiv.org/abs/2509.24002" target="_blank">MCPMark</a></td>
            <td>arXiv:2509.24002</td>
            <td>Sep 2025</td>
            <td>16.2 turns, 17.4 tool calls per task</td>
        </tr>
        <tr>
            <td>Tim Berners-Lee</td>
            <td><em>Weaving the Web</em> (HarperCollins)</td>
            <td>1999</td>
            <td>"Intelligent agents will finally materialize"</td>
        </tr>
    </tbody>
</table>

---

## Navigation

- [Home](/)
- [About](/about.html)
- [Projects](/projects.html)
- [Contact](/contact.html)
- [/dev/thoughts](/dev-thoughts/)

*Copyright 2026 Alex Moening. Opinions expressed are my own.*
