# Pass the Chrome Lighthouse Agentic Browsing audit in under 2 minutes
Chrome added an Agentic Browsing category to Lighthouse — the same tool every web developer already runs. It scores your site on whether autonomous AI agents (ChatGPT, Claude, Perplexity, custom agents) can navigate and act on it. If your site doesn’t serve llms.txt, the audit fails. BridgeToAgent generates the pass in 2 minutes for $49 once.
## What Chrome Lighthouse now scores
The Agentic Browsing category evaluates five signals:
- llms.txt presence. Is a valid `llms.txt` file served at the root of your domain? AI agents and agentic browsers consult this first.
- WebMCP tool registration. Are interactive elements declared via WebMCP markup so agents can call them like tools?
- WebMCP schema validity. Are the declarations well-formed JSON-Schema?
- Accessibility tree quality. ARIA names, labels, and tree integrity — agents read the same accessibility tree that screen readers do.
- Cumulative Layout Shift (CLS). Agents need stable DOM positions to click reliably. Layout shifts break them silently.
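The two WebMCP audits check that declared tools carry well-formed JSON-Schema. The exact WebMCP markup is not reproduced in this document; purely as an illustration of the kind of declaration being validated (every field and name below is an assumption, not the spec), a tool with a schema-typed parameter block might look like:

```json
{
  "name": "add_to_cart",
  "description": "Add a product to the shopping cart",
  "inputSchema": {
    "type": "object",
    "properties": {
      "productId": { "type": "string" },
      "quantity": { "type": "integer", "minimum": 1 }
    },
    "required": ["productId"]
  }
}
```

A schema-validity audit would flag things like a missing `type` or a malformed `required` array in a block like this.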
Unlike the Performance category, there is no weighted 0–100 score. Each audit returns pass, fail, or a pass ratio. The messaging from Chrome is clear: a site that doesn’t pass these is not built for the agentic web.
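The pass/fail framing means you can pre-flight the llms.txt signal yourself before running the audit. As a rough sketch (assuming the llmstxt.org structure: an H1 title line, an optional blockquote summary, then markdown link sections), a minimal structural check might be:

```python
import re

def looks_like_llms_txt(text: str) -> bool:
    """Rough structural check for an llmstxt.org-style file:
    the first non-blank line is an H1 title, and the body
    contains at least one markdown link for agents to follow."""
    lines = [ln for ln in text.splitlines() if ln.strip()]
    if not lines or not lines[0].startswith("# "):
        return False  # missing the required H1 title
    # at least one [name](url) link somewhere in the body
    return bool(re.search(r"\[[^\]]+\]\([^)]+\)", text))

sample = """# Example Store

> A small shop selling example goods.

## Actions
- [Checkout](https://example.com/checkout): start an order
"""
print(looks_like_llms_txt(sample))            # True
print(looks_like_llms_txt("no title here"))   # False
```

This is a local heuristic, not the audit itself; the real check also requires the file to be served at your domain root.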
## Where most sites fail today
From the 11 real customer sites we have audited, the average score on the kit-readiness signals is 17 out of 100. Almost none ship llms.txt. Few expose accessible forms. The Lighthouse Agentic Browsing audit will surface these gaps to every developer running Lighthouse — which is most of them, since the tool is built into Chrome DevTools.
## How BridgeToAgent lifts your audit pass
We map directly onto the audits Chrome is scoring:
| Lighthouse audit | What BridgeToAgent ships |
|---|---|
| llms.txt presence | Yes — we generate a Lighthouse-compatible llms.txt in the standard llmstxt.org format from a real crawl of your site |
| WebMCP tool registration | On our roadmap — we currently ship agents.json (Anthropic’s competing standard) and an agent-instructions runbook |
| WebMCP schema validity | On our roadmap |
| Accessibility tree quality | Indirectly — our agent-instructions runbook documents your action paths so agents can fall back when the accessibility tree is sparse. Direct ARIA improvements require changes to your existing HTML. |
| Cumulative Layout Shift | Out of scope — CLS is a frontend-performance fix on your side, not a file we can deliver. |
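CLS stays on your side, but the usual remedy is small. As a generic sketch (standard HTML/CSS, nothing BridgeToAgent-specific): give media explicit dimensions so the browser reserves layout space before assets load and click targets don’t move under an agent:

```html
<!-- Reserve layout space so a late-loading image can't shift buttons -->
<img src="/hero.jpg" alt="Product hero" width="1200" height="630">

<style>
  /* Same idea for responsive embeds: fix the aspect ratio up front */
  .embed { aspect-ratio: 16 / 9; width: 100%; }
</style>
```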
The honest picture: we get you a clean pass on the llms.txt audit today, give you durable infrastructure for the AI agents that actually exist (Claude, Perplexity, ChatGPT, custom agents via the MCP ecosystem), and add WebMCP-generator support as the Chrome standard solidifies.
## What you get from a single $49 kit
- `llms.txt` — Lighthouse-compatible. Sub-2-minute delivery from a real multi-page crawl.
- `agents.json` — Anthropic’s standard. Machine-readable map of every action an agent can perform on your site, with verified URLs.
- `agent-instructions.md` — Step-by-step runbook for agents that prefer markdown or that fall back when other discovery fails.
- Auto-discovery snippet — three `<link rel="alternate">` tags you paste into your `<head>` so agents find the files immediately.
- Platform-specific install guide — verified paths for Shopify, WordPress, Webflow, Wix, Squarespace, Ghost, Next.js, static hosts, and shared cPanel hosting.
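For illustration, the auto-discovery snippet amounts to three tags along these lines (the `type` and `title` attributes here are assumptions; the kit ships the exact tags for your generated files):

```html
<link rel="alternate" type="text/plain" href="/llms.txt" title="llms.txt">
<link rel="alternate" type="application/json" href="/agents.json" title="agents.json">
<link rel="alternate" type="text/markdown" href="/agent-instructions.md" title="agent-instructions">
</link>
```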
## Frequently asked
### Will this score still matter when WebMCP becomes mandatory?
Yes. llms.txt is a separate audit from WebMCP and remains required regardless. The two standards are complementary — llms.txt is the out-of-band summary, WebMCP is the in-page action declaration. Passing llms.txt today is durable; we add WebMCP-generator support before the Chrome category moves out of experimental status.
### Can I just write llms.txt by hand?
Yes — the format is intentionally simple. What BridgeToAgent automates is the part most operators get wrong: crawling your real sitemap, extracting forms and actions from the rendered DOM (not LLM guesses), validating every action URL against your live site, and producing a file populated with verified facts. If you write it by hand, plan for ongoing maintenance every time your site changes meaningfully.
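If you do write it by hand, a minimal file in the llmstxt.org shape looks like this (the business and URLs are illustrative, not a template you should copy verbatim):

```markdown
# Acme Outdoor Gear

> Direct-to-consumer store for tents and packs. Checkout, order
> tracking, and returns are available without an account.

## Actions
- [Checkout](https://acme.example/checkout): start a new order
- [Order status](https://acme.example/orders/lookup): track by email

## Docs
- [Returns policy](https://acme.example/returns): 30-day window
```

The failure mode is not the syntax; it’s the link rot and stale actions that accumulate after your next site change.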
### What about agents.json?
agents.json is Anthropic’s competing standard for declaring actions to AI agents. It’s currently the most widely adopted format among production AI tools (Claude, custom MCP agents). We ship both files — llms.txt for Chrome Lighthouse and the broader llmstxt.org ecosystem, agents.json for the Anthropic and MCP ecosystem.
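This page treats agents.json as a machine-readable action map with verified URLs. Purely as a hypothetical fragment (the actual schema may differ; every field name below is an assumption), one declared action might look like:

```json
{
  "actions": [
    {
      "name": "subscribe_newsletter",
      "method": "POST",
      "url": "https://acme.example/api/subscribe",
      "params": { "email": { "type": "string", "required": true } }
    }
  ]
}
```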
### How is the score calculated?
Chrome explicitly notes the Agentic Browsing category does not use the standard weighted 0–100 score that Performance and Accessibility use. Each audit returns pass, fail, or a pass ratio independently. The practical implication is that you either ship llms.txt or you don’t — there’s no partial credit for almost-shipping.
### What does “experimental” mean here?
The Agentic Browsing category is currently available in Lighthouse via configuration flags and Chrome Canary builds but is not enabled by default in stable Chrome. Google’s documentation flags it as experimental. We track the spec and update generated kits in lockstep with audit changes.
### Do you have a free check?
Yes — paste your URL on the home page for a 5-second readiness score. No email needed. The audit tells you what BridgeToAgent would lift and what we wouldn’t (e.g. CLS issues that need your frontend team).
## Ship the Lighthouse pass in 2 minutes
Free 5-second readiness check first. $49 once. No subscription. Delivery in under 2 minutes.