How each audit is scored
Each run fetches the docs directly and applies the same evaluation logic so teams can benchmark quality, prioritize fixes, and measure progress over time.
llms.txt and Discovery
Verifies that agents can reach llms.txt from the pages you publish, and that the documentation index it provides exposes links agents can actually follow.
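A minimal sketch of the discovery check, assuming llms.txt follows the usual convention of a markdown bullet list of links (the function names and the example.com URLs are illustrative, not the audit's actual implementation):

```python
import re
from urllib.parse import urljoin, urlparse

# Markdown links like [Title](/path.md) inside the index body.
LINK_RE = re.compile(r"\[([^\]]+)\]\(([^)\s]+)\)")

def extract_links(llms_txt: str, base_url: str) -> list[str]:
    """Resolve every markdown link in an llms.txt body to an absolute URL."""
    return [urljoin(base_url, m.group(2)) for m in LINK_RE.finditer(llms_txt)]

def audit_discovery(llms_txt: str, base_url: str) -> dict:
    """Count the links the index exposes and how many stay on the same host."""
    links = extract_links(llms_txt, base_url)
    host = urlparse(base_url).netloc
    same_host = [u for u in links if urlparse(u).netloc == host]
    return {"links": len(links), "same_host": len(same_host)}

sample = "# Docs\n\n- [Quickstart](/docs/quickstart.md)\n- [API](https://example.com/api.md)\n"
report = audit_discovery(sample, "https://example.com/llms.txt")
```

A real run would fetch each resolved URL and confirm it returns the content the index promises; the sketch only covers the parsing step.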
Markdown Delivery
Checks whether the site offers clean markdown through .md URLs or content negotiation instead of forcing agents through bloated HTML.
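The two delivery paths this check covers can be sketched as follows. The `.md`-sibling heuristic is an assumption about common docs-site layouts, not a standard, and the accepted MIME types are a judgment call:

```python
from urllib.parse import urlsplit, urlunsplit

def markdown_variant(url: str) -> str:
    """Guess the .md sibling of a docs page URL (a heuristic, not a standard)."""
    parts = urlsplit(url)
    path = parts.path.rstrip("/") or "/index"
    if not path.endswith(".md"):
        path += ".md"
    return urlunsplit((parts.scheme, parts.netloc, path, parts.query, parts.fragment))

def is_markdown_response(content_type: str) -> bool:
    """Treat text/markdown (and plain text) as acceptable markdown delivery."""
    mime = content_type.split(";")[0].strip().lower()
    return mime in {"text/markdown", "text/plain"}
```

For the content-negotiation path, a real check would send `Accept: text/markdown` to the original URL and pass the response's Content-Type header to `is_markdown_response`.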
Page Size
Measures whether pages fit within agent context windows and whether the useful content starts early enough to avoid truncation.
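A rough version of both measurements, using the common ~4-characters-per-token estimate (the budget, the heuristic, and the "first heading marks the start of useful content" proxy are all assumptions):

```python
def audit_page_size(text: str, context_tokens: int = 128_000,
                    chars_per_token: int = 4) -> dict:
    """Estimate token count and how late the first heading appears."""
    est = len(text) // chars_per_token
    # A late first heading suggests navigation chrome precedes the real content.
    first_heading = text.find("# ")
    preamble_ratio = first_heading / len(text) if text and first_heading >= 0 else 0.0
    return {
        "estimated_tokens": est,
        "fits_context": est <= context_tokens,
        "preamble_ratio": round(preamble_ratio, 3),
    }

report = audit_page_size("# Title\n" + "word " * 100)
```

A production check would use a real tokenizer rather than the character heuristic, but the heuristic is close enough to flag pages that are orders of magnitude over budget.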
Content Structure
Evaluates tabs, headings, and code fences to ensure the content remains parseable after an agent converts or serializes the page.
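Two of the simplest structural signals, balanced code fences and visible headings, can be checked like this (a sketch with assumed function names, not the audit's real rule set):

```python
FENCE = "`" * 3  # triple backtick, built up to avoid a literal fence in this sketch

def audit_structure(markdown: str) -> dict:
    """An odd fence count means a code block never closed after conversion."""
    lines = markdown.splitlines()
    fences = sum(1 for ln in lines if ln.strip().startswith(FENCE))
    headings = sum(1 for ln in lines if ln.startswith("#"))
    return {"fences_balanced": fences % 2 == 0, "headings": headings}

sample = "\n".join(["# Guide", FENCE + "python", "print('hi')", FENCE, "## Next"])
report = audit_structure(sample)
```

Tabbed UI content is harder to check mechanically, since most converters either flatten every tab into one run of text or drop all but the active tab.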
URL Stability
Confirms that documentation URLs resolve cleanly, return correct status codes, and avoid redirect behavior that can confuse crawlers and agents.
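Given the chain of status codes seen while following a URL, the stability classification might look like this (the thresholds and the flagged temporary-redirect codes are assumptions about what confuses agents):

```python
def audit_redirect_chain(statuses: list[int]) -> dict:
    """Classify a followed URL from the status codes observed along the way."""
    hops = sum(1 for s in statuses if 300 <= s < 400)
    final = statuses[-1] if statuses else None
    return {
        "hops": hops,
        "final_ok": final == 200,
        # 302/307 are temporary, so crawlers keep re-requesting the old URL.
        "temporary": any(s in (302, 307) for s in statuses),
    }
```

A single permanent 301 hop is usually harmless; long chains, temporary redirects, and anything that fails to end in a 200 are what this check flags.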
Observability
Compares freshness and parity signals so agent-facing content stays accurate across llms.txt, markdown output, and cached HTML responses.
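One way to test parity between the markdown and HTML renditions of a page is to compare digests over whitespace-normalized text, so formatting noise doesn't mask a real content drift (a sketch under that assumption; the audit's actual parity signal may differ):

```python
import hashlib

def content_digest(text: str) -> str:
    """Digest over whitespace-normalized text, ignoring formatting differences."""
    normalized = " ".join(text.split())
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

def in_parity(markdown_text: str, html_text: str) -> bool:
    """True when both renditions carry the same underlying content."""
    return content_digest(markdown_text) == content_digest(html_text)
```

Freshness works the same way over time rather than across formats: record the digest per URL on each run, and a changed digest with an unchanged Last-Modified header is a staleness signal.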
Authentication
Tests whether agents can access the docs without hitting auth walls, or whether alternate public paths exist when the main site is gated.
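A heuristic for spotting an auth wall from a response, assuming the usual signals of explicit 401/403 codes or a redirect that lands on a login path (the path tokens are illustrative guesses, not an exhaustive list):

```python
def looks_gated(status: int, final_url: str) -> bool:
    """Flag responses that likely hit an auth wall instead of the docs."""
    if status in (401, 403):
        return True
    # A 200 on a login page still means the agent never saw the content.
    path = final_url.lower()
    return any(token in path for token in ("/login", "/signin", "/sso/"))
```

When the main site is gated, the check would then probe known public mirrors of the same content, such as an llms.txt index or `.md` endpoints, before scoring the category as failed.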