Robots.txt Fetch & Path Check
We fetch robots.txt from your host and run a heuristic allow/disallow check for the path and user-agent you supply.
FAQs
Is the allow/disallow result authoritative?
No. It is a heuristic interpretation of the fetched robots.txt, not a guarantee of how a given crawler will behave. Always confirm critical paths in Search Console and against your server logs.
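A check of this kind can be sketched with Python's standard-library urllib.robotparser. This is an illustration under assumptions, not the parser this tool actually uses: the rules, domain, and "ExampleBot" user-agent below are placeholders, and Python's parser applies the first matching rule in file order, whereas Google's parser (RFC 9309) prefers the longest, most specific match. The two models can disagree, which is one reason any single check is heuristic.

```python
from urllib import robotparser

# Illustrative rules only; substitute the robots.txt fetched from your host.
# Allow is listed before Disallow because Python's parser uses first-match
# ordering, while Google's uses longest-match precedence.
ROBOTS_TXT = """\
User-agent: *
Allow: /private/press/
Disallow: /private/
"""

rp = robotparser.RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# "ExampleBot" is a placeholder user-agent; a non-matching agent falls back
# to the wildcard (*) group.
allowed = rp.can_fetch("ExampleBot", "https://example.com/private/press/kit.html")
blocked = rp.can_fetch("ExampleBot", "https://example.com/private/report.html")
print(allowed, blocked)  # True False
```

Because precedence models differ between parsers, treat the result of any one checker as advisory and verify important paths with the crawler's own tooling.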
Why might my site block the fetch?
Firewall rules, bot-management layers, or geo blocking may treat our fetcher differently than they treat Googlebot. Mention these environment details when debugging.
Why do results differ from browser extensions?
A server-side fetch can receive a different HTML payload than a logged-in browser session does. Cross-check both when debugging personalization or geo variants.
Can I suggest an improvement?
Email a short description of the workflow and a redacted example. Useful, repeatable suggestions get prioritized.