Why can't I scrape my website?
Last updated: July 23, 2025
Website disallows crawling
A robots.txt file is a text file that tells crawlers which parts of a website they may or may not access. We can't scrape your website if its robots.txt disallows us.
Here's an example of a robots.txt file that disallows all crawlers:
User-agent: *
Disallow: /

To allow our crawling agent to scrape the site, you need to allowlist User-agent: FirecrawlAgent, like this:
User-agent: FirecrawlAgent
Disallow:
User-agent: *
Disallow: /
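If you want to double-check how a crawler will interpret your robots.txt before deploying it, here is a minimal sketch using Python's standard-library urllib.robotparser. The robots.txt content mirrors the allowlist example above; the example.com URLs and the OtherBot name are placeholders, not real crawlers.

```python
from urllib.robotparser import RobotFileParser

# The allowlist rules from the example above
robots_txt = """\
User-agent: FirecrawlAgent
Disallow:

User-agent: *
Disallow: /
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# FirecrawlAgent is allowlisted (empty Disallow means allow everything)
print(parser.can_fetch("FirecrawlAgent", "https://example.com/page"))  # True

# Every other agent matches the * group and is blocked site-wide
print(parser.can_fetch("OtherBot", "https://example.com/page"))  # False
```

Note that an empty Disallow: line means "nothing is disallowed", which is how the allowlist for FirecrawlAgent works while the * group blocks everyone else.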