Our Crawler supports the standard features of the robots.txt protocol and will respect all rules issued to our User-agent. Among other uses, the robots.txt file is a good way to exclude certain portions of your site from your Swiftype site search engine.
If you would like your robots.txt rules to apply only to our Crawler, specify the Swiftbot User-agent in your file, as shown in the Disallow example below. We will also respect rules specified under the wildcard User-agent.
User-agent: Swiftbot
Disallow: /mobile/
If your file Disallows all paths under the wildcard User-agent, our Crawler will not access your site at all. If you would like to allow only the Swiftype bot to index your site, you can permit it with a blank Disallow rule under the Swiftbot User-agent while blocking all other crawlers, as shown in the example below.
User-agent: Swiftbot
Disallow:

User-agent: *
Disallow: /
You can also control the rate at which our Crawler accesses your website by using the Crawl-delay directive, which specifies the number of seconds the Crawler should wait between successive requests.
User-agent: Swiftbot
Crawl-delay: 5
For fine-grained control over how your pages are indexed, you may use robots meta tags.
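For example, to keep an individual page out of the index while still allowing the crawler to follow its links, you could add a standard robots meta tag to that page's head element (this is a minimal illustration using the widely supported noindex and follow directives, not a Swiftype-specific tag):

<meta name="robots" content="noindex, follow">

Because the tag lives in the page itself rather than in robots.txt, it lets you make per-page indexing decisions without maintaining a central list of excluded paths.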