
Robots.txt Support

Our Crawler supports the standard features of the robots.txt protocol and will respect all rules issued to our User-agent. Among other uses, a robots.txt file is a good way to exclude certain portions of your site from your Swiftype site-search engine.

The Swiftype Crawler's User-agent is: Swiftbot.

If you would like your robots.txt rules to apply only to our Crawler, specify the Swiftbot User-agent in your file, as shown in the Disallow example below. We will also respect rules specified under the wildcard User-agent.

Example - Robots.txt file disallowing the Swiftype Crawler from indexing any content under the /mobile path.
User-agent: Swiftbot
Disallow: /mobile/
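
If you want to sanity-check a rule before deploying it, any standards-compliant robots.txt parser will evaluate it for the Swiftbot User-agent the same way. As a rough sketch, Python's built-in urllib.robotparser can confirm which URLs the rule above blocks (the example.com URLs are placeholders):

from urllib.robotparser import RobotFileParser

# Parse the example rules from above as a list of lines.
parser = RobotFileParser()
parser.parse([
    "User-agent: Swiftbot",
    "Disallow: /mobile/",
])

# URLs under /mobile/ are blocked for Swiftbot; everything else is allowed.
print(parser.can_fetch("Swiftbot", "https://example.com/mobile/home"))  # False
print(parser.can_fetch("Swiftbot", "https://example.com/blog/post"))    # True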

If your file has a wildcard Disallow rule (Disallow: / under User-agent: *), we will not crawl your site at all. If you would like to allow only the Swiftype bot to index your site, you can permit it with a blank Disallow rule, as shown in the example below.

Example - Robots.txt file allowing the Swiftype bot while disallowing all other User-agents.
User-agent: Swiftbot
Disallow:

User-agent: *
Disallow: /

You can also control the rate at which our Crawler accesses your website by using the Crawl-delay directive.

Example - Robots.txt file with a Crawl-delay of 5 seconds between requests.
User-agent: Swiftbot
Crawl-delay: 5

For fine-grained control over how your pages are indexed, you may use robots meta tags.
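
For example, a standard robots meta tag placed in a page's <head> asks crawlers not to index that page (a generic illustration of the mechanism; see the robots meta tags documentation for the directives our Crawler honors):

<!-- Page-level directive: ask crawlers not to index this page -->
<meta name="robots" content="noindex">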