
Robots.txt Support

The Swiftype Crawler's User-agent is: Swiftbot.

The Site Search Crawler supports the robots.txt standard and will respect all of its rules.

A robots.txt file is not required for Site Search to function, but it can help direct the crawler where you do or do not want it to go.
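
Per the robots.txt standard, the file is read from the root of each domain being crawled. If you want to preview how a rules file will be interpreted, Python's standard urllib.robotparser is an independent implementation of the same standard (not the Site Search Crawler itself) and can fetch and evaluate a live file. A minimal sketch, with https://example.com standing in as a placeholder for your own domain:
# Fetch a live robots.txt and test a URL against its rules.
# https://example.com is a placeholder; substitute your own domain.
from urllib.robotparser import RobotFileParser

parser = RobotFileParser("https://example.com/robots.txt")
parser.read()  # download and parse the file

# True if the rules permit Swiftbot to fetch this page
print(parser.can_fetch("Swiftbot", "https://example.com/some/page"))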

Disallow the Crawler

The robots.txt file can exclude portions of your site from Site Search by disallowing access to the Swiftbot user agent.

Careful! If your robots.txt disallows content that has already been crawled, that content will stay in your Engine but will no longer be updated!

See Troubleshooting: Removing Documents if you run into that scenario.

Example - robots.txt file disallowing the Site Search Crawler from crawling any content under the /mobile/ path.
User-agent: Swiftbot
Disallow: /mobile/

Example - robots.txt file disallowing the Site Search Crawler and all other crawlers from all pages. Not helpful!
User-agent: *
Disallow: /
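
To confirm a Disallow rule behaves as intended before deploying it, you can parse the rules locally instead of fetching them. A minimal sketch using Python's standard library to check the first example above (the URLs are placeholders):
# Verify the /mobile/ Disallow rule without any network access.
from urllib.robotparser import RobotFileParser

rules = """\
User-agent: Swiftbot
Disallow: /mobile/
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

print(parser.can_fetch("Swiftbot", "https://example.com/"))           # True
print(parser.can_fetch("Swiftbot", "https://example.com/mobile/app")) # False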

Allow the Crawler

Use an empty Disallow rule to permit Swiftbot into places where you do not want other crawlers to go.

Example - robots.txt file allowing Swiftbot while disallowing all other User-agents, like those belonging to major search engines. Specifying a User-agent overrides the wildcard (*).
User-agent: Swiftbot
Disallow:

User-agent: *
Disallow: /
Example - robots.txt file disallowing Swiftbot access to one directory, /documentation/, while disallowing all other User-agents access to all pages.
User-agent: Swiftbot
Disallow: /documentation/

User-agent: *
Disallow: /
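
The override behavior can be verified the same way. In this sketch (again Python's standard library with placeholder URLs), Swiftbot matches its own rule group while an unlisted User-agent falls through to the wildcard:
# Confirm that a specific User-agent group overrides the wildcard (*).
from urllib.robotparser import RobotFileParser

rules = """\
User-agent: Swiftbot
Disallow:

User-agent: *
Disallow: /
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

print(parser.can_fetch("Swiftbot", "https://example.com/page"))      # True
print(parser.can_fetch("SomeOtherBot", "https://example.com/page"))  # False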

Control the Crawler

You can control the rate at which the Crawler accesses your website by using the Crawl-delay directive, which takes the number of seconds to wait between requests.

A crawl is web traffic, so limiting it can reduce bandwidth usage. Limiting it too much, however, can slow the uptake of new documents!

Example - robots.txt file with a Crawl-delay of 5 seconds. A delay of 5 seconds allows at most 86,400 ÷ 5 = 17,280 crawls per day.
User-agent: Swiftbot
Crawl-delay: 5
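
If you want to read a configured delay back programmatically, Python's standard library exposes it as well; a minimal sketch, deriving the daily ceiling from the number of seconds in a day:
# Read the Crawl-delay for Swiftbot and derive the daily request ceiling.
from urllib.robotparser import RobotFileParser

rules = """\
User-agent: Swiftbot
Crawl-delay: 5
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

delay = parser.crawl_delay("Swiftbot")  # 5 (seconds)
print(86_400 // delay)                  # 17280 requests per day at most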

For fine-grained, page-level control over how your pages are indexed, you can configure Meta Tags. We even support the standard robots Meta Tags.


Stuck? Looking for help? Contact support or check out the Site Search community forum!