
Swiftype Crawler

The easiest way to get started with Swiftype is to let our crawler spider your content. Swiftbot is a high-performance web crawler that will quickly index your website and make it available for searching with Swiftype.

To control how your content is indexed, the crawler supports meta tags and Content Exclusion. The crawler also understands the robots.txt format. Additionally, on your Swiftype dashboard, you can configure path whitelists and blacklists to stop the crawler from indexing parts of your website.
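For instance, since the crawler honors robots.txt, a rule like the following would keep it out of a directory (a minimal sketch; the path is hypothetical, and this assumes the crawler identifies itself by the Swiftbot name mentioned above):

```
User-agent: Swiftbot
Disallow: /private/
```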

The crawler will not cross domains while indexing content (including subdomains). If you would like to index multiple domains, add them while creating the search engine, or on the Manage Domains page of the Dashboard.

Swiftype will re-index your content periodically. You can force a re-crawl by clicking the Recrawl button on the Domain page of your dashboard. (This may be disabled if your domain has more than 50,000 pages or has been re-crawled recently. Contact support for assistance if you need a re-crawl.)

Page Schema

The crawler creates a DocumentType called page with the following schema:

| Field | Type | Description |
|--------------|---------|-------------|
| external_id | enum | For crawler-based search engines, the hexadecimal MD5 digest of the normalized URL of the page. |
| updated_at | date | The date when the page was last indexed. |
| title | string | The title of the page, taken from the `<title>` tag or the title meta tag. |
| url | enum | The URL of the page. |
| sections | string | Sections of the page, determined by `<h1>`, `<h2>`, and `<h3>` tags or set with the sections meta tag. |
| body | text | The text of the page. |
| type | enum | The page type (set by the type meta tag). |
| image | enum | A URL for an image associated with the page (set by the image meta tag), used as a thumbnail in your search result listing if present. |
| published_at | date | The date the page was published. It can be set with the published_at meta tag. If not specified, it defaults to the time when the page was crawled, which may not be useful for sorting results. |
| popularity | integer | The popularity score for a page. Specialized crawlers for content management systems like Tumblr may use this field, or it can be set with the popularity meta tag and used to change search result rankings with functional boosts. If not specified, the default value is 1. |
| info | string | Additional information about the page, returned with the results (set by the info meta tag). |
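Several of these fields can be set directly in your page markup. As a sketch, assuming Swiftype's meta-tag convention of a swiftype class with a data-type attribute (the content values here are hypothetical):

```html
<head>
  <title>Crawler Overview</title>
  <!-- Hypothetical examples of setting schema fields via meta tags -->
  <meta class="swiftype" name="type" data-type="enum" content="article">
  <meta class="swiftype" name="published_at" data-type="date" content="2014-06-01">
  <meta class="swiftype" name="popularity" data-type="integer" content="10">
  <meta class="swiftype" name="info" data-type="string" content="5 minute read">
</head>
```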

Read more about Field Types and DocumentTypes.

You may use these field names to control what results are returned with fetch_fields, to control field boosts with functional_boosts, or as search filters with filters. See the search documentation for details.
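As an illustration, a single search request can combine all three parameters. The following is a sketch only: it assumes the public search endpoint and uses a placeholder engine key, but the field names come from the page schema above.

```python
import json
import urllib.request

# Hypothetical engine key; replace with the one from your dashboard.
params = {
    "engine_key": "YOUR_ENGINE_KEY",
    "q": "crawler",
    # fetch_fields: return only these page fields with each result.
    "fetch_fields": {"page": ["title", "url", "published_at"]},
    # filters: restrict results to pages whose type meta tag is "article".
    "filters": {"page": {"type": "article"}},
    # functional_boosts: raise rankings of pages with higher popularity.
    "functional_boosts": {"page": {"popularity": "logarithmic"}},
}

def search(query_params):
    """POST the query to the (assumed) public search endpoint."""
    req = urllib.request.Request(
        "https://search-api.swiftype.com/api/v1/public/engines/search.json",
        data=json.dumps(query_params).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# results = search(params)  # network call; uncomment with a real engine key
```

The request body is plain JSON, so the same parameter structure works from any HTTP client.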