
Quick Start

Site Search is the easiest way to add search to your website. The key to its simplicity is the dynamic, automated Site Search Crawler. A crawler is a script that scans the content and structure of publicly available webpages.

The crawler will crawl your pages starting from the website address that you provide. Once crawled, your pages are ingested into your Engine and then indexed so that they are available for search and result customization.

An excellent and valuable search experience can be yours in just a couple of minutes:

  1. Create an account
  2. Create an Engine
  3. Install your Search box
  4. (Optional) Customize your crawling
  5. (Optional) Advanced implementations

1. Create an account

You will need to create a Site Search account to begin.

Be sure to confirm your email address!

2. Create an Engine

As part of the account creation process, you will be asked to create your first Engine. Engine is short for Search Engine: it is the sophisticated control center, the brain of your search experience.

An Engine is fueled by the crawler. When the crawler 'crawls' your webpages, it ingests their contents and then organizes them into an index. The end result is a set of well-structured, searchable documents, which live within your Engine.
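Conceptually, each crawled page becomes a structured document in the Engine's index. The sketch below illustrates that transformation; the field names and extraction logic are illustrative only, not the crawler's actual schema or implementation:

```javascript
// Hypothetical sketch: turning a crawled page into a searchable document.
// The real crawler extracts many more fields; names here are illustrative.
function pageToDocument(url, html) {
  // Naive <title> extraction, for illustration only.
  const match = html.match(/<title>([^<]*)<\/title>/i);
  return {
    url: url,
    title: match ? match[1].trim() : url,
    // Strip tags and collapse whitespace into a plain-text body.
    body: html.replace(/<[^>]+>/g, ' ').replace(/\s+/g, ' ').trim(),
  };
}

const doc = pageToDocument(
  'https://example.com/about',
  '<html><title>About Us</title><body><p>We build search.</p></body></html>'
);
console.log(doc.title); // "About Us"
```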

After beginning your Site Search trial, enter your website's URL when prompted.

Note: You have the option of creating an API-based Engine, too.

[Screenshot: Creating an Engine]

The crawler will perform a preliminary scan of your website, and you will be clearly notified of any issues. Once the scan is complete, you will be prompted to give your Engine a name. Pick something memorable.

[Screenshot: Creating an Engine]

Great! We are almost there.

3. Install your Search box

Now that you have created an Engine and your documents have been ingested and indexed, the next step is to install Site Search on your website. Installation requires placing a JavaScript snippet within each webpage on which you would like to enable search.

The snippet will look similar to this (your dashboard will generate one containing your Engine's own install key; the key below is a placeholder):

    <script type="text/javascript">
    (function(w,d,t,u,n,s,e){w['SwiftypeObject']=n;w[n]=w[n]||function(){
    (w[n].q=w[n].q||[]).push(arguments);};s=d.createElement(t);
    e=d.getElementsByTagName(t)[0];s.async=1;s.src=u;e.parentNode.insertBefore(s,e);
    })(window,document,'script','//s.swiftype.com/install/v2/st.js','_st');

    _st('install','YOUR_INSTALL_KEY','2.0.0');
    </script>


Your own snippet will be generated once you complete the customization process. You can choose how to style your results, configure autocomplete and calibrate a wide array of options from within your dashboard:

[Screenshot: Customizing your search experience]

Great! We have accomplished the following:

  • Created an Engine.
  • Released the Site Search crawler, which ingested and indexed your webpages, turning them into documents.
  • Styled your search experience and result presentation.
  • Installed the snippet.

Phew! What a day. Connecting people to relevant information has its benefits! You are ready to realize them. If you stop here, you will have a standard, useful Site Search installation.


Note: The next steps are optional and can help you build deeper search experiences.

4. (Optional) Customize your crawling

The standard Site Search implementation produces satisfying results on most websites. However, you can customize Site Search features in depth and configure the crawler in different ways.

Meta Tags

Site Search Meta Tags give you an easy way to override how title or section data is extracted from a page. They can also be used to add powerful utility, like associating an image with a page or filtering pages by type.

For example, if a page has a title like " -- Books -- The Master and Margarita by Mikhail Bulgakov" and you would like the title in search results to be "The Master and Margarita", you can use Site Search Meta Tags to customize the default value of the title field.

Example - Using title
    <title> -- Books -- The Master and Margarita by Mikhail Bulgakov</title>
    <meta class="swiftype" name="title" data-type="string" content="The Master and Margarita" />

Building on the previous example, you can also associate an image with a page by setting the image field to an image URL (the URL below is a placeholder). The image will be displayed in the search results.

Example - Using image
    <title> -- Books -- The Master and Margarita by Mikhail Bulgakov</title>
    <meta class="swiftype" name="title" data-type="string" content="The Master and Margarita" />
    <meta class="swiftype" name="image" data-type="enum" content="https://example.com/master-and-margarita.jpg" />

Read more: Meta Tags.

Excluding content by URL

Site Search allows you to limit the crawler to specific areas of your website using path inclusions and exclusions.

  • Whitelists index only URLs that match a pattern.

  • Blacklists exclude URLs that match a pattern from indexing.

Whitelists and blacklists can be combined for precise control over what will be ingested on your site.

Consider this example:

[Screenshot: Path rule example]

This will make Site Search ingest only URLs starting with /documentation and /questions, and skip those ending with /danger. You could use this to index only support or product related materials, while excluding content that might not be valuable or relevant.
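The filtering behavior described above can be sketched as a simple function (a simplified illustration, not the crawler's actual implementation; the real rules support richer pattern types):

```javascript
// Simplified illustration of path whitelist/blacklist filtering.
// Whitelist entries match as path prefixes; blacklist entries as suffixes,
// mirroring the "starting with" / "ending with" rules in the example above.
function shouldIngest(path, whitelist, blacklist) {
  const whitelisted =
    whitelist.length === 0 || whitelist.some((prefix) => path.startsWith(prefix));
  const blacklisted = blacklist.some((suffix) => path.endsWith(suffix));
  return whitelisted && !blacklisted;
}

const whitelist = ['/documentation', '/questions'];
const blacklist = ['/danger'];

console.log(shouldIngest('/documentation/crawler', whitelist, blacklist)); // true
console.log(shouldIngest('/blog/news', whitelist, blacklist));             // false
console.log(shouldIngest('/questions/42/danger', whitelist, blacklist));   // false
```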

Read more: Path Whitelist and Blacklist Rules

Excluding pages with robots.txt

Site Search supports the robots.txt standard. You can use that file to exclude pages you do not want indexed. However, the robots.txt file is also used by major search engines like Google, DuckDuckGo, and Bing to crawl and rank your webpages, so broad rules could hurt your visibility in those engines. To restrict only Site Search, target the Swiftbot User-agent:

Example - robots.txt file restricting Swiftbot while allowing all other User-agents.
## Don't let Swiftbot index the pages under /archives/

User-agent: Swiftbot
Disallow: /archives/

## Allow other agents to index the entire site
User-agent: *
Disallow:

Read more: Robots.txt support.
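To illustrate how a crawler interprets rules like these, here is a minimal sketch of User-agent and Disallow matching (a simplification of the robots.txt standard for illustration, not Swiftbot's actual parser):

```javascript
// Minimal robots.txt check: is `path` disallowed for `agent`?
// Simplified: exact User-agent match or '*', prefix-based Disallow rules.
function isDisallowed(robotsTxt, agent, path) {
  let applies = false;
  let disallowed = false;
  for (const rawLine of robotsTxt.split('\n')) {
    const line = rawLine.split('#')[0].trim(); // strip comments
    if (!line) continue;
    const [field, ...rest] = line.split(':');
    const value = rest.join(':').trim();
    if (field.trim().toLowerCase() === 'user-agent') {
      applies = value === agent || value === '*';
    } else if (field.trim().toLowerCase() === 'disallow' && applies) {
      // An empty Disallow value allows everything.
      if (value && path.startsWith(value)) disallowed = true;
    }
  }
  return disallowed;
}

const robots = [
  'User-agent: Swiftbot',
  'Disallow: /archives/',
  '',
  'User-agent: *',
  'Disallow:',
].join('\n');

console.log(isDisallowed(robots, 'Swiftbot', '/archives/2012'));  // true
console.log(isDisallowed(robots, 'Googlebot', '/archives/2012')); // false
```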

5. (Optional) Advanced implementations

The Site Search Result Designer works well for customizing the basic styling of your search results and autocomplete menu, but more advanced implementations are also possible using our jQuery search and jQuery autocomplete plugins.

For additional information about these plugins, visit our jQuery tutorial.

What next?

Looking for some more reading materials? You may want to read more about the Site Search Crawler. Another option is to explore the wide array of useful Features that are available to all Site Search users. If you want to get deeper into a code-level implementation, check out API-based Engines.

Stuck? Looking for help? Contact Support!