Quick How Tos

How do I adjust how "fuzzy" the search is?

You can adjust how precisely a search result must match the user's search query under Search Settings:
  • Setting the fuzziness to 0 means only 100% matches are shown.
  • Setting the fuzziness to 1 means matches of 50% or better are shown.
  • Setting the fuzziness to 2 means everything even slightly related is shown, to avoid empty results.

Can I boost certain pages?

Imagine that more than one page is relevant to a certain query, but you'd like one of them to always rank a bit higher. You can use data points to boost certain pages. For example, let us assume that your users can "like" a page. You can create a "like" data point and boost the pages that have more likes: create the data point, tell the crawler where to find the number of likes on the page, and then set up boosting under Search Settings to "Data Point" with the data point name "like". Make sure the data behind such data points is numeric. Another use case is to boost newer pages (use a timestamp or the year as a data point). You can also boost all pages under a certain URL path: if everything under a particular path is especially important, boost by URL Patterns and assign a boost factor >1 to those pages.
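The effect of a data-point boost can be sketched in a few lines of JavaScript. This is purely illustrative: the actual Site Search 360 scoring model is not public, and the log-scaled multiplicative boost below is an assumption made for the sketch, not the product's formula.

```javascript
// Illustrative only: how a numeric "like" data point could lift one of
// two equally relevant pages. The boost formula is an assumption.
const results = [
  { url: '/page-a', relevance: 1.0, likes: 3 },
  { url: '/page-b', relevance: 1.0, likes: 42 },
];

// Hypothetical final score: base relevance times a boost that grows
// slowly with the number of likes.
function boosted(r) {
  return r.relevance * (1 + Math.log10(1 + r.likes));
}

results.sort((a, b) => boosted(b) - boosted(a));
console.log(results.map(r => r.url)); // '/page-b' now ranks first
```

The logarithm keeps a page with thousands of likes from completely drowning out everything else; any monotonic function of the numeric data point would work the same way in principle.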

How do I index and search over multiple sites?

Let us assume you run three separate sites and want to index content from all three into one index, providing a search that finds content on all of those pages. This can be achieved with one of the following three methods.

Multiple Root URLs

Just let the crawler index multiple sites by providing multiple start URLs in the Crawler Settings. All the settings, e.g. white- and blacklisting, will be the same for all sites. You probably want to create some Content Groups to visually separate results from the different sites.


Sitemap

Create a sitemap that contains URLs from all the sites you want to index. Our crawler will fetch every page and add it to your global index. After creating the sitemap, enter it under Sitemap Settings.


API

You can simply add pages from any of your sites via the API using your API key. You can either index by URL or send a JSON object with the indexable content.

How do I avoid duplicate indexed content?

If you find duplicate content in your index, there are two ways to resolve it. In both cases, please clear the index before re-indexing to remove the duplicates.

Canonical Tags

Use a canonical tag (read more here). Let us assume you have two URLs with the same content. You don't want them to be indexed twice, so on each page you would add a canonical tag pointing to the preferred URL to indicate that both refer to the same page:
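A canonical tag goes into the `<head>` of each duplicate page. This is the standard HTML mechanism; the URL below is a placeholder, since the actual duplicate URLs are not shown in this text:

```html
<!-- On every duplicate page, point to the one preferred URL: -->
<link rel="canonical" href="https://www.example.com/preferred-page">
```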

Ignore URL parameters

Let us assume you have two URLs with the same content that differ only in their query parameters. Since the URLs differ, they would become separate entries in the index. You can avoid that by removing URL parameters that have no influence on the page's content: go to the Crawler Settings and turn ON "Ignore Query Parameters".
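The effect of that setting can be sketched as follows: two URLs that differ only in their query string collapse to one index entry. This is a conceptual illustration using the standard URL API, not the crawler's actual code.

```javascript
// Conceptual sketch of "Ignore Query Parameters": strip the query
// string so URL variants map to the same index key.
function indexKey(urlString) {
  const url = new URL(urlString);
  url.search = ''; // drop all query parameters
  return url.toString();
}

console.log(indexKey('https://example.com/page?utm_source=newsletter'));
console.log(indexKey('https://example.com/page?sessionid=123'));
// both print: https://example.com/page
```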

How do I switch from search results in a layer to embedded results?

Site Search 360 can show results in a layover (default) when the search is triggered, or embed them seamlessly into your page. Just edit your ss360Config to choose one of the two options.
  • For the layover: searchResults: undefined
  • For embedded results: searchResults: {'contentBlock':'CSS-SELECTOR'}, where CSS-SELECTOR is one CSS selector, or a comma-separated list of selectors, pointing to the DOM element(s) where the results should be embedded. For example, if your main content block is matched by div#main and that is where search results should appear, you would write searchResults: {'contentBlock':'div#main'}
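Putting both options side by side, a minimal ss360Config sketch. Only searchResults and its contentBlock key come from the text above; the search box selector is an example placeholder:

```javascript
// Sketch of the two result-display options; selectors are examples.
var ss360Config = {
  searchBoxSelector: '#searchBox', // example selector for your search input

  // Option 1 – layover (default): leave searchResults undefined.
  // searchResults: undefined,

  // Option 2 – embedded results:
  searchResults: { contentBlock: 'div#main' }
};
```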

How to show embedded results in a new page?

If you choose to embed the search results, they will be embedded in the page where the search is triggered. That is fast and avoids reloading the site. However, if you have a certain search result page that you want to use instead, you can redirect the user to that page by adding the following to your ss360Config.
If you use v7 of the Site Search 360 JavaScript, you can also use a different approach in your ss360Config object as follows:
Of course you have to replace /search.html with whatever the path to your search result page is and CSS-SELECTOR with a selector pointing to the area where the search results should be embedded.

How do I exclude certain pages from search results?

So, you'd like to prevent crawlers from indexing a part of your website.

First things first: this can be configured globally, i.e. for all crawlers at once (Google, Bing, Site Search 360, etc.), by implementing noindex either as a meta tag or as an HTTP response header.

The robots name means that the directive applies to all crawlers and instructs search engines to remove the page from search results. If you only want to exclude pages from Site Search 360 but keep them for Google, use ss360 instead of robots:
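Both variants as meta tags. The robots form is the web standard; the ss360 name is taken from the paragraph above:

```html
<!-- Exclude the page from all search engines, Site Search 360 included: -->
<meta name="robots" content="noindex">

<!-- Exclude the page from Site Search 360 only: -->
<meta name="ss360" content="noindex">
```

The same directive can alternatively be sent as an X-Robots-Tag: noindex HTTP response header, which is useful for non-HTML files such as PDFs.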

Important! Make sure you're not blocking the same pages in your robots.txt file. When a page is blocked from crawling through robots.txt, your noindex simply won't be found by the crawler. In this case the page can still appear in search results, for example if other pages link to it.


If you're not so sure about using meta tags or you want to only adjust indexing for Site Search 360, you can easily do this under the Crawler Settings. You have a few options here:

    • Whitelist URL Patterns:

      For example, you want to enable site search exclusively on your blog. You can simply whitelist /blog/ and our crawler won't index anything except for URLs containing /blog/. This can also be useful for multi-language websites:

      whitelist URL

      In this case only pages containing /fr/ will appear in the index and therefore in the search results.

    • Blacklist URL patterns:

      For example, you want our crawler to ignore a certain type of file or an entire section of your website. Go ahead and register it here:

      blacklist URL

      Note: blacklisting has a higher priority than whitelisting. If a URL matches patterns from both lists, the whitelist pattern is ignored and the page is not indexed.

    • No-index URL patterns:

      This field acts essentially the same as a noindex,follow robots meta tag: it tells the crawler not to record the information on the page but still to follow the links going out of it.

      For example, you want to keep a particular page out of search results. This can be your homepage or a special promotional page that you only communicate to specific customers via a direct link (and you don't want it to be found otherwise):

      noindex URL

      Note the $ sign: it indicates where the matching pattern should stop. In this case, URLs such as /promo/product1 or www.homepage.com/store will still be followed and indexed.

To access these settings, simply go under Indexing Control > Crawler Settings and scroll down the page.

How to index my website with Site Search 360 while keeping it invisible for Google?

When you work on a new website and don't want Google to pick it up yet, you might be using a global noindex directive like this:
which essentially prevents all crawlers, including Site Search 360, from indexing your pages. To solve this, you simply need to address our crawler with the following tag:
In this case your website will still be hidden from Google while allowing Site Search 360 to work at its full capacity.
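A sketch of the two tags described above. The first is the standard way to block all crawlers; the exact directive for whitelisting the SS360 crawler is not shown in this text, so the second line is an assumption based on the ss360 meta name used elsewhere in this document:

```html
<!-- Blocks ALL crawlers, Site Search 360 included: -->
<meta name="robots" content="noindex">

<!-- Assumption: explicitly address the SS360 crawler so it may still index the page: -->
<meta name="ss360" content="index, follow">
```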

Can I use Site Search 360 with WordPress?

Definitely! As long as you can add our JavaScript code to your site, the search will work with any CMS like Joomla, Magento, HubSpot, Drupal, etc. But for an even easier integration with WordPress, we have developed a plugin.

Simply go to WP plugins and look for Site Search 360 (by SEMKNOX GmbH):

Site Search 360 WordPress plugin

Here's a more detailed guide made by one of our users showing how to configure the Site Search 360 plugin in WordPress.

Can I use Site Search 360 with Cloudflare?

Yes, you can! We have a Cloudflare app that you can simply enable in your Cloudflare account. There are fewer configuration options than if you choose to insert the JavaScript by yourself but the search integration is even faster via the app.

How do I prevent logging and tracking for certain users?

You might have your own team using your website's search often and don't want these searches to skew your logs. You can simply set a cookie in your browser for those users which prevents logging of their queries. To do so, simply open your browser console (F12 in Chrome and Firefox) and write document.cookie = "ss360-tracking=0; expires=Sun, 14 Jun 2020 10:28:31 GMT; path=/";. Of course you can change path and expiration date depending on your needs. You can also block IPs from within the Control Panel under IP Blacklisting if the cookie approach does not work for you.

How does the SS360 crawler work?

The SS360 crawler visits your "Root URLs" (typically the homepage) and then follows all links that point to other pages within your site. It will not follow external links, but you can configure the crawler to follow links to subdomains of your site: the crawler on domain.com can, for example, also crawl blog.domain.com if you wish. The SS360 crawler cannot index content that is dynamically loaded via JavaScript; you can, however, use the API to index such content. If you are blocking access for certain IPs but want the SS360 crawler to have access, please whitelist the following IPs in your firewall:
You can also look at the User Agent in the HTTP header. Our crawler identifies itself with this user agent string:

How do I index filters if I'm not using the API?

Our crawler can also index filter settings from the pages themselves. Just write the JSON array into an invisible element on your page and give it the id ss360IndexFilters. For example, if you want a page to get a certain set of filters, you could add a content block like this to your page:
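A sketch of such a content block. The element id ss360IndexFilters comes from the text; the JSON structure inside (filter names with value lists) is an assumed shape for illustration, not the documented schema:

```html
<!-- Invisible element holding the filter JSON for the crawler to pick up.
     The array format shown here is an assumption. -->
<div id="ss360IndexFilters" style="display:none">
  [{"name": "color", "values": ["red", "blue"]}]
</div>
```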

How can I add promotions or custom content to the search results?

You can add your custom HTML content anywhere in the search results for any query you like. For example, if you want your users to see a banner promotion when they search for "food" you would follow this process:
  1. Go to "Query Mappings"
  2. Type the query for which you would like to add your custom content, e.g. "food".
  3. Decide whether the query must match exactly, only contain the term, or even match a regular expression.
  4. Choose the tab "Order Results" and press "Add Custom Result".
  5. Edit the newly created custom search result by writing any HTML you want the user to see.
  6. Don't forget to save your mapping. You can edit or delete that mapping later.

What search operators can I use with Site Search 360?

Search operators are special characters that you can add to your query to refine the search results. We currently support the two most frequently used operators (based on our research after analyzing 10 million queries):
  1. Quotes to search by phrase: put your query in quotes (" ") for an exact match. Example: "bill gates". In this case no pages or documents that mention something like "you have to pay this bill to open the gates" would come up.
  2. Negation to exclude a term. Put a minus (-) right before the word that you want to remove from search results. Example: bill -gates. In this case all pages or documents mentioning bill and NOT mentioning gates would be found.
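Conceptually, the two operators can be read out of a raw query as follows. This is an illustrative sketch, not Site Search 360's actual parser; the function name parseQuery is made up for the example.

```javascript
// Split a raw query into exact phrases (quotes), excluded terms
// (minus prefix), and plain required terms. Illustrative only.
function parseQuery(raw) {
  const phrases = [...raw.matchAll(/"([^"]+)"/g)].map(m => m[1]);
  const rest = raw.replace(/"[^"]+"/g, ' ');
  const terms = rest.split(/\s+/).filter(Boolean);
  return {
    phrases,                                                        // exact-match phrases
    excluded: terms.filter(t => t.startsWith('-')).map(t => t.slice(1)),
    required: terms.filter(t => !t.startsWith('-')),
  };
}

console.log(parseQuery('bill -gates'));
// { phrases: [], excluded: ['gates'], required: ['bill'] }
```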

What are XPaths?

First: you can find a very detailed post about how to use XPaths with Site Search 360 in Working with XPaths.

XPaths are expressions that allow you to identify elements on your web page. For example, the XPath "//img" selects all images. If you are not used to XPath expressions but know CSS selectors, you can look at a very simple conversion table here.

Here is a list of potentially useful XPaths that you could modify and use for your purposes:

XPath Description
//h1 Selects all your h1 elements.
//div[@id="main"] Selects the div element with the id "main". This can be useful if you want to only index text within a certain element of the page and avoid indexing text from footers and sidebars.
//p[contains(@class,"notes")] Selects the p elements that have a class called "notes".
//img[contains(@class,"main-image")]//@src Selects the src attribute of all image elements that have a class called "main-image". This path can be used if you want to tell the crawler which image to index for your page.
If you're using Chrome the "XPath Helper" extension is useful for finding the right XPaths to your elements. Follow these steps to use the extension:
  1. After installing the extension, go to your web page that you want to analyze and press ctrl+shift+x to open the XPath Helper.
  2. Now hover with your mouse over the element that you want to find the XPath for, e.g. an image on your page, and press the shift key. The XPath pointing to this element is now shown in the black XPath Helper box.
  3. In most cases you do not need the entire XPath; you can shorten it. That sometimes requires some testing, but a good indicator is an element with an id somewhere in the XPath: you can remove everything before that element and start the shortened XPath with two forward slashes.
  4. Copy your XPath in the control panel and test it again there to see whether the crawler can use the XPath to find the exact content.
Check out this process in detail by reading Working with XPaths.

We're an agency, how can we administer multiple accounts?

For users that want to administer multiple accounts there is a simple solution. Just follow these steps:
  1. Create an account using the normal sign-up form. This will be your master account; you could, for example, use the domain of your agency. Just don't use a customer's name here.
  2. Log into that account and go to Managed Accounts. Here you can add sites of your customers. These will be fully functional accounts but they will be attached to your master account. That means you can log into them with your credentials from your master account and jump between them easily.
  3. If you want to administer more than five paid accounts we have volume discounts that you can benefit from, just contact us for more information.

How do I implement pagination?

In short: you shouldn't (read here why). Site Search 360 offers a "load more" button out of the box with the JavaScript integration. To use it, just configure it in your ss360Config object.
If you still want to implement pagination you can use the API with offset and limit parameters.

Can I use multiple search boxes on one page?

Yes, you just have to adjust the searchBoxSelector in your ss360Config to point to all of your search fields. Usually, you would give your search input fields a shared CSS class.
Then the selector in your ss360Config object would look like this: searchBoxSelector: '.ss360SearchBox'. See an example here: Multiple Search Boxes on one Page
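For example, two search inputs sharing one class. The markup is a sketch; only the class name .ss360SearchBox comes from the selector above:

```html
<!-- Header search box -->
<input type="search" class="ss360SearchBox" placeholder="Search…">

<!-- Footer search box -->
<input type="search" class="ss360SearchBox" placeholder="Search…">
```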

Can I show the search in Google's Sitelinks Searchbox?

Absolutely, and you should. It allows your users to quickly search your website without landing on it. Here's how it looks:

Google Sitelinks search box (example: GitHub)

Please refer to Google's guidelines for more detail.

To enable this feature with Site Search 360, just add the following script to your home page. Next time Google crawls your site it will interpret the script and show the sitelinks searchbox.

  1. Make sure you change "https://example.com/" to your website URL and modify the "target" parameter to match your search URL pattern. E.g. https://site.com?ss360Query={search_term_string}.
  2. Pay attention to the search query parameter in your ss360Config object: it should have the same name as in the target URL. By default (v8 and up of our script) you have searchQueryParamName: 'ss360Query'.
<script type="application/ld+json">
{
  "@context": "http://schema.org",
  "@type": "WebSite",
  "url": "https://example.com/",
  "potentialAction": {
    "@type": "SearchAction",
    "target": "https://example.com/search?ss360Query={search_term_string}",
    "query-input": "required name=search_term_string"
  }
}
</script>
To test your configuration, replace "search_term_string" with a test query and open that URL in a web browser. For example, if your website is site.com and you want to test the query "pizza", you would browse to https://www.site.com/search?ss360Query=pizza

What are "Content Groups"?

Content groups help make your search results more accessible and, using data points, much more informative. For example, let's say you publish recipes and kitchen appliance reviews on your site. In that case you could make recipes and reviews your content groups. If a user types "crock pot", the results are now grouped into recipes that you make in a crock pot and reviews of crock pots on your site. Additionally, you can show the data point "calories" for recipes and "price" for product reviews right in the search results.

Content groups are exclusive, i.e. one page can only be in one content group. If a page does not match any of the content groups, it is put in the "Other" content group. You can rename or ignore that content group using the parameters otherContentGroupName and ignoreOtherContentGroup in your ss360Config object. For a live example of how this could look, go to spoonacular and type "chicken". You can find more detailed information on the page about content groups.
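A minimal sketch of the two parameters named above; both parameter names come from the text, the values are examples:

```javascript
// Rename or hide the fallback "Other" content group.
var ss360Config = {
  otherContentGroupName: 'More results', // rename the "Other" group
  ignoreOtherContentGroup: false         // set to true to hide it entirely
};
```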

How do I change what search result snippet is shown?

You have control over where the text that is shown in the search results is coming from.

Search Snippet Source

You can choose from:

  • Content around the search query.
  • The first sentences of the indexed content.
  • A specific area of the website that you determine via XPath under Crawler Settings. For example, you could show the Meta Description of a website in the search results. Please note that you might have to change the crawler setting "Search Snippet XPath" and re-index your site before choosing this option.

How do I change what search suggestions are shown?

Once your website content is properly indexed (see How do I exclude certain pages from search results?), you can change the behavior and the precision of your search suggestions without initiating a re-index. Just go to the Search Settings where you can:
  • choose the degree of search suggestion fuzziness, i.e. whether you want your suggestions to strictly match the search query or you'd like to allow more flexibility.
  • restrict suggestions to derive only from page titles.

tweak search suggestions

When is the setting "Suggestions only on titles" useful? For example, you have a series of products called "Travelr". When your customers type travel in your search box, you may only want to prompt them with the "Travelr" product pages and ignore all the other pages that mention, let's say, travel budgeting.

Tip: By default, the first h1 header on your page (//h1) is taken as the title that comes up in search suggestions and results. However, you can point our crawler to any part of a page by adjusting the Title XPath under the Crawler Settings. Here's how to work with XPaths. When editing the XPaths, remember to run a re-index for the changes to take effect.

How do I change the font / styles / colors of suggestions and results?

The easiest way is to just change the themeColor in your ss360Config, like so: themeColor: '#00aa35'. If you want more influence on the design, keep reading. By default, ss360 brings its own stylesheet where we use, for example, font-family: sans-serif;. You can deactivate it by editing your ss360Config object and setting defaultCss: false. Here you can get a copy of our CSS to see what you can style (take out the .min for a readable version):
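Both options as a minimal ss360Config sketch. The keys themeColor and defaultCss are named above; their placement as top-level keys is taken from the examples there:

```javascript
// Quick restyle vs. full custom CSS.
var ss360Config = {
  themeColor: '#00aa35', // quick option: single accent color
  defaultCss: false      // advanced option: disable the default stylesheet and ship your own
};
```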

How can I show more information in the search suggestions?

You can choose to show static or dynamic data next to each result in your search suggestions. For example, let us assume you want to show the breadcrumb of the result directly in the search suggestions. Simply create a data point, give it a name, e.g. "breadcrumb", and then reference it in your ss360Config with extraHtml: '#breadcrumb#' (the data point name needs to be wrapped in two # characters). This example shows you how to do it, in this case with the data point "calories". Just add a CSS class to position the newly created information in your search suggestions, e.g.

How do I track the search with Google Analytics or Google Tag Manager?

The Site Search 360 JavaScript allows you to configure external tracking for Google Analytics and Google Tag Manager. All you have to do is configure the tracking object in your ss360Config object: set the provider you want, and optionally react to tracking events using the searchCallback.
This tracking adds ?ss360Query={query} to the search result pages, which can then be viewed and analyzed in Google Analytics. To see the search terms in Google Analytics, you have to specify the URL parameter for the query (ss360Query): https://support.google.com/analytics/answer/1012264?hl=en. Please note that you need at least v7 of the script.

What can I do in the Control Panel?

Watch this brief overview of the Control Panel to get a better understanding of what is possible: