Quick How Tos

How do I adjust how "fuzzy" the search is?

You can adjust how precisely a search result should match the user's search query under Search Settings.

  • Setting the fuzziness to 0 means only 100% matches should be shown.
  • Setting the fuzziness to 1 means only >=50% matches should be shown.
  • Setting the fuzziness to 2 means everything even slightly related will be shown, to avoid empty results.

Can I boost certain pages?

Imagine that more than one page is relevant to a certain query but you'd like one of them to always rank a bit higher. There are a few ways to boost specific types of search results, giving them higher rankings while decreasing the importance of the others.


Boost by using data points:

For example, let us assume that your users give "likes" to your articles or products. You can create a "Popularity" data point and boost the pages that have more upvotes. To do so:

  1. Create a data point and tell the crawler where to find the information about the number of likes on the page. You can source this information from a meta tag or even a hidden HTML element on your pages. Use XPaths to point the crawler to the right element.
  2. You will see 4 checkbox options for every data point you create:
    • Show - lets you choose whether this data point is visible or hidden from your audience.
    • Single - check it to only extract one value per page; recommended for boosting.
    • Boost - check it to be able to use the data point for boosting.
    • Sort - checking this option triggers another toggle setting where you choose between ASC (ascending) or DESC (descending) order of results. This also automatically adds a dropdown filter to your search results, enabling your searchers to sort results by this value, thus overriding the default search result relevance.
  3. Now go to Search Settings and set Page boosting to "Use numeric data point" with the corresponding data point name.

Another use case: to boost newer pages, use a timestamp or the year as a data point.
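To make step 1 concrete, here is a hypothetical way a page could expose its like count. The tag name, class, and XPaths below are illustrative, not required by Site Search 360:

```html
<!-- Option A: like count in a meta tag (hypothetical name) -->
<meta name="likes" content="42">

<!-- Option B: like count in a hidden element -->
<span class="likes-count" style="display:none">42</span>
```

The corresponding Data Point XPath would then be //meta[@name="likes"]/@content for option A, or //span[contains(@class,"likes-count")] for option B.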

Boost by using URL patterns:

This essentially works as "levels". Even though you can set anything between 1 and 100, it is advisable to stick to levels in steps of 10 (10, 20, 30, etc.).

Example: you can boost your /products/ by 90, /category/ by 30, and /blog/ by 10: the higher the boosting score is, the more priority is given to the respective results.

How exactly does it work? Let's say you are boosting /blog/ by 10 and /products/ by 30. You type in a search query and you see that matching results under /products/ come up above /blog/ results, even if a blog post is a very good match to your query (e.g. has the exact same title). So boosting happens more or less independently of how well the query matches, although the query relevance does still play a role, especially for smaller boosting levels such as 1, 2 etc.


You can also downrank or "penalize" certain results by setting the value to anything between 0 and 1. This is something to experiment with, as the effect is not linear: boosting follows a logarithmic function.

Note: with URL boosting, a re-index from scratch (empty the index first) is necessary.

How do I index and search over multiple sites?

Let us assume you have the following setup:

You now want to index content from all three sites into one index and provide a search that finds content on all of those pages.

This can be easily achieved by using one of the following three methods.

Multiple Root URLs

Just let the crawler index multiple sites by providing multiple start URLs in the Crawler Settings. All the settings, e.g. whitelisting and blacklisting, will be the same for all the sites. You probably want to create some Content Groups to visually separate results from the different sites.


Sitemap

Create a sitemap that contains URLs from all sites that you want to index. Our crawler will fetch every page and add it to your global index. Go to sitemap settings to enter the sitemap after you have created it.


API

You can simply add pages from any of your sites via the API using your API key. You can either index by URL or send a JSON object with the indexable contents.
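For illustration, a JSON payload for API indexing might look like the sketch below. The field names are hypothetical; consult the Site Search 360 API documentation for the actual schema:

```json
{
  "url": "https://blog.example.com/post-1",
  "title": "Example post",
  "content": "The indexable text of the page"
}
```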

How do I avoid duplicate indexed content?

If you find duplicate content in your index, there are two options to resolve that. In either case, please clear the index before re-indexing to remove the duplicates.

Canonical Tags

Use a canonical tag (read more here). Let us assume you have two URLs with the same content:

You don't want them to be indexed twice so on each page you would add the following tag to indicate that they refer to the same URL:
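This is the standard canonical link element; the URL below is a placeholder:

```html
<!-- Placed in the <head> of both pages; the href is a placeholder URL -->
<link rel="canonical" href="https://www.example.com/page">
```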

Ignore URL parameters

Let us assume you have the two URLs with the same content:

Both URLs are different so they would be separate entries in the index. You can avoid that by removing URL parameters that have no influence on the page's content. To do so, go to crawler settings and turn ON "Ignore Query Parameters".
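The effect can be sketched in a few lines of JavaScript: with the setting on, the crawler effectively treats the URL with its query string stripped as the page's identity. The function below is an illustration, not the crawler's actual code:

```javascript
// Sketch: why "Ignore Query Parameters" de-duplicates the index.
// Stripping the query string maps both URLs to the same key.
function stripQueryParams(url) {
  const u = new URL(url);
  u.search = ''; // drop everything after "?"
  return u.toString();
}

const a = stripQueryParams('https://www.example.com/product?utm_source=newsletter');
const b = stripQueryParams('https://www.example.com/product');
console.log(a === b); // → true: both map to https://www.example.com/product
```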

How do I switch from search results in a layer to embedded results?

Site Search 360 allows you to show results in a layover (default) when the search is triggered or embed the results seamlessly into your page. Just edit your ss360Config to choose one of the two options.

  • For the layover: searchResults: undefined
  • For embedded results: searchResults: {'contentBlock':'CSS-SELECTOR'} where CSS-SELECTOR is one CSS selector or a comma-separated list of CSS selectors pointing to the DOM elements where the content should be embedded. For example, if your main content block is <div id="main"> and that is where search results should appear, you would write searchResults: {'contentBlock':'div#main'}
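Putting the two options together, a minimal ss360Config for embedded results could look like this sketch (siteId and selectors are placeholders; verify property names against your script version):

```javascript
window.ss360Config = {
  siteId: 'example.com',       // placeholder: your Site Search 360 site ID
  searchBox: {
    selector: '#searchBox'     // placeholder: selector of your search input
  },
  searchResults: {
    contentBlock: 'div#main'   // embed results into <div id="main">
  }
};
```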

How to show embedded results in a new page?

If you choose to embed the search results, by default they will be embedded in the page where the search is triggered. That is fast and avoids reloading the site. However, if you have a certain search result page that you want to use instead, you can adjust your ss360Config object as follows:
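The exact configuration depends on your script version; as a rough sketch (the key names under searchResults are assumptions, not confirmed API), it might look like:

```javascript
window.ss360Config = {
  // ...your existing configuration...
  searchResults: {
    url: '/search.html',          // assumption: path to your search result page
    contentBlock: 'CSS-SELECTOR'  // where on that page the results are embedded
  }
};
```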

You would have to replace /search.html with the path to your search result page and CSS-SELECTOR with a selector pointing to the area of the page where the search results should be embedded.

If you use an old version of the Site Search 360 Javascript (v6 or earlier) you can redirect the user to that page by adding the following to your ss360Config:

How to control what's indexed and shown in search results (exclude or include specific pages)?

Global search engine rules

When you work on a new website and you don't want Google to pick it up yet, you might already be using the noindex robots meta tag:
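The standard form of that tag is:

```html
<meta name="robots" content="noindex">
```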

which essentially prevents all crawlers, including Site Search 360, from indexing your pages. When a page isn't indexed, it's missing from search results.

If you want to keep your site hidden from Google but allow Site Search 360 to index it, simply turn on the Ignore Robots Meta Tag toggle under your Crawler Settings.

If it's the other way around (keeping results for Google but removing them from your on-site search), you can add the following meta tag where you'd use ss360 instead of robots:
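Following that pattern, the Site Search 360-specific tag is:

```html
<meta name="ss360" content="noindex">
```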

Important! Make sure you're not blocking the same pages in your robots.txt file. When a page is blocked from crawling through robots.txt, your noindex tag won't be found by our crawler, which means that if other pages link to the no-indexed page, it will still appear in search results.


Site Search 360-specific indexing rules

If you want to get Site Search 360 to show or ignore specific pages, use the options described below. You can find them under Indexing Control > Crawler Settings (scroll down the page).

  • Whitelist URL Patterns:

    Restrict the crawler to a specific area of your site.

    For example, you want to limit your search to blog pages only. If you whitelist /blog/, our crawler won't index anything except for the URLs containing /blog/.

    This can also be useful for multi-language websites:


    Note: make sure that your root URL matches your whitelisting pattern (e.g. https://website.com/blog/ or https://website.com/fr/). If the root URL itself doesn't contain the whitelist pattern, it will be blacklisted -> nothing can be indexed -> no search results.

  • Blacklist URL patterns:

    Tell the crawler to completely ignore specific areas of your site.

    For example, you want our crawler to ignore certain files or skip an entire section of your website. Go ahead and put one pattern per line here:


    Note: blacklisting has priority over whitelisting. If there's a conflict, the whitelist patterns will be ignored.

  • No-index URL patterns:

    This setting works the same as the noindex,follow robots meta tag: the crawler visits the page and follows all the outgoing links but doesn't include the no-indexed page in the results. It is different from blacklisting, where the crawler fully ignores the page without checking it for other "useful" links.

    For example, URLs that are important for the search are linked from pages you want to exclude (e.g. your homepage, product listings, FAQ pages). Add them as no-index patterns:


    Note the $ sign: it indicates where the matching pattern should stop. In this case, URLs such as /promo/product1 or www.homepage.com/store will still be followed and indexed.

  • Blacklist XPaths:

    Sometimes you need to no-index pages that do not share any specific URL patterns. Instead of adding them one by one to no-index patterns (see above), check if you can blacklist them based on a specific CSS class or id.

    For example, you have category or product listing pages that you wish to hide from search results. If those pages have a distinct element which isn't used elsewhere, e.g. , add the following Blacklist XPath: .

    In this case the crawler would still follow all outgoing links, so your product pages get indexed and shown in the results. Learn how to use XPaths or reach out to us if you need any help.
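For example (the element and XPath below are hypothetical, purely for illustration):

```text
Distinct element on listing pages:  <div class="category-listing">...</div>
Matching Blacklist XPath:           //div[contains(@class,"category-listing")]
```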

Note: using a lot of No-index URL patterns or Blacklist XPaths slows down the indexing process as the crawler needs to scan every page and check it against all the indexing rules. If you're sure that a page can be safely excluded from indexing, use Blacklist URL patterns instead.

Can I use Site Search 360 with WordPress?

Definitely! As long as you can add our JavaScript code to your site, the search will work with any CMS like Joomla, Magento, HubSpot, Drupal, etc. But for an even easier integration with WordPress, we have developed a plugin.

Simply go to the Plugins section in your WP admin and look for Site Search 360 (by SEMKNOX GmbH):


Make sure to check our detailed WordPress integration guide.

Can I use Site Search 360 with Cloudflare?

Yes, you can! We have a Cloudflare app that you can simply enable in your Cloudflare account. There are fewer configuration options than if you choose to insert the JavaScript by yourself but the search integration is even faster via the app.

Can I use Site Search 360 with Weebly?

Yes, we have developed a special app for Weebly so that you could easily add a search box and customize your search result page within the Weebly interface. You simply need to connect the Site Search 360 Weebly app to your site and drag and drop the app elements to your pages. You can refer to our Weebly integration guide for a step-by-step walkthrough.

When you connect the app, we automatically open a Site Search 360 trial account for you and start indexing your site's content. In order to check what URLs are indexed, remove unnecessary links, add quick filters (Content Groups such as "Blog", "Products", etc.) and Query Mappings, you'll need to log in to your Control Panel.

How do I prevent logging and tracking for certain users?

You might have your own team using your website's search often and don't want these searches to skew your logs. You can set a cookie in your browser for those users which prevents logging of their queries. To do so, open your browser console (F12 in Chrome and Firefox) and write document.cookie = "ss360-tracking=0; expires=Sun, 14 Jun 2020 10:28:31 GMT; path=/";. Of course, you can change the path and expiration date depending on your needs.

You can also block IPs from within the Control Panel under IP Blacklisting if the cookie approach does not work for you.

Note: when you test your search by using the search bar in your Control Panel, these test queries are not logged either.

How does the Site Search 360 crawler work? What does crawling or indexing mean? Can you index JavaScript content?

The Site Search 360 crawler visits your Root URL(s) (typically the homepage) and then follows all the links that point to other pages within your site. It will not follow external links but you can configure the crawler to follow links to subdomains (by turning the "Crawl Subdomains" setting on under the Crawler Settings) so that, for example, from your page domain.com the crawler then also goes to blog.domain.com, etc.

The Site Search 360 Crawler can index content that is dynamically loaded via JavaScript. This is currently a beta feature, please contact us to enable it for your account. You can also use the API to index JS-rendered content.

If you are blocking access for certain IPs but want the Site Search 360 crawler to have access to your site, please whitelist the following IPs in your firewall:


You can also look at the User Agent in the HTTP header. Our crawler identifies itself with this user agent string:

How do I index filters if I'm not using the API?

Our crawler can also index filter settings from the pages themselves. Just write the JSON array in an invisible element on your page and give it the id ss360IndexFilters. For example, if you want a page to get a certain set of filters, you could add a content block like this to your page:
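A sketch of such a content block (the JSON shape shown is an assumption; the key must reference the Filter-ID generated in the Control Panel):

```html
<!-- Hidden block carrying filter values; the JSON structure is illustrative -->
<div id="ss360IndexFilters" style="display:none">
  [{"key": "FILTER-ID", "values": ["Red", "Blue"]}]
</div>
```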

NOTE: Filters have to be defined in the Control Panel and referenced with the generated Filter-ID.

How can I add promotions or custom content to the search results?

You can add your custom HTML content anywhere in the search results for any query you like. For example, if you want your users to see a banner promotion when they search for food, you would follow this process:

  1. Go to Query Mappings.
  2. Type the query for which you would like to add your custom content, e.g. food.
  3. Decide whether the query must match exactly, only contain the term, or even match a regular expression.
  4. Choose the tab "Order Results" and press "Add Custom Result".
  5. Edit the newly created custom search result by writing any HTML you want the user to see.
  6. Don't forget to save your mapping. You can edit or delete that mapping later.

How to use all the Query Mapping features:

Query Mappings allow you to customize and display desired results for specific queries. This is a powerful tool that guides your site visitors in their search journey. Please refer to this detailed post on query mappings or read this short overview. With Query Mappings you can:

  • order results: just click to create a new mapping, enter your query, and drag and drop the best results to the top. You can also hide certain results from showing for this query (by clicking on the red cross).
  • rewrite a query: when a certain query already triggers the correct search result ranking and you'd like a similar query or a synonym to bring up the same exact results. E.g. you have a product called "Travelr" and want to display the same results for "Traveler", so you rewrite Traveler to Travelr.
  • redirect to URL: when you want certain keywords to directly open a dedicated landing or a promo page or save your customers some clicks, e.g. redirect contact or email to your contact information page.

Every time you create a mapping, you have to choose the Matching Type. This defines how your visitor's query compares to your mapped query. Let's say you want to customize search results for update:

  • Match means an exact, 100% match: when someone types exactly update, they will get the results that you have specified for update. The lowercase/uppercase difference is ignored.
  • Phrase matches if your query is part of a phrase, e.g. with this setting software update or last update download would bring up the results specified for update.
  • Contains matches when the searcher's query contains your query: e.g. updates would work the same as update in this case. NOTE: keep in mind that, for example, prevention would bring matching results for event because it is contained in prEVENTion.
  • Regular Expression matches when your expression is found in the search query. This is the most precise way but you need to be fluent in regex.

If you want your customized queries to be featured in search suggestions (search-as-you-type results), switch the toggle to YES. Now if someone types in a part of your query, it will be shown above other suggestions.


What search operators can I use with Site Search 360?

Search operators are special characters that you can add to your query in order to refine the search results. We currently support the 2 most frequently used operators (based on our research analyzing 10 million queries):

  1. Quotes to search by phrase: put your query in quotes (" ") for an exact match. Example: "bill gates". In this case no pages or documents that mention something like "you have to pay this bill to open the gates" would come up.
  2. Negation to exclude a term. Put a minus (-) right before the word that you want to remove from search results. Example: bill -gates. In this case all pages or documents mentioning bill and NOT mentioning gates would be found.

What are XPaths?

First: you can find a very detailed post about how to use XPaths with Site Search 360 in Working with XPaths.

XPaths are expressions that allow you to identify elements on your web page. For example, the XPath "//img" selects all images. If you are not used to XPath expressions but know CSS selectors, you can look at a very simple conversion table here.

Here is a list of potentially useful XPaths that you could modify and use for your purposes:

  • //h1 - Selects all your <h1> elements.
  • //div[@id="main"] - Selects the div element with the id "main": <div id="main"></div>. This can be useful if you want to only index text within a certain element of the page and avoid indexing text from footers and sidebars.
  • //p[contains(@class,"notes")] - Selects the p elements that have a class called "notes": <p class="something notes whatever">.
  • //img[contains(@class,"main-image")]//@src - Selects the src attribute of all image elements that have a class called "main-image": <img class="main-image" src="image.jpg" />. This path can be used if you want to tell the crawler which image to index for your page.

If you're using Chrome, the "XPath Helper" extension is useful for finding the right XPaths to your elements. Follow these steps to use the extension:

  1. After installing the extension, go to your web page that you want to analyze and press ctrl+shift+x to open the XPath Helper.
  2. Now hover with your mouse over the element you want the XPath for, e.g. an image on your page, and press the shift key. The XPath that points to this element is now shown in the black XPath Helper box.
  3. In most cases, you do not need the entire XPath but you can shorten it. That sometimes requires some testing but a good indicator is if there is an element with an id in the XPath. You can remove everything before that element and start the XPath with two forward slashes. Example: The XPath shown is , you can shorten that to .
  4. Copy your XPath into the control panel and test it again there to see whether the crawler can use the XPath to find the exact content.

Check out this process in detail in our Working with XPaths explainer video.

We're an agency, how can we administer multiple accounts?

For users that want to administer multiple accounts there is a simple solution. Just follow these steps:

  1. Create an account using the normal sign-up form. This will be your master account. You could, for example, use the domain of your agency; just don't use a customer's name here.
  2. Log into that account and go to Managed Accounts. Here you can add sites of your customers. These will be fully functional accounts but they will be attached to your master account. That means you can log into them with your credentials from your master account and jump between them easily.
  3. You can also manage permissions on the Team page by inviting your clients and colleagues to join your account and showing/hiding specific sections of the Control Panel to a particular user.

How do I implement pagination?

To put it shortly: you shouldn't (read here why). Site Search 360 offers a "load more" button out of the box. To use it, just adjust the results parameter in your ss360Config object.

If you still want to implement pagination you can use the API with offset and limit parameters.

You can now also allow your users to infinitely scroll down the search results by setting results.infiniteScroll to true. This replaces the 'Show more results' button and is only available when you don't have any content groups set up or if your content group navigation is tabbed.

Can I use multiple search boxes on one page?

Yes, you just have to adjust the searchBoxSelector in your ss360Config to point to all of your search fields. Usually, you would give your search input fields a common CSS class like so:
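For example, every search input on the page would share one class (matching the selector mentioned below):

```html
<!-- Two search inputs sharing one CSS class -->
<input type="search" class="ss360SearchBox" placeholder="Search...">
<input type="search" class="ss360SearchBox" placeholder="Search...">
```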

Then your selector in your ss360Config object would look like this searchBoxSelector: '.ss360SearchBox'. See an example here: Multiple Search Boxes on one Page.

Can I show the search in Google's Sitelinks Searchbox?

Absolutely, and you should. It allows your users to quickly search your website without landing on it first.

Please refer to Google's guidelines for more detail.

To enable this feature with Site Search 360, just add the following script to your home page. Next time Google crawls your site it will interpret the script and show the sitelinks searchbox.

  1. Make sure you change "https://example.com/" to your website URL and modify the "target" parameter to match your search URL pattern. E.g. https://site.com?ss360Query={search_term_string}.
  2. Pay attention to the search query parameter in your ss360Config object: it should have the same name as in the target URL. By default (v8 and up of our script) you have searchQueryParamName: 'ss360Query'.
<script type="application/ld+json">
{
  "@context": "http://schema.org",
  "@type": "WebSite",
  "url": "https://example.com/",
  "potentialAction": {
    "@type": "SearchAction",
    "target": "https://example.com/search?ss360Query={search_term_string}",
    "query-input": "required name=search_term_string"
  }
}
</script>

To test your configuration, replace search_term_string with a test query and open that URL in a web browser. For example, if your website is site.com and you want to test the query "pizza", you would browse to https://www.site.com/search/?ss360Query=pizza

What are Content Groups?

Content groups help make your search results more accessible and - using Data Points - much more informative.

For example, let's say you publish recipes and kitchen appliance reviews on your site. In that case you could make recipes and reviews your content groups. If a user types "crock pot" the results are now grouped into recipes that you make in a crock pot and reviews of crock pots on your site. Additionally, you can show the data point "calories" for recipes and "price" for product reviews right in the search results.

Content groups are exclusive, i.e. one page can only be in one content group. If a page does not match any of the content groups, it will be put in the "Other" (uncategorized results) content group. You can rename or hide that content group by using the parameters otherContentGroupName and ignoreOtherContentGroup in your ss360Config object on your site.
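Both parameters go into your ss360Config; their exact placement may vary by script version, but as a sketch:

```javascript
window.ss360Config = {
  // ...your existing configuration...
  otherContentGroupName: 'More results',  // rename the "Other" content group
  ignoreOtherContentGroup: false          // set to true to hide uncategorized results
};
```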

For a live example on how this could look go to spoonacular and type "chicken". For more details, check out our articles on Content Groups and Data Points.

How do I change what search result snippet is shown?

You can control where the text shown in the search results is coming from on the Search settings page.


You can choose from:

  • Content around the search query (selected by default).
  • First sentences of the indexed content.
  • A specific area of the website that you determine via XPath under Crawler Settings.
  • No snippet at all (titles and URLs will still be shown).

If you want to use Meta Descriptions in your snippets, select the option "Use content behind the Search Snippet XPath" and save. By default, we already tell the crawler to index meta descriptions with the following XPath: //meta[@name="description"]/@content

If you set a different Search Snippet XPath, you need to run a full re-index. When you add or update your meta descriptions, you also need to manually re-index your site or wait until the crawler picks up the changes on the next scheduled re-crawl.

How do I change what search suggestions are shown?

Once your website content is properly indexed (see How do I exclude certain pages from search results?), you can change the behavior and the precision of your search suggestions without initiating a re-index. Just go to the Search Settings where you can:

  • choose the degree of search suggestion fuzziness, i.e. whether you want your suggestions to strictly match the search query or you'd like to allow more flexibility.
  • restrict suggestions to only derive from page titles.

When is the setting "Suggestions only on titles" useful? For example, you have a series of products called "Travelr." When your customers type travel in your search box, you may only want to prompt them with the "Travelr" product pages and ignore all the other pages that mention, let's say, travel budgeting.

Tip: By default, the first h1 header on your page (//h1) is taken as a title that comes up in search suggestions and results. However, you can point our crawler to any part of a page by adjusting the Title XPath under the Crawler settings. Here's how to work with XPaths. When editing the XPaths, remember to run a re-index for the changes to take effect.

NOTE: search suggestions (= autocomplete, or search-as-you-type results) are generated differently from search results (the ones you see after you hit Enter or search button). That is because when you type, it's impossible to be sure whether you are done typing or not, so the engine has to search _within_ words. When you submit a search query, this indicates that you have typed in everything you wanted to type. E.g. if you type hot, showing search suggestions for hotel would make total sense, but once you press Enter, it becomes clear that you want to find pages with the word hot and not hotel-related pages.

How do I change the font / styles / colors of suggestions and results?

The easiest way is to change the themeColor in your ss360Config, like so: themeColor: '#00aa35'. If you want more influence on the design, keep reading.

First, you can use our Search Designer to not only change the colors but the entire search result layout if you'd like. The corresponding code will be auto-generated for you at the bottom of the Designer page. If you want to make more specific changes, you can add some inline CSS by modifying the style.additionalCss parameter.

By default Site Search 360 brings its own stylesheet where we use, for example: font-family: sans-serif;. You can simply deactivate it by editing the style parameter of ss360Config object and setting defaultCss: false.
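For example, a style block disabling the default stylesheet and adding a small override (the CSS itself is a placeholder):

```javascript
window.ss360Config = {
  // ...your existing configuration...
  style: {
    defaultCss: false,  // drop the default Site Search 360 stylesheet
    additionalCss: '.ss360-custom { font-family: inherit; }'  // placeholder inline CSS
  }
};
```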

Finally, here's a copy of our CSS so you know what can be styled (take out the .min for a readable version):

How can I show more information in the search suggestions?

You can choose to show static or dynamic data in your search suggestion next to each result. For example, let us assume you want to show the breadcrumb of the result directly in the search suggestions. Simply create a data point, give it a name, e.g. "breadcrumb", and then reference it in your ss360Config with extraHtml: '#breadcrumb#' (the data point name needs to be wrapped in two #).

This example shows you how to do it, in this case with the data point "calories". Just add a CSS class to position the newly created information in your search suggestions, e.g.

How do I track the search with Google Analytics or Google Tag Manager?

The Site Search 360 JavaScript allows you to set up external tracking for Google Analytics and Google Tag Manager. All you have to do is configure the tracking object in your ss360Config: set the provider that you want and, optionally, react to tracking events using the searchCallback.
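A hedged sketch of that tracking object (structure inferred from the description above; verify key names against the current documentation):

```javascript
window.ss360Config = {
  // ...your existing configuration...
  tracking: {
    providers: ['GA'],  // or 'GTM' for Google Tag Manager
    searchCallback: function (query) {
      // optional: react to a tracked search, e.g. forward it elsewhere
      console.log('tracked search:', query);
    }
  }
};
```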

This tracking will add ?ss360Query={query} to the search result pages, which can be viewed and analyzed in Google Analytics. To see the search terms in Google Analytics, you have to specify the URL parameter for the query (ss360Query): https://support.google.com/analytics/answer/1012264?hl=en. Please note that you need at least v7 of the script.

How to integrate Site Search 360 with Google Tag Manager?

  1. Head over to Google Tag Manager, log in to your account, and add a New Tag.
  2. In the Tag Configuration select Custom HTML as tag type.
  3. If you're using v12 of our search script, add the code snippet below to your tag. Note that you need to replace 'mysiteid.com' with your site ID (which can be found under Profile) and specify other parameters (searchBox, results, etc.) of your ss360Config (the easiest way to generate them is to use our Search Designer). Everything else, starting from var e=, is ready for a copy-paste:
  4. <script type="text/javascript">
     window.ss360Config = {
        siteId: 'mysiteid.com',
        searchBox: {...}
     };
     var e=document.createElement("script");
     e.setAttribute("nomodule", "nomodule");
     ...
     </script>
  5. If you're using an earlier script version, consider upgrading first.
  6. Now set the Trigger to All pages.
  7. Finally, hit Save in the upper right and publish your changes.

How do I set up OpenSearch?

Setting up OpenSearch and enabling users to install your webpage as a native search engine in the browser (or search directly from the address bar in Google Chrome) is pretty straightforward.

First you need to upload an opensearch.xml file with the following content to your server:

<OpenSearchDescription xmlns="http://a9.com/-/spec/opensearch/1.1/" xmlns:moz="http://www.mozilla.org/2006/browser/search/">
  <ShortName>SHORT_NAME</ShortName>
  <Description>DESCRIPTION</Description>
  <Tags>TAGS</Tags>
  <Image height="16" width="16" type="image/x-icon">https://samplesite.com/favicon.ico</Image>
  <Url type="text/html" template="https://samplesite.com?ss360Query={searchTerms}&amp;ref=opensearch"/>
  <moz:SearchForm>https://samplesite.com</moz:SearchForm>
</OpenSearchDescription>

Make sure to replace the following:

  • SHORT_NAME with the name of your website (16 or fewer characters)
  • DESCRIPTION with a brief description of your search engine - what users can search for
  • TAGS with a few tags/keywords describing your search engine (web page), separated by commas
  • https://samplesite.com/favicon.ico with a link to an icon (e.g. favicon)
  • https://samplesite.com with a link to your webpage (where the search results are located, or homepage if you are using the layover layout)
  • ss360Query with the name of your search query parameter (if you've adjusted the default one)

And finally, you'll need to reference the uploaded file in your HTML templates (e.g. on your homepage, or on your search result page). To do this, just add the following meta tag to the <head></head> part of the page (update the content of the href attribute if opensearch.xml isn't placed in the root directory, and replace TITLE with the name of your website):

<link rel="search" type="application/opensearchdescription+xml" href="/opensearch.xml" title="TITLE">

What can I do in the Control Panel?

Watch this brief overview of the Control Panel to get a better understanding of what is possible: