Quick How-Tos

Indexing settings

Indexing rules are directly responsible for the quality of your search results and can be set under the Data Import section. Whenever you edit these settings, you need to re-index all configured sources for the changes to be applied. We also update your index automatically; the re-crawl/re-indexing frequency depends on your plan.

What is indexing and how does the Site Search 360 crawler work?

A crawler, or spider, is a type of bot that browses your website and builds the search index that powers your search. A search index is a list of pages and documents that can be shown as search results in response to a query that your site visitor types into the search box on your site.

The exact crawler behavior can be configured with "Website Crawling" or "Sitemap Indexing", or both at the same time.

Sitemap Indexing is the preferred indexing method. This means that, if we can detect a valid sitemap XML file for the domain you provided at registration, the Site Search 360 crawler will go to that sitemap - typically found at https://www.yoursite.com/sitemap.xml or https://www.yoursite.com/sitemap-index.xml - and pick up the website URLs listed there.

Note: The sitemap XML file should be formatted correctly for the crawler to process it. Check these guidelines.

If we cannot detect a valid sitemap for your domain, we automatically enable the Website Crawling method: the Site Search 360 crawler visits your root URL(s) - typically the homepage - and follows outgoing links that point to other pages within your site or within the domains you've specified as the starting points for the crawler.

Indexing means adding the pages discovered by the crawler to a search index that is unique for every Site Search 360 project. Every project is referenced by a unique ID, displayed under Account, which is essential if you integrate the search manually.

Your Index log allows you to look up any URL and check if it's indexed.

Tip: If you notice that some search results are missing, the first thing to check is whether the missing URL is indexed. You can also try re-indexing it and see if it triggers any errors. With Sitemap Indexing, make sure the missing URL is included in your sitemap.

General principles:

  • The crawler does NOT go to external websites including Facebook, Twitter, LinkedIn, etc., but we do crawl your subdomains by default. For example, we would pick up the URLs from https://blog.domain.com if your start URL is https://domain.com. Turn the "Crawl Subdomains" setting OFF under Website Crawling if you'd like to exclude pages from your subdomains. You can also blacklist unwanted URL patterns.

  • The crawler always checks whether there are any rules preventing link discovery. These rules can be set for all search engines (e.g. with the robots meta tag) or applied only to your internal search results, in which case you need to specify them under your Site Search 360 settings. Check out how.

If you are blocking access for certain IPs but want the Site Search 360 crawler to have access to your site, please whitelist the following IP addresses in your firewall:

  • 88.99.218.202

  • 88.99.149.30

  • 88.99.162.232

  • 149.56.240.229

  • 51.79.176.191

  • 51.222.153.207

You can also look at the User Agent in the HTTP header. Our crawler identifies itself with this user agent string:

Mozilla/5.0 (compatible; SiteSearch360/1.0; +https://sitesearch360.com/)

How do I index and search over multiple sites?

Let's assume you have the following setup: three separate sites, e.g. a main site, a blog on its own domain, and a documentation portal.

You now want to index content from all three sites into one index and provide a search that finds content on all of those pages.

This can be easily achieved by using one of the following three methods, or a combination of them:

  1. Create a sitemap that contains URLs from all the sites you want to index, or submit multiple sitemaps, one per line. In this case, our crawler only picks up the links that are present in your sitemap(s) - see the sketch after this list.

  2. Let the crawler index multiple sites by providing multiple start URLs in Website Crawling. All the other settings, e.g. white- and blacklisting, will be applied to all the specified domains.

  3. You can also add pages from any of your sites via the API using your API key (available for Holmes plan or higher). You can either index by URL or send a JSON object with the indexable content.
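For method 1, a minimal combined sitemap could look like this (the URLs are hypothetical placeholders for your own pages):

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://www.mainsite.com/about</loc></url>
  <url><loc>https://blog.mainsite.com/first-post</loc></url>
  <url><loc>https://docs.example-docs.com/getting-started</loc></url>
</urlset>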

Tip: consider setting up Result Groups to segment results from different sites or site sections. By default, content groups will be shown as tabs.

How do I set up sitemap indexing or root URL crawling?

  1. Go to Website Crawling or to Sitemap Indexing and provide your start URL(s) or sitemap URL(s) in the respective fields.

    With sitemaps, you can press "Test Sitemap" to make sure your sitemaps are valid and ready for indexing. This check also shows you how many URLs are found in your sitemap(s).

    With Website Crawling, the only way to check the number of indexed pages and documents is to wait until a full site crawl is complete.

  2. Switch on the "Enable and Re-Index Automatically" toggle under the preferred crawling method. Hint: you can use both website crawling and sitemap indexing simultaneously.

  3. Save your changes and re-index your site. When updating start page URLs or sitemaps, we recommend emptying the index first (press "Empty Entire Index") to rebuild the search index from scratch.

Note: The sitemap XML file should be formatted correctly for the crawler to process it. Check these guidelines.

How do I index secure or password-protected content?

If you have private content that you’d like to include in the search results, you need to authenticate our crawler so we’re able to access the secure pages. There are a few options to add search to password-protected websites:

  • Go to Website Crawling > Advanced Settings.

    • If you use HTTP Basic Authentication, simply fill out a username and a password.

    • If you have a custom login page, use the Custom Login Screen Settings instead (read below for the instructions).

    • You can also set a cookie to authenticate our crawler.

  • Whitelist our crawler's IP addresses for it to access all pages without login:

    • 88.99.218.202

    • 88.99.149.30

    • 88.99.162.232

    • 149.56.240.229

    • 51.79.176.191

    • 51.222.153.207

  • Provide a special sitemap.xml with deep links to the hidden content

  • Detect our crawler with the following User Agent string in the HTTP header:

Mozilla/5.0 (compatible; SiteSearch360/1.0; +https://sitesearch360.com/)

How do I crawl content behind a custom login page?

  1. Go to Website Crawling > Advanced Settings > Custom Login Screen Settings. Activate the toggle.

  2. Provide the URL of your login page, e.g. https://yoursite.com/login

  3. Provide the login form XPath:

    On your login page, right-click the login form element, press Inspect, and find its id in the markup. For example, you can see something like:

    <form name="loginform" id="loginform" action="https://yoursite.com/login.php" method="post">

    So you'd take id="loginform" and address it with the following XPath: //form[@id="loginform"]

  4. Define the authentication parameter names and map them with the credentials for the crawler to access the content.

    Let's find out what parameter name is used for your login field first. Right-click the field and press Inspect. For example, you'll have:

    <input type="text" name="log" id="user_login" class="input">

    So you’d take log and use it as Parameter Name. The login (username, email, etc.) would be the Parameter Value. Click Add and repeat the same process for the password field.

  5. Save and go to the Index section where you can test your setup on a single URL and re-index the entire site to add the password-protected pages to your search results.

Some login screens have a single field, usually for the password (e.g. in Weebly), in which case you'd only need one parameter name-value pair.

How do I index JavaScript content?

The Site Search 360 crawler can index content that is dynamically loaded via JavaScript. To enable JS crawling, activate the respective toggle under Website Crawling > Advanced Settings, and re-index your site.

Note: JS crawling is an add-on feature and it isn't enabled for free trial accounts by default. Please reach out if you need to test it before signing up for a paid plan.

JavaScript crawling takes more time and resources than regular crawling. If there are no search results, or some important information seems missing unless you activate the JavaScript Crawling feature, make sure to add it to your Custom Plan options to be able to use it after your trial period expires.

Alternatively, you can push your content via our API.

How do I avoid duplicate indexed content?

If you find duplicate content in your index, there are a few options under Website Crawling and Sitemap Indexing to help you resolve that.

For your changes to take effect you'll need to re-index your site. We recommend clearing the index (Index -> Empty Entire Index) first to make sure the duplicates are removed.

Use Canonical URL

Canonical tags are a great strategy to avoid duplicate results not only for your internal site search but also in Google and other global search engines. Learn more about the changes required on your side.

So let's assume you have 3 distinct URLs serving exactly the same content.

You don't want the same result to appear 3 times, so you would add the following tag to the duplicate pages to indicate that they refer to the same 'master' URL:

<link rel="canonical" href="http://mysite.com/page1" />

Once this is set up correctly on your site, turn on the "Use Canonical URL" toggle and re-index your site.

Ignore Query Parameters

Let's assume you have two URLs with the same content - for example, https://mysite.com/shoes and https://mysite.com/shoes?utm_source=newsletter.

Even though these URLs refer to the same page, they are different for the crawler and would appear as separate entries in the index. You can avoid that by removing URL parameters that have no influence over the content of the page. To do so, turn ON "Ignore Query Parameters".

Note: The setting cannot be applied safely if you use query parameters:

  • For pagination (?p=1, ?p=2 or ?page=1, ?page=2 etc.)

  • Not only as a sorting method (?category=news), but also to identify pages (?id=1, ?id=2, etc.)

In these cases, ignoring all query parameters might prevent our crawler from picking up relevant pages and documents. We recommend the following strategies instead:

  • Submit a sitemap with 'clean' URLs and switch from Website Crawling to Sitemap Indexing, which is a faster indexing method and usually produces cleaner results.

  • Add pagination to No-Index Patterns (e.g., \?p=) and blacklist other query parameter patterns under Blacklist URL Patterns. Here's how to do it.

Lowercase All URLs

Before turning the setting ON, make sure your server is not case-sensitive, i.e. that URLs differing only in letter case (e.g. /About and /about) display the same page.

Remove Trailing Slashes

Only turn this setting ON if the URLs with and without the slash at the end (e.g. https://mysite.com/blog/ and https://mysite.com/blog) display the same page.

What is the 'noindex robots meta tag' and how does it affect my search results?

When you don't want Google to pick up specific pages or your entire site (e.g. when it's still in development), you might already be using the noindex robots meta tag:

<meta name='robots' content='noindex,follow' />

If you want to keep your site pages hidden from Google while allowing Site Search 360 to index them, simply turn on the Ignore Robots Meta Tag toggle under Website Crawling and Sitemap Indexing.

If it's the other way round, i.e. you want to keep the pages visible in Google but remove them from your on-site search results, use the blacklisting or no-indexing fields.

Alternatively, you can add a meta tag to the selected pages and use ss360 instead of robots:

<meta name="ss360" content="noindex" />

Important! Make sure you're not blocking the same pages in your robots.txt file. When a page is blocked from crawling through robots.txt, your noindex tag won't be picked up by our crawler, which means that, if other pages link out to the no-indexed page, it will still appear in search results.

If you want Site Search 360 to show or ignore specific pages in your search results, use whitelisting, blacklisting, or no-indexing options as described below. You can find them under Website Crawling and Sitemap Indexing.

General rules:

  1. URL and XPath patterns are interpreted as regular expressions so remember to put a backslash (\) before special characters such as []\^$.|?*+(){}. For example, to match URLs containing ?print=true, enter \?print=true.

  2. Important! When you modify Crawler Settings, remember to go to the Index section and press "Re-index Entire Site" for the changes to take effect.

How do I whitelist, blacklist, and no-index URLs to control which pages and documents are shown in search results?

If you want to remove specific pages or documents from your site search results (without deleting them from your website), you can blacklist or no-index unwanted URLs and URL patterns, or whitelist a specific subdomain or folder on your site.

You can update these features under both Website Crawling and Sitemap Indexing.

Please note that URL and XPath patterns are interpreted as regular expressions so remember to put a backslash (\) before special characters such as []\^$.|?*+(){}.

  • Blacklist URL patterns:

    Tell the crawler to completely ignore specific areas of your site. For example, you may want our crawler to ignore certain files or skip an entire section of your website. Go ahead and put one pattern per line.

    Note: blacklisting takes priority over whitelisting. If there's a conflict in your settings, the whitelisted patterns will be ignored.

  • Whitelist URL patterns:

    Restrict the crawler to a specific area of your site.

    For example, you want to limit your search to blog pages only. If you whitelist /blog/, our crawler won't index anything except for the URLs containing /blog/. This can also be useful for multilingual sites.

    Depending on your URL structure, you could, for instance, whitelist the /fr/ pattern to limit the search to French-language pages only.

    Note: make sure that your start page URL matches your whitelisting pattern (e.g. https://website.com/blog/ or https://website.com/fr/). If the start page URL itself doesn't contain the whitelist pattern, it will be blacklisted -> nothing can be indexed -> no search results.

  • No-index URL patterns:

    This setting works like the noindex,follow robots meta tag: the crawler visits the page and follows all the outgoing links but doesn't include the no-indexed page in the results. It is different from blacklisting, where the crawler fully ignores the page without checking it for other "useful" links.

    For example, pages you want to exclude (e.g. your homepage, product listings, or FAQ pages) may link to URLs that are important for the search. Add the pages you want to exclude as no-index patterns, for example: /specific-url-to-ignore/$

    Note the $ sign: it indicates where the matching pattern should stop. In this case, URLs linked from the no-indexed page, such as /specific-url-to-ignore/product1, will still be followed, indexed, and shown in search results.

    Note: no-index URL pattern takes priority over whitelisting. If there's a conflict in your settings, the whitelisted patterns will be ignored.

  • No-index XPaths:

    Sometimes you need to no-index pages that do not share any specific URL patterns. Instead of adding them one by one to no-index URL patterns (see above), check if you can no-index them based on a specific CSS class or ID.

    For example, you have category or product listing pages that you wish to hide from the search results. If those pages have a distinct element which isn't used elsewhere, e.g. <div class="product-grid"></div>, add the following No-Index XPath: //div[@class="product-grid"]


    In this case, the crawler would go to the "product-grid" pages, then follow and index all the outgoing URLs, so your product pages will get indexed and shown in the results. Learn how to use XPaths or reach out to us if you need any help.

    Note: using a lot of No-index URL patterns or No-Index XPaths slows down the indexing process, as the crawler needs to scan every page and check it against all the indexing rules. If you're sure that a page or a directory with all outgoing links can be safely excluded from indexing, use the Blacklist URL pattern feature instead.

  • Whitelist XPaths:

    Similar to Whitelist URL patterns, it restricts the crawler to a specific area of your site.

    If you want to limit your search to some pages, but they do not share any specific URL patterns, then whitelist XPaths come in handy.

    For example, an XPath like //html[@lang="ru"] would limit the search to Russian-language pages only (assuming your pages declare the lang attribute).

    Note: whitelist XPath takes priority over no-index XPath. If there's a conflict in your settings, the no-index XPaths will be ignored.

How do I use the URL List?

The URL List can be found under Data Sources.

For one reason or another, our crawler may fail to pick up some URLs. In this case, you can add pages to the index manually.

This method is different from adding URLs one by one under Index:

  • The URL List isn't purged upon a full re-index, whereas links added manually under the Index section are deleted from the index on re-indexing.

  • The URL List allows adding links in bulk by clicking "Edit as Text", instead of pasting them individually.

How do I control what images are displayed in search results?

By default, we use a striped background to indicate a missing image. This happens when none of our default extraction rules have led to a meaningful image (e.g., we ignore icons and footer logos by default). We also do not support images that are set via the CSS background property.

There are a few ways to control which pictures are used as search result thumbnails.

1. If you want to show a specific thumbnail for every result page, you can define it by adding or updating the Open Graph (og:image) meta tag with the desired image. SEO plugins, as well as modern CMS systems, usually offer this functionality: the same images are shown when you share links from your website on social media and in messengers. You can learn more about Open Graph tags.
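The tag goes into the <head> of each page; the image URL below is a hypothetical placeholder:

<meta property="og:image" content="https://www.yoursite.com/images/product-thumbnail.jpg" />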

Once you have the meta tags set up, you'll simply need to re-index your site for the crawler to pick them up.

2. You can get rid of empty image containers by setting the placeholderImage parameter to null. It goes under results in your ss360Config code:

var ss360Config = {
...
    results: {
        placeholderImage: null
    }
};

3. You can also specify a custom placeholder image URL by using this instead:

results: {
placeholderImage: "https://placekitten.com/200/300"
}

4. Finally, you can fine-tune the rules the crawler uses to pick up images from your site by adjusting Image XPaths under Data Structuring -> Content Extraction. You can learn more about XPaths here.
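For example, if your product images shared a distinct class (hypothetical markup such as <img class="product-image" src="/images/p1.jpg">), an Image XPath along the lines of //img[contains(@class, "product-image")]/@src would point the crawler to them.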

If you want to display two or more images for one search result, you can set this up with Alternative Image Extraction.

You'll need to point our crawler to the alternative images displayed on your site under Content Extraction > Alternative Images XPath. Here is an example of how results can look after configuring it:

Main image:

Alternative image:

How do I fix Client Error 499?

When indexing your site's pages, we need to send HTTP requests to your web server. Every request received by the server gets a 3-digit status code in response. You can check these response codes in your Index Status table.

Client errors are the result of HTTP requests sent by a user client (i.e. a web browser or other HTTP client). Client Error 499 means that the client closed the connection before the server could answer the request (often because of a timeout) so the requested page or document could not be loaded. Re-indexing specific URLs or your entire site would usually help in this case.

This error can also occur when our crawlers are denied access to your site content by Cloudflare. Please make sure to whitelist our crawler IPs at Cloudflare (under Firewall > Tools):


Here's the list of IPs used by our crawlers:

  • 88.99.218.202

  • 88.99.149.30

  • 88.99.162.232

  • 149.56.240.229

  • 51.79.176.191

  • 51.222.153.207

You can also allow us as a User Agent at Cloudflare:

Mozilla/5.0 (compatible; SiteSearch360/1.0; +https://sitesearch360.com/)

Note: Cloudflare can be set up as part of your CMS (Content Management System - e.g. WordPress, Magento, Drupal, etc.). If you're not sure, please check with your CMS provider and ask them to whitelist the Site Search 360 crawler IPs for you.

How do I fix Client Error 403?

The 403 error means that when our crawler requests a specific page or file from your server, the server refuses to fulfill the request.

If whitelisting our crawler IP addresses and allowing us as a User Agent haven't helped, the issue may be related to your HTTP method settings.

Some Content Management Systems, e.g., Magnolia CMS, block HEAD requests by default. Try adding HEAD to the allowed methods so your documents can successfully be reached by the crawler and added to your search index.

Search settings

How do I adjust how "fuzzy" the search is?

To adjust how precisely your search results should match the search terms entered by your site visitors, go to Search Settings -> General.

You can control the relevance of your search suggestions (also known as autocomplete or search-as-you-type results) with Search Suggestion Fuzziness and the relevance of your search results (triggered by Enter/search button) with the Search Fuzziness.

There are different fuzziness levels that you can play around with and compare the outcome in the Search Preview.

  • Extremely strict:

    • All searched keywords should be present in the results (AND logic between the terms) and results should be a 100% match.

  • Strict:

    • All searched keywords should be present in the results (AND logic between the terms) but the matching percentage is a bit more forgiving (>=90%).

    • If a product's article number matches the query exactly, it will be the only product shown in the results. For example, if there are two products with article numbers "A123" and "A123-4" and the search query is "A123", only the first product will be found.

  • Default:

    • At least one of the searched keywords should be present in the results (OR logic between the terms), but it has to be a >=90% match.

    • With article number search we'll look both for full and partial matches here. For example, if there are two products with article numbers "A123" and "A123-4" and the query is "A123", both products will be found. Also, "A1234" or "123-4" will bring up the respective products.

    • Queries such as "iphone8" will be broken into "iphone 8" if nothing is found for the original query.

    • If we don't find any results after breaking up the search query, we'll try fuzzier matches.

  • More results:

    • Also uses the OR Boolean logic between the query terms but the matching conditions are more lenient (>=50%).

    • Article number search rules are the same as with the Default level.

  • Even more results:

    • OR logic between the query terms and even more lenient matches (>=40%).

    • Article numbers work the same way as with the Default level.

  • Get most results:

    • All results at least slightly related to the search query should be shown.

The optimal level differs from site to site as it is related to the type of your site content (product descriptions? blog articles? research papers?) and depends on what your users search for the most (product numbers and SKUs? long-tail search phrases?). The Popular Results table in your Dashboard can provide insight into your search data.

Regardless of the fuzziness level, the best-matching results are always shown at the top, so it's more about how many results should be returned even if they don't fully match the search query.

Please note that the difference between the levels is often unnoticeable for single-word terms, e.g. "mittens" or "payment", but becomes apparent when you test multiple-word queries, e.g. "women ski mittens" or "monthly or yearly payment".

Tip: if you want search suggestions to look for matches within the entire content of your pages and documents, and not only in the titles, make sure to have the "Suggestions only on titles" setting unchecked.

To choose the best fuzziness level for your site, we recommend searching your site from the search box and adjusting the fuzziness in Search Preview to compare.

All control panel queries are NOT tracked and NOT counted towards your search limit quotas. Also, caching is disabled so there is no delay in reflecting your changes.

How do I change which search result snippet is shown?

You can control where the text shown in the search results comes from in Search Settings -> General.


You can choose from:

  • Content around the search query (default).

  • First sentences of the indexed content.

  • A specific area of the website that you determine via XPath.

  • No snippet at all (titles and URLs will still be shown).

If you want to use Meta Descriptions in your snippets, select the option "Use content behind search snippet XPath" and save. By default, we already tell the crawler to index meta descriptions with the following XPath: //meta[@name="description"]/@content

If you set a different Search Snippet XPath, you need to run a full re-index. When you add or update your meta descriptions, you also need to manually re-index your site or wait until the crawler picks up the changes on the next scheduled re-crawl.
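For reference, that default XPath reads the content attribute of the standard meta description tag, e.g.:

<meta name="description" content="A short summary of this page that can be shown as the search snippet.">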

How do I change what search suggestions are shown?

Once your website content is properly indexed, you can change the behavior and the precision of your search suggestions without initiating a re-index. Just go to Search Settings -> General where you can:

  • Choose the degree of Search Suggestion Fuzziness, i.e. whether you want your suggestions to strictly match the search query or you'd like to allow more flexibility. More information on fuzziness levels.

  • Restrict suggestions to only derive from page titles.

  • Restrict your suggestions to a specific result group. You can add data-ss360-include-suggest or data-ss360-exclude-suggest attributes to your search box HTML markup.

    For example, you have a result group named 'Products' and you want the autocomplete to only suggest products, nothing else. You could then add the following to your search input code:

    <input type="search" data-ss360-include-suggest="['Products']">

    Or, to ignore the Uncategorized results or the 'Other' group, you would use:

    <input type="search" data-ss360-exclude-suggest="['Other']">

    If the same restrictions should apply to full search results as well, use the data-ss360-include or data-ss360-exclude attributes instead. More information on result groups.

When is the setting Suggestions only on titles useful?

For example, you have a series of products called "Travelr." When your customers type travel in your search box, you may only want to prompt them with the "Travelr" product pages and ignore all the other pages that mention, let's say, travel budgeting.

Tip: By default, the first h1 header on your page (//h1) is taken as the title that comes up in search suggestions and results. However, you can point our crawler to any part of a page by adjusting the Title XPath under Data Structuring -> Content Extraction. Here's how to work with XPaths. When editing the XPaths, remember to run a re-index for the changes to take effect.

You can enable the Suggest popular searches feature under Result Manager -> Query Suggestions (available for paid plans). All search queries are anonymously tracked and counted and the data is updated every 3 days.

The tracking works even while the feature is disabled so once you turn it on, you can test popular suggestions immediately, unless there is not enough search data (e.g. for new accounts). See it in action by typing, e.g. curry on our demo site.

Popular searches are shown when they fully or partially match the query that is typed into your search box. They appear on top of your regular suggested results.

Here are the defaults that you can change by adjusting the following parameters in your configuration code:

var ss360Config = {
        suggestions: {
            maxQuerySuggestions: 3,
            minChars: 3,
            num: 6
        }
}

Tip: check how many suggestions you already show (default max: 6, can be modified with the suggestions.num parameter) and how many popular searches you want to add (default max: 3) and make sure the total number of entries (default: 9) fits above the screen breakpoint.

Can I boost certain pages?

Imagine that more than one page is relevant to a certain query but you'd like one of them to always be ranked a bit higher. There are a few ways to boost and give higher search rankings to a specific type of your search results while decreasing the importance of the others. These options are available under Search Settings -> General.


For example, let us assume that your users give "likes" to your articles or products. You can create a "Popularity" data point and boost the pages that have more upvotes. To do so:

  1. Create a data point and tell the crawler where to find the information about the number of likes on the page. You can source this information from a meta tag or even a hidden HTML element on your pages. Use XPaths to point the crawler to the right element.

  2. You will see the following options for every numeric data point you create:

    1. Unique - check it to only extract one value per page, recommended for boosting.

    2. Boosting - check it to be able to use the data point for boosting.

  3. Now, go to Search Settings -> General and set Page boosting to "Use numeric data point" with the corresponding data point name.


You can also use data points to implement content sorting (descending/ascending).

Another use case: to boost the pages that are newer, use a timestamp or the year as a data point.

You can also boost results by URL pattern. This essentially works as "levels": even though you can set anything between 1 and 100, it is advisable to always use multiples of 10 (10, 20, 30, etc.).

Example: you can boost your /products/ by 90, /category/ by 30, and /blog/ by 10: the higher the boosting score is, the more priority is given to the respective results.

How exactly does it work? Let's say you are boosting /blog/ by 10 and /products/ by 30. You type in a search query and you see that matching results under /products/ come up above /blog/ results, even if a blog post is a very good match to your query (e.g. has the exact same title).

So boosting happens more or less independently of how well the query matches, although the query relevance does still play a role, especially for smaller boosting levels such as 1, 2 etc.


You can also downrank or "penalize" certain results by setting the value to anything between 0 and 1. This is something to play around with, as there are no linear dependencies - it is a logarithmic function.

This setting respects the <priority> tag value if it's set in your sitemap.

PDF Settings

Under Search Settings -> PDF Settings you can configure which title, content, and thumbnails should be displayed for your PDF search results.

PDF crawling is available in all paid plans, with a default maximum size of 15 MB per PDF document (this can be increased if needed).

If the crawler finds a "pdf-data-title" attribute, it overrides whatever is set in the PDF title strategies setting. For example:

<a href="link/to/pdf/file.pdf" pdf-data-title="The title of the PDF">read more</a>

How do I add custom content to my search results?

You can add your custom HTML content anywhere in the search results for any query you like. For example, if you want your users to see a banner promotion when they search for food, you would follow this process:

  1. Go to Result Manager -> Result Mappings.

  2. Type the query for which you would like to add your custom content.

  3. Decide whether the query must match exactly, contain the term, be part of a phrase, or match a regular expression.

  4. Choose the Custom Results tab and press "Add new custom result".

  5. Edit the newly created custom search result by adding title, image URL, result link, and result content (Snippet) or by writing any custom HTML you want the user to see.

  6. Drag and drop your custom results to the desired position and pin them there.

  7. Save your mapping. You can edit or delete that mapping later. You can use the Search Preview in your control panel to immediately test the mapping.

If you have an XML file with your Google Custom Search promotions, you won't have to rebuild them for Site Search 360. Use the import function at the bottom of the Result Mappings section.

Please refer to this detailed post on Result Mappings for more information.

How do I prevent logging and tracking for certain users?

You might have your own team using your website's search often and don't want these searches to skew your logs. You can simply set a cookie in those users' browsers, which prevents logging of their queries.

When you're on your site, press Ctrl+Shift+E and tracking will be disabled for that specific browser. Alternatively, you can set an ss360-tracking=0 cookie to prevent tracking.

To do so, open your browser console (F12 in Chrome and Firefox) and write document.cookie = "ss360-tracking=0; expires=Sun, 14 Jun 2022 10:28:31 GMT; path=/";.

You can of course change path and expiration date according to your needs.

You can also add IPs as "IPs that should not be logged" under Search Settings -> IP Settings if the cookie approach does not work for you. We support wildcard notation, so you can use an asterisk (*) to add IP ranges, e.g. 46.229.168.*, 46.229.*.*, or 46.*.*.*

Note: when you test your search by using the search bar in Search Preview, these test queries are not logged either.

What search operators can I use with Site Search 360?

Search operators are special characters that you can add to your query in order to refine the search results. We currently support the 2 most frequently used operators (based on our research analyzing 10 million queries):

  1. Quotes to search by phrase: put your query in quotes (" ") for an exact match. Example: "bill gates". In this case no pages or documents that mention something like "you have to pay this bill to open the gates" would come up.

  2. Negation to exclude a term. Put a minus (-) right before the word that you want to remove from search results. Example: bill -gates. In this case all pages or documents mentioning bill and NOT mentioning gates would be found.

Implementation options

How do I switch from search results in a layer to embedded results?

When the search is triggered, Site Search 360 allows you to show results in an overlay (default) or embed the results seamlessly into your page.

To embed the results, adjust the results.embedConfig parameters of your ss360Config:

var ss360Config = {
        results: {
            embedConfig: {
                contentBlock: 'CSS-SELECTOR',
                url: '/search'
            }
        }
}

where CSS-SELECTOR is one or a comma-separated list of CSS selectors to DOM elements where search results should be injected.

For example, if your main content block can be found under <div id="main"></div> and that is the page element where you want search results to be populated, you would write:

results: {embedConfig: {contentBlock: '#main'}}

How do I show embedded results on a new page?

If you choose to embed search results, by default they will be embedded in the same page where the search is triggered. That is fast and avoids reloading the site.

However, if you have a dedicated search result page that you want to open instead, you can adjust your ss360Config object as follows:

results: {embedConfig: {contentBlock: 'CSS-SELECTOR', url:'/search'}}

You would have to replace /search with the relative path to your search result page and CSS-SELECTOR with a selector pointing to the area of the page where the search results should be embedded.

How do I implement pagination?

In short: you shouldn't (read here why). Site Search 360 offers a "See more" button out of the box. To use it, just adjust the results parameter in your ss360Config object.

var ss360Config = {
...
  results: {
    // HTML for the more results button; if this is undefined, all results will be shown
    moreResultsButton: 'See more',
    // the number of new results to show each time the more results button is pressed;
    // moreResultsButton must not be undefined
    moreResultsPagingSize: 12
  }
...
}

If you still want to implement pagination you can use the API with offset and limit parameters.

You can now also allow your users to infinitely scroll down the search results by setting results.infiniteScroll to true. This replaces the "Show more results" button and is only available when you don't have any result groups set up or when your result group navigation is tabbed.
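A minimal sketch of that setting in your ss360Config:

var ss360Config = {
    results: {
        infiniteScroll: true // replaces the "Show more results" button
    }
};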

Can I use multiple search boxes on one page?

Yes, you just have to adjust the searchBox.selector in your ss360Config to point to all of your search fields.

You would usually give your search input fields the same CSS class:

<input class="ss360SearchBox" type="text" placeholder="search" />
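Then point the selector at that class - a minimal sketch, assuming the markup above:

var ss360Config = {
    searchBox: {
        selector: '.ss360SearchBox' // matches every search input with this class
    }
};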

If you have set up result groups, you can restrict every search box to trigger results from one or a few selected groups by adding data-ss360-include or data-ss360-exclude attributes to your search input markup. For example:

<input type="search" data-ss360-include="['Products','Reviews']">

Check out our demo site page using multiple search boxes for sample configuration code.

Note: Starting from JS integration code v13.3 we allow different siteIDs on the same page (multiple instances of SS360).

  • For React it can be done by adding an alias on the Site Search 360 component, for example:

<SiteSearch360 siteId=".." alias="SS360mysite1"> and <SiteSearch360 siteId=".." alias="SS360mysite2">

  • For other cases with the standard JS integration, the following option is available: keep your standard ss360Config and add a ss360Configs map that maps an alias to each additional configuration:

/* #1 */
window.ss360Config = {
    siteId: 'mysite',
    searchBox: {
        selector: '#searchBox1'
    }
};
window.ss360Configs = {
    SS360_SECONDSEARCH: {
        siteId: 'mysite2',
        searchBox: {
            selector: '#searchBox2'
        }
    }
};
  • Or you can make your ss360Config an array with multiple entries (you can also specify an alias):

/* #2 */
window.ss360Config = [{
    siteId: 'mysite',
    searchBox: {
        selector: '#searchBox1'
    }
}, {
    siteId: 'mysite2',
    searchBox: {
        selector: '#searchBox2'
    }
}];

How do I show more information in search suggestions?

You can choose to show static or dynamic data in your search suggestions next to each result. Let's assume you have some products and you want to show their category and price directly in search suggestions.

  1. Create the respective data points, for example, 'Category' and 'Price'.

  2. Reference them in your ss360Config under the suggestions.dataPoints parameter:

    suggestions: {
        dataPoints: {
            Category: {
                html: '#Category#',
                position: 1
            },
            Price: {
                html: '#Price#',
                position: 2
            }
        }
    }
  3. To see it in action, start typing e.g. granola or any other food item into the search box on this search example page. In this case, the "Calories" data point is displayed.

  4. Finally, add some CSS to fine-tune the positioning of the newly added information in your search suggestions, for example:

#unibox-suggest-box .unibox__selectable .unibox__extra {
  position: relative;
  left: 74px;
  top: 20px;
}

For more examples, you can see how to show data points in search suggestions.

Why are search suggestions different from search results?

Search suggestions (= autocomplete, or search-as-you-type results) are generated differently from full search results which are rendered after you hit Enter or the search button.

That is because when you type, it's impossible for the engine to be sure whether you are done typing or not, so we have to search within words. When you submit a search query, it indicates that you have typed in everything you wanted to type.

For example, if you type hot, showing search suggestions for hotel would make total sense, but once you press Enter, it becomes clear that you want to find pages with the word hot and not hotel-related pages.

Unlike search results, search suggestions are NOT counted against your monthly search volume.

To indicate that there are more results available, you can display a View All Results call-to-action button at the end of the search suggestion dropdown. To add a CTA, use the suggestions.viewAllLabel in your ss360Config code.
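For example, a minimal sketch (the label text is up to you):

var ss360Config = {
    suggestions: {
        viewAllLabel: 'View all results'
    }
};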

There is also a setting allowing you to trigger instant search results after every typed character and skip search suggestions altogether:

var ss360Config = {
...
    suggestions: {
        /* This setting will submit a query whenever the search field value changes */
        triggersSearch: true,
        /* After how many characters search results should be triggered */
        minChars: 3
    }
...
}

Check this demo site example to try out instant search results.

Important! With the triggersSearch setting set to true, every unfinished query will be counted against your plan's search quota, whereas our default suggestions do not take away from your search volume.

How do I add a "View Product", "Add to Cart", or "Read more" CTA button to my search results?

Call-to-action (CTA) buttons encourage users to take further action and can increase your search result click-through rate. To add one, use the results.cta parameter and customize it using the following settings:

var ss360Config = {
...
    results: {
        cta: { /* only shown if text and/or icon are defined */
            text: "View Product", /* your CTA button label */
            link: "#RESULT_URL#", /* use the #RESULT_URL# variable to open the result page when the CTA is clicked */
            icon: "/images/some-icon.png" /* a link to the icon shown inside the CTA */
        }
    }
}

It is also possible to add custom callbacks and set up different call-to-action elements for different Result Groups. Check the advanced parameter list.

If you use our Lightspeed plugin, you can choose whether clicking the CTA should directly add the product to the cart or simply open the product page. Manual configuration is not required; simply tick the respective checkboxes under CTA in the Plugin Config section.

How do I change the font, colors, and styles of suggestions and results?

The easiest way to make Site Search 360 match your website's color palette is by configuring the style in your Control Panel > Installation (a.k.a. the New Search Designer). If you're using v13 script, you can modify the hex codes for the accentColor and themeColor parameters in your ss360Config:

var ss360Config = { 
    style: {
      accentColor: "#3d8fff",
      themeColor: "#4a4f62"
   }
}

accentColor modifies the color of your Site Search 360 search button, result titles, 'See More' results, hover effects, etc. - basically, all clickable elements.

themeColor is less noticeable and used with non-interactive elements so it's usually more fitting to use a calming shade, e.g., from a grayscale color palette.

accentColor changed from blue to green

If you want to make more specific styling changes, you can add some inline CSS by modifying the style.additionalCss parameter.
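For example, a minimal sketch (the selector and rule are hypothetical - use whatever your design needs):

var ss360Config = {
    style: {
        additionalCss: '.ss360-custom-searchbutton { border-radius: 4px; }'
    }
};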

How do I override the default Site Search 360 CSS?

By default, Site Search 360 brings its own stylesheet, which uses, for example, font-family: sans-serif;.

You can simply deactivate it by editing the style parameter of your ss360Config object and setting defaultCss: false.
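A minimal sketch:

var ss360Config = {
    style: {
        defaultCss: false // disable the default Site Search 360 stylesheet
    }
};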

Here's a copy of the CSS brought by the Site Search 360 plugin so you know what can be styled (take out the .min for a readable version):

<link rel="stylesheet" type="text/css" href="https://cdn.sitesearch360.com/v13/sitesearch360-v13.min.css" />

Can I use Site Search 360 with WordPress?

Definitely! As long as you can add our JavaScript code to your site, the search will work with any CMS like Joomla, Magento, HubSpot, Drupal, etc. But for even easier integration with WordPress, we have developed a plugin.

Simply go to the Plugins section in your WP admin and look for Site Search 360 (by SEMKNOX GmbH):

install Site Search 360 WordPress plugin

Make sure to check our detailed WordPress integration guide.

Can I use Site Search 360 with Cloudflare?

Yes, you can! We have a Cloudflare app that you can simply enable in your Cloudflare account. There are fewer configuration options than if you choose to insert the JavaScript yourself, but the search integration is even faster via the app.

Can I use Site Search 360 with Weebly?

Yes, we have developed a special app for Weebly so that you could easily add a search box and customize your search result page within the Weebly interface. You simply need to connect the Site Search 360 Weebly app to your site and drag and drop the app elements to your pages. You can refer to our Weebly integration guide for a step-by-step walkthrough.

When you connect the app, we automatically open a Site Search 360 trial account for you and start indexing your site's content. In order to check what URLs are indexed, remove unnecessary links, add quick filters (Result Groups such as "Blog", "Products", etc.) and Result Mappings, you'll need to log in to your Control Panel.

Can I use Site Search 360 with Magento?

Yes, right now we have a Magento 2 extension for our SS360 ecommerce edition. It is open source and you can find it here: https://bitbucket.org/SEMKNOX/semknox-magento2-extension/src/master/

Can I add a Google Sitelinks search box?

Absolutely, and you should. It allows your users to quickly search your website without landing on it first.

Please refer to Google's guidelines for more detail.

To enable this feature with Site Search 360, just add the following script to your home page. Next time Google crawls your site it will interpret the script and show the sitelinks search box.

<script type="application/ld+json">
{
  "@context": "http://schema.org",
  "@type": "WebSite",
  "url": "https://example.com/",
  "potentialAction": {
    "@type": "SearchAction",
    "target": "https://example.com/search?ss360Query={search_term_string}",
    "query-input": "required name=search_term_string"
  }
}
</script>
  1. Make sure you change "https://example.com/" to your website URL and modify the "target" parameter to match your search URL pattern, e.g. https://site.com?ss360Query={search_term_string}.

  2. Pay attention to the search query parameter in your ss360Config object: it should have the same name as in the target URL. By default you have searchQueryParamName: 'ss360Query'.
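A sketch of that default; the exact nesting of searchQueryParamName can vary between script versions, so treat this as an assumption and check your integration code:

var ss360Config = {
    results: {
        searchQueryParamName: 'ss360Query' // must match the parameter name in the "target" URL
    }
};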

To test your configuration, replace {search_term_string} with a test query and open that URL in a web browser. For example, if your website is site.com and you want to test the query "pizza", you would browse to https://www.site.com/search/?ss360Query=pizza

How do I track the search with Google Analytics or Google Tag Manager?

The Site Search 360 JavaScript allows you to set up external tracking for Google Analytics and Google Tag Manager. All you have to do is configure the tracking object in your ss360Config object.

You just have to set the provider that you want and can optionally react to tracking events using the searchCallback.

var ss360Config = {
   ...
   tracking: { 
      // how to track, supported values: 'GA' (Google Analytics), 'GTM' (Google Tag Manager)
      providers: ['GA'], 
      // callback before SERP is reported, SERP events aren't reported if this returns false
      searchCallback: function(query) {
         // do custom things here
         return true;
      }
   }
   ...
};

This tracking will add ?ss360Query={query} to the search result pages, which can be viewed and analyzed in Google Analytics. To see the search terms in Google Analytics, you have to specify the URL parameter for the query (ss360Query): https://support.google.com/analytics/answer/1012264?hl=en. Please note that you need at least v7 of the script.

How do I integrate Site Search 360 with Google Tag Manager?

  1. Head over to Google Tag Manager, log in to your account, and add a New Tag.

  2. In the Tag Configuration select Custom HTML as tag type.

  3. If you're using v13 of our search script, add the code snippet below to your tag. Note that you need to replace 'mysiteid.com' with your site ID (which can be found under Account -> General) and specify other parameters (searchBox, results, etc.) of your ss360Config (the easiest way to generate them is to use our Search Designer). Everything else, starting from var e=, is ready for a copy-paste.

  4. If you're using an earlier script version, consider upgrading first.

  5. Now set the Trigger to All pages.

  6. Finally, hit Save in the upper right and publish your changes.

window.ss360Config = {
   siteId: 'mysiteid.com',
   searchBox: {...}
};

var e = document.createElement("script");
e.src = "https://cdn.sitesearch360.com/v13/sitesearch360-v13.min.js";
document.getElementsByTagName("body")[0].appendChild(e);

How do I set up OpenSearch?

Setting up OpenSearch, which lets users install your webpage as a native search engine in their browser (or search directly from the address bar in Google Chrome), is pretty straightforward.


First you need to upload an opensearch.xml file with the following content to your server:

<OpenSearchDescription xmlns="http://a9.com/-/spec/opensearch/1.1/" xmlns:moz="http://www.mozilla.org/2006/browser/search/">
  <ShortName>SHORT_NAME</ShortName>
  <Description>DESCRIPTION</Description>
  <InputEncoding>UTF-8</InputEncoding>
  <Tags>TAGS</Tags>
  <Image height="16" width="16" type="image/x-icon">https://samplesite.com/favicon.ico</Image>
  <Url type="text/html" template="https://samplesite.com?ss360Query={searchTerms}&amp;ref=opensearch"/>
  <moz:SearchForm>https://samplesite.com/</moz:SearchForm>
</OpenSearchDescription>

Make sure to replace the following:

  • SHORT_NAME with the name of your website (16 or fewer characters)

  • DESCRIPTION with a brief description of your search engine - what users can search for

  • TAGS with a few tags/keywords describing your search engine (web page), separated by commas

  • https://samplesite.com/favicon.ico with a link to an icon (e.g. favicon)

  • https://samplesite.com with a link to your webpage (where the search results are located, or homepage if you are using the overlay layout)

  • ss360Query with the name of your search query parameter (if you've adjusted the default one)

And finally, you'll need to reference the uploaded file in your HTML templates (e.g. on your homepage or on your search result page). To do this, just add the following meta tag to the <head></head> part of the page (update the content of the href attribute if opensearch.xml isn't placed in the root directory, and replace TITLE with the name of your website):

<link rel="search" type="application/opensearchdescription+xml" href="/opensearch.xml" title="TITLE">

What data is stored via cookies, and which cookies can be disabled?

Cookies (can be disabled):

- ss360CGResults: last active content group exclude/include set via data-ss360-include/data-ss360-exclude attribute on the search box; stored for 1 hour

- ssi--lastInteraction: last search interaction timestamp; stored for 10 minutes

- ssi--sessionId: the session id, used to cluster search statistics; stored for 1 year

- ss360-open-tab: search query and content group, to reopen tab on page reload; stored for 1 hour

- ss360-cg--c: active content group, used to scroll to the last selected result when using back button (history navigation); set just before a result is selected; stored for 1 hour

- ss360-offset--c: offset of selected result, used to scroll to the last selected result when using back button (history navigation); set just before a result is selected; stored for 1 hour

- ss360-query--c: last query, used to scroll to the last selected result when using back button (history navigation); set just before a result is selected; stored for 1 hour

- ss360LastQuery: last query; stored for 24 hours

Local storage (Cookies as Fallback):

- unibox_search_history: a list of last queries, used to display search history

- ss360_last_query_result: cached search result object, used on page reload etc. This data is used to speed up the loading of the search result within a short period of time.

Your Site Search 360 account and Control Panel

How do I administer multiple accounts and search projects?

Whether you are an agency helping clients set up their site search or a webmaster managing multilingual sites, there is a way to keep all your search projects easily accessible yet separate so that you and your colleagues can manage the Site Search 360 settings in a flexible way.

Projects vs Accounts

A project is a search index (collection of search results).

An account can have one or multiple projects if they relate to the same website but the search indexes should be kept separately.

Use projects for:

  • multiple language versions of the same website

  • a dedicated test project for your staging environment

When you sign up for a free 14-day trial, we automatically create an account and your first search project associated with the website domain you've specified. The email you provide becomes the account owner, so all subsequent projects created within this account will also be attached to this email. Go to Account -> Projects to add new projects.

Team permissions are managed on the account level.

Go to Account -> Projects and click the "Manage Team" button to send out email invites and specify which sections of the Control Panel should be shown to or hidden from a particular user. Invited members get access to all projects within the account - that's why it's better to keep unrelated websites as separate projects under separate accounts.


Billing is managed on the project level but if you'd like to manage multiple projects under the same plan, reach out to us and we'll be happy to set up a custom plan for you.