
The Solr module uses the Apache Solr search platform to index and crawl Magnolia content. Solr is a standalone enterprise search server with a REST-like API.

The Magnolia Solr bundle consists of two modules:

  • Content Indexer: Indexes Magnolia workspaces. It can also crawl a published website.
  • Search Provider: Provides templates for displaying Solr search results on the site and faceted search components.

Solr uses the Lucene library for full-text indexing and provides faceted search, distributed search and index replication. You can use Solr to index content in an event-based or action-based fashion. From version 5.0 the module is compatible with Solr 5.3; older versions of the module are compatible with Solr 4.

Installing

Maven is the easiest way to install the modules. Add the following dependencies to your bundle:

<dependency>
  <groupId>info.magnolia.solr</groupId>
  <artifactId>magnolia-content-indexer</artifactId>
  <version>5.1</version>
</dependency>

<dependency>
  <groupId>info.magnolia.solr</groupId>
  <artifactId>magnolia-solr-search-provider</artifactId>
  <version>5.1</version>
</dependency>

Pre-built jars are also available for download. See Installing a module for help.

If you install with JAR files, include the dependent third-party libraries.

Installing Apache Solr

Apache Solr is a standalone search server. You need the server in addition to the Magnolia Solr modules.

Download Apache Solr and extract the zip to your computer.

Installing Solr 5

Creating a Magnolia config set and configuring a schema and solrconfig

A schema file specifies what fields the Magnolia content can contain, how those fields are added to the index, and how they are queried. See https://cwiki.apache.org/confluence/display/solr/Documents%2C+Fields%2C+and+Schema+Design.
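For illustration, field definitions in such a schema look like the fragment below. This is an illustrative sketch only: the field names mirror the field mappings shown later on this page, but the downloadable Magnolia config set is authoritative.

```xml
<!-- Illustrative schema fragment (not the shipped Magnolia schema). -->
<field name="id" type="string" indexed="true" stored="true" required="true"/>
<field name="title" type="text_general" indexed="true" stored="true"/>
<field name="abstract" type="text_general" indexed="true" stored="true"/>
<field name="content" type="text_general" indexed="true" stored="true"/>
<!-- Catch-all dynamic field for Magnolia metadata properties. -->
<dynamicField name="mgnlmeta_*" type="text_general" indexed="true" stored="true"/>
```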

SolrRequestHandler is a Solr plugin that defines the logic executed for any request. See https://wiki.apache.org/solr/SolrRequestHandler.


Create a new Magnolia config set by duplicating the $SOLR_HOME/server/solr/configsets/data_driven_schema_configs folder and naming it magnolia_data_driven_schema_configs ($SOLR_HOME/server/solr/configsets/magnolia_data_driven_schema_configs).

Download the Magnolia example configuration files (based on the Solr data_driven_schema_configs, see https://cwiki.apache.org/confluence/display/solr/Config+Sets) and overwrite the default files in the newly created magnolia_data_driven_schema_configs/conf directory.

Starting Apache Solr and creating new core based on Magnolia config set

Go to $SOLR_HOME/bin, start the Solr server and create a new core called magnolia:

cd $SOLR_HOME/bin
./solr start
./solr create_core -c magnolia -d magnolia_data_driven_schema_configs

This type of startup works for testing and development purposes. For production installation see Taking Solr to Production.

Installing Solr 4

Configuring a schema and solrconfig

A schema file specifies what fields the Magnolia content can contain, how those fields are added to the index, and how they are queried. An ExtractingRequestHandler extracts searchable fields from Magnolia pages.

Download the configuration files and overwrite the default files in $SOLR_HOME/example/solr/collection1/conf/:

solr/
  bin/
  contrib/
  dist/
  docs/
  example/
    solr/
      collection1/
        conf/
          schema.xml
          solrconfig.xml
  licenses/

Starting Apache Solr

Go to the example directory and start Solr.

cd $SOLR_HOME/example
java -jar start.jar

This type of startup works for testing and development purposes. For production installation see Taking Solr to Production.

What's new in Solr Search Provider module version 5.0.2

This version contains changes in solrconfig.xml and managed-schema. Please read the notes below before updating to 5.0.2.

Fixed the issue of two indexers/crawlers mutually overwriting the resulting index when indexing the same content, for example when one indexer indexed the English translation and another the German translation (MGNLEESOLR-102).

The problem was caused by using the JCR UUID (indexers) and the URL (crawlers) as unique identifiers for Solr indexes. To fix this issue, changes in solrconfig.xml and managed-schema were required.

  • The <uniqueKey> in managed-schema was changed to uuid.
  • The default value of the unique key field was changed to uuid in info.magnolia.search.solrsearchprovider.logic.providers.FacetedSolrSearchProvider.
  • solrconfig.xml now generates the uuid field from a combination of the type and id fields, using the Solr deduplication mechanism (https://wiki.apache.org/solr/Deduplication). For more details see the change in the code diff.
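The deduplication mechanism referenced above is Solr's SignatureUpdateProcessorFactory. The sketch below shows what such an update processor chain typically looks like in solrconfig.xml; the chain name and exact settings here are illustrative, and the shipped configuration files are authoritative.

```xml
<!-- Illustrative sketch of a dedup-style chain in solrconfig.xml. -->
<updateRequestProcessorChain name="dedupe" default="true">
  <processor class="solr.processor.SignatureUpdateProcessorFactory">
    <bool name="enabled">true</bool>
    <!-- The generated signature is stored in the unique key field. -->
    <str name="signatureField">uuid</str>
    <bool name="overwriteDupes">false</bool>
    <!-- Combine the type and id fields into the signature. -->
    <str name="fields">type,id</str>
    <str name="signatureClass">solr.processor.Lookup3Signature</str>
  </processor>
  <processor class="solr.LogUpdateProcessorFactory"/>
  <processor class="solr.RunUpdateProcessorFactory"/>
</updateRequestProcessorChain>
```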

Update to 5.0.2

Option 1:

If you don't plan to index the same content with two different indexers or crawlers, you don't need to update solrconfig.xml and managed-schema for your Solr core. The only change you need to make is to add a uniqueKeyField property with the value id to your Solr search result page.

Option 2:

Use the new solrconfig.xml and managed-schema configuration files for your Solr core and for $SOLR_HOME/server/solr/configsets/magnolia_data_driven_schema_configs.

Because of the changes in the configuration files, all Solr indexes must be recreated. The easiest way to do this is to recreate the Solr core and then retrigger indexing in Magnolia.

  1. Use the new solrconfig.xml and managed-schema configuration files for the $SOLR_HOME/server/solr/configsets/magnolia_data_driven_schema_configs Magnolia config set.
  2. Delete the magnolia core and create it again:

    cd $SOLR_HOME/bin
    ./solr delete -c magnolia
    ./solr create_core -c magnolia -d magnolia_data_driven_schema_configs
  3. Retrigger the indexers by changing their indexed property to false.

What's new in Solr Search Provider module version 5.0

Solr Search Provider module version 5.0 brings support to Solr 5 (officially tested with version 5.3.1).

Full changelog for version 5.0 https://jira.magnolia-cms.com/browse/MGNLEESOLR/fixforversion/18141

Because of the changes in the module, it's recommended to completely recreate the Solr indexes after upgrading to version 5.0.

API changes

org.apache.solr.client.solrj.SolrServer is deprecated and was replaced by org.apache.solr.client.solrj.SolrClient in solr-solrj 5.x library. Because of that info.magnolia.search.solrsearchprovider.MagnoliaSolrBridge#getSolrServer method was changed to info.magnolia.search.solrsearchprovider.MagnoliaSolrBridge#getSolrClient method.

What's new in Solr Search Provider module version 3.0

Solr Search Provider module version 3.0 delivers the following key fixes and enhancements:

  • The module doesn't depend on STK anymore and can also be used with MTE (MGNLEESOLR-66)
  • The magnolia-solr-search-provider-theme module was removed (MGNLEESOLR-66)
  • Improved search performance (MGNLEESOLR-64)
  • Crawlers can be connected with the activation process (MGNLEESOLR-77)
  • Ability to use a different implementation of edu.uci.ics.crawler4j.crawler.WebCrawler and a different triggering command for every crawler (MGNLEESOLR-61)
  • Search results are now configured at the page level instead of the component level (MGNLEESOLR-70)
  • The Robots meta tag is no longer ignored (MGNLEESOLR-72)

Full changelog for version 3.0 https://jira.magnolia-cms.com/browse/MGNLEESOLR/fixforversion/17434

Because of the changes in the module, it's recommended to completely recreate the Solr indexes after upgrading to version 3.0.

Indexing Magnolia workspaces

The Content Indexer module is a recursive repository indexer and an event-based indexer. You can configure multiple indexers for different sites and document types. The Content Indexer also allows you to crawl external websites using JSoup and CSS selectors. You then define different field mappings that are obtained for each node and indexed in the Solr index.

Indexer configuration

Configure an indexer in Configuration > /modules/content-indexer/config/indexers. Example configurations for indexing a website and DAM assets are provided. Duplicate one of the examples to index another site or workspace.

Node name                        Value

 modules
   content-indexer
     config
       indexers
         websiteIndexer
           fieldMappings
             abstract            abstract
             author              author
             date                date
             teaserAbstract      mgnlmeta_teaserAbstract
             text                content
             title               title
           enabled               true
           indexed               false
           pull                  false
           rootNode              /
           type                  website
           workspace             website

Properties:

enabled

required

true enables the indexer configuration. false disables the indexer configuration.

indexed

required

Indicates whether indexing was done. When Solr finishes indexing, the Content Indexer sets this property to true. You can set it to false to trigger re-indexing.

nodeType

optional, default is mgnl:page

JCR node type to index. For example, if you were indexing assets in the Magnolia DAM you would set this to mgnl:asset.

pull

optional, default is false (push)

Pull URLs instead of pushing content. When true, Solr uses Tika to extract information from a document, for instance a PDF. When false, the collected information is pushed to Solr as a Solr document.

assetProviderId

optional, default is jcr

If pull is set to true, specify an assetProviderId so that assets are obtained correctly.

rootNode

required

Node in the workspace where indexing starts. Use this property to limit indexing to a particular site branch.

type

required

Sets the type of the indexed content such as website or documents. When you search the index you can filter results by type.

workspace

required

Workspace to index.

fieldMappings

required

Field mappings define how fields in Magnolia content are mapped to Solr fields. The left side is the Magnolia field, the right side is the Solr field.

<Magnolia_field>

<Solr_field>

You can use the fields available in the schema. If a field does not exist in Solr's schema you can use a dynamic field mgnlmeta_*. For instance, if you have information nested in a deep leaf of your page stored in a property specComponentAbstract, you can map this field to mgnlmeta_specComponentAbstract. The indexer contains a recursive call which explores the node's child leaves until it finds the property.

IndexService

The indexer uses an IndexService to handle the indexing of a node. A basic implementation is configured by default: info.magnolia.search.solrsearchprovider.logic.indexer.BasicSolrIndexService. You can define and configure your own IndexService for specific needs.

Implement the IndexService interface:

IndexService
public class I18nIndexerService implements info.magnolia.module.indexer.indexservices.IndexService {

   private static final Logger log = LoggerFactory.getLogger(I18nIndexerService.class);

   @Override
   public boolean index(Node node, IndexerConfig config) {
      ...
   }
}

Register the IndexService in the Content Indexer module configuration:

Node name                        Value

 modules
   content-indexer
     config
       indexService
         class                   info.magnolia.search.solrsearchprovider.logic.indexer.BasicSolrIndexService

Crawling a website

The crawler mechanism uses the Scheduler to crawl a site periodically.

From version 3.0, crawlers can also be connected with the activation process by adding info.magnolia.module.indexer.crawler.commands.CrawlerIndexerActivationCommand into a command chain with an activation command. By default this is done for the following activation/deactivation commands:

  • catalog: default, command: activation - configured under /modules/activation/commands/default/activate/activate
  • catalog: default, command: deactivate - configured under /modules/activation/commands/default/deactivate
  • catalog: default, command: personalizationActivation - configured under /modules/personalization-integration/commands/default/personalizationActivation

If you are using a custom activation command and you wish to connect it with the crawler mechanism, you can use the info.magnolia.module.indexer.setup.AddCrawlerIntoCommandChainTask install/update task.

Example: Configuration to crawl bbc.com

Node name                        Value

 bbc_com
   sites
     bbc
       url                       http://www.bbc.co.uk/
   fieldMappings
     abstract                    #story_continues_1
     keywords                    meta[name=keywords] attr(0,content)
   depth                         2
   enabled                       false
   nbrCrawlers                   2
   type                          news

Properties:

enabled

required

true enables the crawler. false disables the crawler.

When a crawler is enabled, info.magnolia.module.indexer.CrawlerIndexerFactory automatically registers a new scheduler job for the crawler.

depth

required

The maximum depth of a page in terms of distance in clicks from the root page. This should not be too high; ideally 2 or 3 at most.

nbrCrawlers

required

The maximum number of simultaneous crawler threads that crawl a site. 2 or 3 is enough.

crawlerClass

optional, since version 3.0, default value is info.magnolia.module.indexer.crawler.MgnlCrawler

Implementation of edu.uci.ics.crawler4j.crawler.WebCrawler which is used by the crawler to crawl sites.

catalog

optional, since version 3.0, default value is content-indexer

Name of the catalog where the command resides.

command

optional, since version 3.0, default value is crawlerIndexer

Command which is used to instantiate and trigger the Crawler.

activationOnly

optional, since version 3.0

If set to true, the crawler is triggered only during activation. No scheduler job is registered for this crawler.

delayAfterActivation

optional, since version 3.0, default value is 5s

Defines the delay (in seconds) after which the crawler starts once activation is done.

cron

optional, default is every hour: 0 0 0/1 1/1 * ? *

A CRON expression that specifies how often the site is crawled. CronMaker is a useful tool for building expressions.

type

optional

Sets the type of the crawled content such as news. When you search the index you can filter results by type.

sites

required

List of sites to crawl. For each crawler you can define multiple sites to crawl.

<site>

required

Name of the site.

url

required

URL of the site.

fieldMappings

required

Field mappings define how fields parsed from the site pages are mapped to Solr fields. The left side is the Solr field, the right side is a selector applied to the crawled site.

<site_field>

required

You can use any CSS selector to target an element on the page. For example, #story_continues_1 targets an element by ID.

You can also use a custom syntax to get content from inside attributes. For example, meta keywords are extracted using meta[name=keywords] attr(0,content), which extracts the first value of the keywords meta element. If you don't specify anything after the CSS selector, the text contained in the element is indexed. meta[name=keywords] alone would return an empty string because a meta element doesn't contain any text; the keywords are in its attributes. To get the value of a specific attribute, specify attr(<index>,<attribute_name>). If you set index=-1, all attributes are extracted and separated by a semicolon (;).
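The module parses this syntax internally. As a rough illustration, a mapping string can be split into its CSS selector and attribute parts as sketched below; AttrSelectorParser is a hypothetical helper written for this page, not part of the module API.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class AttrSelectorParser {

    // Matches the "attr(<index>,<attribute>)" suffix described above,
    // e.g. "meta[name=keywords] attr(0,content)".
    private static final Pattern ATTR =
            Pattern.compile("\\s*attr\\((-?\\d+)\\s*,\\s*([^)]+)\\)\\s*$");

    /**
     * Splits a field mapping into {selector, index, attribute}.
     * index and attribute are null when no attr(...) suffix is given,
     * in which case the element's text would be indexed.
     */
    public static String[] parse(String mapping) {
        Matcher m = ATTR.matcher(mapping);
        if (m.find()) {
            return new String[]{mapping.substring(0, m.start()).trim(), m.group(1), m.group(2).trim()};
        }
        return new String[]{mapping.trim(), null, null};
    }

    public static void main(String[] args) {
        String[] parts = parse("meta[name=keywords] attr(0,content)");
        System.out.println(parts[0]); // meta[name=keywords]
        System.out.println(parts[1]); // 0
        System.out.println(parts[2]); // content
    }
}
```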

jcrItems

optional, since version 3.0

List of JCR items. If any of these items is activated, the crawler is triggered.

<item_name>

optional, since version 3.0

Name of the JCR item.

workspace

required, since version 3.0

Workspace where the JCR item is stored.

path

required, since version 3.0

Path of the JCR item.

siteAuthenticationConfig

optional, since version 5.0.2

Authentication information that allows crawling password-restricted areas.

username

required, since version 5.0.2

Username used for logging in to the restricted area.

password

required, since version 5.0.2

The user's password used for logging in to the restricted area.

loginUrl

required, since version 5.0.2

URL of the page with the login form.

usernameField

required, since version 5.0.2, default value is mgnlUserID

Name of the input field for entering the username in the login form.

passwordField

required, since version 5.0.2, default value is mgnlUserPSWD

Name of the input field for entering the password in the login form.

logoutUrlIdentifier

required, since version 5.0.2, default value is mgnlLogout

String which identifies the logout URL. The crawler skips URLs containing logoutUrlIdentifier to avoid logging itself out.

Providing a Solr search

The Solr Search Provider module contains templates to display search results on the site. It also provides faceted search components for refining the results further. The faceted search gets related facets from the search context. Suggestions and available fields are exposed in the Freemarker context.

Configuring the Solr server base URL

Configure the Solr server address in Configuration > /modules/solr-search-provider/config/solrConfig@baseURL. baseURL should be http://<domain_name>:<port>/solr/<solr_core_name>. If the Solr server was installed as described in https://documentation.magnolia-cms.com/display/DOCS/Solr+module#Solrmodule-InstallingSolr5, then the baseURL is http://localhost:8983/solr/magnolia.

See HttpSolrClient Javadoc for other properties.

Node name                        Value

 solr-search-provider
   config
     solrConfig
       allowCompression          false
       baseURL                   http://localhost:8983/solr/magnolia
       connectionTimeout         100
       followRedirects           false
       maxConnectionsPerHost     100
       maxRetries                0
       maxTotalConnections       100
       soTimeout                 1000

Creating a search results page

Create a search results page using one of the available templates. Which template you use depends on the type of project you have and the modules that are installed.

Module                      Template               Configuration
mte                         mteSolrSearchResult    /modules/solr-search-provider/templates/mteSolrSearchResult
standard-templating-kit     solrSearchResult       /modules/solr-search-provider/templates/solrSearchResult

To try it in the demo travel site:

  1. Make the template available in the site definition.
  2. Create a page which uses the template.
  3. Edit the home page properties.
  4. Select your Solr results page in the Search Page field.

Search result settings

Url domain filtering

You can filter results by URL domain in the Filter url prefix field

.

Field boosting for relevance

The example query title^100 abstract^0.1 boosts matches in the title field 1000 times more than equivalent matches in the abstract (100 / 0.1 = 1000).

If you instead boost the abstract over the title, the same search ranks matches in the abstract higher and the result order changes accordingly.
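In plain Solr terms, such boosting is usually expressed through the (e)dismax qf parameter. An illustrative raw request is shown below; the exact parameters sent by the search template may differ.

```
http://localhost:8983/solr/magnolia/select?q=conference&defType=edismax&qf=title^100 abstract^0.1
```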

Filtering search results

Positive filtering: return only results where the keyword conference is present.

Negative filtering: don't return results where the keyword conference is present.

You can add more filters by separating them with spaces.
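In Lucene/Solr query syntax, these filters take the following form; the terms here are illustrative, and the search page assembles the final query for you.

```
+conference          positive filter: only results containing "conference"
-conference          negative filter: exclude results containing "conference"
+conference -london  multiple filters, separated by spaces
```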

Autocomplete search bar

The autocomplete search bar provides suggestions while you type into the search field. The jQuery UI Autocomplete widget and info.magnolia.search.solrsearchprovider.logic.servlets.SearchServlet are used for this functionality.

How to configure it

  1. Go to http://jqueryui.com/download and download the jQuery UI JavaScript for the Autocomplete widget and its required dependencies.
  2. In the downloaded archive find jquery-ui.js (or jquery-ui.min.js) and jquery.js and add them to the Magnolia resources.
  3. Add the jQuery JavaScript libraries to the search result page:

    <script src="path to jquery.js" type="text/javascript"></script>
    <script src="path to jquery-ui.js" type="text/javascript"></script>

  4. Add this small JavaScript snippet to the search result page:

    var jq = jQuery.noConflict();
    jq(document).ready(function () {
        jq("#searchbar, #nav-search, #search").autocomplete({
            open: function () {
                jq(this).autocomplete('widget').css('z-index', 999);
            },
            source: function (request, response) {
                jq.get("${contextPath}/searchservlet/", {search: request.term.toLowerCase(), queryType: "SUGGEST", fields: "collation", fq: "*"},
                    function (data) {
                        response(data);
                    }, "json"
                );
            },
            minLength: 2
        });
    });

More information about the autocomplete feature

For more information, see the related series of blog posts.

Other features

  • Pagination
  • Faceting on all fields
  • Ranged faceting
  • Similar search
  • Localized search
  • Suggestions

2 Comments

  1. I did some tests with the Solr 5.5.3 and 6.3.0 server versions and it seems that the module works nicely with them. Nice work!

  2. Just a note: if you decide to change the search result logic from page to component, make sure that the configuration for search results (e.g. additional filters, boost query, etc.) stays saved at the page content level in JCR.