
SMILA/Documentation/Solr 3.5


Solr is an open source search server based on the Lucene search engine. In addition to powerful full-text search, sorting, and filtering, Solr comes with many built-in features such as highlighting, facets, auto-suggest, and spell checking.

SolrServerManager & SolrProperties

Solr can run as a standalone remote server as well as an embedded server within SMILA. A properties file controls the running mode: configuration/org.eclipse.smila.solr/solr.properties

##### If true, SMILA loads the default configuration for an embedded Solr instance (see below) #####
solr.embedded=true
 
##### Alternative workspace folder equals solr.home (embedded only) #####
solr.workspaceFolder=./workspace/.metadata/.plugins/org.eclipse.smila.solr
 
##### Server url for http connections to Solr server (remote only) #####
solr.serverUrl=http://localhost:8983/solr

Configuration

SMILA supports Solr only in a multicore setup ("core" is Solr's term for a search index), regardless of whether Solr runs embedded or remote.

DefaultCore

The default configuration included in SMILA is defined in configuration/org.eclipse.smila.solr. The default mode is 'embedded', in which case SMILA starts up its own internal Solr server. The full Solr multicore configuration present in the configuration folder is used when the mode is set to embedded. This setup defines a single core, DefaultCore, which is suitable for the HowTo use cases in SMILA.

If SMILA should connect to an already running Solr server instead of starting its own instance, the property solr.embedded must be set to false. In that case the URL of the (external) Solr server has to be provided by setting the property solr.serverUrl in the properties file.
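
A minimal sketch of the two properties for remote mode (the host name is just a placeholder):

solr.embedded=false
solr.serverUrl=http://my-solr-host:8983/solr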

Please note that you have to add the PingRequestHandler in each core's solrconfig.xml file, see section solrconfig.xml.

More information about solr cores and their configuration can be found at: http://wiki.apache.org/solr/CoreAdmin

When SMILA starts up for the first time and Solr is configured as embedded, the configuration is copied to the Solr workspace (solr.home).

schema.xml

One of the most important configuration files is configuration/org.eclipse.smila.solr/DefaultCore/conf/schema.xml. This file defines the index fields and types. SMILA comes with the following set of predefined fields:

<field name="Id" type="string_id" indexed="true" stored="true" required="true" />
<field name="LastModifiedDate" type="date" indexed="true" stored="true" />
<field name="Filename" type="text_path" indexed="true" stored="true" termVectors="true" termPositions="true" termOffsets="true" />
<field name="Path" type="text_path" indexed="true" stored="true" termVectors="true" termPositions="true" termOffsets="true" />
<field name="Extension" type="textgen" indexed="true" stored="true" />
<field name="Size" type="long" indexed="true" stored="true" />
<field name="MimeType" type="textgen" indexed="true" stored="true" />
<field name="Content" type="textgen" indexed="true" stored="true" termVectors="true" termPositions="true" termOffsets="true" />
<field name="Title" type="textgen" indexed="true" stored="true" termVectors="true" termPositions="true" termOffsets="true" />
<field name="spell" type="textSpell" indexed="true" stored="true" multiValued="true" />

The schema.xml also contains the uniqueKey property, which tells Solr which field identifies a document so that adds and updates are handled transparently. By default it is set to Id.
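
The corresponding entry in the default schema.xml is simply:

<uniqueKey>Id</uniqueKey>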

Information about other configuration possibilities like field types, default search field, copy fields and many more can be found here: http://wiki.apache.org/solr/SchemaXml

solrconfig.xml

Another major configuration file is configuration/org.eclipse.smila.solr/DefaultCore/conf/solrconfig.xml. It contains the configuration for all SearchComponents, RequestHandlers, and the general indexing and query configuration.

Please refer to its documentation here: http://wiki.apache.org/solr/SolrConfigXml

Important for SMILA is that in the embedded case the dataDir property defaults to the data/ sub folder of the core instance (i.e. solr.home/DefaultCore/data/). Hence, in embedded mode the SMILA workspace may grow quite large. Use this property in this file or set it through solr.xml at the core to provide an alternative location.
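
For example (a sketch; the paths are just placeholders), the data directory can be set in solrconfig.xml:

<dataDir>/data/solr/DefaultCore</dataDir>

or, alternatively, per core in solr.xml:

<core name="DefaultCore" instanceDir="DefaultCore" dataDir="/data/solr/DefaultCore"/>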

SMILA uses autoCommit via solr.DirectUpdateHandler2. It tells Solr to commit automatically every 60 seconds or after 1000 documents have been added. If this property is not set, no commit will occur and the indexed data will not be persistent or searchable unless you send the appropriate Solr commands yourself. The values are a compromise where these factors play a role:

  • how soon shall/must a searching user see the updates?
  • how many update requests are sent to Solr?

Note that during a commit the Solr server stalls updates, which might lead to index pipelet timeouts.
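
For reference, the corresponding autoCommit section in solrconfig.xml looks roughly like this (a sketch with the values described above; check the configuration shipped with SMILA for the authoritative settings):

<updateHandler class="solr.DirectUpdateHandler2">
  <autoCommit>
    <!-- commit after 1000 added documents or after 60 seconds (milliseconds) -->
    <maxDocs>1000</maxDocs>
    <maxTime>60000</maxTime>
  </autoCommit>
</updateHandler>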

Note that when using an external Solr server, you have to add the PingRequestHandler since this handler is required by the SolrAdminHttpHandler to check whether the cores exist and are alive before addressing them. You have to add the handler to each core's configuration file:

<requestHandler name="/admin/ping" class="PingRequestHandler">
  <lst name="defaults">
    <str name="qt">standard</str>
    <str name="q">solrpingquery</str>
    <str name="echoParams">all</str>
  </lst>
</requestHandler>

Setup another core

If you don't want to use the default Solr index (DefaultCore), you can easily set up your own core. Just copy the DefaultCore configuration (see SMILA/configuration/org.eclipse.smila.solr) under another name, e.g. MyCore, in the same directory and adapt the configuration files described before to your needs.

Afterwards add your new core to the file SMILA.application/configuration/org.eclipse.smila.solr/solr.xml:

<?xml version='1.0' encoding='UTF-8'?>
 <solr persistent="true">
  <cores adminPath="/admin/cores">
   <core name="DefaultCore" instanceDir="DefaultCore"/>
   <core name="MyCore" instanceDir="MyCore"/>
  </cores>
 </solr>

How to use Solr with SMILA

Indexing data

The SolrIndexPipelet can add, update, or delete records (which correspond to Solr documents) in an index.

Configuration in addpipeline:

  <extensionActivity>
      <proc:invokePipelet name="SolrIndexPipelet">
        <proc:pipelet class="org.eclipse.smila.solr.index.SolrIndexPipelet" />
        <proc:variables input="request" output="request" />
        <proc:configuration>
          <!-- either ADD or DELETE. -->
          <rec:Val key="ExecutionMode">ADD</rec:Val>
          <!-- defines the default core into which the record will be written. optional, but if missing then the target core 
            must be set in the record via SolrConstants.DYNAMIC_TARGET_CORE -->
          <rec:Val key="CoreName">DefaultCore</rec:Val>
          <!-- seq of fields that are to be filled. each tuple is a map that defines the target core field, the source field 
            (optional) and the source type (optional ) -->
          <rec:Seq key="CoreFields">
            <rec:Map>
              <!-- target field name in the solr core -->
              <rec:Val key="FieldName">Folder</rec:Val>
              <!-- name of the source attribute or attachment in the record. optional, defaults to the target field name -->
              <rec:Val key="RecSourceName">Path</rec:Val>
              <!-- either ATTRIBUTE or ATTACHMENT. optional, defaults to ATTRIBUTE. -->
              <rec:Val key="RecSourceType">ATTRIBUTE</rec:Val>
            </rec:Map>
            <rec:Map>
              <rec:Val key="FieldName">Filename</rec:Val>
            </rec:Map>
            ...
          </rec:Seq>
        </proc:configuration>
      </proc:invokePipelet>
    </extensionActivity>

Configuration in deletepipeline:

  <extensionActivity>
      <proc:invokePipelet name="SolrIndexPipelet">
        <proc:pipelet class="org.eclipse.smila.solr.index.SolrIndexPipelet" />
        <proc:variables input="request" output="request" />
        <proc:configuration>
          <rec:Val key="ExecutionMode">DELETE</rec:Val>
          <rec:Val key="CoreName">DefaultCore</rec:Val>
        </proc:configuration>
      </proc:invokePipelet>
    </extensionActivity>

Search

Since SMILA version 1.0, the SMILA standard search servlet searches via Solr using the SolrSearchPipelet. Up to version 0.9 the standard search servlet used plain Lucene search.

Search Pipelet Config

The SolrSearchPipelet offers the possibility to search a Solr index. The pipelet needs only a small configuration without any special parameters.

    <extensionActivity>
      <proc:invokePipelet name="invokeSolrSearchPipelet">
        <proc:pipelet class="org.eclipse.smila.solr.search.SolrSearchPipelet" />
        <proc:variables input="request" output="request" />
        <proc:configuration>
        </proc:configuration>
      </proc:invokePipelet>
    </extensionActivity>

Solr Specific Search Record

For full feature support an enhanced search record is required. This section provides both XML samples showing how the features are configured in the search record and descriptions of the helper classes that are available from within SMILA. Path notations for the elements in the record use the key names of the respective elements as path elements and always start from the root, e.g. _solr.query/highlighting.

To understand the following sections you must know the standard SMILA search record.

Standard Parameters

The following SMILA standard query parameters are supported:

  • maxcount
  • offset
  • indexname, this must correspond to an existing solr core name
  • resultAttributes
  • query


Query

Native String

The Solr pipelet supports a sole query element as a string value, which it passes unaltered to Solr. The Solr default handler assumes this to be a valid Lucene query string, but ultimately this depends on the configured handler. All escaping needs to be done by the caller constructing the search record. (Note: there is no need to URL encode it, as this is done internally.)

<Record xmlns="http://www.eclipse.org/SMILA/record" version="2.0">
  <!-- query (q) -->
  <Val key="query">Content:solr Content:eclipse</Val>
  <Val key="maxcount" type="long">3</Val>
  <Val key="offset" type="long">3</Val>  
  <Val key="indexname">wikipedia</Val>
  <Seq key="resultAttributes">
    <Val>Content</Val>
    <Val>Id</Val>
  </Seq>
    ...
</Record>

The above sample shows a query on the index field Content for the terms "solr" and "eclipse".

Fielded Search

SMILA's fielded search is also implemented as of v1.0, so you may search on distinct fields in the index without having to employ the native query format.

 
<Record>
  <Map key="query">
    <Val key="author">shakespeare</Val>
    <Seq key="title">
      <Val>hamlet</Val>
      <Val>merchant</Val>
    </Seq>
  </Map>
</Record>

which will be translated to: author:shakespeare title:(hamlet merchant)

How this is interpreted by Solr depends on the configuration in place, e.g. the combination of the terms depends on the value of <solrQueryParser defaultOperator="OR"></solrQueryParser> in the schema.xml.

Note that the values passed to Solr are not escaped or modified in any way. Hence a query for:

  <Map key="query">
  ...
  <Val key="title">merchant of venice</Val>
  ...
  </Map>

... will lead to the (likely not intended) title:merchant of venice, which is a search for "merchant" on the field "title" but a search for the other words on the default search field as defined in the schema.xml.
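
Since the values are passed through unmodified, one workaround (a sketch, assuming the default Lucene query syntax of the configured handler) is to quote the phrase yourself:

  <Map key="query">
  ...
  <Val key="title">"merchant of venice"</Val>
  ...
  </Map>

which would then be translated to title:("merchant of venice").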



Highlighting

Highlighting for Solr deviates from the standard SMILA way in order to support Solr's features. The configuration is contained in _solr.query/highlighting:

<Map key="_solr.query">
  ...
  <Seq key="highlighting">
    <Map>
      <Val key="attribute">global.solr.params</Val>
      <Val key="hl" type="boolean">true</Val>
      <!-- list of fields to be highlighted, space delimited -->
      <Val key="hl.fl">Content  Title</Val>
      <Val key="hl.simple.pre">&lt;b&gt;</Val>
      <Val key="hl.simple.post">&lt;/b&gt;</Val>
    </Map>
    <!-- other maps with attribute = field name for per-field configuration -->
  </Seq>
  ...
</Map>

The configuration can be done globally (applies to all highlighted fields) as well as per field. The settings are contained in maps that must have an entry attribute, which either contains the value global.solr.params (signifying the global highlighting settings) or the name of the attribute/field that is to be configured for highlighting. The other entries in such a map correspond in name and value to the parameters Solr supports. See http://wiki.apache.org/solr/HighlightingParameters.

In order to turn on highlighting, at least the global config must be present with the entry hl=true.

Programmatic highlighting configuration is done through the HighlightingQueryConfigAdapter. The default constructor creates a configuration object with global highlighting parameters, which is required to enable highlighting. The other constructor provides an optional per-field configuration.

   // create global highlighting configuration (required, enables highlighting)
    final HighlightingQueryConfigAdapter highlighting = new HighlightingQueryConfigAdapter();
    highlighting.setHighlightingFields("Content Title");
    highlighting.setHighlightingSimplePre("<b>");
    highlighting.setHighlightingSimplePost("</b>");
    builder.addHighlightingConfiguration(highlighting);

Unlike standard SMILA behavior, the _highlight annotation is not created per result item but replaces the normally returned field value, i.e. if the Content field is to be returned in your search and you also configured highlighting on it, then the search returns only the highlighted value for the Content field.
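
For illustration, a minimal sketch of a result item with highlighting configured on Content (the record id, weight, and snippet are made up; the overall layout follows the standard SMILA result record):

<Map>
  <Val key="_recordid">doc1</Val>
  <Val key="_weight" type="double">0.75</Val>
  <!-- Content holds the highlighted fragment instead of the plain field value -->
  <Val key="Content">... based on the &lt;b&gt;solr&lt;/b&gt; search server ...</Val>
</Map>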

Filters

Filters are supported the normal way. However, these extensions exist:

  • It is also possible to specify an fq parameter natively via _solr.query.
  • grouping
  • tagging

fq Grouping/Combining

In Solr you can have many fq parameters, which are ANDed in the end. However, from a performance point of view it makes a difference whether several smaller filters are combined into one larger filter, and there are also other things that can be done with a single fq, such as local params. Hence, you may specify a filterGroup parameter to assign a particular SMILA filter to that group. All filters belonging to that group become a single fq parameter where all parts are ANDed, i.e. in Lucene syntax: +(${group part 1})+(${group part 2})+...+(${group part N}). See the sketch after the following example.

<Seq key="filter">
  <Map>
    <Val key="attribute">author</Val>
    <Seq key="oneOf">
      <Val>pratchett</Val>
      <Val>adams</Val>
    </Seq>
    <Val key="filterGroup">fq1</Val>
  </Map>
</Seq>
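
For illustration, the single filter group above would be rendered into one filter query, roughly like this (a sketch; the exact escaping and rendering may differ):

fq=+(author:(pratchett adams))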

Special behaviors:

  • if the group name is missing, then it defaults to the attribute name, resulting in one fq parameter per attribute
  • if the group name is q then it will be added to the main query parameter q.

Specifying local parameters (and possibly other options in the future) for the group is done like so:

  ...
  <Map key="_solr.query">
    <Map key="filterGroups">
      <Map key='${groupName}'>
        <Map key='localParameters'>
          <Val key="q.op">AND</Val>
        </Map>
      </Map>
    </Map>
    ...
  </Map>
  ...

Because the filter group name q always denotes the main query parameter, it is possible to specify local parameters for q as well through this config.

Note that it is not necessary to declare filter groups in this section unless you want to specify parameters other than a tag (see below).

Tagging

By default, filters are not tagged if they don't belong to a group. To add a tag name you must assign a group name. If a filter has a group name, then the fq's tag name is filterTag_${groupName} by default, but it may be overridden in the filterGroups config.
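
Applied to the fq1 group from the filter example above, the generated filter query would then carry a tag along the lines of (again just a sketch):

fq={!tag=filterTag_fq1}+(author:(pratchett adams))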

{TODO @PW do you like this?}

Facets

Facets are specified for solr through the /facetby Seq as defined in the standard. However, the following differences exist:

  • maxcount is optional to allow for solr default configs to work.
  • Solr doesn't support ordering of facets, so if this is set, a warning is written to the log but the setting is otherwise ignored.

Faceting is turned on as soon as the facetby Seq is present.

Note that the attribute value must be the Solr field name, as the mapping from the SolrSearchPipelet is not applied (yet).

The values in the nativeParameters Map are passed to solr for the field verbatim after the pattern f.${attribute}.${key}=${value}. This allows you to just specify any valid solr parameter/value pair on field level without any interaction on our part. Global facet parameters may be defined in the _solr.query map.

Solr supports different kinds of faceting, and the kind can be selected with the type parameter. Its value is Solr's respective parameter name and is passed as given. No checks are performed here, so that future facet methods work out of the box. However, it defaults to facet.field if missing. Solr's facet.query is not supported through this structure at the moment, as it needs to be formulated quite differently and hence must be given as global parameters in the _solr.query map.

<Seq key="facetby">
  <!-- per-field configuration for facet.field -->
  <Map>
    <Val key="type">facet.field</Val>
    <Val key="maxcount" type="long">10</Val>
    <Val key="attribute">Extension</Val>
  </Map>
  <!-- per-field configuration for facet.date -->
  <Map>
    <Val key="type">facet.date</Val>
    <Val key="attribute">LastModifiedDate</Val>
    <Map key="nativeParameters">
      <Val key="facet.date.start">NOW/DAY-5DAYS</Val>
      <Val key="facet.date.gap">+1DAY</Val>
      <Val key="facet.date.end">NOW/DAY+1DAY</Val>
    </Map>
  </Map>
</Seq>

Facets are returned the SMILA standard way in the facets map.
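
For illustration, a facet result for the Extension field from the example above could look like this (a sketch; the values and counts are made up, the structure follows the facets map shown in the range faceting section below):

<Map key="facets">
  <Seq key="Extension">
    <Map>
      <Val key="value">html</Val>
      <Val key="count" type="long">120</Val>
    </Map>
    <Map>
      <Val key="value">pdf</Val>
      <Val key="count" type="long">64</Val>
    </Map>
    ...
  </Seq>
</Map>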

Naming Facets

{TODO

  • facetName defaults to attribute name
  • impl: creates an implicit localParams with key=${facetName}

}

Filtering on Facets

Filtering on specific facet values is done by specifying a oneOf filter as in the normal filters section and behaves exactly like that, i.e. groupName is also supported.

<Seq key="facetby">
  <Map>
    <Val key="attribute">Extension</Val>
    <Val key="maxcount" type="long">10</Val>
    <Val key="groupName">commonFilters</Val>
    <Seq key="oneOf">
      <Val>doc</Val>
      <Val>xls</Val>
    </Seq>
  </Map>
</Seq>

Note: at the moment only the oneOf filter is supported, as we can't think of a use case where you would want to filter differently on facets.

The specified filter will be transformed into a Solr filter and is tagged automatically with the value facet_${facetLabel}, unless groupName is specified, in which case the groupName (or whatever is specified as the filter group's tag) is used.

Multi Select

To turn this on, just add the flag multiselect. The respective local parameters for the facet and filter are set automatically. This cannot be combined with a group, though, and will result in an error. (This is due to the fact that multiselect relies on tagging the fq and excluding it from the facet, see [1].)

<Seq key="facetby">

 <Map>
   <Val key="attribute">Extension</Val>
   <Val key="maxcount" type="long">10</Val>
   <Val key="multiselect">true</Val>
   <Seq key="oneOf">
     <Val>doc</Val>
     <Val>xls</Val>
   </Seq>
 </Map>

</Seq>

Other parameters

The following common constructs are supported

  • local parameters
  • native parameters, which are applied on field level

TODO: refactor the common stuff into own sections

Range Faceting for different gap sizes

At the moment Solr does not support range faceting with different gap sizes [2]. Until this feature is present in Solr, SMILA defines the following construct to support this case, giving the user an easier construct than the underlying facet.query.

<Map>
  <Val key="facetName">FileSize</Val>
  <Val key="attribute">Size</Val>
  <Val key="multiselect" type='boolean'>true</Val>
  <Val key="type">custom.ranges</Val>
  <Seq key="ranges">
    <Val>[* TO 999]</Val>
    <Val>[1000 TO 9999999]</Val>
    <Val>[9999999 TO *]</Val>
  </Seq>
  <Seq key="oneOf">
    <Val>FileSize_1</Val>
  </Seq>
</Map>

This construct will be transformed into facet.query constructs. In fact, the strings in the ranges Seq are just prefixed with the attribute name like so: facet.query={!key=${groupName}_${range_pos} ex=${groupName}}${attribute}:${range}, e.g.

&facet.query={!key=FileSize_0 ex=facet_FileSize}Size:[ *        TO  999      ]
&facet.query={!key=FileSize_1 ex=facet_FileSize}Size:[ 1000     TO  9999999  ]
&facet.query={!key=FileSize_2 ex=facet_FileSize}Size:[ 9999999  TO  *        ]

The result is returned as a normal facet where the facet values are ${groupName}_${range_pos}.

<Map key="facets">
  <Seq key="FileSize">
    <Map>
      <Val key="value">FileSize_0</Val>
      <Val key="count" type="long">42</Val>
    </Map>
    <Map>
      <Val key="value">FileSize_1</Val>
      <Val key="count" type="long">21</Val>
    </Map>
    ...
  </Seq>
</Map>

A filter on a facet value would look like this: &fq={!tag=${groupName}}${attribute}:${selectedRange}, e.g. &fq={!tag=facet_FileSize}Size:[1000 TO 9999999].

Notes:

  • the local param ex= is only set when multiselect is on in which case groupName is not allowed

Solr Specific Parameters (_solr.query)

Some configuration deviations from the SMILA standard and other solr specialties are put into a Solr specific _solr.query Map element at top level of the search record.

{ TODO: rename this to nativeParameters? pro: generic; con: not possible to run 2 engines side by side. But what to name the symmetric _solr.result then?

IDEA: 1. keep the _solr.query name and have the nativeParameters be just that inside it 2. rename to nativeParameters that contains known entries but also another nativeParameters with params passed as-is. }

The following are supported:

  • filters
  • shards
  • request handler

Shards

Shards are only supported in remote mode and may be defined through the _solr.query/shards Seq.

Solr Request Handler

To select another solr request handler add the _solr.query/qt entry.

The following XML snippet illustrates these cases:

<Record xmlns="http://www.eclipse.org/SMILA/record" version="2.0">
  ...
  <Map key="_solr.query">
    <!-- filter query (fq) -->
    <Seq key="fq">
      <Val>Size:[500 TO 1000]</Val>
      <Val>Author:"H. Simpson"</Val>
    </Seq>
 
    <!-- shards -->
    <Seq key="shards">
      <Val>http://localhost:8983/solr</Val>
      <Val>http://remote-server:8983/solr</Val>
    </Seq>
 
    <!-- request handler (qt) -->
    <Val key="qt">/custom</Val>
 
  </Map>
</Record>

The value given is used verbatim.


Filter Groups

This subject is discussed under Filters.

Auxiliary Search Functions

Auto-suggest/Terms

Auto suggest/completion is also done via a search request, albeit a very special, stripped-down version, which looks like this in the default setup:

<Record >
  <Map key="_solr.query">
    <Map key="terms">
      <Val key="terms" type="boolean">true</Val>
      <Val key="terms.fl">Content</Val>
      <Val key="terms.prefix">con</Val>
    </Map>
    <Val key="qt">/terms</Val>
  </Map>
</Record>

The only items that have to be present are the terms map and the qt entry, which needs to be set to an appropriate handler (by default this is /terms). The entries in the terms map are passed as-is to Solr. For more information about terms configuration and parameters see http://wiki.apache.org/solr/TermsComponent.

The results are returned in the _solr.result/terms map, where the key is the completed word and the value tells you how many documents in the index contain this word.

<Record >
  <Seq key="records"></Seq>
  <Val key="runtime" type="long">3</Val>
  <Map key="_solr.result">
    <Map key="terms">
      <Val key="congratulations" type="long">1</Val>
      <Val key="conjugate" type="long">1</Val>
      <Val key="containing" type="long">1</Val>
    </Map>
  </Map>
</Record>

In SMILA code this can be done like so:

    final TermsQueryConfigAdapter terms = new TermsQueryConfigAdapter(_solrField);
    terms.setTermsPrefix("con");
    _queryBuilder.setTermsConfiguration(terms);
    _queryBuilder.setRequestHandler("/terms");

Spellcheck (Did you mean)

SMILA's default setup has spell checking ("Did you mean") enabled for the Content field. In most cases it is useful to configure the default request handler to use the SpellCheckComponent (solrconfig.xml), and this has been done. Otherwise the correct request handler must be set (solrconfig.xml example: /spell). By default the SpellCheckComponent uses a separate index which is created on the fly and updated on every commit. Therefore, to retrieve alternative suggestions for possibly misspelled input words, you just need to add the spellcheck map to _solr.query:

  <Map key="_solr.query">
     ....
    <Map key="spellcheck">
      <Val key="spellcheck" type="boolean">true</Val>
      <Val key="spellcheck.count" type="long">5</Val>
      <Val key="spellcheck.extendedResults" type="boolean">true</Val>
      <Val key="spellcheck.collate" type="boolean">true</Val>
    </Map>
  </Map>

The map contains solr parameters (see http://wiki.apache.org/solr/SpellCheckComponent) that are passed "as is" to solr.

This will add the spellcheck map to _solr.result:

  <Map key="_solr.result">
    ...
    <Map key="spellcheck">
      <Map key="rust">
        <Val key="just" type="long">1</Val>
        <Val key="bust" type="long">1</Val>
      </Map>
      <Val key="collation">Content:just</Val>
    </Map>
    ...
  </Map>

For each misspelled word there is a nested map containing the corrections, where the key is the corrected term and the value is the frequency of the term in the index. Returning the frequency must be turned on via spellcheck.extendedResults; otherwise it defaults to -1.

When collate is on then you can also find a full alternative query under the key collation.

The above XML snippets were generated with the following code:

    addSolrDoc("1",
      "This is a simple text without real meaning as i dont want to bust my behind for smth. with more sense.");
    addSolrDoc("2", "It is just used for testing.");
    indexAndCommit();
 
    // setup search
    final SpellCheckQueryConfigAdapter spellcheck = new SpellCheckQueryConfigAdapter();
    spellcheck.setSpellCheckCount(5);
    spellcheck.setSpellCheckExtendedResults(true);
    spellcheck.setSpellCheckCollate(true);
    _queryBuilder.setSpellCheckConfiguration(spellcheck);
    _queryBuilder.setQuery("Content:rust");

More Like This / What's related

Solr offers a feature to return related documents, which Solr calls More Like This (MLT). Two modes are supported:

  1. return the top N related documents for every item in the search result list, see [3]
  2. return them ad hoc for just one document, which uses its own request handler, see [4]

Obviously, the first variant is much more expensive than the second.

Both modes are supported through SMILA and configured very similarly. SMILA doesn't do anything special with the arguments you pass in with the record and hands them on to Solr as-is, except that it performs any necessary URL encoding for you. While you may assign specific data types to the parameters, this is not necessary and all values may be given as strings, as this is what is being passed on to Solr anyhow.

Which mode is active ultimately depends on your handler configuration in solrconfig.xml. However, we will assume here SMILA's default setup, which binds the MLT handler to /mlt and a normal query to /select.

Both modes share most of the MLT parameters but also need/support specific ones.

<record>
 
  <!-- this is the lucene query expression that is executed in both cases. -->  
  <Val key="query">euklid</Val>
  ...
  <Map key="_solr.query">
    <!-- this selects the solr request handler. set it to /mlt when you want to use the MLT handler -->
    <!-- <Val key="qt">/mlt</Val> -->
    <!-- determines the list of fields returned for both the normal results as well as the MLT results  -->  
    <Val key="fl" >Id,score,Size</Val>
    ...
    <Map key="moreLikeThis">
      <Val key="mlt" >true</Val>
      <Val key="mlt.fl" >Content</Val>
      <Val key="mlt.mindf">1</Val>
      <Val key="mlt.mintf">1</Val>
      ...
    </Map>
  </Map>
</record>


MLT Results without Handler

In this case Solr adds the moreLikeThis section at the same level as the normal response section, and you need to manually look up the MLT docs for each result item. SMILA, on the other hand, transforms the Solr result by converting the MLT information into a nested part of SMILA's result item, like so:

<Seq key="records">
  <Map>
    <Val key="_recordid">file:Euklid.html</Val>
    <Val key="_weight" type="double">0.7635468</Val>
    <Map key="_mlt.meta">
      <Val key="start" type="long">0</Val>
      <Val key="count" type="long">3</Val>
      <Val key="max_score" type="double">0.8115930557250977</Val>
    </Map>
    <Seq key='_mlt'>
      <Map>
        <Val key="_recordid">file:Archytas_von_Tarent_7185.html</Val>
        <Val key="_weight" type="double">0.5511907</Val>
        <Val key="Size" type="long">47934</Val>
        ...                
      </Map>
      <Map>
        <Val key="_recordid">file:Aristoxenos.html</Val>
        <Val key="_weight" type="double">0.44604447</Val>
        <Val key="Size" type="long">39332</Val>
        ...                
      </Map>
      ... 
    </Seq>
    ...
  </Map>
   ... 
</Seq>

This sample contains the Solr result item with the id file:Euklid.html. With MLT turned on, it now contains a nested _mlt Seq which holds the N related docs for that result item, each represented by a Map (MLT-Map). (Yes, this prevents you from having a Solr doc field of the same name and having it returned in this MLT mode.) The Val elements in each MLT-Map are defined by the list of fields in the fl parameter. But how do the _recordid and _weight Vals get in there if the value is actually Id,score,Size? SMILA defines the fields Id and score and automatically maps them to _recordid and _weight. Any other field that you include through fl is added as a Val element to the MLT result item with the same key as the field name, as shown for Size here. There is also the _mlt.meta Map that contains result info regarding the MLT result, such as the number of items, start (offset), and max_score. The keys of these values are the same as for the normal result.

MLT Results with Handler

The more common use case of MLT is to actually return the related docs for just one document due to performance considerations. This is done by making a request against the MLT handler itself.

The document for which you want the related docs is usually known, e.g. from a previous search whose rendered result list contains a link to fetch/show related docs. In this case the query just selects the given document by its Id (as shown in the example below). But you may also provide any other query here. However, if the query returns more than one document, it will select just one, depending on the other MLT parameters, and return only the related docs for that document.

The differences from the query record above are as follows:

<record>
 
  <!-- this is the lucene query to select a document by its Id. Note the escaping of the Id string! -->
  <Val key="query">Id:file\:Euklid.html</Val>
  ...
  <Map key="_solr.query">
    <!-- this select the solr MLT request handler. -->  
    <Val key="qt">/mlt</Val>
    ...
  </Map>
</record>

The results for such an MLT request are contained in the standard records Seq the same way that normal search results are returned, except that they signify MLT docs.

<Seq key="records">
  <Map>
    <Val key="_recordid">file:Archytas_von_Tarent_7185.html</Val>
    <Val key="_weight" type="double">0.5511907</Val>
    <Val key="Size" type="long">47934</Val>
  </Map>
  <Map>
    <Val key="_recordid">file:Aristoxenos.html</Val>
    <Val key="_weight" type="double">0.44604447</Val>
    <Val key="Size" type="long">39332</Val>
  </Map>
  ...
</Seq>
Note (help wanted): due to lack of need, returning mlt.interestingTerms has not been implemented yet.


In case of mlt.interestingTerms=details the result record will contain the following additional information:

<Map key="_solr.result">
    ...
    <Map key="interestingTerms">
      <Val key="Content:euklid" type="double">1.0</Val>
      <Val key="Content:geometrie" type="double">1.0</Val>
      ...
    </Map>
    ...
  </Seq>
</Map>

or in case of mlt.interestingTerms=list just:

<Map key="_solr.result">
    ...
    <Seq key="interestingTerms">
 
      <Val>euklid</Val>
      <Val>geometrie</Val>
      ...
    </Map>
    ...
  </Seq>
</Map>

SolrAdministrationHandler

The SolrAdministrationHandler provides a basic administration utility for Solr. It is implemented as a REST handler at http://localhost:8080/smila/solr/administration/[COMMAND]. Some commands require additional parameters like "?core=DefaultCore".

The following commands are available via GET request:

  • CORENAMES: Return a list of all available Solr cores in record xml format.
  • FIELDNAMES: Return a list of all field names for a given core.
    •  ?core=[core name]
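
For example (assuming the default SMILA HTTP port 8080 as in the handler URL above and the DefaultCore from the default configuration):

http://localhost:8080/smila/solr/administration/CORENAMES
http://localhost:8080/smila/solr/administration/FIELDNAMES?core=DefaultCore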

The following commands are available via POST request:

  • STATUS: Return a status report of either all registered cores or a given core.

?core=[core name] (optional)

  • CREATE: Create and register a new core based on a preexisting instanceDir with solrconfig.xml and schema.xml.

?name=[name of new core] &instanceDir=[path to the instanceDir of the core with solrconfig.xml and schema.xml]

  • RELOAD: Reload a core.

?core=[core name]

  • RENAME: Rename a core (in solr.xml).

?core=[old core name] &other=[new core name]

  • SWAP: Swap two core names.

?core=[core1] &other=[core2]

  • UNLOAD: Unload a core and optionally delete it.

?core=[core name] &deleteIndex=[true/false] (optional)

  • LOAD: Not implemented in Solr right now.

?core=[core name]

  • MERGEINDEXES: Merge the indexes of one or more source cores into a given core.

?core=[core name to merge to] &srcCore=[core names to merge from]

  • CLEARCORECACHE: Clear the core cache, either for all registered cores or for the one specified.

?core=[core name] (optional)

  • OPTIMIZE: Reorganize a core

?core=[core name]

  • PING: Ping a Solr core

?core=[core name]

Most of the commands are implemented with the aid of the Solr CoreAdmin. For detailed information about these commands see http://wiki.apache.org/solr/CoreAdmin.
