SMILA/5 Minutes Tutorial

On this page we describe the necessary steps to install and run SMILA, create a search index of the SMILA Eclipsepedia pages, and search them.

If you run into trouble or your results differ from what is described here, check the FAQ.

Supported Platforms

The following platforms are supported:

  • Linux 32 Bit
  • Linux 64 Bit
  • Mac OS X 64 Bit (Cocoa)
  • Windows 32 Bit
  • Windows 64 Bit

Download and start SMILA

Download the SMILA package matching your operating system and unpack it to an arbitrary folder. This will result in the following folder structure:

/<SMILA>
  /configuration    
  ...
  SMILA
  SMILA.ini

Preconditions

To be able to start SMILA, check the following preconditions first:

JRE

You will have to provide a JRE executable to be able to run SMILA. The JVM version should be Java 7 (or newer). You may either:

  • add the path of your local JRE executable to the PATH environment variable
    or
  • add the argument -vm <path/to/jre/executable> right at the top of the file SMILA.ini.
    Make sure that -vm is indeed the first argument in the file, that there is a line break after it and that there are no leading or trailing blanks. It should look similar to the following:
-vm
d:/java/jre7/bin/java
...
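
To check which Java executable is on your PATH and which version it has, you can run (a quick sanity check, not part of the original tutorial):

 java -version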

Linux

When using Linux, make sure that the file SMILA has executable permissions. If not, set the permission by running the following command in a console:

chmod +x ./SMILA

MacOS

When using macOS, switch to SMILA.app/Contents/MacOS/ and set the permission by running the following command in a console:

chmod a+x ./SMILA

Start SMILA

To start SMILA, simply start the SMILA executable.
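
Alternatively, open a console, navigate to the folder where you unpacked SMILA, and call the executable from there, e.g. on Linux:

 ./SMILA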

You can see that SMILA has fully started once the following line is printed on the OSGi console:

 ...
 HTTP server started successfully on port 8080

and you can access SMILA's REST API at http://localhost:8080/smila/.
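
A quick way to verify this from the command line is to request that URL with curl (assuming curl is installed; any REST client works just as well):

 curl http://localhost:8080/smila/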

If it doesn't work, check the log file (SMILA.log) for possible errors.

Stop SMILA

To stop SMILA, type exit into the OSGi console and press Enter:

 osgi> exit

Start Indexing Job and Crawl Import

Now we're going to crawl and process the SMILA Eclipsepedia pages. Finally, we index and search them using the embedded Solr integration.

Install a REST client

We're going to use SMILA's REST API to start and stop jobs, so you need a REST client. If you don't have a suitable REST client yet, you will find a selection of recommended browser plugins in REST Tools.
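
If you prefer the command line, curl can serve as the REST client for everything in this tutorial. As a rough sketch (curl is our assumption here, not a tool the tutorial prescribes), the two request forms used below look like this:

 # start or finish a job run (POST, with an empty or JSON body)
 curl -X POST http://localhost:8080/smila/jobmanager/jobs/<jobname>/

 # fetch monitoring information
 curl http://localhost:8080/smila/jobmanager/jobs/<jobname>/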

Start the indexing job run

We are going to start the predefined indexing job "indexUpdate" based on the predefined asynchronous workflow with the same name. This indexing job will process the imported data.

Use your favorite REST Client to start a job run for the job "indexUpdate":

 POST http://localhost:8080/smila/jobmanager/jobs/indexUpdate/

Your REST client will show a result like this:

{
  "jobId" : "20110901-121343613053",
  "url" : "http://localhost:8080/smila/jobmanager/jobs/indexUpdate/20110901-121343613053/"
}
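
If you start the job run with curl instead, you can capture the returned job run ID directly in a shell variable; this sketch assumes the jq JSON processor is installed:

 JOB_ID=$(curl -s -X POST http://localhost:8080/smila/jobmanager/jobs/indexUpdate/ | jq -r .jobId)
 echo $JOB_ID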

You will need the job run ID ("jobId") later on to finish the job run. It can also be found via the monitoring API for the job:

 GET http://localhost:8080/smila/jobmanager/jobs/indexUpdate/

In the SMILA.log file you will see a message like this:

 INFO ... internal.JobRunEngineImpl   - started job run '20110901-121343613053' for job 'indexUpdate'

Further information: The "indexUpdate" workflow uses the ScriptProcessorWorker, which executes the JavaScript "add.js" workflow. So the synchronous script call is embedded in the asynchronous "indexUpdate" workflow. For more details about the "indexUpdate" workflow and job definitions, see SMILA/configuration/org.eclipse.smila.jobmanager/workflows.json and jobs.json. For more information about job management in general, please check the JobManager documentation.

Start the crawl job run

Now that the indexing job is running we need to push some data to it. There is a predefined job for importing the SMILA Wiki pages which we are going to start right now.

 POST http://localhost:8080/smila/jobmanager/jobs/crawlSmilaWiki/

This starts the job crawlSmilaWiki, which crawls the SMILA Wiki starting with http://wiki.eclipse.org/SMILA and (by applying the configured filters) following only links that have the same prefix. All pages crawled matching this prefix will be pushed to the import job.

Both job runs can be monitored via SMILA's REST API:

  • All jobs: http://localhost:8080/smila/jobmanager/jobs/
  • Crawl job: http://localhost:8080/smila/jobmanager/jobs/crawlSmilaWiki
  • Import job: http://localhost:8080/smila/jobmanager/jobs/indexUpdate

The crawling of the SMILA Wiki pages should take some time. Once all pages are processed, the status of the crawlSmilaWiki job run will change to SUCCEEDED. You can continue with the SMILA search (next chapter) to find out whether some of the pages have already made their way into the Solr index.
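
If you would rather wait for the crawl on the command line, a simple polling loop like the following works; it assumes curl and merely greps the monitoring response for the final state string (note that it loops forever if the run fails, so check the status manually in that case):

 while ! curl -s http://localhost:8080/smila/jobmanager/jobs/crawlSmilaWiki | grep -q SUCCEEDED; do
   sleep 10
 done
 echo "crawl finished"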

Further information: For more information about importing and crawl jobs, please see SMILA Importing. For more information on jobs and tasks in general, visit the JobManager manual.

Search the index

To have a look at the index state, e.g. how many documents are already indexed, call:

 http://localhost:8080/solr/admin/

To search the created index, point your browser to:

 http://localhost:8080/SMILA/search

There are currently two stylesheets, selectable via the links in the upper left corner of the header bar: the Default stylesheet shows a reduced search form with text fields like Query, Result Size, and Index, adequate for querying the full-text content of the indexed documents. The Advanced stylesheet provides a more detailed search form with text fields for metadata search, for example Path, MimeType, Filename, and other document attributes.

To use the Default Stylesheet:

  1. Point your browser to http://localhost:8080/SMILA/search.
  2. Enter the search term(s) into the Query text field (e.g. "SMILA").
  3. Click OK to send your query to SMILA.

To use the Advanced Stylesheet:

  1. Point your browser to http://localhost:8080/SMILA/search.
  2. Click Advanced to switch to the detailed search form.
  3. For example, to find a file by its name, enter the file name into the Filename text field, then click OK to submit your search.

Stop the indexing job run

Although there's no need for it, we can finish our previously started indexing job run via the REST client now (replace <job-id> with the job run ID you got when you started the job run):

 POST http://localhost:8080/smila/jobmanager/jobs/indexUpdate/<job-id>/finish  
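
With curl, the same request looks like this; $JOB_ID is the hypothetical shell variable holding the job run ID captured earlier:

 curl -X POST http://localhost:8080/smila/jobmanager/jobs/indexUpdate/$JOB_ID/finish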

You can monitor the job run via your browser to see that it has finished successfully:

 GET http://localhost:8080/smila/jobmanager/jobs/indexUpdate/<job-id>

In the SMILA.log file you will see messages like this:

INFO ... internal.JobRunEngineImpl   - finish called for job 'indexUpdate', run '20110901-141457584011'
...
INFO ... internal.JobRunEngineImpl   - Completing job run '20110901-141457584011' for job 'indexUpdate' with final state SUCCEEDED



Congratulations, you've just finished the tutorial!

You crawled the SMILA Wiki, indexed the pages and searched through them. For more, just continue with the chapter below or visit the SMILA Documentation.

Further steps

Crawl the filesystem

SMILA also has a predefined job to crawl the file system ("crawlFilesystem"), but you will have to either adapt it to point to a valid folder in your filesystem or create your own job.

We will go with the second option, because it does not require stopping and restarting SMILA.

Create your Job

POST the following job description to SMILA's Job API. Adapt the rootFolder parameter to point to an existing folder on your machine where you have placed some files (e.g. plain text, office docs or HTML files). If your path includes backslashes, escape them with an additional backslash, e.g. c:\\data\\files.

POST http://localhost:8080/smila/jobmanager/jobs/
{
  "name": "crawlFilesAtData",
  "workflow": "fileCrawling",
  "parameters": {
    "tempStore": "temp",
    "dataSource": "file",
    "rootFolder": "/data",
    "jobToPushTo": "indexUpdate",
    "mapping": {
      "fileContent": "Content",
      "filePath": "Path",
      "fileName": "Filename",
      "fileExtension": "Extension",
      "fileLastModified": "LastModifiedDate"
    }
  }
}
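
If you save the job description above to a file, e.g. job.json (a hypothetical file name), you can create the job with curl:

 curl -X POST -H "Content-Type: application/json" -d @job.json http://localhost:8080/smila/jobmanager/jobs/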

Hint: Not all file formats are supported by SMILA out-of-the-box. Have a look here for details.

Start your jobs

  • Start the indexUpdate job (see Start the indexing job run) if you have already stopped it. (If it is still running, that's fine.)

  POST http://localhost:8080/smila/jobmanager/jobs/indexUpdate/

  • Start your crawlFilesAtData job. This new job behaves just like the web crawling job we used above, but its run time might be shorter, depending on how much data is actually in your rootFolder.

 POST http://localhost:8080/smila/jobmanager/jobs/crawlFilesAtData/

Search for your new data

  1. After the job run has finished, wait a bit, then check whether the data has been indexed (see Search the index).
  2. It is also a good idea to check the log file for errors; a quick command-line check is sketched below.
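
For the log check in step 2, a quick command-line scan (assuming a Unix-like shell) could be:

 grep -iE "error|exception" SMILA.log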

5 more minutes to change the workflow

The 5 more minutes to change the workflow tutorial shows how you can configure the system so that data from different data sources will go through different workflows and scripts and will be indexed into different indices.
