
SMILA/5 Minutes Tutorial

Revision as of 06:09, 13 November 2008 (8. Configure and run the web crawler.)

This page contains installation instructions for the SMILA application and helps you with your first steps in SMILA. Please note that in this tutorial as well as in the SMILA application you may sometimes come across the abbreviation EILF, which refers to the former name of the SMILA project.

1. Download and unpack the SMILA application.


2. Start the SMILA engine.

To start the SMILA engine open a terminal, navigate to the directory that contains the extracted files, and run the SMILA (EILF) executable. Wait until the engine is fully started. If everything is OK, you should see output similar to the one on the following screenshot:
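In a POSIX shell, this step can be sketched as follows. The path ~/smila is only an example of an extraction directory, and older builds name the executable EILF instead of SMILA:

```shell
# Launch the SMILA engine from the extraction directory.
# SMILA_DIR is an example path - adjust it to where you unpacked the archive.
SMILA_DIR="${SMILA_DIR:-$HOME/smila}"
if [ -x "$SMILA_DIR/SMILA" ]; then
  cd "$SMILA_DIR" && ./SMILA        # on Windows, run SMILA.exe instead
else
  echo "SMILA executable not found in $SMILA_DIR - check the path"
fi
```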


3. Check the log file.

You can check what's happening in the background by opening the SMILA log file in an editor. This file is named SMILA.log (EILF.log) and can be found in the same directory as the SMILA executable.
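For example, assuming a POSIX shell and the directory that contains the executable, you can inspect the log from the terminal (the fallback to EILF.log covers older builds):

```shell
# Pick whichever log file name this build uses.
LOG_FILE="SMILA.log"
[ -f "$LOG_FILE" ] || LOG_FILE="EILF.log"
if [ -f "$LOG_FILE" ]; then
  tail -n 20 "$LOG_FILE"     # or: tail -f "$LOG_FILE" to follow it live
else
  echo "no log file found - has the engine been started from this directory?"
fi
```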


4. Configure the crawling jobs.

Now that the SMILA engine is up and running, we can start the crawling jobs. Crawling jobs are managed via JMX, which means we can connect to SMILA with any JMX client. In this tutorial we will use JConsole, since it ships with the Sun Java distribution.

Start the jconsole executable from your JDK distribution. Once the client is up and running, select the PID of the SMILA process in the Connect window and click Connect.
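A sketch of locating the process from the command line; jps and jconsole ship with the JDK, and the exact main-class name shown for the SMILA process may differ:

```shell
# List running JVMs so you can pick the SMILA process PID.
if command -v jps >/dev/null 2>&1; then
  JDK_TOOLS=yes
  jps -l               # note the PID of the JVM running SMILA
  # jconsole <PID>     # then attach JConsole to that PID
else
  JDK_TOOLS=no
  echo "jps/jconsole not found - a full JDK (not just a JRE) is required"
fi
```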


Next, switch to the MBeans tab, expand the SMILA (EILF) node in the MBeans tree on the left side of the window, and click the org.eclipse.smila.connectivity.framework.CrawlerController node. This node is used to manage and monitor all crawling activities. Click a sub node and find the crawling attributes on the right pane.


5. Start the file system crawler.

To start a file system crawler, open the Operations tab on the right pane, type "file" into the text field next to the startCrawl button and click the button.


You should receive a message similar to the following, indicating that the crawler has been successfully started:


Now we can check the log file to see what happened:


6. Configure the file system crawler.

Maybe you have already noticed the following error message in your log output after starting the file system crawler:

2008-09-11 18:14:36,059 [Thread-13] ERROR impl.CrawlThread - org.eclipse.eilf.connectivity.framework.CrawlerCriticalException: Folder "c:\data" is not found

The error message above states that the crawler tried to index the folder c:\data but could not find it. To solve this, let's create a folder with sample data, say ~/tmp/data, put some dummy text files into it, and configure the file system crawler to index it. To point the crawler at the new directory instead of c:\data, open its configuration file at configuration/org.eclipse.smila.connectivity.framework/file and set the value of the BaseDir attribute to the absolute path of your sample directory. Don't forget to save the file.
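The sample folder can be created like this (POSIX shell; the folder and file names match the ones used later in this tutorial):

```shell
# Create a sample data directory with two dummy text files for the crawler.
mkdir -p "$HOME/tmp/data"
echo "this file contains some sample data" > "$HOME/tmp/data/sample.txt"
echo "a second dummy file with more data"  > "$HOME/tmp/data/file 1.txt"
ls -l "$HOME/tmp/data"
```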


Now start the file system crawler with JConsole again (see step 5). This time there should be something interesting in the log file:


It looks like something was indexed. In the next step we'll try to search on the index that was created.

7. Search on the indices.

To search on the indices created by the crawlers, point your browser to http://localhost:8080/AnyFinder/SearchForm. The names of all available indices are listed in the left column below the Indexlist header. Currently, there should be only one index in the list. Click its name to open the search form:
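Since the search form is served over plain HTTP, you can also verify from the command line that the engine is answering; this sketch assumes a local installation on the default port 8080:

```shell
# Check whether the AnyFinder search form responds.
if curl -sf http://localhost:8080/AnyFinder/SearchForm > /dev/null 2>&1; then
  SEARCH_FORM=reachable
else
  SEARCH_FORM=unreachable
fi
echo "search form: $SEARCH_FORM"
```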


Now let's search for a word that you know occurs in your dummy files. In this tutorial, we know that the word "data" occurs in a file named sample.txt.


There was also a file named file 1.txt in the sample folder. Let's check whether it was indexed. Type "1.txt" in the Filename field and click the search icon again:


8. Configure and run the web crawler.

Now that we know how to start and configure the file system crawler and how to search on indices, configuring and running the web crawler is straightforward: the configuration file of the web crawler is located in the configuration/org.eclipse.smila.connectivity.framework directory and is named web:


By default, the web crawler is configured to index a single predefined website. To change this, open the file in an editor of your choice and set the content of the <Seed> element to the desired website. Detailed information on configuring the web crawler is available on the Web crawler configuration page.
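For illustration, the change amounts to editing the content of the <Seed> element. The URL below is a placeholder, not the crawler's actual default:

```xml
<!-- Hypothetical excerpt from the "web" configuration file;
     only the content of <Seed> needs to change: -->
<Seed>http://www.example.org/</Seed>
```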

To start the crawling process, save the configuration file, open the Operations tab in JConsole again, type "web" into the text field next to the startCrawl button, and click the button.


Note that the Operations tab in JConsole also provides buttons to stop a crawler, to list the active crawlers, and to query the current status of a particular crawling job. As an example, the following screenshot shows the result of clicking the getActiveCrawlsStatus button while the web crawler is running:


When the web crawler's job is finished, you can search on the generated index just as described above for the file system crawler (see step 7).


5 Minutes for Changing Workflow

In the previous sections, all data collected by the crawlers was processed by the same workflow and indexed into the same index, test_index. It is possible to configure SMILA so that data from different data sources goes through different workflows and ends up in different indices. This requires a more advanced configuration than before, but it is still quite simple. Let's create an additional workflow for web crawler records, so that web crawler data is indexed into a separate index, say web_index.

1. Modify Listener rules.

First, let's modify the default add rule in the Listener and add another rule so that web crawler records are processed by a separate BPEL workflow. For more information about the Listener, please see the Listener section of the QueueWorker documentation. The Listener configuration is located at configuration/org.eclipse.eilf.connectivity.queue.worker/ListenerConfig.xml. Open that file and edit the <Condition> tag of the Default ADD Rule. The result should be as follows:

<Rule Name="Default ADD Rule" WaitMessageTimeout="10" Workers="2">
  <Source BrokerId="broker1" Queue="EILF.connectivity"/>
  <Condition>Operation='ADD' and NOT(DataSourceID LIKE 'web%')</Condition>
  <Synchronize Filter="no-filter"/>
  <Process Workflow="AddPipeline"/>
</Rule>

Now add the following new rule to this file:

<Rule Name="Web ADD Rule" WaitMessageTimeout="10" Workers="2">
  <Source BrokerId="broker1" Queue="EILF.connectivity"/>
  <Condition>Operation='ADD' and DataSourceID LIKE 'web%'</Condition>
  <Synchronize Filter="no-filter"/>
  <Process Workflow="AddWebPipeline"/>
</Rule>

Notice that we modified the condition of the Default ADD Rule so that it skips web crawler data; such data is now matched by the new Web ADD Rule instead. The Web ADD Rule specifies that web crawler data is processed by the AddWebPipeline workflow, so next we need to create that workflow.

2. Create workflow for the BPEL WorkflowProcessor

We need to add the AddWebPipeline workflow to the BPEL WorkflowProcessor. For more information, please check the BPEL WorkflowProcessor documentation. The BPEL WorkflowProcessor configuration files are located in the configuration/org.eclipse.eilf.processing.bpel/pipelines directory. There is a file addpipeline.bpel that defines the AddPipeline process. Let's create a file addwebpipeline.bpel that defines the AddWebPipeline process and put the following code into it:

<?xml version="1.0" encoding="utf-8" ?>
<!-- Note: several namespace URIs appear empty in this wiki rendering;
     copy them from the existing addpipeline.bpel in the same directory. -->
<process name="AddWebPipeline" targetNamespace=""
  xmlns="" xmlns:xsd=""
  xmlns:proc="" expressionLanguage="urn:oasis:names:tc:wsbpel:2.0:sublang:xpath1.0">
  <import location="processor.wsdl" namespace="" importType="" />
  <partnerLink name="Pipeline" partnerLinkType="proc:ProcessorPartnerLinkType" myRole="service" />
  <extension namespace="" mustUnderstand="no" />
  <variable name="request" messageType="proc:ProcessorMessage" />
  <receive name="start" partnerLink="Pipeline" portType="proc:ProcessorPortType" operation="process" variable="request"
    createInstance="yes" />
  <extensionActivity name="invokeMimeTypeIdentification">
    <proc:service name="MimeTypeIdentifier" />
    <proc:variables input="request" output="request" />
  </extensionActivity>
  <extensionActivity name="convertDocument">
    <proc:pipelet class="org.eclipse.eilf.processing.pipelets.aperture.AperturePipelet" />
    <proc:variables input="request" output="request" />
  </extensionActivity>
  <extensionActivity name="invokeLuceneService">
    <proc:service name="LuceneIndexService" />
    <proc:variables input="request" output="request" />
    <rec:An n="org.eclipse.eilf.lucene.LuceneIndexService">
      <rec:V n="indexName">web_index</rec:V>
      <rec:V n="executionMode">ADD</rec:V>
    </rec:An>
  </extensionActivity>
  <reply name="end" partnerLink="Pipeline" portType="proc:ProcessorPortType" operation="process" variable="request" />
  <exit />
</process>

Note that we use "web_index" as the index name for the LuceneIndexService in the code above:

<rec:An n="org.eclipse.eilf.lucene.LuceneIndexService">
  <rec:V n="indexName">web_index</rec:V>
  <rec:V n="executionMode">ADD</rec:V>
</rec:An>

We also need to add a description of our pipeline to the deploy.xml file located in the same directory. Add the following code to the end of deploy.xml, just before the closing </deploy> tag:

<process name="proc:AddWebPipeline">
  <provide partnerLink="Pipeline">
    <service name="proc:AddWebPipeline" port="ProcessorPort" />
  </provide>
</process>

Now we need to add our web_index to LuceneIndexService configuration.

3. LuceneIndexService configuration

For more information, please see the LuceneIndexService documentation.

Let's configure the index structure and search template of our web_index. Add the following code to the end of the data dictionary file under configuration/, just before the closing </AnyFinderDataDictionary> tag:

<Index Name="web_index">
  <Connection xmlns="" MaxConnections="5"/>
  <IndexStructure xmlns="" Name="web_index">
    <Analyzer ClassName="org.apache.lucene.analysis.standard.StandardAnalyzer"/>
    <IndexField FieldNo="8" IndexValue="true" Name="MimeType" StoreText="true" Tokenize="true" Type="Text"/>
    <IndexField FieldNo="7" IndexValue="true" Name="Size" StoreText="true" Tokenize="true" Type="Text"/>
    <IndexField FieldNo="6" IndexValue="true" Name="Extension" StoreText="true" Tokenize="true" Type="Text"/>
    <IndexField FieldNo="5" IndexValue="true" Name="Title" StoreText="true" Tokenize="true" Type="Text"/>
    <IndexField FieldNo="4" IndexValue="true" Name="Url" StoreText="true" Tokenize="false" Type="Text">
      <Analyzer ClassName="org.apache.lucene.analysis.WhitespaceAnalyzer"/>
    </IndexField>
    <IndexField FieldNo="3" IndexValue="true" Name="LastModifiedDate" StoreText="true" Tokenize="false" Type="Text"/>
    <IndexField FieldNo="2" IndexValue="true" Name="Path" StoreText="true" Tokenize="true" Type="Text"/>
    <IndexField FieldNo="1" IndexValue="true" Name="Filename" StoreText="true" Tokenize="true" Type="Text"/>
    <IndexField FieldNo="0" IndexValue="true" Name="Content" StoreText="true" Tokenize="true" Type="Text"/>
    <Field FieldNo="0" Name="ID"/>
  </IndexStructure>
  <Configuration xmlns="" xmlns:xsi="" xsi:schemaLocation=" ../xml/DataDictionaryConfiguration.xsd">
    <Field FieldNo="8">
      <FieldConfig Constraint="optional" Weight="1" xsi:type="FTText">
        <Parameter xmlns="" Operator="OR" Tolerance="exact"/>
      </FieldConfig>
    </Field>
    <Field FieldNo="7">
      <FieldConfig Constraint="optional" Weight="1" xsi:type="FTText">
        <Parameter xmlns="" Operator="OR" Tolerance="exact"/>
      </FieldConfig>
    </Field>
    <Field FieldNo="6">
      <FieldConfig Constraint="optional" Weight="1" xsi:type="FTText">
        <Parameter xmlns="" Operator="OR" Tolerance="exact"/>
      </FieldConfig>
    </Field>
    <Field FieldNo="5">
      <FieldConfig Constraint="optional" Weight="1" xsi:type="FTText">
        <Parameter xmlns="" Operator="OR" Tolerance="exact"/>
      </FieldConfig>
    </Field>
    <Field FieldNo="4">
      <FieldConfig Constraint="optional" Weight="1" xsi:type="FTText">
        <Parameter xmlns="" Operator="OR" Tolerance="exact"/>
      </FieldConfig>
    </Field>
    <Field FieldNo="3">
      <FieldConfig Constraint="optional" Weight="1" xsi:type="FTText">
        <Parameter xmlns="" Operator="OR" Tolerance="exact"/>
      </FieldConfig>
    </Field>
    <Field FieldNo="2">
      <FieldConfig Constraint="optional" Weight="1" xsi:type="FTText">
        <Parameter xmlns="" Operator="OR" Tolerance="exact"/>
      </FieldConfig>
    </Field>
    <Field FieldNo="1">
      <FieldConfig Constraint="optional" Weight="1" xsi:type="FTText">
        <Parameter xmlns="" Operator="OR" Tolerance="exact"/>
      </FieldConfig>
    </Field>
    <Field FieldNo="0">
      <FieldConfig Constraint="required" Weight="1" xsi:type="FTText">
        <NodeTransformer xmlns="" Name="urn:ExtendedNodeTransformer">
          <ParameterSet xmlns=""/>
        </NodeTransformer>
        <Parameter xmlns="" Operator="AND" Tolerance="exact"/>
      </FieldConfig>
    </Field>
  </Configuration>
  <Result Name="">
    <ResultField FieldNo="8" Name="MimeType"/>
    <ResultField FieldNo="7" Name="Size"/>
    <ResultField FieldNo="6" Name="Extension"/>
    <ResultField FieldNo="5" Name="Title"/>
    <ResultField FieldNo="4" Name="Url"/>
    <ResultField FieldNo="3" Name="LastModifiedDate"/>
    <ResultField FieldNo="2" Name="Path"/>
    <ResultField FieldNo="1" Name="Filename"/>
  </Result>
  <HighlightingResult Name="">
    <HighlightingResultField FieldNo="0" Name="Content" xsi:type="HLTextField">
      <HighlightingTransformer Name="urn:Sentence">
        <ParameterSet xmlns="">
          <!-- Parameter values are not shown in this wiki rendering. -->
          <Parameter Name="MaxLength" xsi:type="Integer"/>
          <Parameter Name="MaxHLElements" xsi:type="Integer"/>
          <Parameter Name="MaxSucceedingCharacters" xsi:type="Integer"/>
          <Parameter Name="SucceedingCharacters" xsi:type="String"/>
          <Parameter Name="SortAlgorithm" xsi:type="String"/>
          <Parameter Name="TextHandling" xsi:type="String"/>
        </ParameterSet>
      </HighlightingTransformer>
      <HighlightingParameter xmlns=""/>
    </HighlightingResultField>
  </HighlightingResult>
</Index>

Now we need to map attribute and attachment names to the Lucene field numbers ("FieldNo") defined in DataDictionary.xml. Open the configuration/org.eclipse.eilf.lucene/Mappings.xml file and add the following code to the end of the file, just before the closing </Mappings> tag:

<Mapping indexName="web_index">
  <Attribute name="Filename" fieldNo="1" />
  <Attribute name="Path" fieldNo="2" />
  <Attribute name="LastModifiedDate" fieldNo="3" />
  <Attribute name="Url" fieldNo="4" />
  <Attribute name="Title" fieldNo="5" />
  <Attribute name="Extension" fieldNo="6" />
  <Attribute name="Size" fieldNo="7" />
  <Attribute name="MimeType" fieldNo="8" />
  <Attachment name="Text" fieldNo="0" />
</Mapping>

4. Put it all together

We have now finished configuring SMILA to use separate workflows for the file system and web crawlers and to index the data from these crawlers into different indices. Here is what we have done so far:

  1. Modified the Listener rules to use different workflows for the file system and web crawlers.
  2. Created a new BPEL workflow for the web crawler.
  3. Added the web crawler index to the Lucene configuration.

Now we can start SMILA again and see what happens when we start the web crawler:

[Screenshot: Web index.png]

The web crawler data is now indexed into web_index as expected. We can also search on web_index from the browser:

[Screenshot: Web index-search.png]

Configuration overview

SMILA configuration files are located in the configuration directory of the SMILA application. The following figure shows the configuration files relevant to this tutorial, arranged by SMILA component and data lifecycle. Component names are shown in black; directories and configuration file names are shown in blue.

