{{note|This is deprecated for SMILA 1.0, the connectivity framework has been replaced by the new [[SMILA/Documentation#Importing | Importing framework]].}}
 
[[Image:CrawlerWorkflow.png|thumb|Crawler Workflow]]
  
== Overview ==
  
A crawler gathers information about resources, both their content and metadata of interest such as size or MIME type. SMILA currently comes with three types of crawlers, each suited to a different data source type, namely a Web crawler, a JDBC database crawler, and a File System crawler, allowing information to be gathered from the internet, from databases, or from files on a hard disk. Furthermore, the Connectivity Framework provides an API that allows developers to create their own crawlers.
  
== API ==
  
A crawler has to implement two interfaces: <tt>Crawler</tt> and <tt>CrawlerCallback</tt>. The easiest way to achieve this is to extend the abstract base class <tt>AbstractCrawler</tt> located in the bundle <tt>org.eclipse.smila.connectivity.framework</tt>. This class already contains handling for the crawler's ID and an OSGi service activation method. The crawler method <tt>getNext()</tt> is designed to return an array of <tt>DataReference</tt> objects, as this reduces the number of method calls. In general there are no restrictions on the size of the array; in fact, the size may vary between calls. This allows a crawler to internally implement a producer/consumer pattern. A <tt>Crawler</tt> implementation that is restricted to working as a plain iterator can enforce this by always returning an array of size one.
  
Javadoc: [http://build.eclipse.org/rt/smila/javadoc/current/org/eclipse/smila/connectivity/framework/Crawler.html org.eclipse.smila.connectivity.framework.Crawler]
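
The following sketch illustrates this contract in plain Java. It uses simplified stand-in types (<tt>SimpleCrawler</tt>, <tt>SimpleDataReference</tt>) rather than the real SMILA interfaces, whose exact signatures (declared exceptions, additional callback methods) can be found in the Javadoc linked above:
<source lang="java">
// Illustrative stand-ins only; the real Crawler/DataReference interfaces declare more methods
// and checked exceptions (see the Javadoc above).
interface SimpleDataReference {
    String getId();    // identifier used for delta indexing
    String getHash();  // hash token used for delta indexing
}

interface SimpleCrawler {
    // Returns the next batch of references, or null when the data source is exhausted.
    // The batch size is not fixed and may vary between calls.
    SimpleDataReference[] getNext();
}

// A crawler restricted to iterator-style behaviour: every batch contains exactly one reference.
class IteratorStyleCrawler implements SimpleCrawler {
    private final java.util.Iterator<SimpleDataReference> source;

    IteratorStyleCrawler(java.util.Iterator<SimpleDataReference> source) {
        this.source = source;
    }

    @Override
    public SimpleDataReference[] getNext() {
        return source.hasNext() ? new SimpleDataReference[] { source.next() } : null;
    }
}
</source>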
  
== Architecture ==
  
Crawlers are managed and instantiated by the CrawlerController. The CrawlerController communicates with the crawler only via the interface <tt>Crawler</tt>. The crawler's <tt>getNext()</tt> method returns <tt>DataReference</tt> objects to the CrawlerController. <tt>DataReference</tt> is also an interface, implemented by the class <tt>org.eclipse.smila.connectivity.framework.util.internal.DataReferenceImpl</tt>. A DataReference, as the name suggests, is only a reference to data provided by the crawler. This is mainly a performance measure: due to the use of delta indexing it may not be necessary to transfer all the data from the crawler to the CrawlerController and on to the ConnectivityManager. Therefore a DataReference contains only the minimum data needed to perform delta indexing: an ID and a hash token. To access the whole object, it provides the method <tt>getRecord()</tt>, which returns a complete Record object containing ID, attributes, annotations, and attachments. To create the Record object, the DataReference communicates with the crawler via the interface <tt>CrawlerCallback</tt>, as each DataReference holds a reference to the crawler that created it.
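
As a thought model for this lazy data transfer, the following simplified sketch shows a reference object that carries only an ID and a hash token and asks the crawler that created it for the complete record on demand. All names in the sketch are illustrative assumptions, not the actual SMILA classes:
<source lang="java">
// Illustrative sketch (assumed names): a reference that defers record creation to its crawler.
class LazyReferenceSketch {

    interface RecordCallback {           // conceptual stand-in for CrawlerCallback
        Object fetchRecord(String id);   // builds the complete record (attributes, annotations, attachments)
    }

    static class DataRef {               // conceptual stand-in for DataReference
        private final String id;         // cheap: used by the CrawlerController for delta indexing
        private final String hash;       // cheap: used by the CrawlerController for delta indexing
        private final RecordCallback crawler;  // back-reference to the crawler that created this DataRef

        DataRef(String id, String hash, RecordCallback crawler) {
            this.id = id;
            this.hash = hash;
            this.crawler = crawler;
        }

        String getId()   { return id; }
        String getHash() { return hash; }

        // Expensive: only invoked when delta indexing decides the record really has to be transferred.
        Object getRecord() {
            return crawler.fetchRecord(id);
        }
    }
}
</source>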

The following chart shows the crawler architecture and how data is shared with the CrawlerController:
[[Image:Crawler Architecture.png|Crawler Architecture]]

The package <tt>org.eclipse.smila.connectivity.framework.util</tt> provides some factory classes for crawlers to create IDs, hashes, and DataReference objects. Further utility classes that allow an easy realization of crawlers using an iterator or producer/consumer pattern are planned.
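
To illustrate the producer/consumer variant mentioned above, here is a small sketch in plain Java (no SMILA API; plain strings stand in for DataReference objects): a background thread walks the data source and fills a bounded queue, while <tt>getNext()</tt> drains whatever is buffered into batches of varying size.
<source lang="java">
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;

// Illustrative producer/consumer crawler skeleton; strings stand in for DataReference objects.
class ProducerConsumerCrawlerSketch {
    private final BlockingQueue<String> queue = new ArrayBlockingQueue<>(100);
    private volatile boolean producerDone = false;

    ProducerConsumerCrawlerSketch(Iterable<String> dataSource) {
        Thread producer = new Thread(() -> {
            try {
                for (String item : dataSource) {
                    queue.put(item);          // blocks when the consumer falls behind
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            } finally {
                producerDone = true;
            }
        });
        producer.setDaemon(true);
        producer.start();
    }

    // Returns the next batch (size varies), or null when the data source is exhausted.
    String[] getNext() throws InterruptedException {
        List<String> batch = new ArrayList<>();
        while (batch.isEmpty()) {
            String first = queue.poll(100, TimeUnit.MILLISECONDS);
            if (first != null) {
                batch.add(first);
                queue.drainTo(batch);         // grab whatever else is already buffered
            } else if (producerDone && queue.isEmpty()) {
                return null;                  // nothing buffered and nothing more to come
            }
        }
        return batch.toArray(new String[0]);
    }
}
</source>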

== Configuration ==

A crawler is started with a specific, named configuration that defines what information is to be crawled (e.g. content, kinds of metadata) and where to find that data (e.g. a file system path or a JDBC connection string). See the documentation of each crawler for details on its configuration options.

Each crawler can define its own configuration because crawlers need different information to execute specific crawl jobs. For example, a JDBC crawler needs to know which database and which table should be crawled and which columns should be returned.

Therefore the crawler developer defines a schema that contains all required information. This schema is based on a root schema provided by the SMILA framework, which declares the generic frame that has to be used to send DataSourceConnectionConfigs (crawl tasks) to the SMILA framework.
The root schema can be found in:
configuration\org.eclipse.smila.connectivity.framework.schema/schemas/RootDataSourceConnectionConfigSchema.xsd

The root schema looks as follows (a complete illustrative configuration example is shown after the element descriptions below):

[[Image:RootdatasourceConnectionConfig.png]]

;DataSourceID
:A descriptive string that is used throughout the framework to separate and address information belonging to the same crawl job.

;SchemaID
:The SchemaID contains the full bundle name of the crawler (e.g. File System crawler: org.eclipse.smila.connectivity.framework.crawler.filesystem).<br /> The SMILA framework uses this information to obtain the schema for validating the DataSourceConnectionConfig that is to be executed.

;DataConnectionID
:This tag defines whether an agent or a crawler should be used. It contains one of the following tags:
:*<b>Agent</b>
:*<b>Crawler</b>
:The name used in these tags is the service name of the agent/crawler.

;RecordBuffer
:Here you can specify settings to optimize record transfer to the ConnectivityManager:
:*Size - the number of records to be sent to the ConnectivityManager in one block. Default is 1.
:*FlushInterval - the time interval in milliseconds after which the current elements of the RecordBuffer are sent to the ConnectivityManager. Default is 1000.

;DeltaIndexing
:Configuration options for delta indexing, interpreted by the CrawlerController. The following values are supported:
:*<tt>full</tt> - delta indexing is fully activated. Records are checked to determine whether they need to be updated, entries for new or updated records are added to the DeltaIndexingManager, and delta-delete is executed if no error occurred.
:*<tt>additive</tt> - as <tt>full</tt>, but delta-delete is not executed.
:*<tt>initial</tt> - for an initial import into an empty index, or for a new source in an existing index, performance can be optimized by NOT checking whether a record needs to be updated (we know that all records are new) while still adding an entry to the DeltaIndexingManager for each record. This allows later runs using <tt>full</tt> or <tt>additive</tt> to make use of delta indexing information.
:*<tt>disabled</tt> - delta indexing is fully disabled. No checks are done, no entries are created or updated, and no delta-delete is executed. Later runs cannot benefit from delta indexing.

;CompoundHandling
:Configuration options for compound handling. See [[SMILA/Documentation/CompoundManagement#Configuration|CompoundManagement]] for details.

;Attributes
:Placeholder for each crawler's attribute definitions. <br />Each crawler defines here which attributes it can return. An attribute is a specific piece of information about an entry in the data source being crawled (e.g. in a file system an entry is a file, and attributes of a file are its size, content, etc.).

;Process
:Placeholder for tags that the crawler developer can define. <br /> Inside this tag, all information needed to start a crawling process is passed for a crawl task, for example start URLs or folders and which entries should be crawled (e.g. queries, wildcards, includes/excludes).
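
For orientation, here is a minimal, purely illustrative <tt>DataSourceConnectionConfig</tt> skeleton assembled from the elements described above. The values (data source ID, crawler service name) and everything inside <tt>Attributes</tt> and <tt>Process</tt> are assumptions for a fictitious file system source, and the exact notation (XML attribute vs. nested element) is defined by the root schema, so refer to the documentation of the crawler you actually use for a copy-and-paste-ready example.
<source lang="xml">
<DataSourceConnectionConfig>
  <DataSourceID>myFileSource</DataSourceID>
  <SchemaID>org.eclipse.smila.connectivity.framework.crawler.filesystem</SchemaID>
  <DataConnectionID>
    <Crawler>FileSystemCrawler</Crawler>
  </DataConnectionID>
  <RecordBuffer Size="20" FlushInterval="3000" />
  <DeltaIndexing>full</DeltaIndexing>
  <Attributes>
    <!-- crawler-specific: the attributes the crawler should return, e.g. file name, size, content -->
  </Attributes>
  <Process>
    <!-- crawler-specific: where to start crawling and which entries to include or exclude -->
  </Process>
</DataSourceConnectionConfig>
</source>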
 
=== Further information ===
# See the documentation of each crawler for its specific attribute and process tags.
# [[SMILA/Development Guidelines/How to implement a crawler|How to implement a crawler]]
  
 
== Crawler lifecycle ==
  
The CrawlerController manages the life cycle of the crawler (e.g. start, stop, abort) and may instantiate multiple crawlers concurrently, even of the same type. This is realised by using OSGi ComponentFactories: a crawler does not automatically start as an OSGi service, but only registers a crawler ComponentFactory with the CrawlerController. Via this ComponentFactory, the CrawlerController can instantiate crawlers on demand.
 
Here is a template for a crawler OSGi component definition:
<source lang="xml">
<component name="%CRAWLER_TYPE%" immediate="false" factory="CrawlerFactory">
    <implementation class="%CRAWLER_IMPLEMENTATION_CLASS%" />
    <service>
        <provide interface="org.eclipse.smila.connectivity.framework.Crawler"/>
    </service>
</component>
</source>
  
 
== See also ==
  
More information about the different crawlers can be found here:
* [[SMILA/Documentation/Filesystem Crawler|File System crawler]]
* [[SMILA/Documentation/Web Crawler|Web crawler]]
* [[SMILA/Documentation/JDBC Crawler|JDBC crawler]]
  
 
[[Category:SMILA]]
[[Category:Crawler]]
 
