
Crawler Workflow

Overview

A Crawler gathers information about resources: both their content and metadata of interest such as size or MIME type. SMILA currently comes with three types of crawlers, each for a different kind of data source, namely the WebCrawler, the JDBC DatabaseCrawler, and the FileSystemCrawler, which gather information from the internet, from databases, and from files on a hard disk, respectively. Furthermore, the Connectivity Framework provides an API for developers to create their own crawlers.


API

A Crawler has to implement two interfaces: Crawler and CrawlerCallback. The easiest way to achieve this is to extend the abstract base class AbstractCrawler located in bundle org.eclipse.smila.connectivity.framework. This class already contains handling for the crawler's Id and an OSGi service activate method. The Crawler method getNext() is designed to return an array of DataReference objects, as this reduces the number of method calls. In general there are no restrictions on the size of the array; in fact the size may vary between calls. This allows a Crawler to internally implement a producer/consumer pattern. A Crawler implementation that is restricted to work as an iterator only can enforce this by always returning an array of size one. A minimal implementation sketch is shown after the interface listings below.

public interface Crawler {
 
  /**
   * Returns the ID of this Crawler.
   * 
   * @return a String containing the ID of this Crawler
   * 
   * @throws CrawlerException
   *           if any error occurs
   */
  String getCrawlerId() throws CrawlerException;
 
  /**
   * Returns an array of DataReference objects. The size of the returned array may vary from call to call. The maximum
   * size of the array is determined by configuration or by the implementation class.
   * 
   * @return an array of DataReference objects or null, if no more DataReference exist
   * 
   * @throws CrawlerException
   *           if any error occurs
   * @throws CrawlerCriticalException
   *           the crawler critical exception
   */
  DataReference[] getNext() throws CrawlerException, CrawlerCriticalException;
 
  /**
   * Initialize.
   * 
   * @param config
   *          the DataSourceConnectionConfig
   * 
   * @throws CrawlerException
   *           the crawler exception
   * @throws CrawlerCriticalException
   *           the crawler critical exception
   */
  void initialize(DataSourceConnectionConfig config) throws CrawlerException, CrawlerCriticalException;
 
  /**
   * Ends crawl, allowing the Crawler implementation to close any open resources.
   * 
   * @throws CrawlerException
   *           if any error occurs
   */
  void close() throws CrawlerException;
 
}


/**
 * A callback interface to access metadata and attachments of crawled data.
 */
public interface CrawlerCallback {
  /**
   * Returns the MObject for the given id.
   * 
   * @param id
   *          the record id
   * @return the MObject
   * @throws CrawlerException
   *           if any non critical error occurs
   * @throws CrawlerCriticalException
   *           if any critical error occurs           
   */
  MObject getMObject(Id id) throws CrawlerException, CrawlerCriticalException;
 
  /**
   * Returns an array of String[] containing the names of the available attachments for the given id.
   * 
   * @param id
   *          the record id
   * @return an array of String[] containing the names of the available attachments
   * @throws CrawlerException
   *           if any non critical error occurs
   * @throws CrawlerCriticalException
   *           if any critical error occurs  
   */
  String[] getAttachmentNames(Id id) throws CrawlerException, CrawlerCriticalException;
 
  /**
   * Returns the attachment for the given Id and name pair.
   * 
   * @param id
   *          the record id
   * @param name
   *          the name of the attachment
   * @return a byte[] containing the attachment
   * @throws CrawlerException
   *           if any non critical error occurs
   * @throws CrawlerCriticalException
   *           if any critical error occurs  
   */
  byte[] getAttachment(Id id, String name) throws CrawlerException, CrawlerCriticalException;
 
  /**
   * Disposes the record with the given Id.
   * 
   * @param id
   *          the record id
   */
  void dispose(Id id);
}

For completeness, here is the interface DataReference:

/**
 * A proxy object interface to a record provided by a Crawler. The object contains the Id and the hash value of the
 * record but no additional data. The complete record can be loaded via the CrawlerCallback.
 */
public interface DataReference {
  /**
   * Returns the Id of the referenced record.
   * 
   * @return the Id of the referenced record
   */
  Id getId();
 
  /**
   * Returns the hash of the referenced record as a String.
   * 
   * @return the hash of the referenced record as a String
   */
  String getHash();
 
  /**
   * Returns the complete Record object via the CrawlerCallback.
   * 
   * @return the complete record
   * @throws CrawlerException
   *           if any non critical error occurs
   * @throws CrawlerCriticalException
   *           if any critical error occurs
   * @throws InvalidTypeException
   *           if the hash attribute cannot be set
   */
  Record getRecord() throws CrawlerException, CrawlerCriticalException, InvalidTypeException;
 
  /**
   * Disposes the referenced record object.
   */
  void dispose();
}
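
To make the intended use of these interfaces more concrete, here is a minimal sketch of an iterator-style crawler. It is not taken from the SMILA code base: it implements only the Crawler interface for brevity (a real crawler would also implement CrawlerCallback or extend AbstractCrawler), and the package names in the imports as well as the way DataReferences are produced are assumptions, not verified API.

// Minimal sketch of an iterator-style crawler. Package names of the imported
// framework types are assumptions based on the bundle names mentioned above.
import java.util.LinkedList;
import java.util.Queue;
 
import org.eclipse.smila.connectivity.framework.Crawler;
import org.eclipse.smila.connectivity.framework.CrawlerCriticalException;
import org.eclipse.smila.connectivity.framework.CrawlerException;
import org.eclipse.smila.connectivity.framework.DataReference;
import org.eclipse.smila.connectivity.framework.schema.config.DataSourceConnectionConfig;
 
public class SingleStepCrawler implements Crawler {
 
  /** DataReferences produced by the data-source specific part of the crawler (omitted here). */
  private final Queue<DataReference> _references = new LinkedList<DataReference>();
 
  /** the Id of this crawler instance. */
  private String _crawlerId;
 
  public String getCrawlerId() throws CrawlerException {
    return _crawlerId;
  }
 
  public void initialize(final DataSourceConnectionConfig config)
    throws CrawlerException, CrawlerCriticalException {
    // Read the crawler-specific Process and Attributes sections from the config
    // and start producing DataReferences, e.g. by walking a directory tree.
    // Creating the DataReference objects themselves would be done with the factory
    // classes of org.eclipse.smila.connectivity.framework.util (not shown here).
    _crawlerId = "SingleStepCrawler";
  }
 
  public DataReference[] getNext() throws CrawlerException, CrawlerCriticalException {
    final DataReference next = _references.poll();
    // Returning null tells the CrawlerController that no more data exists;
    // always returning an array of size one enforces pure iterator behaviour.
    return next == null ? null : new DataReference[] { next };
  }
 
  public void close() throws CrawlerException {
    // release any open resources (streams, connections, threads) here
    _references.clear();
  }
}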


Architecture

Crawlers are managed and instantiated by the CrawlerController. The CrawlerController communicates with the Crawler only via the interface Crawler. The Crawler's getNext() method returns DataReference objects to the CrawlerController. DataReference is also an interface, implemented by class org.eclipse.smila.connectivity.framework.util.internal.DataReferenceImpl. A DataReference, as the name suggests, is only a reference to data provided by the Crawler. This is mainly done for performance reasons: due to the use of DeltaIndexing it may not be necessary to transfer all the data from the Crawler to the CrawlerController and to the ConnectivityManager. Therefore a DataReference contains only the minimum data needed to perform DeltaIndexing: an Id and a hash token. To access the whole object it provides the method getRecord(), which returns a complete Record object containing Id, attributes, annotations and attachments. To create the Record object, the DataReference communicates with the Crawler via the interface CrawlerCallback, as each DataReference has a reference to the Crawler that created it.

The following chart shows the Crawler architecture and how data is shared with the CrawlerController:

[Image: Crawler Architecture]

Package org.eclipse.smila.connectivity.framework.util provides some factory classes for crawlers to create Ids, hashes and DataReference objects. More utility classes that allow easy realisation of Crawlers using an iterator or producer/consumer pattern are planned.
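
The following simplified sketch illustrates the indirection described above: the consumer only needs the Id and hash of each DataReference to decide, via delta indexing, whether the complete Record has to be fetched at all. It is not the actual CrawlerController code; the two helper methods and the package names in the imports are assumptions.

// Simplified sketch of the consumer side (NOT the actual CrawlerController):
// only Id and hash are needed for the delta-indexing decision, the full Record
// is fetched on demand via DataReference.getRecord().
import org.eclipse.smila.connectivity.framework.Crawler;
import org.eclipse.smila.connectivity.framework.CrawlerCriticalException;
import org.eclipse.smila.connectivity.framework.CrawlerException;
import org.eclipse.smila.connectivity.framework.DataReference;
import org.eclipse.smila.datamodel.id.Id;
import org.eclipse.smila.datamodel.record.InvalidTypeException;
import org.eclipse.smila.datamodel.record.Record;
 
public class DeltaIndexingLoopSketch {
 
  public void crawl(final Crawler crawler)
    throws CrawlerException, CrawlerCriticalException, InvalidTypeException {
    DataReference[] references;
    while ((references = crawler.getNext()) != null) {
      for (final DataReference reference : references) {
        final Id id = reference.getId();
        final String hash = reference.getHash();
        if (hasChanged(id, hash)) {
          // only now is the complete record pulled from the crawler (via CrawlerCallback)
          final Record record = reference.getRecord();
          sendToConnectivityManager(record);
        } else {
          // unchanged since the last run: the record data never leaves the crawler
          reference.dispose();
        }
      }
    }
  }
 
  /** placeholder for the delta-indexing lookup that compares the hash with the stored one. */
  private boolean hasChanged(final Id id, final String hash) {
    return true;
  }
 
  /** placeholder for handing the record over to the ConnectivityManager. */
  private void sendToConnectivityManager(final Record record) {
    // ...
  }
}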


Configuration

A Crawler is started with a specific, named configuration that defines what information is to be crawled (e.g. content, kinds of metadata) and where to find that data (e.g. file system path, JDBC connection string). See the documentation of each Crawler for details on its configuration options.


Each crawler can define its own configuration because different crawlers need different information to execute a crawl job. For example, a JDBC Crawler needs information about which database and which tables should be crawled and which columns should be returned.

Therefore the crawler developer defines a schema that contains all relevant information. This schema is based on a root schema provided by the SMILA framework, which declares the generic frame that has to be used to send a DataSourceConnectionConfig (a crawl task) to the SMILA framework. The root schema can be found in: configuration\org.eclipse.smila.connectivity.framework.schema/schemas/RootDataSourceConnectionConfigSchema.xsd.

The root schema looks as follows:

[Image: RootdatasourceConnectionConfig.png]

DataSourceID
    A descriptive string that is used throughout the framework to separate and address information belonging to the same crawl job.
SchemaID
    Contains the full bundle name of the crawler (e.g. for the FileSystemCrawler: org.eclipse.smila.connectivity.framework.crawler.filesystem). The SMILA framework uses this information to locate the schema for validating the DataSourceConnectionConfig that is to be executed.
DataConnectionID
    Crawler
        This tag describes which crawler should be used for the crawl job. The name used in this tag is the service name of the crawler.
    Agent
        Placeholder for an upcoming implementation.
CompoundHandling
    Placeholder for an upcoming implementation.
Attributes
    Placeholder that each crawler fills with its own definitions. Here a crawler defines which attributes it can return. An attribute is a specific piece of information about an entry in the crawled data source (e.g. in a file system an entry is a file, and attributes of a file are its size, content, etc.).
Process
    Placeholder for tags that the crawler developer can define. This tag carries all information needed to start a crawling process, for example starting URLs or folders and which entries should be crawled (e.g. queries, wildcards, includes/excludes). A sketched example configuration is shown below the list.
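
As an example of these conventions, a DataSourceConnectionConfig for the file system crawler could look roughly like the following sketch. The top-level elements follow the root schema described above; the contents of Attributes and Process are defined by each crawler's own schema, so the element names used inside them here are only illustrative assumptions, not the exact file system crawler schema.

<DataSourceConnectionConfig>
  <DataSourceID>file</DataSourceID>
  <SchemaID>org.eclipse.smila.connectivity.framework.crawler.filesystem</SchemaID>
  <DataConnectionID>
    <Crawler>FileSystemCrawler</Crawler>
  </DataConnectionID>
  <CompoundHandling>No</CompoundHandling>
  <Attributes>
    <!-- which attributes of a file the crawler should return (illustrative names) -->
    <Attribute Type="String" Name="Path"/>
    <Attribute Type="String" Name="Content"/>
  </Attributes>
  <Process>
    <!-- crawler-specific section: where to start and which entries to include -->
    <BaseDir>/data/documents</BaseDir>
    <Filter Recursive="true">
      <Include Name="*.txt"/>
    </Filter>
  </Process>
</DataSourceConnectionConfig>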


Further information:

  1. see the Attributes and Process tags in the documentation of each Crawler
  2. How to implement a Crawler

Crawler lifecycle

The CrawlerController manages the life cycle of the crawlers (e.g. start, stop, abort) and may instantiate multiple Crawlers concurrently, even of the same type. This is realised by using OSGi ComponentFactories. A Crawler does not automatically start as an OSGi service, but only registers a Crawler ComponentFactory with the CrawlerController. Via the ComponentFactory the CrawlerController can instantiate Crawlers on demand.

Here is a template for a Crawler OSGi component definition:

<component name="%CRAWLER_TYPE%" immediate="false" factory="CrawlerFactory">
    <implementation class="%CRAWLER_IMPLEMENTATION_CLASS%" />
    <service>
         <provide interface="org.eclipse.smila.connectivity.framework.Crawler"/>
    </service>    
</component>
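
As an illustration, a filled-in component definition for the file system crawler could look like this; the component name and the implementation class are assumptions derived from the bundle name used earlier, not verified against the SMILA sources.

<!-- illustrative example: component definition for the file system crawler -->
<component name="FileSystemCrawler" immediate="false" factory="CrawlerFactory">
    <implementation class="org.eclipse.smila.connectivity.framework.crawler.filesystem.FileSystemCrawler" />
    <service>
         <provide interface="org.eclipse.smila.connectivity.framework.Crawler"/>
    </service>
</component>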

See also

More information about the different Crawlers can be found here:
