SMILA/Documentation/Web Crawler

What does the Web Crawler do?

The Web Crawler collects data from the internet. Starting from an initial URL, it recursively crawls all linked websites. Because web pages vary widely in structure and link heavily to other pages, the crawler's configuration lets you limit the downloaded data to match your needs.

Crawling configuration

Defining Schema: org.eclipse.smila.connectivity.framework.crawler.web/schemas/WebIndexOrder.xsd

Crawling configuration explanation

The root element of the crawling configuration is IndexOrderConfiguration; it contains the following sub-elements (a sample configuration follows the list):

  • DataSourceID – the identification of a data source.
  • SchemaID – specifies the schema for a crawler job.
  • DataConnectionID – specifies which agent or crawler should be used.
    • Crawler – implementation class of a Crawler.
    • Agent – implementation class of an Agent.
  • CompoundHandling – specifies whether packed data (such as a ZIP archive containing files) should be unpacked and the files within crawled (YES or NO).
  • Attributes – lists all attributes that describe a website.
    • FieldAttribute (URL, Title, Content):
      • Type (required) – the data type (String, Integer, or Date).
      • Name (required) – the attribute's name.
      • HashAttribute – specifies whether a hash should be created (true or false).
      • KeyAttribute – specifies whether this attribute forms a key for the object, for example the record ID (true or false).
      • Attachment – specifies whether the attribute returns its data as an attachment of the record.
    • MetaAttribute (MetaData, ResponseHeader, MetaDataWithResponseHeaderFallBack, MimeType):
      • Type (required) – the data type (String).
      • Name (required) – the attribute's name.
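
To make these elements concrete, here is a minimal sketch of a crawling configuration. It is an illustration, not a normative example: the DataSourceID value ("web"), the attribute names, and the exact nesting of FieldAttribute and MetaAttribute are assumptions based on the element list above, so check them against WebIndexOrder.xsd.

  <IndexOrderConfiguration
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      xsi:noNamespaceSchemaLocation="../org.eclipse.smila.connectivity.framework.crawler.web/schemas/WebIndexOrder.xsd">
    <!-- Identification of this data source (illustrative value) -->
    <DataSourceID>web</DataSourceID>
    <!-- Schema for the crawler job -->
    <SchemaID>org.eclipse.smila.connectivity.framework.crawler.web</SchemaID>
    <!-- Which crawler implementation to use -->
    <DataConnectionID>
      <Crawler>WebCrawler</Crawler>
    </DataConnectionID>
    <!-- Do not unpack compound files such as ZIP archives -->
    <CompoundHandling>No</CompoundHandling>
    <Attributes>
      <!-- The URL serves as the key for the record -->
      <Attribute Type="String" Name="Url" KeyAttribute="true">
        <FieldAttribute>Url</FieldAttribute>
      </Attribute>
      <Attribute Type="String" Name="Title">
        <FieldAttribute>Title</FieldAttribute>
      </Attribute>
      <!-- The page content is hashed and stored as an attachment of the record -->
      <Attribute Type="String" Name="Content" HashAttribute="true" Attachment="true">
        <FieldAttribute>Content</FieldAttribute>
      </Attribute>
      <!-- A meta attribute filled from the detected mime type -->
      <Attribute Type="String" Name="MimeType">
        <MetaAttribute Type="MimeType"/>
      </Attribute>
    </Attributes>
  </IndexOrderConfiguration>

With a configuration along these lines, each crawled page becomes one record: its key is derived from the URL, its content is carried as an attachment rather than inline, and the content hash can be used to detect unchanged pages between crawl runs.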

See also
