SMILA/Documentation/Web Crawler


Note: This is deprecated for SMILA 1.0; the connectivity framework has been replaced by the new Importing framework.


Overview

The Web crawler fetches data from HTTP servers. Starting with an initial URL, it crawls all linked websites recursively.

Crawling configuration

The example configuration file is located at configuration/org.eclipse.smila.connectivity.framework/web.xml

Defining Schema: org.eclipse.smila.connectivity.framework.crawler.web/schemas/WebDataSourceConnectionConfigSchema.xsd

Crawling configuration explanation

See SMILA/Documentation/Crawler#Configuration for the generic parts of the configuration file.

The root element of the configuration is DataSourceConnectionConfig and contains the following sub elements:

  • DataSourceID – the identification of a data source.
  • SchemaID – specify the schema for a crawler job.
  • DataConnectionID – describes which agent or crawler should be used.
    • Crawler – implementation class of a crawler.
    • Agent – implementation class of an agent.
  • CompoundHandling – specify whether packed data (like a ZIP containing files) should be unpacked and the files within crawled (YES or NO).
  • Attributes – list all attributes which describe a website.
    • Attribute:
      • attributes:
        • Type (required) – the data type (String, Integer or Date).
        • Name (required) – the attribute's name.
        • HashAttribute – specify if the attribute is used for the hash used for delta indexing (true or false). Must be true for at least one attribute, which must always have a value.
        • KeyAttribute – specify if the attribute is used for creating the record ID (true or false). Must be true for at least one attribute. All key attributes must identify the crawled resource uniquely, so usually you will set it true for the attribute containing the Url FieldAttribute.
        • Attachment – specify whether the attribute returns the data as an attachment of the record.
      • sub elements:
        • FieldAttribute: Content of element is one of
          • Url: URL of the web page. NOTE: Must currently be mapped to an attribute named "Url". Mapping to additional attributes is allowed.
          • Title: The title of the web page from the <title> tag.
          • Content: The content of the web page. The original binary content if mapped to an attachment; otherwise the crawler tries to convert it to a string using the encoding reported in the response headers.
          • MimeType: MIME type of the web page as reported in the response headers.
        • MetaAttribute
          • sub elements MetaName: Key of value to get from metadata.
          • attribute Type: one of MetaData, ResponseHeader, MetaDataWithResponseHeaderFallBack: read from the HTML meta tags, from the response headers, or from both.
          • attribute ReturnType: the structure in which the metadata will be returned. One of:
  • MetaDataString: default structure, the metadata is returned as a single string, for example:
<Val key="ResponseHeader">Content-type: text/html</Val>
  • MetaDataValue: only the values of the metadata are returned, for example:
<Val key="ResponseHeader">text/html</Val>
  • MetaDataMObject: the metadata is returned as an MObject (map) containing attributes with metadata names and values, for example:
<Map key="ResponseHeader">
  <Val key="Content-Type">text/html</Val>
  ...
</Map>
  • Process – this element is responsible for selecting data
    • Website - contains all important information for accessing and crawling a website.
      • ProjectName - defines project name
      • Sitemaps - for supporting Google sitemaps. sitemap.xml, sitemap.xml.gz and sitemap.gz formats are supported; see the Google Sitemap Protocol (https://www.google.com/webmasters/tools/docs/en/protocol.html). Links extracted from <loc> tags are added to the current level links. The crawler looks for the sitemap file in the root directory of the web server and then caches it for the particular host to avoid parsing the sitemap again for URLs already processed.
      • Header - request headers to send, in the format "<header_name>:<header_content>", separated by semicolons.
      • Referer - to include a "Referer: URL" header in the HTTP request. See http://en.wikipedia.org/wiki/Referer.
      • EnableCookies - enable or disable cookies for the crawling process (true or false). See http://en.wikipedia.org/wiki/HTTP_cookie.
      • UserAgent - element used to identify the crawler to the server as a specific user agent originating the request. The generated UserAgent string looks like the following: Name/Version (Description, Url, Email)
        • Name (required)
        • Version
        • Description
        • URL
        • Email
      • Robotstxt - element used for supporting robots.txt information. The Robots Exclusion Standard tells the crawler how to crawl a website – or rather which resources should not be crawled. See http://www.robotstxt.org/.
        • Policy: the following policies are offered on how to deal with robots.txt rules:
          1. Classic. Simply obey the robots.txt rules. Recommended unless you have special permission to collect a site more aggressively.
          2. Ignore. Completely ignore robots.txt rules.
          3. Custom. Obey your own, custom, robots.txt instead of those discovered on the relevant site. The attribute Value must contain the path to a locally available robots.txt file in this case.
          4. Set. Limit the robot names whose rules are followed to the given set. The Value attribute must contain the robot names separated by semicolons in this case (see the sketches after this list).
        • Value: specifies the filename with the robots.txt rules for the Custom policy, or the set of agent names for the Set policy.
        • AgentNames: specifies the list of agents we advertise. This list should start with the same name as the UserAgent Name (for example, the crawler user-agent name that is used for the crawl job).
      • CrawlingModel: there are two models available:
        • Type: the model type (MaxBreadth or MaxDepth)
        1. MaxBreadth: crawling a web site through a limited number of links.
        2. MaxDepth: crawling a web site down to the specified maximum crawling depth.
        • Value: the model parameter (Integer).
      • CrawlScope: decides for each discovered URI if it is within the scope of the current crawl.
      • Type: the following scopes are provided:
        1. Broad: accept all. This scope does not impose any limits on the hosts, domains, or URI paths crawled.
        2. Domain: accept if on the same 'domain' as the seeds (start URLs). This scope limits discovered URIs to the set of domains defined by the provided seeds. That is, any discovered URI belonging to a domain from which one of the seeds came is within scope. Using the seed 'brox.de', a domain scope will fetch 'bugs.brox.de', 'confluence.brox.de', etc. It will fetch all discovered URIs from 'brox.de' and from any subdomain of 'brox.de'.
        3. Host: accept if on exact host as seeds. This scope limits discovered URIs to the set of hosts defined by the provided seeds. If the seed is 'www.brox.de', then we'll only fetch items discovered on this host. The crawler will not go to 'bugs.brox.de'.
        4. Path: accept if on the same host and with a shared path prefix as the seeds. This scope goes yet further and limits the discovered URIs to a section of paths on the hosts defined by the seeds. Any host that has a seed pointing at its root (e.g. www.sample.com/index.html) will be included in full, whereas a host whose only seed is www.sample2.com/path/index.html will be limited to URIs under /path/.
        • Filters: every scope can have additional filters to select URIs that will be considered within or out of scope (see the section Filters for details).
      • CrawlLimits: In addition to limits imposed on the scope of the crawl it is possible to enforce arbitrary limits on the duration and extent of the crawling process with the following setting:
        • SizeLimits:
          • MaxBytesDownload: stop after a fixed number of bytes have been downloaded (0 means unlimited).
          • MaxDocumentDownload: stop after downloading a fixed number of documents (0 means unlimited).
          • MaxTimeSec: stop after a certain number of seconds have elapsed (0 means unlimited). These are not supposed to be hard limits. Once one of these limits is reached, it will trigger a graceful termination of the crawl job, which means that URIs already being crawled will be completed. As a result the set limit will be exceeded by some amount.
          • MaxLengthBytes: maximum number of bytes to download per document. Will truncate file once this limit is reached.
        • TimeoutLimits: Whenever the crawler connects to or reads from a remote host, it checks the timeouts and aborts the operation if any is exceeded. This prevents anomalous occurrences such as hanging reads or infinite connects.
          • Timeout: This limit is the total time needed to connect and download a website, and as such represents the sum of the ConnectTimeout and the ReadTimeout.
          • ConnectTimeout: Connect timeout in seconds. TCP connections that take longer to establish will be aborted.
          • ReadTimeout: Read (and write) timeout in seconds. Reads that take longer will fail. The default value for read timeout is 900 seconds.
        • WaitLimits:
          • Wait: Wait the specified number of seconds between retrievals. Use of this option is recommended, as it lightens the server load by making the requests less frequent. Specifying a large value for this option is useful if the network or the destination host is down, so that the crawler can wait long enough to reasonably expect the network error to be fixed before the retry.
          • RandomWait: Some web sites may perform log analysis to identify retrieval programs by looking for statistically significant similarities in the time between requests. This option causes the time between requests to vary between 0 and 2 * wait seconds, where wait was specified using the wait setting, in order to mask the crawler's presence from such analysis.
          • MaxRetries: How often to retry URLs that failed.
          • WaitRetry: How long to wait between such retries.
      • Proxy: specifies the HTTP proxy server to be used.
        • ProxyServer:
          • Host
          • Port
          • Login
          • Password
      • Authentication: The Authentication element is used to gain access to areas of websites requiring authentication. Three types of authentication are available: RFC2617 (BASIC and DIGEST types of authentication), HTTP POST or GET of an HTML Form and SSL Certificate based client authentication.
        • RFC2617:
          • Host and
          • Port: equate to the canonical root URI of RFC2617.
          • Realm: realm as per RFC2617. The realm string must match exactly the realm name presented in the authentication challenge served up by the web server.
          • Login: username for login.
          • Password: password to this restricted area.
        • HtmlForm:
          • CredentialDomain: same as the canonical root URI of RFC2617.
          • HttpMethod: POST or GET
          • LoginUri: relative or absolute URI of the page that the HTML form submits to (not the page that contains the HTML form)
          • FormElements: list of HTML form key/value pairs (FormElement entries)
        • SSLCertificate (see the sketches after this list):
          • ProtocolName: name of the protocol to be used, e.g. "https".
          • Port: port number
          • TruststoreUrl: location of the file containing one or several trusted certificates.
          • TruststorePassword
          • KeystoreUrl: location of the file containing a private key/public certificate pair.
          • KeystorePassword
      • Seeds: contains a list of Seed elements
        • FollowLinks: enables analyzing URLs of pages that would otherwise be ignored:
        1. NoFollow: do not analyze anything that matches any "Unselect" filter.
        2. Follow: analyze everything that matches some "Unselect" filter, do not index anything.
        3. FollowLinksWithCorrespondingSelectFilter: index pages that match both "Select" and "Unselect" filters, and analyze everything else that matches some "Unselect" filter (see the sketches after this list).
        • Seed: defines the site's start path from which the crawling process begins.
      • Filters: contains a list of Filter elements and optional refinements elements.
        • Filter: used to define filters for pages that should be crawled and indexed.
          • Type: the following filter types are available:
          1. BeginningPath: filters paths which begin with the specified characters.
          2. RegExp: filters urls based on a regular expression.
          3. ContentType: filters content type on a regular expression. Use this filter to abort the download of content-types other than those wanted.
          • WorkType: Select or Unselect, the way the filter should work.
          • Value: the filter value that will be used to check if the given value matches the filter or not.
        • Refinements: must be nested inside the Filter element. They allow filter settings to be modified under certain circumstances. The following refinements may be applied to the filters:
        1. Port: match only those URIs for the given port number.
        2. TimeOfDay: if this refinement is applied, the filter will only be in effect between the hours specified each day. From and To attributes must be in HH:mm:ss format (e.g. 23:00:00)
          • From: time when filter becomes enabled.
          • To: till this time the filter will be enabled.
      • MetaTagFilters: contains a list of MetaTagFilter elements.
        • MetaTagFilter: defines filter for omitting content by meta tags.
          • Type: type of meta-tag to match: Name or Http-Equiv.
          • Name: name of the tag e.g. "author" for the Type "Name".
          • Content: the tag contents.
          • WorkType: Select or Unselect

Crawling configuration example

<DataSourceConnectionConfig
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
  xsi:noNamespaceSchemaLocation="../org.eclipse.smila.connectivity.framework.crawler.web/schemas/WebDataSourceConnectionConfigSchema.xsd">
  <DataSourceID>web</DataSourceID>
  <SchemaID>org.eclipse.smila.connectivity.framework.crawler.web</SchemaID>
  <DataConnectionID>
    <Crawler>WebCrawlerDS</Crawler>
  </DataConnectionID>
  <CompoundHandling>No</CompoundHandling>
  <Attributes>
    <Attribute Type="String" Name="Url" KeyAttribute="true">
      <FieldAttribute>Url</FieldAttribute>
    </Attribute>
    <Attribute Type="String" Name="Title">
      <FieldAttribute>Title</FieldAttribute>
    </Attribute>
    <Attribute Type="String" Name="Content" HashAttribute="true" Attachment="true" MimeTypeAttribute="Content">
      <FieldAttribute>Content</FieldAttribute>
    </Attribute>
    <Attribute Type="String" Name="MimeType">
      <FieldAttribute>MimeType</FieldAttribute>
    </Attribute>
    <Attribute Type="String" Name="MetaData" Attachment="false">
      <MetaAttribute Type="MetaData"/>
    </Attribute>
    <Attribute Type="String" Name="ResponseHeader" Attachment="false">
      <MetaAttribute Type="ResponseHeader">
        <MetaName>Date</MetaName>
        <MetaName>Server</MetaName>
      </MetaAttribute>
    </Attribute>
    <Attribute Type="String" Name="MetaDataWithResponseHeaderFallBack" Attachment="false">
      <MetaAttribute Type="MetaDataWithResponseHeaderFallBack"/>
    </Attribute>
  </Attributes>
  <Process>
    <WebSite ProjectName="Example Crawler Configuration" Header="Accept-Encoding: gzip,deflate; Via: myProxy" Referer="http://myReferer">
      <UserAgent Name="Crawler" Version="1.0" Description="teddy crawler" Url="http://www.teddy.com" Email="crawler@teddy.com"/>
      <CrawlingModel Type="MaxDepth" Value="1000"/>
      <CrawlScope Type="Domain">
        <Filters>
          <Filter Type="BeginningPath" WorkType="Select" Value="/"/>
        </Filters>
      </CrawlScope>
      <CrawlLimits>
        <!-- Warning: The amount of files returned is limited to 1000 -->
        <SizeLimits MaxBytesDownload="0" MaxDocumentDownload="1000" MaxTimeSec="3600" MaxLengthBytes="100000"/>
        <TimeoutLimits Timeout="10000"/>
        <WaitLimits Wait="0" RandomWait="false" MaxRetries="8" WaitRetry="0"/>
      </CrawlLimits>
      <Seeds FollowLinks="Follow">
        <Seed>http://en.wikipedia.org/</Seed>
      </Seeds>
      <Filters>
        <Filter Type="RegExp" Value=".*action=edit.*" WorkType="Unselect"/>
      </Filters>
    </WebSite>
  </Process>
</DataSourceConnectionConfig>

Minimal configuration example

This example demonstrates the minimal configuration required for the crawler.

<WebSite ProjectName="Minimal Configuration">
  <Seeds>
    <Seed>http://localhost/test/</Seed>
  </Seeds>
</WebSite>

Html form login example

This example demonstrates how to log in to an Invision Power Board powered forum. The number of downloaded pages is limited to 15, robots.txt information is ignored, and the crawler will advertise itself as Mozilla/5.0.

<WebSite ProjectName="Login To Invision Powerboard Forum Example">
  <UserAgent Name="Mozilla" Version="5.0" Description="" Url="" Email=""/>
  <Robotstxt Policy="Ignore" />
  <CrawlLimits>
    <SizeLimits MaxDocumentDownload="15"/>
  </CrawlLimits>
  <Authentication>
    <HtmlForm CredentialDomain="http://forum.example.com/index.php?act=Login&amp;CODE=00" LoginUri="http://forum.example.com/index.php?act=Login&amp;CODE=01" HttpMethod="POST">
      <FormElements>
        <FormElement Key="referer" Value=""/>
        <FormElement Key="CookieDate" Value="1"/>
        <FormElement Key="Privacy" Value="1"/>
        <FormElement Key="UserName" Value="User"/>
        <FormElement Key="PassWord" Value="Password"/>
        <FormElement Key="submit" Value="Enter"/>
      </FormElements>
    </HtmlForm>
  </Authentication>
  <Seeds FollowLinks="Follow">
    <Seed><![CDATA[http://forum.example.com/index.php?act=Login&CODE=00]]></Seed>
  </Seeds>
</WebSite>

Multiple website configuration

<WebSite ProjectName="First WebSite">
  <UserAgent Name="Brox Crawler" Version="1.0" Description="Brox Crawler" Url="http://www.example.com" Email="crawler@example.com"/>
  <CrawlingModel Type="MaxIterations" Value="20"/>
  <CrawlScope Type="Broad"/>
  <CrawlLimits>
    <SizeLimits MaxBytesDownload="0" MaxDocumentDownload="100" MaxTimeSec="3600" MaxLengthBytes="1000000" />
    <TimeoutLimits Timeout="10000" />
    <WaitLimits Wait="0" RandomWait="false" MaxRetries="8" WaitRetry="0"/>
  </CrawlLimits>
  <Seeds FollowLinks="Follow">
    <Seed>http://localhost/</Seed>
    <Seed>http://localhost/otherseed</Seed>
  </Seeds>
  <Authentication>
    <Rfc2617 Host="localhost" Port="80" Realm="Restricted area" Login="user" Password="pass"/>
    <HtmlForm CredentialDomain="http://localhost:8081/admin/" LoginUri="/j_security_check" HttpMethod="GET">
      <FormElements>
        <FormElement Key="j_username" Value="admin"/>
        <FormElement Key="j_password" Value=""/>
        <FormElement Key="submit" Value="Login"/>
      </FormElements>
    </HtmlForm>
  </Authentication>
</WebSite>
<WebSite ProjectName="Second WebSite">
  <UserAgent Name="Mozilla" Version="5.0" Description="X11; U; Linux x86_64; en-US; rv:1.8.1.4" />
  <Robotstxt Policy="Classic" AgentNames="mozilla, googlebot"/>
  <CrawlingModel Type="MaxDepth" Value="100"/>
  <CrawlScope Type="Host"/>
  <CrawlLimits>
    <WaitLimits Wait="5" RandomWait="true"/>
  </CrawlLimits>
  <Seeds FollowLinks="NoFollow">
    <Seed>http://example.com</Seed>
  </Seeds>
  <Filters>
    <Filter Type="BeginningPath" WorkType="Unselect" Value="/something/">
      <Refinements>
        <TimeOfDay From="09:00:00" To="23:00:00"/>
        <Port Number="80"/>
      </Refinements>
    </Filter>
    <Filter Type="RegExp" WorkType="Unselect" Value="news"/>
    <Filter Type="ContentType" WorkType="Unselect" Value="image/jpeg"/>
  </Filters>
</WebSite>

Complex website configuration example

<WebSite ProjectName="Example Crawler Configuration" Header="Accept-Encoding: gzip,deflate; Via: myProxy" Referer="http://myReferer">
  <UserAgent Name="Crawler" Version="1.0" Description="Test crawler" Url="http://www.example.com" Email="crawler@example.com"/>
    <Robotstxt Policy="Custom" Value="/home/user/customRobotRules.txt" AgentNames="agent1;agent2"/>
    <CrawlingModel Type="MaxIterations" Value="20"/>
    <CrawlScope Type="Broad">
      <Filters>
        <Filter Type="BeginningPath" WorkType="Select" Value="/test.html"/>
      </Filters>
    </CrawlScope>
    <CrawlLimits>
      <SizeLimits MaxBytesDownload="0" MaxDocumentDownload="1" MaxTimeSec="3600" MaxLengthBytes="1000000" />
      <TimeoutLimits Timeout="10000" />
      <WaitLimits Wait="0" RandomWait="false" MaxRetries="8" WaitRetry="0"/>
    </CrawlLimits>
    <Proxy>
      <ProxyServer Host="example.com" Port="3128" Login="user" Password="pass"/>
    </Proxy>
    <Authentication>
      <Rfc2617 Host="somehost.com" Port="80" Realm="realm string" Login="user" Password="pass"/>
    </Authentication>
    <Seeds FollowLinks="NoFollow">
      <Seed>http://example.com</Seed>
    </Seeds>
    <Filters>
      <Filter Type="BeginningPath" WorkType="Unselect" Value="/something/">
        <Refinements>
          <TimeOfDay From="09:00:00" To="23:00:00"/><Port Number="80"/>
        </Refinements>
      </Filter>
      <Filter Type="RegExp" WorkType="Unselect" Value="news"/>
      <Filter Type="ContentType" WorkType="Unselect" Value="image/jpeg"/>
    </Filters>
    <MetaTagFilters>
      <MetaTagFilter Type="Name" Name="author" Content="Blocked Author" WorkType="Unselect"/>
    </MetaTagFilters>
</WebSite>

Output example for default configuration

If you crawl with the default configuration file, you’ll receive the following record:

<Record xmlns="http://www.eclipse.org/smila/record" version="1.0">
  <Val key="_recordid">web:&lt;Url=http://en.wikipedia.org/wiki/Main_Page&gt;</Val>
  <Val key="Url">http://en.wikipedia.org/wiki/Main_Page</Val>
  <Val key="Content">
            Whole content of the Wikipedia main page.
            Too much to post here.
  </Val>
  <Val key="Title">Wikipedia, the free encyclopedia</Val>
  <Seq key="MetaData">
    <Val>base:null</Val>
    <Val>noCache:false</Val>
    <Val>noFollow:false</Val>
    <Val>noIndex:false</Val>
    <Val>refresh:false</Val>
    <Val>refreshHref:null</Val>
    <Val>
        keywords:Main Page,1266,1815,1919,1935,1948 NCAA Men's
        Division I Ice Hockey Tournament,1991,1993,2009,2009
        Bangladesh Rifles revolt,Althea Byfield
    </Val>
    <Val>generator:MediaWiki 1.15alpha</Val>
    <Val>content-type:text/html; charset=utf-8</Val>
    <Val>content-style-type:text/css</Val>
  </Seq>
  <Val key="MimeType">text/html</Val>
  <Seq key="ResponseHeader">
    <Val>Server:Apache</Val>
    <Val>Date:Thu, 26 Feb 2009 14:33:37 GMT</Val>
  </Seq>
  <Seq key="MetaDataWithResponseHeaderFallBack">
    <Val>Age:2</Val>
    <Val>Content-Language:en</Val>
    <Val>Content-Length:57974</Val>
    <Val>Last-Modified:Thu, 26 Feb 2009 14:31:46 GMT</Val>
    <Val>
        X-Cache-Lookup:MISS from knsq25.knams.wikimedia.org:80
    </Val>
    <Val>Connection:Keep-Alive</Val>
    <Val>X-Cache:MISS from knsq25.knams.wikimedia.org</Val>
    <Val>Server:Apache</Val>
    <Val>X-Powered-By:PHP/5.2.4-2ubuntu5wm1</Val>
    <Val>
        Cache-Control:private, s-maxage=0, max-age=0,
        must-revalidate
    </Val>
    <Val>Date:Thu, 26 Feb 2009 14:33:37 GMT</Val>
    <Val>Vary:Accept-Encoding,Cookie</Val>
    <Val>
        X-Vary-Options:Accept-Encoding;list-contains=gzip,Cookie;string-contains=enwikiToken;string-contains=enwikiLoggedOut;string-contains=enwiki_session;string-contains=centralauth_Token;string-contains=centralauth_Session;string-contains=centralauth_LoggedOut
    </Val>
    <Val>
        Via:1.1 sq39.wikimedia.org:3128 (squid/2.7.STABLE6), 1.0
        knsq29.knams.wikimedia.org:3128 (squid/2.7.STABLE6), 1.0
        knsq25.knams.wikimedia.org:80 (squid/2.7.STABLE6), 1.0
        HAN-HB-FW-001
    </Val>
    <Val>Content-Type:text/html; charset=utf-8</Val>
    <Val>Proxy-Connection:Keep-Alive</Val>
    <Val>base:null</Val>
    <Val>noCache:false</Val>
    <Val>noFollow:false</Val>
    <Val>noIndex:false</Val>
    <Val>refresh:false</Val>
    <Val>refreshHref:null</Val>
    <Val>
        keywords:Main Page,1266,1815,1919,1935,1948 NCAA Men's
        Division I Ice Hockey Tournament,1991,1993,2009,2009
        Bangladesh Rifles revolt,Althea Byfield
    </Val>
    <Val>generator:MediaWiki 1.15alpha</Val>
    <Val>content-type:text/html; charset=utf-8</Val>
    <Val>content-style-type:text/css</Val>
  </Seq>
  <Val key="_HASH_TOKEN">eb1eff85a3e3d4ad4ffd0dd9d4883e3d1f7f988019ca9bfa4a4df2e7659aa6</Val>
  <Attachment>Content</Attachment>
</Record>

Additional performance counters

The web crawler adds some specific counters to the common counters:

  • bytes: number of bytes read from the web server
  • pages: number of web pages read
  • averageHttpFetchTime: average time for fetching a page from the server
  • producerExceptions: number of web server related errors

See also

  • SMILA/Documentation/Filesystem Crawler
  • SMILA/Documentation/JDBC Crawler

External links

  • The Web Robots Pages - robots.txt reference: http://www.robotstxt.org/robotstxt.html
  • Google Sitemap Protocol: https://www.google.com/webmasters/tools/docs/en/protocol.html
  • HTTP Referer header: http://en.wikipedia.org/wiki/Referer
  • HTTP Cookie: http://en.wikipedia.org/wiki/HTTP_cookie
