SMILA/Documentation/Importing/Crawler/Web

Currently, the web crawler workers are implemented very simplistically so that we can test the importing framework. A more sophisticated implementation will follow soon (hopefully).

Web Crawler Worker

  • Worker name: webCrawler
  • Parameters:
    • dataSource: (req.) name of the data source; currently only used to mark the produced records.
    • startUrl: (req.) URL to start crawling at. Must be a valid URL, no additional escaping is done.
    • waitBetweenRequests: (opt.) time to wait between two HTTP requests, as a long value in milliseconds
    • filters: (opt.) A map of filter settings, i.e. instructions on which links to include in or exclude from the crawl (an example is shown below this list).
      • maxCrawlDepth: the maximum crawl depth when following links.
      • followRedirects: whether to follow redirects or not (default: false).
      • maxRedirects: maximum number of allowed redirects when following redirects is enabled
      • urlPatterns: regex patterns for filtering crawled elements on the basis of their URL
        • include: if include patterns are specified, at least one of them must match the URL. If no include patterns are specified, this is handled as if all URLs are included.
        • exclude: if at least one exclude pattern matches the URL, the crawled element is filtered out
    • mapping (req.) specifies how to map link properties to record attributes
      • httpUrl (req.) mapping attribute for the URL
      • httpMimetype (opt.) mapping attribute for the mime type
      • httpCharset (opt.) mapping attribute for character set
      • httpContenttype (opt.) mapping attribute for the content type
      • httpLastModified (opt.) mapping attribute for the link's last modified date
      • httpSize (opt.) mapping attribute for the link content's size (in bytes)
      • httpContent (opt.) mapping attribute for the link content
  • Task generator: runOnceTrigger
  • Input slots:
    • linksToCrawl: Records describing links to crawl.
  • Output slots:
    • linksToCrawl: Records describing outgoing links from the crawled resources. Should be connected to the same bucket as the input slot.
    • crawledRecords: Records describing crawled resources. For resources of mimetype text/html the records have the content attached. For other resources, use a webFetcher worker later in the workflow to get the content.
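
For illustration, the filters and mapping parameters could look like the following fragment of a job definition's parameters section. All concrete values and the attribute names on the right-hand side of the mapping are made up for this example; they are not defaults of the worker.

   "filters":{
     "maxCrawlDepth":3,
     "followRedirects":true,
     "maxRedirects":5,
     "urlPatterns":{
       "include":["http://example\\.com/docs/.*"],
       "exclude":["http://example\\.com/docs/archive/.*"]
     }
   },
   "mapping":{
     "httpUrl":"Url",
     "httpMimetype":"MimeType",
     "httpContent":"Content"
   }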

Internal structure

To make it easier to extend and improve the web crawler it is divided internally into components. Each of them is a single OSGi service that handles one part of the crawl functionality and can be exchanged individually to improve a single part of the functionality. The architecture looks like this:

[Image: SMILA-Importing-Web-Crawler-Internal.png (internal structure of the web crawler worker)]

The WebCrawler worker is started with one input bulk that contains records with URLs to crawl. (The exception is the start of the crawl process: the worker gets a task without an input bulk and therefore generates an input record from the task parameters, or, in a later version, from a configuration.) Then the components are executed like this:

  • First, a VisitedLinksService is asked whether this link has already been crawled by someone else in this crawl job run. If so, the record is dropped and no output is produced.
  • Otherwise, the Fetcher is called to get the metadata and, if the content type of the resource is suitable for link extraction, the content as well. For other content types the content is fetched only later in the crawl workflow, by the WebFetcher worker, to save I/O load.
  • If the resource could be fetched without problems, the RecordProducer is called to decide whether the record should be written to the crawledRecords output bulk. The producer can also modify the records or split them into multiple records, if necessary for the use case.
  • If the content of the resource was fetched, the LinkExtractor is called to extract outgoing links (e.g. look for <A> tags). It can produce multiple link records containing one absolute outgoing URL each.
  • If outgoing links were found the LinkFilter is called to remove links that should not be followed (e.g. because they are on a different site) or remove duplicates.

Finally, when all records from an input bulk have been processed, all links visited in this task must be marked as "visited" in the VisitedLinksService.

Outgoing links are separated into multiple bulks to improve scaling: the outgoing links from the initial task that crawls the startUrl are each written to a bulk of their own, while outgoing links from later tasks are separated into bulks of 10 links each. The crawled records are divided into bulks of at most 100 records, but this usually has no effect because each incoming link produces at most one record.

Implementation details

  • ObjectStoreVisitedLinksService (implements VisitedLinksService): Uses the ObjectStoreService to store which links have been visited, similar to the ObjectStoreDeltaService. It uses a configuration file with the same properties in the same configuration directory, but named visitedlinksstore.properties.
  • SimpleFetcher: Uses a GET request to read the URL. Currently, authentication is not supported. Writes the content to attachment httpContent if the resource is of mimetype text/html, and sets the following attributes (an illustrative example record is shown below this list):
    • httpSize: value from HTTP header Content-Length (-1, if not set), as a Long value.
    • httpContenttype: value from HTTP header Content-Type, if set.
    • httpMimetype: mimetype part of HTTP header Content-Type, if set.
    • httpCharset: charset part of HTTP header Content-Type, if set.
    • httpLastModified: value from HTTP header Last-Modified, if set, as a DateTime value.
    • _isCompound: set to true for resources that are identified as extractable compound objects by the running CompoundExtractor service.
  • SimpleRecordProducer: Sets the record source and calculates the _deltaHash value for the DeltaChecker worker (the first rule that applies wins):
    • if content is attached, calculate a digest of it and use that as the hash.
    • if the httpLastModified attribute is set, use it as the hash.
    • if the httpSize attribute is set, concatenate it with the value of the httpMimetype attribute and use the result as the hash.
    • if none of these works, create a UUID to force an update.
  • SimpleLinkExtractor (implements LinkExtractor): Simple link extraction from HTML <A href="..."> tags using the tagsoup HTML parser.
  • DefaultLinkFilter: Links are normalized (e.g. fragment parts from URLs ("#...") are removed) and filtered against the specified filter configuration.
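
For illustration only, the metadata of a record produced for a successfully fetched text/html resource might look roughly like this. The attribute names shown are the mapping parameter names used above (the actual attribute names depend on the configured mapping), and all values are invented for this sketch; the page content itself would be stored as the attachment httpContent, not as an attribute.

 {
   "httpUrl": "http://example.com/index.html",
   "httpContenttype": "text/html; charset=UTF-8",
   "httpMimetype": "text/html",
   "httpCharset": "UTF-8",
   "httpSize": 5120,
   "httpLastModified": "2012-05-24T10:15:00Z",
   "_deltaHash": "<digest of the attached content>"
 }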

Web Fetcher Worker

  • Worker name: webFetcher
  • Parameters:
    • waitBetweenRequests: (opt., see Web Crawler)
    • filters:
      • followRedirects: (opt., see Web Crawler)
      • maxRedirects: (opt., see Web Crawler)
    • mapping (req., see Web Crawler)
      • httpUrl (req., see Web Crawler)
      • httpMimetype (req., see Web Crawler)
      • httpContent (req., see Web Crawler)
      • httpCharset (opt., see Web Crawler)
      • httpContenttype (opt., see Web Crawler)
      • httpLastModified (opt., see Web Crawler)
      • httpSize (opt., see Web Crawler)
  • Input slots:
    • linksToFetch: Records describing crawled resources, with or without the content of the resource.
  • Output slots:
    • fetchedLinks: The incoming records with the content of the resource attached.

The fetcher tries to get the content of a web resource identified by the attribute httpUrl, if the attachment httpContent is not yet set. Like the SimpleFetcher above, it does not do redirects, authentication, or other fancy stuff to read the resource, but just uses a simple GET request.
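
For illustration, the parameters relevant to the webFetcher worker could be set in a job definition like the following sketch; the wait time, redirect settings, and mapped attribute names are only examples, not defaults.

 "waitBetweenRequests":500,
 "filters":{
   "followRedirects":true,
   "maxRedirects":3
 },
 "mapping":{
   "httpUrl":"Url",
   "httpMimetype":"MimeType",
   "httpContent":"Content"
 }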

Web Extractor Worker

  • Worker name: webExtractor
  • Parameters:
    • waitBetweenRequests: (opt., see Web Crawler)
    • filters: (opt., see Web Crawler)
      • maxCrawlDepth: (opt., see Web Crawler)
      • followRedirects: (opt., see Web Crawler)
      • maxRedirects: (opt., see Web Crawler)
      • urlPatterns: (opt., see Web Crawler)
        • include: (opt., see Web Crawler)
        • exclude: (opt., see Web Crawler)
    • mapping (req., see Web Crawler)
      • httpUrl (req., see Web Crawler) The URLs of extracted elements have the compound's URL as prefix, e.g. http://example.com/compound.zip/compound-element.txt
      • httpMimetype (req., see Web Crawler)
      • httpCharset (opt., see Web Crawler)
      • httpContenttype (opt., see Web Crawler)
      • httpLastModified (opt., see Web Crawler)
      • httpSize (opt., see Web Crawler)
      • httpContent (opt., see Web Crawler)
  • Input slots:
    • compounds: Records describing compound resources (e.g. ZIP archives) whose elements are to be extracted.
  • Output slots:
    • files: Records describing the extracted elements, converted to look like records produced by the file crawler.

For each input record, an input stream to the described web resource is created and fed into the CompoundExtractor service. The produced records are converted to look like records produced by the file crawler. Additional internal attributes that are set:

  • _deltaHash: computed as in the WebCrawler worker
  • _compoundRecordId: record ID of top-level compound this element was extracted from
  • _isCompound: set to true for elements that are compounds themselves.
  • _compoundPath: sequence of httpUrl attribute values of the compound objects needed to navigate to the compound element.

The crawler attributes httpContenttype, httpMimetype and httpCharset are currently not set by the WebExtractor worker.

If the element is not a compound itself, its content is added as attachment httpContent.
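
Putting this together, the metadata of a record produced for an element extracted from a compound might look roughly like the following sketch. The URL is taken from the example above; the other values are placeholders, and the element content itself would be attached as httpContent.

 {
   "httpUrl": "http://example.com/compound.zip/compound-element.txt",
   "_deltaHash": "<hash computed as in the WebCrawler worker>",
   "_compoundRecordId": "<record ID of the compound.zip record>",
   "_compoundPath": ["http://example.com/compound.zip"]
 }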

Sample web crawl job

Job definition for crawling from start URL "http://wiki.eclipse.org/SMILA", pushing the imported records to job "indexUpdateJob". An include pattern is defined to make sure that we only crawl URLs from "below" our start URL.

 {
   "name":"crawlWebJob",
   "workflow":"webCrawling",
   "parameters":{
     "tempStore":"temp",
     "dataSource":"web",
     "startUrl":"http://wiki.eclipse.org/SMILA/",
     "jobToPushTo":"indexUpdateJob",
     "mapping":{
       "httpContent":"Content",
       "httpUrl":"filePath"
     },
     "filters":{
       "urlPatterns":{
         "include":["http://wiki\\.eclipse\\.org/SMILA/.*"]
       }
     }
   }
 }
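
To narrow the crawl further, exclude patterns could be added to the same filters block, for example to skip non-HTML downloads below the start URL. The patterns below are only meant to illustrate the exclude syntax, not a recommendation:

   "filters":{
     "urlPatterns":{
       "include":["http://wiki\\.eclipse\\.org/SMILA/.*"],
       "exclude":[".*\\.pdf$", ".*\\.png$"]
     }
   }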
