SMILA/Documentation/Importing/Concept

Using JobManager to run imports

The idea is to apply the job management framework to crawl jobs, too. The advantages are:

  • we don't need a separate execution framework for crawling anymore
  • integrators can use the same programming model for creating crawler components as for processing workers
  • same control and monitoring APIs for crawling and processing
  • better performance through inherent asynchronicity
  • better error tolerance through inherent fail-safety
  • parallelization of the crawling process is possible

Overview

We can reach this goal by splitting up the crawl process into several workers. Basically, a crawling workflow always looks like this:

SMILA-importing-workflow.png

Workers with names starting with "(DS)" are specific to the crawled data source type. E.g. to crawl a file system you naturally need a different crawler worker than for a web server. Not every component is necessary for every data source type, and it is possible to adapt components to add or remove functionality.

The crawling job is separated from the processing (e.g. indexing) workflow. A final worker in the crawl workflow pushes all records to the other workflow. This makes it possible to crawl several data sources into a single index. Also, in update crawl runs it is easier to detect when the actual crawling is done, so it can be determined which records have to be deleted because they were not visited in this run. We assume that the processing job is just running all the time.

Components

  • Crawler: (data source specific)
    • two output slots:
      • one for crawled resources (files, web pages) that match configurable filters (name patterns, extensions, size, mimetype, ...)
      • one for resources still to crawl (outgoing links, sub-directories). This one leads to follow-up tasks for the same worker.
    • The worker can create multiple output bulks for each slot per task so that the following workers can parallelize better.
    • In general, it doesn't get the content of a resource, but only the path or URL (or whatever identifies it) and the metadata of the resource, to minimize IO load, especially during "update runs" where most of the resources have not changed and therefore do not need to be fetched.
    • If it has to fetch the content anyway (e.g. Web Crawler has to parse HTML to find follow-up links), it may add it to the crawled records to prevent additional fetching.
    • The crawler can use or contain a "VisitedLinks" service if the data source can have cycles or multiple paths to the same resource (typical: web crawler) so that it does not produce multiple records that represent the same resource.
  • DeltaChecker: (a rough sketch of its per-record logic follows after this list)
    • Checks with the DeltaService whether a resource is new in the data source or has been changed since the last run, depending on some of the metadata produced by the crawler (e.g. modification date from the file system or HTTP headers).
    • If the resource has not changed since the last run, this is marked in the DeltaService and the record is not written to the output bulk.
    • Otherwise the record is written to the output bulk (an additional attribute describes whether it is a new record or one to be updated) and must be pushed to the processing workflow in the end.
  • Fetcher: (data source specific)
    • Worker that gets the content of the resource, if the record does not contain it already.
    • Detects compounds (archive files like zip or tgz, for example) and does not fetch their content, but just copies the records containing the IDs (URL, file name, etc.) to a compound output bulk for later extraction, as we do not want to put extremely large compound objects into bulks.
  • Compound extractor: (data source specific)
    • For handling compounds: fetches the compound data to a local temp filesystem, extracts it and adds the resulting records to output bulks, just like the ones written by the fetcher.
    • Extracting is generic, but fetching the compound file to a local temp directory will be data source specific - the compound is not fetched by the fetcher, because we don't want to add extremely large files to record bulks.
  • Update Pusher:
    • Pushes resulting records to the BulkBuilder and marks them as updated in the DeltaService. In the completion phase of the job run, it can perform the "delta-delete" operation (see below).
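
The per-record logic of the DeltaChecker described above can be sketched roughly as follows. This is a minimal illustration in plain Java; the record, bulk and DeltaService types are simplified stand-ins and not the actual SMILA worker API.

<source lang="java">
import java.util.ArrayList;
import java.util.List;

/** Simplified stand-in for the delta state store; NOT the real SMILA interface. */
interface DeltaService {
  enum State { NEW, CHANGED, UNCHANGED }
  /** Compares the stored state of a resource with its current modification token. */
  State check(String dataSource, String id, String modificationToken);
  /** Remembers that the resource was seen (unchanged) in the current job run. */
  void markVisited(String dataSource, String id, String modificationToken);
}

/** A crawled record: identifier plus metadata, the content is fetched later. */
class CrawledRecord {
  final String id;
  final String modificationToken; // e.g. modification date from file system or HTTP headers
  String updateType;              // "new" or "updated", set by the delta check
  CrawledRecord(String id, String modificationToken) {
    this.id = id;
    this.modificationToken = modificationToken;
  }
}

class DeltaCheckerSketch {
  private final DeltaService deltaService;
  DeltaCheckerSketch(DeltaService deltaService) { this.deltaService = deltaService; }

  /** Processes one input bulk; only new or changed records end up in the output bulk. */
  List<CrawledRecord> process(String dataSource, List<CrawledRecord> inputBulk) {
    List<CrawledRecord> outputBulk = new ArrayList<>();
    for (CrawledRecord record : inputBulk) {
      DeltaService.State state =
          deltaService.check(dataSource, record.id, record.modificationToken);
      if (state == DeltaService.State.UNCHANGED) {
        // unchanged since the last run: just mark it as visited, do not forward it
        deltaService.markVisited(dataSource, record.id, record.modificationToken);
      } else {
        // new or changed: forward it; the Update Pusher marks it in the DeltaService
        // after it has been pushed to the processing workflow
        record.updateType = (state == DeltaService.State.NEW) ? "new" : "updated";
        outputBulk.add(record);
      }
    }
    return outputBulk;
  }
}
</source>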

The crawl job is started as a runOnce job, i.e. the jobmanager creates an initial task that causes the crawler to start crawling (the data source configuration and start links would be given as job parameters; we may need some additional component to manage data source configurations), which in turn creates follow-up crawl and record tasks. When all tasks have been processed, the job run finishes automatically (because it was started in runOnce mode).
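
As an illustration, starting such a crawl job could look roughly like the following HTTP call. This is only a sketch: the host, port, endpoint path, job name and request body are assumptions made here for illustration, and in the actual JobManager REST API the job parameters may have to be set in the job definition rather than in the start request.

<source lang="java">
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

/** Illustrative only: starts a hypothetical crawl job "crawlFileShare" in runOnce mode. */
public class StartCrawlJobSketch {
  public static void main(String[] args) throws Exception {
    // hypothetical request body: run once, with data source configuration as job parameters
    String body = """
        {
          "mode": "runOnce",
          "parameters": {
            "dataSource": "myFileShare",
            "rootFolder": "/data/share"
          }
        }
        """;
    HttpRequest request = HttpRequest.newBuilder()
        .uri(URI.create("http://localhost:8080/smila/jobmanager/jobs/crawlFileShare/"))
        .header("Content-Type", "application/json")
        .POST(HttpRequest.BodyPublishers.ofString(body))
        .build();
    HttpResponse<String> response =
        HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofString());
    System.out.println(response.statusCode() + " " + response.body());
  }
}
</source>

Once all follow-up tasks created by this run have been processed, the job run finishes on its own and can be observed with the same monitoring APIs as any processing job (see the advantages listed above).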


Delta Delete

Optionally, a final "completion run" can be triggered that causes the UpdatePusher to examine the DeltaService for records that have not been visited in this job run, because they have been removed from the data source and therefore should be removed from the target of the import process, too. For each of these records one "to-delete" record is pushed to the BulkBuilder. To support an efficient delta-delete on large data sources it will be necessary to parallelize this "delta-delete" operation, because it could take too long to scan the complete DeltaService and create the to-delete records sequentially. To support this, the DeltaService should put the state entries for one source into different partitions or shards that can be scanned independently. Then one task can be created for each shard to scan it for deleted records, and these tasks can be processed in parallel by multiple instances of the UpdatePusher.
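
A minimal sketch of such a shard-parallel completion phase is shown below. The shard and bulk builder interfaces are simplified stand-ins, and plain threads stand in for the parallel tasks that the jobmanager would actually create - the point is only that each shard can be scanned for unvisited entries independently.

<source lang="java">
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

/** Illustrative shard-parallel delta-delete scan; the interfaces are NOT the real SMILA API. */
class DeltaDeleteSketch {
  interface DeltaShard {
    /** IDs stored for this source that were NOT visited in the given job run. */
    List<String> unvisitedIds(String dataSource, String jobRunId);
  }
  interface BulkBuilder {
    /** Pushes a "to-delete" record for the given resource id. */
    void pushDelete(String dataSource, String id);
  }

  void deltaDelete(String dataSource, String jobRunId,
                   List<DeltaShard> shards, BulkBuilder bulkBuilder) throws Exception {
    ExecutorService pool = Executors.newFixedThreadPool(shards.size());
    try {
      // one independent scan task per shard - this is what makes the completion
      // phase parallelizable for large data sources
      List<Future<?>> scans = new ArrayList<>();
      for (DeltaShard shard : shards) {
        scans.add(pool.submit(() ->
            shard.unvisitedIds(dataSource, jobRunId)
                 .forEach(id -> bulkBuilder.pushDelete(dataSource, id))));
      }
      for (Future<?> scan : scans) {
        scan.get(); // wait for all shards, propagate failures
      }
    } finally {
      pool.shutdown();
    }
  }
}
</source>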

The usage of the DeltaService is determined by a job parameter named deltaStateUsage. It can take one of four values:

  • none: the DeltaService is disabled, no entries are written for imported records, and no delta-delete is performed. This makes the import faster if the delta information is not needed for the application. However, as no initial delta information is recorded in this mode, switching to another mode later for incremental updates usually does not make sense.
  • setOnly: the delta information for each imported record is written to the DeltaService, but neither a check of the already existing information nor the delta-delete is performed. This is useful to make an initial import faster. Afterwards incremental updates can be done using one of the following modes.
  • setAndCheck: existing delta information is checked to prevent unchanged objects from being re-imported unnecessarily, and the delta information for all imported records is written. Only the final delta-delete step is omitted. This is useful if objects removed from the data source do not need to be removed from the import target, too.
  • full: This is the default mode with the aim to keep the import target in sync with the data source: delta information is checked and recorded, and the delta-delete is done.

If one of the first two values (none and setOnly) is used, the DeltaChecker worker just writes every record from the input bulk to the output bulk without really doing anything. Thus, to improve the performance even more, for jobs using these modes the DeltaChecker worker can be removed from the workflow completely and the output of the crawler worker can be processed by the fetcher worker immediately.
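
The four modes can be summarized by which of the three delta operations they enable. The following enum is just a compact restatement of the list above, not actual SMILA code:

<source lang="java">
/** Summary of the deltaStateUsage modes described above (illustrative only). */
enum DeltaStateUsage {
  NONE(false, false, false),
  SET_ONLY(false, true, false),
  SET_AND_CHECK(true, true, false),
  FULL(true, true, true); // the default mode

  final boolean checkExistingState; // skip unchanged records during the import?
  final boolean recordState;        // write delta information for imported records?
  final boolean deltaDelete;        // run the final delta-delete phase?

  DeltaStateUsage(boolean checkExistingState, boolean recordState, boolean deltaDelete) {
    this.checkExistingState = checkExistingState;
    this.recordState = recordState;
    this.deltaDelete = deltaDelete;
  }
}
</source>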
