
SMILA/Documentation/Importing/Concept

Using JobManager to run imports

The idea is to use the job management framework for crawl jobs, too. The advantages are:

  • we no longer need a separate execution framework for crawling
  • integrators can use the same programming model for creating crawler components as for processing workers
  • the same control and monitoring APIs are used for crawling and processing
  • better performance through inherent asynchronicity
  • better error tolerance through inherent fail-safety
  • the crawling process can be parallelized

We can reach this goal by splitting up the crawl process into several workers. Basically, a crawling workflow always looks like this:

[Figure: SMILA-importing-workflow.png]

Workers with names starting with "(DS)" are specific to the crawled data source type: to crawl a file system, for example, you obviously need a different crawler worker than for a web server. Not every component is necessary for every data source type, and it is possible to adapt components to add or remove functionality.

The crawling job is separated from the processing (e.g. indexing) workflow: a final worker in the crawl workflow pushes all records to the other workflow. This makes it possible to crawl several data sources into a single index. It also makes it easier, in update crawl runs, to detect when the actual crawling is done, so that it can be determined which records have to be deleted because they were not visited in this run. We assume that the processing job is running all the time.
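
To illustrate this decoupling, here is a minimal, self-contained sketch in plain Java (not SMILA code): the record input of the always-running processing job is modelled as a queue, and the final workers of two different crawl jobs push their records into it. All names are hypothetical.

```java
// Purely illustrative: several crawl jobs hand their records over to the input
// of one continuously running processing (indexing) job. Names are invented
// for this sketch and do not correspond to the SMILA API.
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public final class CrawlToProcessingHandOff {

  public static void main(String[] args) throws InterruptedException {
    // Stands in for the record input of the always-running processing job.
    BlockingQueue<String> processingInput = new LinkedBlockingQueue<>();

    // The processing job consumes records regardless of which crawl job produced
    // them, so several data sources end up in the same index.
    Thread processingJob = new Thread(() -> {
      try {
        while (true) {
          String record = processingInput.take();
          System.out.println("indexing " + record);
        }
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
      }
    });
    processingJob.setDaemon(true);
    processingJob.start();

    // Final workers of two different crawl jobs push into the same input.
    processingInput.put("fileSource:/docs/readme.txt");
    processingInput.put("webSource:http://example.org/index.html");

    Thread.sleep(200); // give the consumer a moment before the demo exits
  }
}
```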

The components are:

  • Crawler: (data source specific)
    • two output slots:
      • one for crawled resources (files, web pages), that match configurable filters (name patterns, extensions, size, mimetype, ...)
      • one for resources still to crawl (outgoing links, sub-directories). This one leads to follow-up tasks for the same worker.
    • The worker can create multiple output bulks for each slot per task so that the following workers can parallelize better.
    • In general, it does not fetch the content of a resource, but only the path or URL (or whatever identifies it) and the metadata of the resource, to minimize IO load, especially during "update runs" where most of the resources have not changed and therefore need not be fetched.
    • If it has to fetch the content anyway (e.g. the web crawler has to parse HTML to find follow-up links), it may add it to the crawled records to prevent additional fetching.
    • The crawler can use or contain a "VisitedLinks" service if the data source can have cycles or multiple paths to the same resource (typical: web crawler) so that it does not produce multiple records that represent the same resource.
  • DeltaChecker:
    • Checks with the DeltaService whether a resource is new in the data source or has been changed since the last run, based on some of the metadata produced by the crawler (e.g. the modification date from the file system or from HTTP headers). A sketch of this per-record decision follows after this list.
    • If the resource has not changed since the last run, this is marked in the DeltaService and the record is not written to the output bulk.
    • Otherwise the record is written to the output bulk (an additional attribute describes whether it is a new record or one to update) and must be pushed to the processing workflow in the end.
  • Fetcher: (data source specific)
    • Worker that gets the content of the resource, if the record does not contain it already.
    • Detects compounds (e.g. archive files like zip or tgz) and does not fetch their content, but just copies the records containing the IDs (URL, file name, etc.) to a compound output bulk for later extraction, as we do not want to put extremely large compound objects into bulks.
  • Compound extractor: (data source specific)
    • Handles compounds: fetches the compound data to a local temp filesystem, extracts it and adds the resulting records to output bulks, just like the ones written by the fetcher.
    • Extracting is generic, but fetching the compound file to a local temp directory is data source specific. The compound is not fetched by the fetcher because we don't want to add extremely large files to record bulks.
  • Update Pusher:
    • Pushes the resulting records to the BulkBuilder and marks them as updated in the DeltaService.
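
To make the DeltaChecker's decision per record more concrete, here is a minimal sketch. The DeltaService interface, the DeltaState values and the "_deltaHint" attribute are assumptions made for this illustration and do not correspond to the actual SMILA interfaces.

```java
// Illustrative sketch of the DeltaChecker's per-record decision. Interface and
// attribute names are assumptions for this example, not the SMILA API.
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

enum DeltaState { NEW, CHANGED, UNCHANGED }

interface DeltaService {
  /** Compares the crawled metadata (e.g. modification date) with the stored state. */
  DeltaState check(String source, String recordId, Map<String, Object> metadata);

  /** Marks the record as visited in the current run without updating it. */
  void markUnchanged(String source, String recordId);
}

final class CrawledRecord {
  final String id;
  final Map<String, Object> metadata;

  CrawledRecord(String id, Map<String, Object> metadata) {
    this.id = id;
    this.metadata = metadata;
  }
}

final class DeltaChecker {
  private final DeltaService deltaService;

  DeltaChecker(DeltaService deltaService) {
    this.deltaService = deltaService;
  }

  /** Returns only the records that must continue towards the processing workflow. */
  List<CrawledRecord> filter(String source, List<CrawledRecord> inputBulk) {
    List<CrawledRecord> outputBulk = new ArrayList<>();
    for (CrawledRecord record : inputBulk) {
      DeltaState state = deltaService.check(source, record.id, record.metadata);
      if (state == DeltaState.UNCHANGED) {
        // Mark as visited so the finish action does not treat it as deleted,
        // but do not write it to the output bulk.
        deltaService.markUnchanged(source, record.id);
      } else {
        // An additional attribute tells later workers whether this is a new
        // record or one to update.
        record.metadata.put("_deltaHint", state == DeltaState.NEW ? "add" : "update");
        outputBulk.add(record);
      }
    }
    return outputBulk;
  }
}
```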

The crawl job is started as a runOnce job, i.e. the job manager creates an initial task that causes the crawler to start crawling (the data source configuration and start links would be given as job parameters; we may need an additional component to manage data source configurations), which in turn creates follow-up crawl and record tasks. When all tasks have been processed, the job run finishes automatically (because it was started in runOnce mode). Then a "finish action" needs to be triggered (TODO: this still needs to be specified) that causes the DeltaService to be scanned for records that have not been visited in this run because they no longer exist in the data source. For these records, delete commands must be sent to the processing job.

Note: The previously described step of deleting records that have not been visited (because they have been deleted from the data source) has not been implemented yet.


Depending on the implementation of the DeltaService, it can be useful to parallelize the "find deleted records" operation, too. To support this, the DeltaService could put the state entries for one source into different partitions or shards that can be scanned independently. Then one task can be created per shard to scan it for deleted records, and these tasks can be processed in parallel by multiple instances of the same "finish worker".
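
As an illustration, the following sketch parallelizes the scan with one task per shard. The ShardedDeltaService interface and its method names are invented for this example; they are not the actual SMILA API.

```java
// Illustrative only: the "find deleted records" step parallelized over the
// shards of a hypothetical ShardedDeltaService.
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

interface ShardedDeltaService {
  /** The partitions ("shards") the state entries of one data source are split into. */
  List<String> shards(String source);

  /** IDs in the given shard that were not visited during the given crawl job run. */
  List<String> findUnvisited(String source, String shard, String jobRunId);
}

final class DeletedRecordsScanner {
  private final ShardedDeltaService deltaService;
  private final ExecutorService workers;

  DeletedRecordsScanner(ShardedDeltaService deltaService, int parallelism) {
    this.deltaService = deltaService;
    this.workers = Executors.newFixedThreadPool(parallelism);
  }

  /** Scans all shards in parallel; the returned IDs lead to delete commands for the processing job. */
  List<String> findDeletedRecords(String source, String jobRunId) throws Exception {
    List<Future<List<String>>> tasks = new ArrayList<>();
    for (String shard : deltaService.shards(source)) {
      // One task per shard, mirroring how one JobManager task per shard could be
      // created and handled by multiple instances of the same "finish worker".
      Callable<List<String>> scan = () -> deltaService.findUnvisited(source, shard, jobRunId);
      tasks.add(workers.submit(scan));
    }
    List<String> deleted = new ArrayList<>();
    for (Future<List<String>> task : tasks) {
      deleted.addAll(task.get());
    }
    return deleted;
  }

  void shutdown() {
    workers.shutdown();
  }
}
```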
