SMILA/Documentation/Importing/Crawler/File

Currently, the file system workers are implemented in a very simplistic way so that the importing framework can be tested. A more sophisticated implementation will follow soon.

File Crawler

  • Worker name: fileCrawler
  • Parameters (see the example job definition after this list):
    • dataSource
    • rootFolder
  • Input slots:
    • directoriesToCrawl
  • Output slots:
    • directoriesToCrawl
    • filesToCrawl

The File Crawler starts crawling in the rootFolder. It produces one record for each subdirectory in the bucket connected to directoriesToCrawl (each of these records goes into a bulk of its own), and one record per file in the bucket connected to filesToCrawl (a new bulk is started every 1000 files). The bucket in the directoriesToCrawl output slot should be connected to the input slot of the same name so that the subdirectories are crawled in follow-up tasks. The file records do not yet contain the file content, but only the following metadata attributes (a sketch of the crawl loop follows the list):

  • file.name
  • file.folder
  • file.path
  • file.extension
  • file.size
  • file.last-modified (also written to attribute _deltaHash for delta checking)
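
To make the control flow concrete, here is a minimal sketch of the crawl step for a single folder. It is plain Java, not the actual SMILA worker code: records are modeled as simple maps, and the emit methods merely stand in for writing to the directoriesToCrawl and filesToCrawl bulks.

  import java.io.File;
  import java.util.ArrayList;
  import java.util.HashMap;
  import java.util.List;
  import java.util.Map;

  /** Illustrative sketch of one crawl task; not the actual SMILA worker code. */
  public class FileCrawlSketch {

      // a new filesToCrawl bulk is started every 1000 files
      static final int FILES_PER_BULK = 1000;

      public static void main(String[] args) {
          crawlFolder(new File(args.length > 0 ? args[0] : "."), "file");
      }

      static void crawlFolder(File folder, String dataSource) {
          File[] entries = folder.listFiles();
          if (entries == null) {
              return; // not a directory, or not readable
          }
          List<Map<String, Object>> fileBulk = new ArrayList<>();
          for (File entry : entries) {
              if (entry.isDirectory()) {
                  // one record per subdirectory, each in its own bulk,
                  // so that a follow-up task crawls it
                  Map<String, Object> dirRecord = new HashMap<>();
                  dirRecord.put("file.folder", entry.getAbsolutePath());
                  emitDirectoryBulk(dirRecord);
              } else {
                  fileBulk.add(createFileRecord(entry, dataSource));
                  if (fileBulk.size() == FILES_PER_BULK) {
                      emitFileBulk(fileBulk);
                      fileBulk = new ArrayList<>();
                  }
              }
          }
          if (!fileBulk.isEmpty()) {
              emitFileBulk(fileBulk);
          }
      }

      /** Builds a file record with the metadata attributes listed above. */
      static Map<String, Object> createFileRecord(File file, String dataSource) {
          Map<String, Object> record = new HashMap<>();
          String name = file.getName();
          int dot = name.lastIndexOf('.');
          record.put("file.name", name);
          record.put("file.folder", file.getParent());
          record.put("file.path", file.getAbsolutePath());
          record.put("file.extension", dot >= 0 ? name.substring(dot + 1) : "");
          record.put("file.size", file.length());
          record.put("file.last-modified", file.lastModified());
          record.put("_deltaHash", file.lastModified()); // for delta checking
          record.put("_recordid", file.getAbsolutePath());
          record.put("_sourceid", dataSource);
          return record;
      }

      static void emitDirectoryBulk(Map<String, Object> record) {
          System.out.println("directoriesToCrawl bulk: " + record);
      }

      static void emitFileBulk(List<Map<String, Object>> bulk) {
          System.out.println("filesToCrawl bulk (" + bulk.size() + " records)");
      }
  }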

Currently, the _recordid is the same as file.path, and _sourceid is set from the dataSource task parameter.
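
For example, crawling with dataSource set to "file" might yield a record like the following for a single file (all values hypothetical):

  file.name = report.pdf
  file.folder = /data/documents
  file.path = /data/documents/report.pdf
  file.extension = pdf
  file.size = 48213
  file.last-modified = 1322571720000
  _deltaHash = 1322571720000
  _recordid = /data/documents/report.pdf
  _sourceid = file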

File Fetcher

  • Worker name: fileFetcher
  • Parameters: none
  • Input slots:
    • filesToFetch
  • Output slots:
    • files

For each input record, the File Fetcher reads the file referenced in the attribute file.path and adds its content to the record as the attachment file.content.
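
The fetch step amounts to reading the file bytes and attaching them to the record. A minimal sketch, again in plain Java with the record and its attachments modeled as maps rather than the actual SMILA data model:

  import java.io.IOException;
  import java.nio.file.Files;
  import java.nio.file.Paths;
  import java.util.HashMap;
  import java.util.Map;

  /** Illustrative sketch of the fetch step; not the actual SMILA worker code. */
  public class FileFetchSketch {

      public static void main(String[] args) throws IOException {
          Map<String, Object> record = new HashMap<>();
          record.put("file.path", args.length > 0 ? args[0] : "example.txt");
          Map<String, byte[]> attachments = new HashMap<>();
          fetch(record, attachments);
          System.out.println("attached " + attachments.get("file.content").length + " bytes");
      }

      /** Reads the file referenced by file.path and attaches its bytes as file.content. */
      static void fetch(Map<String, Object> record, Map<String, byte[]> attachments)
              throws IOException {
          byte[] content = Files.readAllBytes(Paths.get((String) record.get("file.path")));
          attachments.put("file.content", content);
      }
  }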