SMILA/Documentation/Importing/Crawler/File

File Crawler, File Fetcher, and File Extractor workers are used for importing files from a file system. For the big picture and how these workers interact, have a look at the Importing Concept.

File Crawler

The File Crawler crawls files from a root folder and the subdirectories below.

Configuration

The File Crawler worker is usually the first worker in a workflow and the job is started in runOnce mode.

  • Worker name: fileCrawler
  • Parameters:
    • dataSource: (req.) value for attribute _source, needed e.g. by the delta service
    • rootFolder: (req.) crawl starting point
    • filters (opt.) conditions to include or exclude files and folders from the import
      • maxFileSize: maximum file size; files that are bigger are filtered out
      • maxFolderDepth: starting from the root folder, this is the maximum depth to crawl into subdirectories
      • followSymbolicLinks: whether to follow symbolic links to files/folders or not
      • filePatterns: regex patterns for filtering crawled files on the basis of their file name
        • include: if include patterns are specified, at least one of them must match the file name. If no include patterns are specified, this is handled as if all file names are included.
        • exclude: if at least one exclude pattern matches the file name, the crawled file is filtered out
      • folderPatterns: regex patterns for filtering crawled folders and files on the basis of their file path
        • include: Only relevant for crawled files: If include patterns are specified, at least one of them must match the file path. If no include patterns are specified, this is handled as if all file paths are included.
        • exclude: Only relevant for crawled folders: If at least one exclude pattern matches the folder name, the folder (and its subdirectories) will not be imported.
    • mapping (req.) specifies how to map file properties to record attributes
      • filePath (opt.) mapping attribute for the complete file path
      • fileFolder (opt.) mapping attribute for the file folder
      • fileName (opt.) mapping attribute for the file name
      • fileExtension (opt.) mapping attribute for the file extension
      • fileSize (opt.) mapping attribute for the file size (in bytes)
      • fileLastModified (opt.) mapping attribute for the file's last modified date
    • maxFilesPerBulk (opt.) maximum number of files in one bulk. (default: 1000)
    • minFilesPerBulk (opt.) minimum number of files in one bulk. (if not set, all files of current folder are put into one bulk until maxFilesPerBulk is reached.)
  • Task generator: runOnceTrigger
  • Input slots:
    • directoriesToCrawl
  • Output slots:
    • directoriesToCrawl
    • filesToCrawl
Processing

The File Crawler starts crawling in the rootFolder. It produces one record for each subdirectory in the bucket connected to directoriesToCrawl and one record per file in the bucket connected to filesToCrawl. The bucket in slot directoriesToCrawl should be connected to the input slot of the File Crawler so that the subdirectories are crawled in follow-up tasks. The resulting records do not yet contain the file content, but only the metadata attributes configured in the mapping.
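
A single file record produced by the crawler might then look roughly like the following sketch. The attribute names correspond to the mapping used in the job definition in the Examples section below; the concrete values (and the exact date format) are purely illustrative:

 {
   "_source":"files",
   "filePath":"/temp/docs/report.txt",
   "fileFolder":"/temp/docs",
   "fileName":"report.txt",
   "fileExtension":"txt",
   "fileSize":2048,
   "fileLastModified":"2012-05-16T10:21:00Z"
 }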

The directory and file records are collected in bulks, whose size can be configured via maxFilesPerBulk and minFilesPerBulk:

  • On the top level (root folder), each subdirectory record goes to a separate bulk; the parameter minFilesPerBulk is not considered here (this distributes the work for immediate scale-up).
  • minFilesPerBulk
    • not configured: each subdirectory record goes to a separate bulk.
    • configured: if the files of the current folder do not reach minFilesPerBulk, the crawler steps into the subfolder(s) to reach the configured minimum size. Once the minimum size is reached, all remaining files of the current subfolder are also put into the same bulk until maxFilesPerBulk is reached. Any remaining subfolders each go to a separate bulk.
  • maxFilesPerBulk
    • not configured: a new file record bulk is started every 1000 files.
    • configured: a new file record bulk is started when the configured value is reached.

Please note that both parameters must be >= 0 and that minFilesPerBulk must be less than maxFilesPerBulk; otherwise the job will fail.
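
For example, with the (arbitrarily chosen) values below, a folder containing only 10 files would not get a bulk of its own: the crawler descends into its subfolders until at least 50 file records are collected, and it starts a new bulk after 500 records. The snippet only shows the relevant excerpt of a job's parameters section:

 "parameters":{
   "minFilesPerBulk":50,
   "maxFilesPerBulk":500
 }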

Source:

The attribute _source is set from the task parameter dataSource. It currently has no further meaning, but it is needed by the delta service.

Compounds:

If the running CompoundExtractor service identifies an object as an extractable compound, it is marked with attribute _isCompound set to true.

File Fetcher

For each input record, the File Fetcher reads the file referenced in attribute filePath and adds the content as attachment fileContent.

Configuration
  • Worker name: fileFetcher
  • Parameters:
    • mapping (req.) needed to get the file path and to add the fetched file content
      • filePath (req.) to read the attribute that contains the file path
      • fileContent (req.) attachment name where the file content is written to
  • Input slots:
    • filesToFetch
  • Output slots:
    • files
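
A minimal sketch of the mapping parameter for this worker; the attribute and attachment names are only examples and must match the names used by the preceding File Crawler (here the same names as in the job definition in the Examples section are assumed):

 "mapping":{
   "filePath":"filePath",
   "fileContent":"fileContent"
 }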


File Extractor Worker

The File Extractor worker is used for extracting compounds (zip, tgz, etc.) during file crawling.

Configuration
  • Worker name: fileExtractor
  • Parameters:
    • filters (opt., see File Crawler)
      • maxFileSize: (opt., see File Crawler)
      • filePatterns: (opt., see File Crawler)
        • include: (opt., see File Crawler)
        • exclude: (opt., see File Crawler)
      • folderPatterns: (opt., see File Crawler)
        • include: (opt., see File Crawler)
        • exclude: (opt.) The behaviour here differs slightly from that of the File Crawler: If an exclude pattern matches the folder path of an extracted file, then the file is filtered out. But depending on the pattern, files from subdirectories may still be imported!
    • mapping (req.)
      • filePath (req., see File Crawler): needed to get the file path of the compound file to extract
      • fileFolder (opt., see File Crawler)
      • fileName (opt., see File Crawler)
      • fileExtension (opt., see File Crawler)
      • fileSize (opt., see File Crawler)
      • fileLastModified (opt., see File Crawler)
      • fileContent (req., see File Fetcher)
  • Input slots:
    • compounds
  • Output slots:
    • files
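
A sketch of a possible parameter block for the File Extractor worker. The attribute and attachment names as well as the filter values are illustrative; the mapping must at least contain filePath (to locate the compound file) and fileContent (for the extracted content):

 "parameters":{
   "mapping":{
     "filePath":"filePath",
     "fileName":"fileName",
     "fileContent":"fileContent"
   },
   "filters":{
     "maxFileSize":1000000000,
     "filePatterns":{
       "exclude":[".*\\.tmp"]
     }
   }
 }
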
Processing

For each input record, an input stream to the referenced file is created and fed into the CompoundExtractor service to extract the compound elements. If an element is a compound itself, it is also extracted. If it is not a compound, a new record is created. The produced records are converted to look like records produced by the File Crawler or File Fetcher, respectively, with the attributes and the attachment set as specified in the mapping configuration. Additionally, the following attributes are set:

  • _deltaHash: computed in the same way as by the File Crawler worker
  • _compoundRecordId: record ID of the top-level compound this element was extracted from
  • _isCompound: set to true for elements that are compounds themselves.
  • _compoundPath: sequence of filePath attribute values of the compound objects needed to navigate to the compound element.
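
As an illustration, a record for a non-compound element extracted from a crawled archive could carry attributes roughly like the following sketch. All names and values are made up, and the exact form of the path values produced by the CompoundExtractor service may differ:

 {
   "_source":"files",
   "_compoundRecordId":"<record id of the crawled archive.zip>",
   "_compoundPath":["/temp/archive.zip"],
   "filePath":"/temp/archive.zip/readme.txt",
   "fileFolder":"/temp/archive.zip",
   "fileName":"readme.txt",
   "fileExtension":"txt"
 }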

Examples

Example of a file crawl job:

 {
   "name":"crawlFileJob",
   "workflow":"fileCrawling",
   "parameters":{
     "tempStore":"temp",
     "dataSource":"files",
     "rootFolder":"/temp",
     "jobToPushTo":"buildBulks",
     "mapping":{
       "fileContent":"fileContent",
       "filePath":"filePath",
       "fileFolder":"fileFolder",
       "fileName":"fileName",
       "fileSize":"fileSize",
       "fileExtension":"fileExtension",
       "fileLastModified":"fileLastModified"
     },
     "filters":{
       "maxFileSize":1000000000,
       "maxFolderDepth":10,
       "followSymbolicLinks":true,
       "filePatterns":{
         "include":[".*"],
         "exclude":["invalid.txt", "invalid.pdf"]  
       },          
       "folderPatterns":{
         "include":[".*"],
         "exclude":["invalid-dir"]  
       }
     }
   }
 }