SMILA/Documentation/Importing/Crawler/File

The File Crawler, File Fetcher and File Extractor workers are used for importing files from a file system. For the big picture and a description of how these workers interact, see the Importing Concept.

File Crawler

The File Crawler crawls files from a root folder and the subdirectories below.

Configuration

The File Crawler worker is usually the first worker in a workflow and the job is started in runOnce mode.

  • Worker name: fileCrawler
  • Parameters:
    • dataSource: (req.) value for attribute _source, needed e.g. by the delta service
    • rootFolder: (req.) crawl starting point
    • filters (opt.) filters with conditions to include or exclude files and folders from the import (a combined example of filters and mapping is shown after this list)
      • maxFileSize: maximum file size, files that are bigger are filtered out
      • maxFolderDepth: starting from the root folder, this is the maximum depth to crawl into subdirectories. (Hint: Folder structures in compounds are not taken into account here)
      • followSymbolicLinks: whether to follow symbolic links to files/folders or not
      • filePatterns: regex patterns for filtering crawled files on the basis of their file name
        • include: if include patterns are specified, at least one of them must match the file name. If no include patterns are specified, this is handled as if all file names are included.
        • exclude: if at least one exclude pattern matches the file name, the crawled file is filtered out
        • (Hint: the patterns need to use forward slashes as directory separators, even if your file system uses backslashes as folder delimiters)
      • folderPatterns: regex patterns for filtering crawled folders and files on the basis of their complete folder path. (Hint: In contrast to the file patterns, a folder pattern must match the complete path; it is not sufficient for it to match just the folder name!)
        • include: Only relevant for crawled files: If include patterns are specified, at least one of them must match the file path. If no include patterns are specified, this is handled as if all file paths are included.
        • exclude: Only relevant for crawled folders: If at least one exclude pattern matches the folder path, the folder (and its subdirectories) will not be imported.
        • (Hint: the patterns need to use forward slashes as directory separators, even if your file system uses backslashes as folder delimiters)
    • mapping (req.) specifies how to map file properties to record attributes
      • filePath (opt.) mapping attribute for the complete file path (Hint: required for the standard import workflow because the File Fetcher and File Extractor workers need it, see below)
      • fileFolder (opt.) mapping attribute for the file folder (complete path without file name)
      • fileName (opt.) mapping attribute for the file name
      • fileExtension (opt.) mapping attribute for the file extension
      • fileSize (opt.) mapping attribute for the file size (in bytes)
      • fileLastModified (opt.) mapping attribute for the file's last modified date
      • fileReadAccess (opt.) mapping attribute for the read access from the file permission info (Access Control List)
      • fileWriteAccess (opt.) mapping attribute for the write access from the file permission info (Access Control List)
    • parameters to control the size of the output bulks, see below for details
      • maxFilesPerBulk (opt.) maximum number of files in one bulk. (default: 1000)
      • minFilesPerBulk (opt.) minimum number of files in one bulk. (default: 100)
      • directoriesPerBulk (opt.) number of directories written to one bulk for follow-up crawl tasks. (default: 10)
  • Task generator: runOnceTrigger
  • Input slots:
    • directoriesToCrawl
  • Output slots:
    • directoriesToCrawl
    • filesToCrawl
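
The following fragment of a job definition shows how these parameters could be combined. It is only a sketch: the root folder, the size and depth limits, the patterns and the attribute names on the right-hand side of the mapping are freely chosen examples, not required values.

  "parameters":{
     ...
     "dataSource":"files",
     "rootFolder":"/data/documents",
     "filters":{
       "maxFileSize":10000000,
       "maxFolderDepth":5,
       "followSymbolicLinks":false,
       "filePatterns":{
         "include":[".*\\.pdf", ".*\\.txt"]
       },
       "folderPatterns":{
         "exclude":[".*/temp"]
       }
     },
     "mapping":{
       "filePath":"Path",
       "fileName":"FileName",
       "fileExtension":"FileExtension",
       "fileSize":"Size",
       "fileLastModified":"LastModifiedDate"
     },
     ...
   }
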
Processing

The File Crawler starts crawling in the rootFolder. It produces one record for each subdirectory in the bucket connected to directoriesToCrawl and one record per file in the bucket connected to filesToCrawl. The bucket in slot directoriesToCrawl should be connected to the input slot of the File Crawler so that the subdirectories are crawled in follow-up tasks. The resulting records do not yet contain the file content but only the metadata attributes configured in the mapping.

The directory and file records are collected in bulks, whose size can be configured via the parameters maxFilesPerBulk, minFilesPerBulk and directoriesPerBulk:

  • maxFilesPerBulk behaves the same way whether or not it is configured:
    • not configured: a new filesToCrawl bulk is started after every 1000 files.
    • configured: a new filesToCrawl bulk is started when the configured value is reached.
  • minFilesPerBulk
    • not configured: only files in the crawled directory are added to filesToCrawl bulks, all subdirectories are written to directoriesToCrawl bulks.
    • configured: if the files of the current folder do not reach minFilesPerBulk, the crawler steps into the subfolder(s) to reach the configured minimum size. Once the minimum size is reached, all remaining files of the current subfolder are also written to filesToCrawl bulk(s). Remaining subfolders of the current folder and subfolders of already crawled subfolders are written to directoriesToCrawl bulks.
  • directoriesPerBulk
    • not configured: each sub-directory that is not read directly will be written to a separate directoriesToCrawl bulk
    • configured: the given number of sub-directories will be written to the same directoriesToCrawl bulk before a new one is started.

Please note that these parameters must be >= 0 and that minFilesPerBulk must be < maxFilesPerBulk. Otherwise your job will fail.

Since SMILA 1.3 these parameters are also applied in the initial crawl task; there is no longer any special logic for this task.
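
For example, the following (purely illustrative) values satisfy these constraints: bulks are closed after at most 500 files, the crawler steps into subfolders until at least 50 files have been collected, and 20 sub-directories are grouped into one directoriesToCrawl bulk.

  "parameters":{
     ...
     "maxFilesPerBulk":500,
     "minFilesPerBulk":50,
     "directoriesPerBulk":20,
     ...
   }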

Source:

The attribute _source is set from the task parameter dataSource which has no further meaning currently, but it is needed by the delta service.

Compounds:

If the running CompoundExtractor service identifies an object as an extractable compound, it is marked with attribute _isCompound set to true.

File permission info

If metadata attributes for file access info are configured in mapping:

  {
      ...
     "mapping":{
        ...
       "fileReadAccess":"ReadAccess",
       "fileWriteAccess":"WriteAccess"   
     },
     ...
   }

Then each resulting record contains file access attributes with values like in the following examples:

  • Linux: The properties will contain the names of the file's owner and group if they have read/write access, and the special value "_OTHERS_" if all users have read/write access. If a user/group name cannot be resolved by the operating system, the value will contain the numeric user/group ID instead. For example:
> ls -l file
-rw-r--r-- 1 johndoe  users  1 29. Jan 16:06 file

This results in a record like:

{
   ...
   "ReadAccess": [
      "johndoe",
      "users",
      "_OTHERS_"
   ],
   "WriteAccess": [
      "jschumacher"
   ]
}
  • Windows:
{
   ...
   "ReadAccess": [
      "BUILTIN\Administrators",
      "NT AUTORITY\SYSTEM",
      "BUILTIN\Users",
      ...
   ],
   "WriteAccess": [
      "BUILTIN\Administrators",
      "NT AUTORITY\SYSTEM",
      "NT AUTORITY\Authentificated Users",
      ...
   ]
}

File Fetcher

For each input record, the File Fetcher reads the file referenced in attribute filePath and adds its content as attachment fileContent, optionally together with further file properties. The File Fetcher can be used in combination with the File Crawler, where the Crawler extracts the file metadata and the Fetcher adds the file content, or it can be used on its own to get both the file content and the metadata properties.

Configuration
  • Worker name: fileFetcher
  • Parameters:
    • mapping (req.) needed to get the file path and to add the fetched file content (see the example after this list)
      • filePath (req.) to read the attribute that contains the file path
      • fileContent (req.) attachment name where the file content is written to
      • fileFolder (opt.) mapping attribute for the file folder (complete path without file name)
      • fileName (opt.) mapping attribute for the file name
      • fileExtension (opt.) mapping attribute for the file extension
      • fileSize (opt.) mapping attribute for the file size (in bytes)
      • fileLastModified (opt.) mapping attribute for the file's last modified date
  • Input slots:
    • filesToFetch
  • Output slots:
    • files
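
A minimal mapping for the File Fetcher could look like the following sketch. The attribute and attachment names are only examples; filePath must refer to the attribute that the preceding File Crawler wrote the file path to.

  "mapping":{
     "filePath":"Path",
     "fileContent":"Content"
   }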


File Extractor Worker

Used for extracting compounds (zip, tgz, etc.) in file crawling.

Configuration
  • Worker name: fileExtractor
  • Parameters:
    • filters (opt., see File Crawler)
      • maxFileSize: (opt., see File Crawler)
      • filePatterns: (opt., see File Crawler)
        • include: (opt., see File Crawler)
        • exclude: (opt., see File Crawler)
      • folderPatterns: (opt., see File Crawler)
        • include: (opt., see File Crawler)
        • exclude: (opt.) The behaviour here differs slightly from that of the File Crawler: if an exclude pattern matches the folder path of an extracted file, that file is filtered out. Depending on the pattern, files from subdirectories may still be imported, though!
    • mapping (req.) (a combined example is shown after this list)
      • filePath (req., see File Crawler): needed to get the file path of the compound file to extract
      • fileFolder (opt., see File Crawler)
      • fileName (opt., see File Crawler)
      • fileExtension (opt., see File Crawler)
      • fileSize (opt., see File Crawler)
      • fileLastModified (opt., see File Crawler)
      • fileContent (req., see File Fetcher)
  • Input slots:
    • compounds
  • Output slots:
    • files
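
As a sketch, the extractor's parameters could look like this. The patterns and attribute names are illustrative; filePath and fileContent should match the names used by the File Crawler and File Fetcher in the same workflow.

  "parameters":{
     ...
     "filters":{
       "filePatterns":{
         "exclude":[".*\\.class"]
       }
     },
     "mapping":{
       "filePath":"Path",
       "fileName":"FileName",
       "fileContent":"Content"
     },
     ...
   }
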
Processing

For each input record, an input stream to the referenced file is created and fed into the CompoundExtractor service to extract the compound's elements. If an element is itself a compound, it is extracted as well. If it is not a compound, a new record is created. The produced records are converted to look like records produced by the File Crawler or File Fetcher, respectively, with the attributes and the attachment set as specified in the mapping configuration. Additionally, the following attributes are set:

  • _deltaHash: computed in the same way as by the File Crawler worker
  • _compoundRecordId: record ID of top-level compound this element was extracted from
  • _isCompound: set to true for elements that are compounds themselves.
  • _compoundPath: sequence of filePath attribute values of the compound objects needed to navigate to the compound element.
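
Assuming a mapping with filePath mapped to "Path" and fileName mapped to "FileName", a record produced for an element of a zip file could look roughly like this. All concrete values, including the form of the path of the extracted element, are only illustrative:

  {
     "_source":"files",
     "_deltaHash":"...",
     "_compoundRecordId":"<record ID of the crawled archive.zip>",
     "_compoundPath":["/data/archive.zip"],
     "Path":"/data/archive.zip/document.txt",
     "FileName":"document.txt",
     ...
  }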

Sample file crawl job

A job definition that imports all files from the root folder "workspace-SMILA", pushing the imported records to the job "indexUpdateJob". The following files/folders are filtered out:

  • files ending with ".class"
  • files starting with "."
  • folder(-path)s ending with ".svn"
  {
   "name":"crawlFileJob",
   "workflow":"fileCrawling",
   "parameters":{
     "tempStore":"temp",
     "dataSource":"files",
     "rootFolder":"/workspace-SMILA",
     "jobToPushTo":"indexUpdateJob",
     "mapping":{
       "fileContent":"Content",
       "filePath":"Path",       
       "fileName":"FileName",       
       "fileExtension":"FileExtension",
       "fileReadAccess":"ReadAccess",
       "fileWriteAccess":"WriteAccess"   
     },
     "filters":{              
       "filePatterns":{         
         "exclude":["\\..*", ".*\\.class"]  
       },          
       "folderPatterns":{         
         "exclude":[".*\\.svn"]
       }
     }
   }
 }
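
Assuming a default SMILA installation listening on localhost:8080, the job could then be created and started in runOnce mode via the job manager's REST API, roughly as sketched below (see the JobManager documentation for the exact API; adapt host and port to your setup):

  POST http://localhost:8080/smila/jobmanager/jobs/
    (body: the job definition above)

  POST http://localhost:8080/smila/jobmanager/jobs/crawlFileJob/
    { "mode": "runOnce" }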
