SMILA/Documentation/Importing/Crawler/JDBC

The JDBC Crawler and JDBC Fetcher workers are used to import data from a database via JDBC. For the big picture and how the workers interact, have a look at the Importing Concept.

JDBC Crawler

The JDBC Crawler executes an SQL statement and crawls the result set, producing one record for each row of the result set.

Configuration

The JDBC Crawler worker is usually the first worker in a workflow, and its job is started in runOnce mode.

  • Worker name: jdbcCrawler
  • Parameters:
    • dataSource: (req.) value for attribute _source, needed e.g. by the delta service
    • dbUrl: (req.) database URL to connect to
    • dbProps: (opt.) properties used when connecting to the database.
      • user (opt.) user name
      • password (opt.) user password
      • ... (opt.) any property supported by the used JDBC driver
    • crawlSql: (req.) the SQL statement to execute
    • mapping: (req.) specifies how to map database column names to record attributes or attachments.
      • COLUMN-1 (opt.) mapping of the first column to an attribute/attachment
      • COLUMN-2 (opt.) mapping of the second column to an attribute/attachment
      • COLUMN-N (opt.) mapping of the last column to an attribute/attachment
    • idColumns: (req.) a list of database column names from which the record ID is generated
    • deltaColumns: (opt.) a list of database column names (upper-case) used to generate the value for the attribute _deltaHash
    • maxAttachmentSize: (opt.) maximum accepted size of BLOB/CLOB column values (in bytes/characters) for attachment creation. The default is "1000000000" (1 billion). Larger values are skipped and a warning is written to the log.
    • parameters to control the size of the output bulks, see below for details:
      • maxRecordsPerBulk: (opt.) maximum number of records in one bulk (default: 1000)
  • Task generator: runOnceTrigger
  • Output slots:
    • crawledRecords
Processing

The JDBC Crawler executes the crawlSql statement and produces one record per result row in the bucket connected to crawledRecords. Please note that database column names are normalized to upper-case internally; in the configuration, however, any casing can be used. The resulting records contain only the values of the columns configured in the mapping. Whether a column is represented as an attribute or as an attachment depends on the type of the database column. The following table summarizes the supported database types, how they are mapped to Java types, and whether they are represented as attributes or attachments:

Database Type   Java Type                   Represented as
BIT             Boolean                     attribute
BOOLEAN         Boolean                     attribute
BIGINT          Long                        attribute
INTEGER         Long                        attribute
SMALLINT        Long                        attribute
TINYINT         Long                        attribute
DOUBLE          Double                      attribute
FLOAT           Double                      attribute
REAL            Double                      attribute
DECIMAL         Double (scale > 0) or Long  attribute
NUMERIC         Double (scale > 0) or Long  attribute
DATE            Date                        attribute
TIME            DateTime                    attribute
TIMESTAMP       DateTime                    attribute
CHAR            String                      attribute
LONGNVARCHAR    String                      attribute
LONGVARCHAR     String                      attribute
VARCHAR         String                      attribute
NCHAR           String                      attribute
NVARCHAR        String                      attribute
ROWID           String                      attribute
BINARY          byte[]                      attachment
VARBINARY       byte[]                      attachment
LONGVARBINARY   byte[]                      attachment
BLOB            byte[]                      attachment
CLOB            byte[]                      attachment
NCLOB           byte[]                      attachment
NULL            -                           no entry is generated

The following types are not fully supported:

  • ARRAY
  • DATALINK
  • DISTINCT
  • JAVA_OBJECT
  • OTHER
  • REF
  • SQLXML
  • STRUCT

The crawler tries to automatically convert the values into attributes of an appropriate data type. If this is not possible, an attachment containing the raw bytes is generated.
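
For illustration, a single result row could end up as a record like the following (a hypothetical sketch: the mapped attribute names and the exact format of the system attributes depend on your mapping and SMILA version; BLOB/CLOB columns would become attachments instead of metadata values):

  {
    "_recordid": "emails:...",
    "_source": "emails",
    "_deltaHash": "...",
    "From": "alice@example.org",
    "To": "bob@example.org",
    "Title": "Weekly report"
  }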

The records are collected in bulks, whose size can be configured via the parameter maxRecordsPerBulk:

  • not configured: a new crawledRecords bulk is started after 1000 records.
  • configured: a new crawledRecords bulk is started when the configured number of records is reached.

Please note that maxRecordsPerBulk must be > 0, otherwise your job will fail.
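
For example, to get smaller bulks you could add the parameter to the job definition alongside the other crawler parameters (a minimal sketch, the other required parameters are omitted here):

  "parameters":{
    "maxRecordsPerBulk": 500
  }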

Source: The attribute _source is set from the task parameter dataSource. It currently has no further meaning, but it is needed by the delta service.


JDBC Fetcher

For each input record, the JDBC Fetcher executes the fetchSql statement and adds the result rows to the record.

Configuration
  • Worker name: jdbcFetcher
  • Parameters:
    • dbUrl: (req.) database URL to connect to
    • dbProps: (opt.) properties used when connecting to the database.
      • user (opt.) user name
      • password (opt.) user password
      • ... (opt.) any property supported by the used JDBC driver
    • fetchSql: (opt.) the SQL statement executed to fetch the data which is used to enrich the crawled input record. It may contain one or more '?' as parameter placeholders, see PreparedStatement. If the fetchSql parameter isn't set, the originally crawled record is written unchanged to the output.
    • fetchParameterAttributes: (opt.) a list of record attribute names whose values are used, in the given order, as parameters (substitutes for the '?') in the fetchSql statement. If fetchParameterAttributes isn't set, the record attributes which were mapped from the idColumns are used as fetchSql statement parameter substitutes.
    • mapping: (req.) specifies how to map database column names to record attributes or attachments. Please note that database column names are normalized to upper-case.
      • COLUMN-1 (opt.) mapping of the first column to an attribute/attachment
      • COLUMN-2 (opt.) mapping of the second column to an attribute/attachment
      • COLUMN-N (opt.) mapping of the last column to an attribute/attachment
    • idColumns: (opt.) if fetchParameterAttributes isn't set, the record attributes which were mapped from the idColumns DB columns are used as parameters (substitutes for the '?') in the fetchSql statement. In this case the id columns must also be specified in the mapping.
    • maxAttachmentSize: (opt.) maximum accepted size of BLOB/CLOB column values (in bytes/characters) for attachment creation. The default is "1000000000" (1 billion). Larger values are skipped and a warning is written to the log.
  • Input slots:
    • recordsToFetch
  • Output slots:
    • fetchedRecords
Processing

The JDBC Fetcher is used to enrich the input records with the data selected by the fetchSql statement. The fetchSql is executed as a PreparedStatement, so it can have parameters ('?') which are replaced either by the values of the record attributes specified by fetchParameterAttributes or (by default) by the values of the record attributes which were mapped from the idColumns.

All columns that are selected by the fetchSql query and are mapped in the mapping section will enrich the crawled record. That means the mapped columns are added to the record's metadata, while binary data (BLOB, CLOB) is added as attachments. Have a look at the table above for a description of the mapping from database types to Java types.

In general, the fetchSql query will return exactly one row. If it returns no rows at all (i.e. an empty result), the originally crawled record is written unchanged to the output. If it returns more than one row, the values of the rows are merged (e.g. as lists) before being added to the input record.
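
As a hypothetical sketch of the multi-row case, assume a fetchSql of "SELECT Tag FROM Tags WHERE DocId=?" and a mapping of the column Tag to the attribute Keywords (table, column and attribute names are made up). An input record

  { "DocId": 42, "Title": "Release notes" }

for which the query returns the two rows "smila" and "jdbc" would be enriched to

  { "DocId": 42, "Title": "Release notes", "Keywords": ["smila", "jdbc"] }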

If one of the attributes used as fetch parameters is not set, or does not have a simple value but a sequence or a map, the record is skipped and no additional data is fetched for it. It is just written unchanged to the output.

Sample JDBC crawl job

This example uses the following fictitious tables of database EmailDB:

Emails:      Sender, Receiver, Subject, SendDate, BodyId
EmailBodies: BodyId, Body


  {
   "name":"crawlJdbcJob",
   "workflow":"jdbcCrawling",
   "parameters":{
     "dataSource":"emails",
     "dbUrl":"jdbc:derby:memory:EmailDB",
     "dbProps":
     {
        "user": "admin",
        "password": "topsecret"
     },
     "crawlSql":"SELECT * FROM Emails",
     "fetchSql":"SELECT Body FROM EmailBodies WHERE BodyId=?",
     "fetchParameterAttributes": "bodyReference",
     "idColumns": ["Sender", "Receiver"],
     "deltaColumns": "SendDate",
     "mapping":{
       "Sender":"From",
       "Receiver":"To",       
       "Subject":"Title",       
       "SendDate":"lastModified",   
       "BodyId":"bodyReference"
     },
     "jobToPushTo":"indexUpdateJob",
     "tempStore": "temp"
   }
 }
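
A job like this is then typically started as a runOnce job run via the job manager REST API, e.g. as sketched below (host, port and request details may differ in your SMILA installation):

  POST http://localhost:8080/smila/jobmanager/jobs/crawlJdbcJob/
  {
    "mode": "runOnce"
  }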

Adding JDBC Drivers

By default, SMILA includes only JDBC drivers for Derby. If you want to access other databases, you have to provide the corresponding JDBC drivers. To add a JDBC driver, perform the following steps:

  • Copy your JDBC driver jars into folder SMILA/plugins/org.eclipse.smila.jdbc/lib
  • Edit SMILA/plugins/org.eclipse.smila.jdbc/META-INF/MANIFEST.MF and add the jars to the bundle classpath, e.g.
Bundle-ClassPath: .,
 lib/postgresql-9.1-902.jdbc4.jar,
 lib/mysql-connector-java-5.1.20-bin.jar
  • Do not forget the whitespace character in front of the path.
  • JDBC-4-compliant drivers should be found automatically after a restart. For other drivers you must add the driver class name to the SMILA-JDBC header in META-INF/MANIFEST.MF, e.g.
SMILA-JDBC: org.apache.derby.jdbc.EmbeddedDriver,
 org.apache.derby.jdbc.ClientDriver
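
Once the driver is available, it can be used via the dbUrl and dbProps worker parameters, e.g. for a PostgreSQL database (a sketch with hypothetical host, database name and credentials):

  "dbUrl": "jdbc:postgresql://localhost:5432/emaildb",
  "dbProps": {
    "user": "crawler",
    "password": "topsecret"
  }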
