
SMILA/Documentation/HowTo/Howto integrate a component in SMILA


This page summarizes the different types and complexity levels for the integration of components in SMILA.

Introduction

Due to its architecture, SMILA allows for the easy integration of third-party components into its framework. There are four different integration scenarios, which are outlined below and described in more detail in the following sections.

Integrating BPEL service
This is probably the most frequently used integration scenario. It allows for the integration or exchange of functionality (services, 3rd party software, etc.) used to process records in the workflow engine.
[Figure: Integrate-Service 0.9.0.png]
The figure shows how you can integrate the functionality of your service or piece of software into SMILA by adding it to the workflow engine.



Integrating data sources
Integrating your own crawler or agent implementations is another common scenario for adding functionality to SMILA. By doing so, further data sources can be unlocked to provide additional input to SMILA.
[Figure: Integrate-Crawler 0.9.0.png]
The figure above shows, as an example, how you can add your own crawler implementation to SMILA. Note that you may likewise add an agent implementation; for simplicity, this option is not shown in the figure.



Integrating workers
Integrating your own worker implementation is another common scenario for adding functionality or adapting workflows to SMILA.
[Figure: Integrate-Worker 0.9.0.png]
The figure above shows, as an example, how you can add your own worker implementation to SMILA. Please note that you also have to add the worker to the job manager configuration files (workers.json, workflows.json) and register your worker as an OSGi service to activate it.


Integrating alternative implementations of SMILA core components
This scenario is particularly intended for the experienced (SMILA) developer and comprises the possibility to exchange existing implementations of the SMILA core components by your own implementations.
[Figure: Provide-Alternative-To-Core-Component 0.9.0.png]
The figure above demonstrates how two of the SMILA core components, Connectivity and Data Store, can be replaced with your own implementations. These components serve as examples only; you may also exchange other core components, such as the blackboard service or the delta indexing manager.
Note: The figures above show, by example, at which levels in the SMILA architecture new components can be integrated. For simplicity, they are restricted to the index processing chain and completely ignore the search processing chain, which offers the same integration options (except for the integration of agents and crawlers) but is not the focus of this page.


Conventions

Handling of Character Encoding

To make processing of data in SMILA easier, the following conventions apply: If external data must be converted to a string (e.g. an attribute value), the crawler, agent, or any other component accessing an external data source should do everything possible to ensure that the conversion uses the correct encoding. For example, HTTP clients should use the encoding reported by the HTTP server. If the data source does not provide information about the character encoding, the class org.eclipse.smila.utils.file.EncodingHelper may be helpful: it tries to detect the correct encoding of a byte[] by checking for BOMs and by checking XML and HTML content for encoding declarations, and finally falls back to UTF-8 or, if that fails, to the default platform encoding.

On the other hand, if valid string data must be converted to a byte[] (e.g. if it is stored as an attachment after pipelet processing), the conversion must always use UTF-8 encoding.
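The following sketch illustrates both conventions in plain Java, independent of any SMILA API (class and method names are chosen for illustration only): incoming bytes are decoded with the encoding reported by the data source if possible, with UTF-8 as fallback, while outgoing strings are always encoded as UTF-8. The BOM and XML/HTML detection performed by EncodingHelper is not reproduced here.

  import java.nio.charset.Charset;
  import java.nio.charset.IllegalCharsetNameException;
  import java.nio.charset.StandardCharsets;

  public class EncodingConventions {

    /** Decode external bytes to a string, preferring the encoding reported by the data
     *  source (e.g. the charset of an HTTP Content-Type header), UTF-8 otherwise. */
    public static String decodeExternalData(byte[] rawData, String reportedCharset) {
      Charset charset = StandardCharsets.UTF_8; // fallback if nothing usable is reported
      try {
        if (reportedCharset != null && Charset.isSupported(reportedCharset)) {
          charset = Charset.forName(reportedCharset);
        }
      } catch (IllegalCharsetNameException e) {
        // illegal charset name reported by the source: keep the UTF-8 fallback
      }
      return new String(rawData, charset);
    }

    /** Convert valid string data (e.g. an attachment value) to bytes: always UTF-8. */
    public static byte[] encodeStringData(String value) {
      return value.getBytes(StandardCharsets.UTF_8);
    }
  }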

Integrating BPEL service

As already shown in the overview above, SMILA offers the possibility to integrate your own service or piece of software into SMILA BPEL workflows. In SMILA we simply call these workflows pipelines. A pipeline is the definition of a BPEL process (or workflow) that orchestrates pipelets and other BPEL services (e.g. web services).

There are several options for achieving this:

  • Simple: The easiest method to add functionality is to invoke a web service by using the standard functionality of BPEL. However, the disadvantage is that not all data of SMILA records are accessible if you opt for this method of integration.
  • Default: The recommended way to integrate additional functionality in SMILA is to implement a Java interface that allows for an easy creation of the above-mentioned pipelets.
  • Advanced: (idea, not realized yet) This method extends the default mechanism by providing an alternative procedure for integrating OSGi services that do not run in the same OSGi runtime as the BPEL workflow but in another OSGi runtime that may even run on a remote machine.

Simple: Integrating web services

The simplest way of integrating additional functionality in SMILA is to call a web service, which is a standard BPEL workflow engine functionality independent of SMILA. However, there are some limitations concerning the input and result data to/from web services: The workflow object (a DOM object) that enters the BPEL workflow in SMILA contains only the record IDs by default. That means records and the data contained therein - attributes, annotations, and attachments - are not accessible from a BPEL workflow because it can only access and use the values contained in the BPEL workflow object.

To overcome this restriction, you can add additional data to the workflow object by adding filters to the configuration file org.eclipse.smila.blackboard/RecordFilters.xml. These filter rules define which attributes and annotations are copied to the workflow object and thus become accessible in the BPEL workflow. Do not forget to also list in RecordFilters.xml all attributes and annotations that you wish to write data to. Although filters work on attributes and annotations, there is no way to access the attachments of records, because binary data cannot reasonably be represented in a DOM object.

Examples

A good example of this use case is the integration of the Language Weaver web service. The Language Weaver Translation Server provides a web service interface that translates a text into another language. This service can easily be used within SMILA to extend its functionality.

Further reading

Please consult the following how-to tutorials for a more detailed technical description:

Default: Integrating local SMILA pipelets

The default and thus recommended technique to integrate simple and small functionality or software in SMILA is to provide a pipelet that runs in the same OSGi runtime as the BPEL workflow engine. Pipelets are easy to implement as they require only standard Java knowledge. They are not shared between multiple pipelines; even multiple invocations of a pipelet in the same pipeline do not share the same instance. The lifecycle and configuration of pipelets are managed by the workflow engine, not by the OSGi runtime. For further information on pipelets refer to the Pipelets documentation.

The above-mentioned restriction of web services integrated via the default BPEL engine functionality does not apply to pipelets: they have full access to SMILA records through the blackboard service, which makes it easy to read, modify, and store records.

In general, pipelets follow the same logical steps, some of which are optional depending on the business logic to be executed (a minimal sketch follows the list below):

  • Read the configuration (optional)
  • Read input data from blackboard (optional)
  • Execute the business logic
  • Write result data to blackboard (optional)
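The following skeleton maps these steps onto a pipelet class. It assumes a Pipelet interface with configure and process methods as described in the Pipelets documentation; the package names, method signatures, and the configuration key used here are assumptions, so check the current SMILA API before using this as a template.

  // Skeleton of a pipelet following the four steps above. Interface and signatures are
  // assumptions based on the Pipelets documentation; verify against the current SMILA API.
  import org.eclipse.smila.blackboard.Blackboard;
  import org.eclipse.smila.datamodel.AnyMap;
  import org.eclipse.smila.processing.Pipelet;
  import org.eclipse.smila.processing.ProcessingException;

  public class MyExamplePipelet implements Pipelet {

    /** Hypothetical configuration key naming the attribute to work on. */
    private String attributeName = "Title";

    @Override
    public void configure(final AnyMap configuration) throws ProcessingException {
      // Step 1 (optional): read the pipelet configuration from the pipeline definition.
      if (configuration != null && configuration.containsKey("attributeName")) {
        attributeName = configuration.getStringValue("attributeName");
      }
    }

    @Override
    public String[] process(final Blackboard blackboard, final String[] recordIds)
      throws ProcessingException {
      for (final String recordId : recordIds) {
        // Step 2 (optional): read input data for this record from the blackboard,
        // e.g. the value of the attribute named by attributeName, or an attachment.
        // Step 3: execute the business logic on that data.
        // Step 4 (optional): write the result data back to the blackboard.
      }
      return recordIds; // pass the (possibly filtered) record IDs on in the pipeline
    }
  }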

Within the pipelet that implements the business logic, you are completely free to use any desired technology. Some of the possibilities include:

  • Using POJOs (For examples refer to the XML processing pipelets)
  • Using any locally available OSGi service (For an example refer to the LuceneSearchPipelet that uses a LuceneSearchService)
  • Using other technologies such as JNI, RMI, or CORBA to integrate remote or non-Java components (As an example consider the integration of Oracle Outside In Technology.)
Examples
  • Typical examples for pipelets are the XML processing pipelets. These lightweight pipelets are used for XML processing (e.g. XSL transformation); the kind of transformation logic they wrap is sketched below. Each pipeline uses its own pipelet instance.
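To give an impression of the business logic such a pipelet wraps, the following snippet performs a plain XSL transformation with the standard javax.xml.transform API, completely independent of the SMILA pipelet plumbing (the input strings are placeholders supplied by the caller):

  import java.io.StringReader;
  import java.io.StringWriter;
  import javax.xml.transform.Transformer;
  import javax.xml.transform.TransformerFactory;
  import javax.xml.transform.stream.StreamResult;
  import javax.xml.transform.stream.StreamSource;

  public class XslTransformationExample {

    /** Apply an XSLT stylesheet to an XML document and return the transformed result. */
    public static String transform(final String xml, final String xslt) throws Exception {
      final Transformer transformer =
          TransformerFactory.newInstance().newTransformer(new StreamSource(new StringReader(xslt)));
      final StringWriter result = new StringWriter();
      transformer.transform(new StreamSource(new StringReader(xml)), new StreamResult(result));
      return result.toString();
    }
  }

In a real XML processing pipelet, the XML input would typically come from a record attribute or attachment on the blackboard, and the result would be written back there.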
Further reading

Please consult the following how-to tutorials for a more detailed technical description:

Advanced: Integrating remote services

tbd.

Enhanced: Integrating own workers

When the desired functionality is not simple and small, or when it requires more than just access to the records, it is recommended to integrate it in the form of new workers. These workers have to be defined in the asynchronous job processing configuration and can then be integrated in workflows.

Description

A worker is an OSGi service that implements the Worker interface, i.e. it provides a name (used to collect tasks and to identify the worker in workflows) and offers a thread-safe perform method, which takes a TaskContext object (created by the WorkerManager) as an argument.

Normally, a worker handles a bulk of records in its perform method: it manipulates the contained records, may access external services or stores, and creates new bulks of records.

The worker does not have to fetch tasks or check whether tasks are available. These menial tasks are done by the WorkerManager, which takes care of all registered workers, invokes their perform methods, and handles input and output bulks, scale-up, and exception handling. The worker just has to provide the perform method to deal with the records.
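A minimal sketch of such a worker is shown below. It assumes a Worker interface consisting of a getName method and a perform method taking a TaskContext, as described above; the package names and signatures are assumptions, so check the current asynchronous job processing documentation before using this as a template.

  // Sketch of a worker; interface, package names, and signatures are assumptions.
  import org.eclipse.smila.taskworker.TaskContext;
  import org.eclipse.smila.taskworker.Worker;

  public class MyExampleWorker implements Worker {

    /** Name under which the worker is referenced in workers.json and workflows.json. */
    public static final String NAME = "myExampleWorker";

    @Override
    public String getName() {
      return NAME;
    }

    @Override
    public void perform(final TaskContext taskContext) throws Exception {
      // The WorkerManager has already fetched a task and prepared the context.
      // Typical steps (the method must be thread-safe, as it may run concurrently):
      //  1. Open the input record bulk(s) referenced by the task context.
      //  2. Process each record: manipulate it, access external services or stores.
      //  3. Write the resulting records to the output bulk(s) of the task context.
      // Task bookkeeping, scale-up, and error handling remain with the WorkerManager.
    }
  }

Remember that, as noted in the overview, such a class must additionally be registered as an OSGi service and declared in the job manager configuration files (workers.json, workflows.json) before it can be used in workflows.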

If a worker requires direct access to the JobManager/TaskManager, e.g. because it is an initial worker that has to obtain initial tasks to start workflow runs without being triggered by preceding workers, it must not register itself as a Worker service but has to deal with the TaskManager and JobManager itself. Be careful, this is tedious work! So, if possible, stick to the Worker interface and let everything else be handled by the WorkerManager.

Further reading

Please consult at least the following pages about asynchronous job processing:

Integrating data sources

Due to the architecture of the SMILA connectivity framework it is easy to include additional data sources by providing appropriate implementations of agents and/or crawlers.

Examples

  • A typical agent is a FilesystemWatcher. It monitors a folder (or folder structure) for changes (creation, modification, or deletion of files/folders) and reports those actions to SMILA.
  • Typical crawlers are the FilesystemCrawler and the WebCrawler. The former iterates over a folder structure and sends all encountered files to SMILA; the latter traverses HTML pages, follows links to other HTML pages, and sends these pages as well as other resources (images, PDF files, etc.) to SMILA. A simplified sketch of the filesystem traversal follows below.
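Stripped of all SMILA connectivity API details, the core of a filesystem crawler is a traversal that turns every file it encounters into a unit of data handed over to the framework. The self-contained sketch below shows only that traversal, using the standard java.nio.file API; it is not an implementation of the SMILA Crawler interface.

  import java.io.IOException;
  import java.nio.file.Files;
  import java.nio.file.Path;
  import java.nio.file.Paths;
  import java.util.stream.Stream;

  public class FilesystemTraversalSketch {

    public static void main(final String[] args) throws IOException {
      final Path root = Paths.get(args.length > 0 ? args[0] : ".");
      // Walk the folder structure; a real crawler would hand each file (path, metadata,
      // content) over to SMILA as a record instead of just printing it.
      try (Stream<Path> paths = Files.walk(root)) {
        paths.filter(Files::isRegularFile)
             .forEach(file -> System.out.println("would send to SMILA: " + file.toAbsolutePath()));
      }
    }
  }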

Further reading

Please consult the following how-to tutorials for a more detailed technical description:

Integrating alternative implementations of SMILA core components

The component-based architecture of SMILA even allows you to provide your own implementations of SMILA core components. More info coming soon...

Examples

A typical example is an alternative implementation of the DeltaIndexingManager that does not store its state in memory but in the file system or in a database.

Further reading

tbd.
