SMILA/Documentation/HowTo/Howto integrate a component in SMILA


This page summarizes the different types and complexity levels for the integration of components in SMILA.

Introduction

Due to its architecture, SMILA allows for the easy integration of third-party components into its framework. There are four different integration scenarios, which are described in the following sections.

Integrating a SMILA pipelet
This is probably the most frequently used integration scenario. It allows for the integration or exchange of functionality (services, 3rd party software, etc.) used to process records in the workflow engine.
[Figure: Integrate-Service 1.1.0.png]
The figure demonstrates how you can integrate the functionality of your service or your piece of software into SMILA by adding it to the workflow engine.


Integrating workers
Integrating your own worker implementation is another common scenario for adding functionality to SMILA or adapting its workflows.
[Figure: Integrate-Worker 1.1.0.png]
The figure above shows, by way of example, how you can add your own worker implementation to SMILA. Please note that you also have to add the worker to the job manager configuration files (workers.json, workflows.json) and register your worker as an OSGi service to activate it.


Integrating data sources
Implementing a new crawler for the ETL framework is another common scenario for adding functionality to SMILA. By doing so, further data sources can be unlocked to provide additional input to SMILA.
[Figure: Integrate-DataSources 1.1.0.png]
The figure above shows, by way of example, how you can add your own crawler implementation to SMILA. Please note that you may also have to add a new fetcher, extractors, or further workers, as well as a new workflow definition, which is not shown in the figure for the sake of simplicity.


Integrating alternative implementations of SMILA core components
This scenario is mainly intended for experienced SMILA developers and allows you to replace existing implementations of the SMILA core components with your own.
[Figure: Provide-Alternative-To-Core-Component 1.1.0.png]
The figure above demonstrates how one of the SMILA core components, the data store, may be replaced by your own implementation. This serves as an example only; you may also exchange other core components such as the blackboard service.
Note: The above figures demonstrate, by way of example, at which levels in the SMILA architecture new components can be integrated. However, for the sake of simplicity, the figures are restricted to the index processing chain and completely ignore the search processing chain, which offers the same integration options (except for the integration of agents and crawlers) but is not the focus of this page.


Conventions

Handling of Character Encoding

To make processing of data in SMILA easier, the following conventions apply: If external data must be converted to a string (e.g. an attribute value), the crawler/fetcher or any other component accessing an external data source should do everything possible to ensure that the conversion is done using the correct encoding. For example, HTTP clients should use the encoding reported by the HTTP server. If the data source does not provide information about the character encoding, you can use the class org.eclipse.smila.utils.file.EncodingHelper, which tries to detect the correct encoding of a byte[] by checking BOMs, by checking XML and HTML content for encoding declarations, and finally by falling back to UTF-8 or, if this fails, the platform default encoding.

On the other hand, if valid string data must be converted to a byte[] (e.g. if it is stored as an attachment after pipelet processing), the conversion must always use UTF-8 encoding. Both conventions are illustrated in the sketch below.
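The following sketch illustrates both conventions using only standard Java. The class and method names here are hypothetical examples, not SMILA API; for unknown encodings, the EncodingHelper class mentioned above can be used instead (its exact API is not shown here).

import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

// Minimal sketch of the two encoding conventions described above.
public final class EncodingConventions {

  // Convention 1: when converting external data to a string, use the encoding
  // reported by the data source (e.g. the HTTP Content-Type charset) if available.
  public static String toString(final byte[] data, final String reportedCharset) {
    final Charset charset =
        reportedCharset != null ? Charset.forName(reportedCharset) : StandardCharsets.UTF_8;
    return new String(data, charset);
  }

  // Convention 2: when converting valid string data back to bytes
  // (e.g. to store it as an attachment), always use UTF-8.
  public static byte[] toBytes(final String value) {
    return value.getBytes(StandardCharsets.UTF_8);
  }

  private EncodingConventions() {
    // utility class, no instances
  }
}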

Integrating a SMILA pipelet

As already shown in the overview above, SMILA offers the possibility to integrate your own service or piece of software into SMILA BPEL workflows. In SMILA we simply call these workflows pipelines. A pipeline is the definition of a BPEL process (or workflow) that orchestrates pipelets and other BPEL services (e.g. web services).

The default and thus recommended technique to integrate simple and small pieces of functionality or software in SMILA is to provide a pipelet. Pipelets are easy to implement as they require only standard Java knowledge. They are not shared between multiple pipelines; even multiple invocations of a pipelet in the same pipeline do not share the same instance. The lifecycle and configuration of pipelets are managed by the workflow engine, not by the OSGi runtime. For further information on pipelets refer to the Pipelets documentation.

Pipelets have full access to SMILA records by using the blackboard service, which makes it easy to read, modify, and store records.

In general, pipelets follow the same (sometimes optional) logical steps, although of course this depends highly on the business logic to be executed. These steps, illustrated in the sketch after the following list, are:

  • Read the configuration (optional)
  • Read input data from blackboard (optional)
  • Execute the business logic
  • Write result data to blackboard (optional)
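For illustration, here is a minimal, hypothetical pipelet that upper-cases a single attribute value. The interface (configure/process), the Blackboard and AnyMap helper methods, and the ProcessingException constructor are assumed to match the SMILA 1.x processing API; check the Pipelets documentation for the exact signatures before using this as a template.

import org.eclipse.smila.blackboard.Blackboard;
import org.eclipse.smila.datamodel.AnyMap;
import org.eclipse.smila.processing.Pipelet;
import org.eclipse.smila.processing.ProcessingException;

// Hypothetical pipelet sketch following the four steps listed above.
// Interface and helper methods are assumed to match the SMILA 1.x processing API;
// verify against the Pipelets documentation.
public class ToUpperCasePipelet implements Pipelet {

  private String _attributeName;

  // Step 1: read the configuration (optional).
  @Override
  public void configure(final AnyMap configuration) throws ProcessingException {
    _attributeName = configuration.getStringValue("attribute");
  }

  // Steps 2-4: read input from the blackboard, execute the business logic,
  // and write the result back to the blackboard.
  @Override
  public String[] process(final Blackboard blackboard, final String[] recordIds)
      throws ProcessingException {
    try {
      for (final String id : recordIds) {
        final AnyMap metadata = blackboard.getMetadata(id);
        final String value = metadata.getStringValue(_attributeName);
        if (value != null) {
          metadata.put(_attributeName, value.toUpperCase());
        }
      }
    } catch (final Exception e) {
      throw new ProcessingException(e);
    }
    return recordIds;
  }
}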

Within the pipelet that implements the business logic you are free to use any technology you like. Some of the possibilities include:

  • Using POJOs (For examples refer to the XML processing pipelets)
  • Using any locally available OSGi service (For an example refer to the MimeTypeIdentifyPipelet which uses a MimeTypeIdentifier service)
  • Using other technologies such as JNI, RMI, or CORBA to integrate remote or non Java components.
Examples
  • Typical examples for pipelets are the XML processing pipelets. These lightweight pipelets are used for XML processing (e.g. XSL transformation), and each pipeline uses its own pipelet instance. A sketch of the underlying transformation logic follows below.
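The following standard-Java snippet only sketches the kind of XSL transformation logic such a pipelet wraps; it is not the actual code of the XML processing pipelets, where input and output would go through record attributes or attachments on the blackboard.

import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

// Sketch of an XSL transformation as it could be used inside a pipelet.
public final class XslTransformExample {

  // Applies the given XSLT stylesheet to the given XML document and returns the result.
  public static String transform(final String xml, final String xslt) throws Exception {
    final Transformer transformer =
        TransformerFactory.newInstance().newTransformer(new StreamSource(new StringReader(xslt)));
    final StringWriter result = new StringWriter();
    transformer.transform(new StreamSource(new StringReader(xml)), new StreamResult(result));
    return result.toString();
  }

  private XslTransformExample() {
    // utility class, no instances
  }
}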
Further reading

Please consult the following how-to tutorials for a more detailed technical description:

Integrating own workers

When the desired functionality is not simple and small, or requires more than just access to the records, it is recommended to integrate it as new workers. These workers have to be defined in the asynchronous job processing configuration and can then be integrated into workflows.

Description

A worker is an OSGi service that implements the Worker interface, i.e. it provides a name (used to collect tasks and to identify the worker in workflows) and offers a thread-safe perform method, which takes a TaskContext object (generated by the WorkerManager) as an argument.

Normally the worker handles a bulk of records in its perform method, manipulates the contained records, possibly accesses external services or stores, and creates new bulks of records.

The worker does not have to handle tasks or check whether tasks are available. These menial chores are done by the WorkerManager, which takes care of all registered workers, handles the invocations of their perform methods, and copes with input and output bulks, up-scaling, and exception handling. The worker just has to provide the perform method to deal with the records. A minimal worker sketch is shown below.
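As a minimal sketch, a worker that simply copies records from its input bulk to its output bulk could look like the following. The Worker and TaskContext interfaces and the RecordInput/RecordOutput helpers are assumptions based on the taskworker API of asynchronous job processing; verify the exact names against the documentation. The slot names "input" and "output" are examples and must match the worker definition in workers.json.

import org.eclipse.smila.datamodel.Record;
import org.eclipse.smila.taskworker.TaskContext;
import org.eclipse.smila.taskworker.Worker;
import org.eclipse.smila.taskworker.input.RecordInput;
import org.eclipse.smila.taskworker.output.RecordOutput;

// Hypothetical worker sketch; register it as an OSGi service so the
// WorkerManager can pick it up.
public class CopyRecordsWorker implements Worker {

  // The name under which tasks are collected and the worker is referenced in workflows.
  @Override
  public String getName() {
    return "copyRecordsWorker";
  }

  // Must be thread-safe: the WorkerManager may invoke it concurrently for different tasks.
  @Override
  public void perform(final TaskContext taskContext) throws Exception {
    final RecordInput input = taskContext.getInputs().getAsRecordInput("input");
    final RecordOutput output = taskContext.getOutputs().getAsRecordOutput("output");
    Record record;
    while ((record = input.getRecord()) != null) {
      // Manipulate the record here before writing it to the output bulk.
      output.writeRecord(record);
    }
  }
}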

If the worker requires direct access to the JobManager/TaskManager, e.g. because it is an initial worker and has to get initial tasks to start workflow runs without being triggered by previous workers, it must not register itself as a Worker service but has to interact with the TaskManager and JobManager itself. Be careful, this is tedious work! So if possible, stick to the Worker interface and let everything else be handled by the WorkerManager.

Further reading

Please consult at least the following pages about asynchronous job processing:

Integrating data sources

If you want to integrate data sources, you should do so by writing a new importing workflow along with appropriate crawler/fetcher workers (and possibly an extractor worker), as described in HowTo add a new Data Source to the Importing Framework.

This also involves integration of own workers (see Integrating own workers above).

Further reading

Please consult the following how-to tutorials for a more detailed technical description:

Integrating alternative implementations of SMILA core components

The component-based architecture of SMILA even allows you to provide your own implementations of SMILA core components, since these are implemented as OSGi service components (see Declarative Services) and can thus be exchanged in a standard way.

  • Include a new plug-in that exposes a service implementing the interface of the core component (e.g. ObjectStoreService).
  • Modify the config.ini configuration file in SMILA.application to include and start the new plug-in instead of the plug-in provided by SMILA core.
  • Build your application and run it.

Examples

A typical example could be an alternative implementation of the ObjectStoreService that does not store the objects in the file system but in memory or in a database.

Further reading
