SMILA/Specifications/Partitioning Storages

 

Latest revision as of 08:06, 24 January 2012

Note: This page is out of date, and I'm not sure if we still need it at all. It should be either rewritten or deleted.


Implementing Storage Points for Data Backup and Reusing

Implementing Storage Points

Requirements

The core of the SMILA framework consists of a pipeline where data is pushed into a queue, whence it is fed into data flow processors (DFPs). The requirement for Storage Points is that it should be possible to store records in a specific “storage point” after each DFP. A storage point in this case means some storage configuration where data is saved; for example, it can be a partition in a local storage (see the Partitions section below for more information) or some remote storage. Storage points should be configured in the following way:

  1. Each DFP should have a configuration for the storage point from which data will be loaded before BPEL processing (the “Input” storage point);
  2. Optionally, it should be possible to configure the storage point where data should be stored after BPEL processing (the “Output” storage point). If this configuration is omitted, data should not be stored to any storage point after BPEL processing;
  3. If the “Input” and “Output” storage points have the same configuration, data in the “Input” storage point should be overwritten after BPEL processing.

The goal of these modifications is that information stored to storage points can be accessed at any time later. This will address the following problems:

  1. Backup and recovery: it will be possible to make a backup copy of some specific state of the data.
  2. Failure recovery during processing: some DFPs involve expensive processing. With storage points it will be possible to continue processing from one of the previously saved states in case of a DFP failure.
  3. Reusing data collected from previous DFPs: data that is the result of executing some DFP sequence can be saved to a storage point and reused later.
  4. Easy data management: it will be possible to easily manage saved states of data, for example delete or move a storage point that contains obsolete data.

However, all of this should not make the basic configuration of a SMILA system more complicated: if one does not care about multiple storages at all, it should not be necessary to configure storage points all over the configuration files; everything should run fine on defaults.

Architecture overview

The following figure shows an overview of the core components of the SMILA framework:

[Figure 1: SMILA Architecture Overview]

To support the above requirements, the components shown in Figure 1 should be changed in the following way:

A. There should be a way to configure storage points for each DFP;

B. Blackboard service should be changed to handle storage points.

Proposed changes

Storage points configuration

It is proposed to use an XML configuration file to configure storage points. Storage points will be identified by name, and the whole configuration will look like the following:

<StoragePoints>
  <StoragePoint Id="point1">
    <!-- storage point parameters, e.g. storage interface, partition etc. -->
  </StoragePoint>
  ...
</StoragePoints>

For example:

...
<StoragePoint Id="point1">
   <XmlStorage Service="XmlStorageService" Partition="A"/>
   <BinaryStorage Service="BinaryStorageService" Partition="B"/>
</StoragePoint>
...

A user can define a default storage point Id that is used whenever no specific storage point Id is given:

...
<StoragePoint Id="DefaultStoragePoint">
   <XmlStorage Service="XmlStorageService" Partition="A"/>
   <BinaryStorage Service="BinaryStorageService" Partition="B"/>
</StoragePoint>
...


To make configuration easier, the storages API can be normalized so that all storages implement the same interface. (A proposal on this subject was posted by Ivan to the mailing list.)
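A minimal sketch of what such a normalized interface might look like. The interface name, the method set, and the exception type are assumptions for illustration only, not the actual proposal from the mailing list; Id stands for SMILA's record identifier class:

// Hypothetical unified storage interface; all names are assumptions, not the
// actual proposal from the mailing list. Id is SMILA's record identifier.
public interface Storage {

  /** Stores the given data for the record Id in the given partition. */
  void store(Id id, byte[] data, String partition) throws StorageException;

  /** Loads the data stored for the record Id from the given partition. */
  byte[] load(Id id, String partition) throws StorageException;

  /** Removes the data stored for the record Id from the given partition. */
  void remove(Id id, String partition) throws StorageException;
}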

Alternative: Storage Point ID as OSGi service property

In this alternative we do not need a centralized configuration of storages and storage points; instead, the Storage Point ID is just added to each Record Metadata or Binary Storage service as an OSGi service property (or passed as a JMS property, which is discussed in the next section), e.g. in the DS component description of a binary storage service:

<component name="BinaryStorageService" immediate="true">
    <service>
        <provide interface="org.eclipse.smila.binarystorage.BinaryStorageService"/>
    </service>
    <property name="smila.storage.point.id" value="point1"/>
</component>

And in an (XML) Record Metadata storage service:

<component name="XmlStorageService" immediate="true">
    <implementation class="org.eclipse.smila.xmlstorage.internal.impl.XmlStorageServiceImpl"/>
    <service>
        <provide interface="org.eclipse.smila.xmlstorage.XmlStorageService"/>
        <provide interface="org.eclipse.smila.storage.RecordMetadataStorageService"/>
    </service>
    <property name="smila.storage.point.id" value="point1"/>
</component>

(Note that we also introduce a second interface here, RecordMetadataStorageService, which is more specialized for storing and reading record metadata for a Blackboard than XmlStorageService, but does not require that the service uses XML to store the record metadata.)

Then the Blackboard wanting to use Storage Point "point1" would just look for a RecordMetadataStorageService and a BinaryStorageService (for attachments) having the property set to "point1". There would be no need to implement a central StoragePoint configuration facility.
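The lookup itself could be done with the plain OSGi API by filtering on the service property. A sketch, assuming the interface names from the component descriptions above (null and error handling omitted):

import org.osgi.framework.BundleContext;
import org.osgi.framework.InvalidSyntaxException;
import org.osgi.framework.ServiceReference;

// Sketch only: finds the record metadata and binary storage services that carry
// the given storage point id in the "smila.storage.point.id" property.
public class StoragePointLookup {

  public Object[] lookup(BundleContext context, String storagePointId)
      throws InvalidSyntaxException {
    String filter = "(smila.storage.point.id=" + storagePointId + ")";

    // Interface names as used in the DS descriptions above.
    ServiceReference[] metadataRefs = context.getServiceReferences(
        "org.eclipse.smila.storage.RecordMetadataStorageService", filter);
    ServiceReference[] binaryRefs = context.getServiceReferences(
        "org.eclipse.smila.binarystorage.BinaryStorageService", filter);

    // Take the first matching service of each kind (no null checks in this sketch).
    return new Object[] { context.getService(metadataRefs[0]), context.getService(binaryRefs[0]) };
  }
}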

Configuring storage points for DFP

As shown in Figure 1, the Listener component is responsible for getting a Record from the JMS queue, loading the record onto the Blackboard, and executing the BPEL workflow. Storage points cannot be configured inside the BPEL workflow because the same BPEL workflow can be used in multiple DFPs and hence with different storage points. Thus it is proposed to configure storage point IDs in the Listener rules. With such a configuration it will be possible to have a separate storage point configuration for each workflow, and all DFPs will be configured in a single place.

There are two ways in which storage points can be configured:

1. The Listener rule contains configuration only for the “Output” storage point. The “Input” storage point Id is read from the queue. After processing is finished, the “Output” storage point Id is sent back to the queue and becomes the “Input” configuration for the next DFP. The whole process is shown in the following figure:

[Figure: Listener rules contain the "output" point only]

The advantage of this approach is that the user needs to care only about the “Output” storage point configuration, because the “Input” storage point configuration will be obtained automatically from the queue. On the other hand, it can greatly complicate management, backup, and data reuse tasks, because it will not be possible to find out which storage point was used as “Input” when a particular Listener rule was applied.

Example: Listener Config:

<Rule Name="ADD Rule" WaitMessageTimeout="10" Workers="2" TargetStoragePoint="p1">
	...  
</Rule>

The source storage point (i.e. the TargetStoragePoint of the previous step) is transferred over the queue either by storing it in the Record as metadata or by sending it with the Record as a JMS property (JMS properties are currently used for the DataSourceID).

2. The Listener rule contains configuration for both the “Input” and “Output” storage points. In this case the storage point configuration is not sent over the queue:

[Figure: Listener rules contain the "input" and "output" points]

The advantage of this approach is that for any particular rule it will always be possible to find out which “Input” and “Output” storage points were used. In this case it is up to the user to make sure that the provided configuration is correct and consistent. This greatly simplifies backup and data management tasks, so it is proposed to implement this way of configuration. On the other hand, configuration becomes a little more complex: for example, if the same rule should be applied in two different DFP sequences but data should be loaded from different “Input” storage points, a separate rule has to be created for each “Input” storage point.

Example: Listener Config:

<Rule Name="ADD Rule" WaitMessageTimeout="10" Workers="2" TargetStoragePoint="p1" SourceStoragePoint="p2">
	...  
</Rule>

Note: Maybe it could even be possible (and useful?) to implement both: Default "input" storage points could be defined in Listener Rules, while a storage point ID could be passed with the message to override the default?

Rules regarding the alternative with OSGi properties

Example: Listener Config (it is completely the same as with the previous option):

<Rule Name="ADD Rule" WaitMessageTimeout="10" Workers="2" TargetStoragePoint="p1" SourceStoragePoint="p2">
	...  
</Rule>

To find the services providing the named storage points, the DFP has to look up services for the appropriate interfaces (RecordMetadataStorageService and BinaryStorageService) that have the specific storage point name set as the property "smila.storage.point.id". It is not necessary to know the name of the service registration or to specify two separate names for the Record and Binary storage services.

Passing the Storage-Location to BPEL/Pipelets/Blackboard

After the Listener obtains the storage point configuration, it should pass this configuration to the Blackboard so that records from that storage point can be loaded onto the Blackboard and processed by the BPEL workflow. This can be done in the following ways (each of which can be combined with both solutions above):

  1. Storage point configuration passed as a Record Id property: the advantage of this approach is that it will always be possible to easily find out to which storage point a particular record belongs. The disadvantage is that the record Id is an immutable object used as a hash key, and changing Id properties during processing may not be a good idea. (This would best apply to option 1 in the section above.)
  2. Storage point configuration passed as Record metadata: in this case an attribute or annotation containing the storage point ID should be added to the Record metadata before processing.
  3. Storage point configuration passed separately from the record: in this case the record does not contain any information about the storage point configuration itself. E.g. in the case where the Listener rules do not contain the input storage point, it could be passed in the queue message as a message property. This has the advantage that the Listener can also select messages by the storage point of the contained records (e.g. to manage load balancing, or because not all Listeners have access to all storage points).

Therefore we propose to use the third option. Note that it is still possible in this case to store the storage point ID in the record metadata for informational purposes (e.g. setting a field in the final search index so that the storage point ID can be read after a search). But the storage point ID relevant for the queue listener will be the message property.
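A sketch of how this could look with the standard JMS API; the property name "StoragePointId" is an assumption, not an existing SMILA convention:

import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Queue;
import javax.jms.Session;

// Sketch only: shows setting, selecting on, and reading a storage point id
// carried as a JMS message property named "StoragePointId" (assumed name).
public class StoragePointMessaging {

  /** Sender side: attach the storage point id of the contained record to the message. */
  public void tagMessage(Message message, String storagePointId) throws JMSException {
    message.setStringProperty("StoragePointId", storagePointId);
  }

  /** Listener side: consume only messages whose records live in the given storage point. */
  public MessageConsumer createSelectiveConsumer(Session session, Queue queue,
      String storagePointId) throws JMSException {
    return session.createConsumer(queue, "StoragePointId = '" + storagePointId + "'");
  }

  /** Listener side: read the id back from a received message. */
  public String readStoragePointId(Message message) throws JMSException {
    return message.getStringProperty("StoragePointId");
  }
}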

Changes in the Blackboard service

There are the following proposals for changes to the Blackboard service:

  1. The Blackboard API exposes additional methods that allow working with storage points: this imposes too many changes on clients of the Blackboard service, so we do not want to follow this road. Further details are omitted for now.
  2. The Blackboard API is not changed.

For the second option, we introduce a new BlackboardManager service that returns references to the actual Blackboard instances using the default or a named storage point:

interface BlackboardManager {
  Blackboard getDefaultBlackboard();
  Blackboard getBlackboardForStoragePoint(String storagePointId);
}
 
interface Blackboard {
  // ... the current Blackboard API methods ...
}

With this configuration, the correct reference to the Blackboard object should be passed to the BPEL workflow each time the workflow is executed. This can be done by the WorkflowProcessor, which will send the right Blackboard to the BPEL server. In their invocation, Pipelets and Processing Services get the Blackboard instance to be used from the processing engine anyway, so they will continue working with the Blackboard in the same way as it is implemented now.

Thus, the WorkflowProcessor's process(…) method should be enhanced to accept the Blackboard instance as an additional method argument instead of being statically linked to a single Blackboard:

Id[] process(String workflowName, Blackboard blackboard, Id[] recordIds) throws ProcessingException;

This resembles the Pipelet/ProcessingService API. However, one difficulty may be finding a way to pass the information about which Blackboard is to be used in a pipelet/processing service invocation around the integrated BPEL engine to the Pipelet/ServiceProcessingManagers. But I hope this can be solved.
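A sketch of how a Listener could tie these pieces together. Only BlackboardManager, Blackboard, and the extended process(...) signature come from the proposal above; everything else (class, field, and method names) is made up for illustration:

// Sketch only: selects the Blackboard for the configured storage point and
// hands it to the workflow processor.
public class ListenerWorkerSketch {

  private BlackboardManager blackboardManager;  // injected, e.g. via DS
  private WorkflowProcessor processor;          // injected, e.g. via DS

  public Id[] handleRecords(String workflowName, String storagePointId, Id[] recordIds)
      throws ProcessingException {
    // Use the Blackboard bound to the rule's storage point, or the default one
    // if no storage point is configured.
    final Blackboard blackboard = (storagePointId == null)
        ? blackboardManager.getDefaultBlackboard()
        : blackboardManager.getBlackboardForStoragePoint(storagePointId);

    // Extended WorkflowProcessor API as proposed above.
    return processor.process(workflowName, blackboard, recordIds);
  }
}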

Partitions

Requirements

The requirement for partitions is that the XML storage and the binary storage should be able to work with ‘partitions’. This means that the storages should be able to store data in different internal partitions.

Changes in Storages API

Currently SMILA operates with two physical storages – XML storage and binary storage. The API of both storages should be extended to handle partitioning: it should provide methods that allow getting data from a specified partition and saving data to a specified partition. The partition should be passed as an additional parameter:

_xssConnection.getDocument(Id, Partition);
_xssConnection.addOrUpdateDocument(Id, Document, Partition);
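One possible shape of the extended connection interface, keeping partition-less convenience methods so that existing callers keep working. The interface and exception names, the use of org.w3c.dom.Document, and the default-partition behavior are assumptions for illustration:

import org.w3c.dom.Document;

// Sketch only: the method names follow the calls above, everything else is assumed.
public interface XmlStorageConnection {

  /** Loads the record document from the given partition. */
  Document getDocument(Id id, String partition) throws XmlStorageException;

  /** Stores or replaces the record document in the given partition. */
  void addOrUpdateDocument(Id id, Document document, String partition) throws XmlStorageException;

  /** Convenience variant: reads from the default partition. */
  Document getDocument(Id id) throws XmlStorageException;

  /** Convenience variant: writes to the default partition. */
  void addOrUpdateDocument(Id id, Document document) throws XmlStorageException;
}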

For the first implementation, the storages will work with data in partitions in the following way:

  • With the XML storage, each partition contains its own copy of the record document.
  • With the binary storage, each partition contains its own copy of the attachment.

This behavior can be improved for better performance in later versions. For more information, please see the section “Proposed further changes” below.

Alternative implementation using OSGi services

If we use the implementation of storage points via OSGi service properties described above in the section "Alternative: Storage Point ID as OSGi service property", we can use it to hide partitions completely from clients: a storage service that wants to provide different partitions could register one "partition proxy" OSGi service per partition. Each proxy has its own storage point ID and provides the correct storage interface (binary / record metadata / XML), but does not store data on its own; it just forwards requests to the "master" storage service, adding the partition name. This proxy service creation can be done programmatically and dynamically by the master service when a new partition is created (via service configuration or a management console), so it is not necessary to create a DS component description for each partition.

The following figure illustrates this setup:

[Figure: Use of partition proxy services to hide partitioning of storages]

This way no client would ever need to use additional partition IDs when communicating with a storage service, and storages that cannot provide partitions do not need to implement methods with partition parameters that cannot be used anyway.
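A sketch of how a master storage could create and register such partition proxies programmatically. The BinaryStore and MasterBinaryStore interfaces and their methods are stand-ins invented for this sketch (the real service would expose org.eclipse.smila.binarystorage.BinaryStorageService); only the property name "smila.storage.point.id" comes from the sections above:

import java.util.Dictionary;
import java.util.Hashtable;
import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceRegistration;

// Hypothetical storage interfaces, used only to illustrate the forwarding.
interface BinaryStore {
  void store(Id id, byte[] data);
  byte[] fetch(Id id);
}

interface MasterBinaryStore {
  void store(Id id, byte[] data, String partition);
  byte[] fetch(Id id, String partition);
}

public class PartitionProxyRegistrar {

  /** Called by the master storage when a new partition is created. */
  public static ServiceRegistration registerPartitionProxy(
      BundleContext context, final MasterBinaryStore master,
      final String partition, String storagePointId) {

    // The proxy stores no data itself; it forwards every call to the master
    // storage, adding the partition name.
    BinaryStore proxy = new BinaryStore() {
      public void store(Id id, byte[] data) { master.store(id, data, partition); }
      public byte[] fetch(Id id) { return master.fetch(id, partition); }
    };

    Dictionary<String, Object> props = new Hashtable<String, Object>();
    props.put("smila.storage.point.id", storagePointId);
    return context.registerService(BinaryStore.class.getName(), proxy, props);
  }
}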


Proposed further changes

With the binary storage, attachments can be very large (for example, when crawling video files), so creating an actual copy for each partition can be inefficient and can cause serious performance issues. As a solution to this problem, the binary storage should not create an actual copy of the attachment for each partition, but rather keep a reference to the actual attachment when the attachment has not changed from one partition to the next.
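A minimal sketch of the reference idea: before writing an attachment into a new partition, compare a content hash against the version in the previous partition and write only a small reference entry when nothing changed. The directory layout, the ".ref" marker file, and all names are hypothetical:

import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.security.MessageDigest;
import java.util.Arrays;

// Sketch only: stores attachments under <root>/<partition>/<recordId> and uses
// a "<recordId>.ref" file as a reference to an unchanged attachment in an
// earlier partition. Not the real binary storage layout.
public class ReferenceAwareBinaryStore {

  private final Path root;

  public ReferenceAwareBinaryStore(Path root) {
    this.root = root;
  }

  /** Stores the attachment in the new partition, or only a reference if it is unchanged. */
  public void store(String recordId, byte[] data, String previousPartition, String newPartition)
      throws Exception {
    Path previous = root.resolve(previousPartition).resolve(recordId);
    Path target = root.resolve(newPartition).resolve(recordId);
    Files.createDirectories(target.getParent());

    if (Files.exists(previous)
        && Arrays.equals(digest(Files.readAllBytes(previous)), digest(data))) {
      // Unchanged: record a reference to the previous partition instead of
      // copying the (possibly huge) attachment.
      Files.write(Paths.get(target.toString() + ".ref"),
          previousPartition.getBytes(StandardCharsets.UTF_8));
    } else {
      Files.write(target, data);
    }
  }

  private static byte[] digest(byte[] data) throws Exception {
    return MessageDigest.getInstance("SHA-1").digest(data);
  }
}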

However, this solution can cause some problems too:

  1. Problems can occur if a backup job is done with some external tool that is not aware of the references. This should generally not happen, because backups will rather be done with a properly configured tool;
  2. Some pipelet can change Attachment1 in Partition 1 while Partition 2 should still keep the old version of the attachment. In this case there should be some service that monitors the consistency of the references.
