SMILA/Specifications/Partitioning Storages

From Eclipsepedia

Revision as of 07:16, 15 December 2008


Implementing Storage Points for Data Backup and Reusing

Implementing Storage Points

Requirements

The core of the SMILA framework consists of a pipeline where data is pushed into a queue, whence it is fed into data flow processors (DFPs). The requirement for Storage Points is that it should be possible to store records in a specific “storage point” after each DFP. A storage point in this context means a storage configuration to which data is saved; for example, it can be a partition in a local storage (see the Partitions section below for more information) or some remote storage. Storage points should be configured in the following way:

  1. Each DFP should have a configuration for the storage point from which data will be loaded before BPEL processing (the “Input” storage point);
  2. Optionally, it should be possible to configure the storage point where data should be stored after BPEL processing (the “Output” storage point). If this configuration is omitted, data should not be stored to any storage point at all after BPEL processing;
  3. If the “Input” and “Output” storage points have the same configuration, the data in the “Input” storage point will be overwritten after BPEL processing.

The goal of these modifications is that information stored to storage points can be accessed at any time later. This will solve the following problems:

  1. Backup and recovery: It will be possible to make a backup copy of some specific state of data
  2. Failure recovery during processing: Some DFPs involve expensive processing. With storage points it will be possible to continue processing from one of the previously saved states in case of a DFP failure
  3. Reusing data collected from previous DFPs: Data that results from executing some DFP sequence can be saved to a storage point and reused later
  4. Easy data management: It will be possible to easily manage saved states of data, for example to delete or move a storage point that contains obsolete data

2 Architecture overview

The following figure shows an overview of the core components of the SMILA framework:

SMILA-storagepoints-architecture-overview.png

To support the above requirements, the components shown in Figure 1 should be changed in the following way:

  A. There should be a way to configure storage points for each DFP;
  B. The Blackboard service should be changed to handle storage points.

Proposed changes

Storage points configuration

It is proposed to use an XML configuration file to configure storage points. Storage points will be identified by name, and the whole configuration will look like the following:

<StoragePoints>
  <StoragePoint Id="point1">
    <!-- storage point parameters, e.g. storage interface, partition, etc. -->
  </StoragePoint>
  ...
</StoragePoints>

For example:

...
<StoragePoint Id="point1">
   <XmlStorage Service="XmlStorageService" Partition="A"/>
   <BinaryStorage Service="BinaryStorageService" Partition="B"/>
</StoragePoint>
...

To make configuration easier, the storages API can be normalized so that all storages implement the same interface. (A proposal on this subject was posted by Ivan to the mailing list.)

Alternative: Storage Point ID as OSGi service property

In this example we do not need a centralized configuration of storages and storage points; we just add a Storage Point ID to each Record Metadata/Binary Storage as an OSGi service property, e.g. in a DS component description of a binary storage service:

<component name="BinaryStorageService" immediate="true">
    <service>
        <provide interface="org.eclipse.smila.binarystorage.BinaryStorageService"/>
    </service>
    <property name="smila.storage.point.id" value="point1"/>
</component>

And in an (XML) Record Metadata storage service:

<component name="XmlStorageService" immediate="true">
    <implementation class="org.eclipse.smila.xmlstorage.internal.impl.XmlStorageServiceImpl"/>
    <service>
        <provide interface="org.eclipse.smila.xmlstorage.XmlStorageService"/>
        <provide interface="org.eclipse.smila.storage.RecordMetadataStorageService"/>
    </service>
    <property name="smila.storage.point.id" value="point1"/>
</component>

(Note that we also introduced a second interface here that is more specialized for storing and reading Record Metadata for a Blackboard than the XmlStorageService, but does not force the service to use XML to store record metadata.)

Then the Blackboard wanting to use Storage Point "point1" would just look for a RecordMetadataStorageService and a BinaryStorageService (for attachments) having the property set to "point1". There would be no need to implement a central StoragePoint configuration facility.
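In an OSGi runtime this lookup amounts to a service query with a filter such as `(smila.storage.point.id=point1)`. Outside a running framework, the selection logic can be sketched as a minimal simulation (the classes and the registry below are illustrative stand-ins, not SMILA or OSGi API):

```java
import java.util.List;
import java.util.Map;
import java.util.Optional;

// Minimal simulation of looking up a storage service by its storage point ID
// service property, as the Blackboard would do via an OSGi filter like
// "(smila.storage.point.id=point1)". The registry list stands in for the
// OSGi service registry; none of these classes are actual SMILA API.
public class StoragePointLookup {

    public static final class Registration {
        public final Object service;
        public final Map<String, String> properties;

        public Registration(Object service, Map<String, String> properties) {
            this.service = service;
            this.properties = properties;
        }
    }

    // Return the first registered service whose "smila.storage.point.id"
    // property equals the requested storage point ID.
    public static Optional<Object> forStoragePoint(List<Registration> registry, String pointId) {
        return registry.stream()
            .filter(r -> pointId.equals(r.properties.get("smila.storage.point.id")))
            .map(r -> r.service)
            .findFirst();
    }

    public static void main(String[] args) {
        List<Registration> registry = List.of(
            new Registration("BinaryStorageService", Map.of("smila.storage.point.id", "point1")),
            new Registration("XmlStorageService", Map.of("smila.storage.point.id", "point2")));
        // The Blackboard for storage point "point1" picks the matching service.
        System.out.println(forStoragePoint(registry, "point1").orElseThrow());
    }
}
```

In a real bundle, roughly the same selection could be done with `BundleContext.getServiceReferences(BinaryStorageService.class, "(smila.storage.point.id=point1)")`.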


Configuring storage points for DFP

As shown in Figure 1, the Listener component is responsible for getting a record from the JMS queue, loading it onto the Blackboard, and executing the BPEL workflow. Storage points cannot be configured inside the BPEL workflow because the same BPEL workflow can be used in multiple DFPs and hence may use different storage points. Thus it is proposed to configure storage points in the Listener rules. With such a configuration it will be possible to have a separate storage point configuration for each workflow, and all DFPs will be configured in a single place.

There are two ways in which storage points can be configured:

1. Listener rule contains configuration only for the “Output” storage point. The “Input” storage point configuration is read from the queue. After processing is finished, the “Output” configuration is sent back to the queue and becomes the “Input” configuration for the next DFP. The whole process is shown in the following figure:

SMILA-storagepoints-queue-option-1.png

The advantage of this approach is that the user needs to care only about the “Output” storage point configuration, because the “Input” storage point configuration is obtained automatically from the queue. On the other hand, it can greatly complicate management, backup, and data reuse tasks, because it will not be possible to find out which storage point was used as “Input” when a particular Listener rule was applied.

2. Listener rule contains configuration for both the “Input” and “Output” storage points. In this case the storage point configuration is not sent over the queue:

SMILA-storagepoints-queue-option-2.png

The advantage of this approach is that for any particular rule it will always be possible to find out which “Input” and “Output” storage points were used. It is then up to the user to make sure that the provided configuration is correct and consistent. Because this greatly simplifies backup and data management tasks, it is proposed to implement this way of configuration. However, configuration becomes a little more verbose: for example, if the same rule should be applied in two different DFP sequences but data should be loaded from different “Input” storage points, two rules must be created, one for each “Input” storage point.

After the Listener obtains the storage point configuration, it should pass it to the Blackboard so that records from that storage point can be loaded onto the Blackboard and processed by the BPEL workflow. This can be done in the following ways:

  1. Storage point configuration passed as a Record Id property: The advantage of this approach is that it will always be easy to find out which storage point a particular record belongs to. The disadvantage is that the record Id is an immutable object used as a hash key, so changing Id properties during processing may not be a good idea.
  2. Storage point configuration passed as a Record Metadata attribute: In this case an attribute containing the storage point configuration should be added to the Record metadata before processing.
  3. Storage point configuration passed separately from the record: In this case the record won’t contain any information about the storage point configuration itself. E.g., in the case where the Listener rules do not contain the input storage point, it could be passed in the queue message as a message property. This has the advantage that the listener can also select messages by the storage point of the contained records (e.g., to manage load balancing, or because not all listeners have access to all storage points).
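The third option can be illustrated with a small simulation: the storage point ID travels as a message property next to the record, and a listener accepts only messages whose records live in storage points it can access, much as a JMS message selector like `smila.storage.point.id = 'point1'` would. All names below are illustrative, not SMILA or JMS API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Toy simulation of option 3: the storage point ID is carried as a message
// property, and a listener filters messages by it (useful for load balancing,
// or when a listener cannot reach every storage point).
public class StoragePointSelector {

    public static final class QueueMessage {
        public final String recordId;
        public final Map<String, String> properties;

        public QueueMessage(String recordId, Map<String, String> properties) {
            this.recordId = recordId;
            this.properties = properties;
        }
    }

    // Keep only messages whose storage point property is in the allowed set,
    // mimicking a JMS message selector on that property.
    public static List<QueueMessage> accessible(List<QueueMessage> messages, List<String> allowedPoints) {
        List<QueueMessage> result = new ArrayList<>();
        for (QueueMessage m : messages) {
            if (allowedPoints.contains(m.properties.get("smila.storage.point.id"))) {
                result.add(m);
            }
        }
        return result;
    }

    public static void main(String[] args) {
        List<QueueMessage> queue = List.of(
            new QueueMessage("rec1", Map.of("smila.storage.point.id", "point1")),
            new QueueMessage("rec2", Map.of("smila.storage.point.id", "point2")));
        // A listener with access only to "point1" sees just rec1.
        System.out.println(accessible(queue, List.of("point1")).get(0).recordId);
    }
}
```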

Changes in the Blackboard service

There are the following proposals for Blackboard service changes:

  1. Blackboard API will expose additional new methods that allow working with storage points: This imposes too many changes on clients of the Blackboard service, so we do not want to follow this road. Further details are omitted for now.
  2. Blackboard API won’t be changed.

In this case we introduce a new BlackboardManager service that returns references to the actual Blackboard instances using the default or a named storage point:

interface BlackboardManager {
  Blackboard getDefaultBlackboard();
  Blackboard getBlackboardForStoragePoint(String storagePointId);
}
 
interface Blackboard {
  <contains current Blackboard API methods>
}
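How a Listener might resolve the Blackboard for a rule's storage point can be sketched as follows. This is an in-memory stand-in: `inMemoryManager` and the rule handling are assumptions for illustration, not the actual SMILA implementation:

```java
// Hypothetical sketch of the proposed BlackboardManager in use. The nested
// interfaces are minimal stand-ins for the ones discussed above.
public class BlackboardManagerDemo {

    public interface Blackboard {
        String storagePointId();
    }

    public interface BlackboardManager {
        Blackboard getDefaultBlackboard();
        Blackboard getBlackboardForStoragePoint(String storagePointId);
    }

    // Trivial in-memory manager: every storage point ID yields a Blackboard
    // bound to that ID; the default Blackboard uses the "default" point.
    public static BlackboardManager inMemoryManager() {
        return new BlackboardManager() {
            public Blackboard getDefaultBlackboard() {
                return getBlackboardForStoragePoint("default");
            }
            public Blackboard getBlackboardForStoragePoint(String id) {
                return () -> id;
            }
        };
    }

    // What a Listener rule evaluation might do: pick the Blackboard for the
    // rule's "Output" storage point, or the default one if none is configured.
    public static Blackboard blackboardForRule(BlackboardManager manager, String outputPoint) {
        return outputPoint == null
            ? manager.getDefaultBlackboard()
            : manager.getBlackboardForStoragePoint(outputPoint);
    }

    public static void main(String[] args) {
        BlackboardManager manager = inMemoryManager();
        System.out.println(blackboardForRule(manager, "point1").storagePointId());
    }
}
```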

With this configuration, the correct Blackboard reference must be passed to the BPEL workflow each time the workflow is executed. This can be done by the WorkflowProcessor, which sends the right Blackboard to the BPEL server. In their invocation, Pipelets and Processing Services already get the Blackboard instance to be used from the processing engine, so they will continue working with the Blackboard in the same way as now.

Thus, the WorkflowProcessor process(…) method should be enhanced to accept the Blackboard instance as an additional method argument instead of being statically linked to a single Blackboard:

Id[] process(String workflowName, Blackboard blackboard, Id[] recordIds) throws ProcessingException;

This resembles the Pipelet/ProcessingService API. However, a difficulty may be finding a way to pass the information about which Blackboard is to be used in a pipelet/processing service invocation around the integrated BPEL engine to the Pipelet/ServiceProcessingManagers. But I hope this can be solved.

Partitions

Requirements

The requirement for Partitions is that xml storage and binary storage should be able to work with ‘partitions’. This means that storages should be able to store data to different internal partitions.

Changes in Storages API

Currently SMILA operates with two physical storages – xml storage and binary storage. The APIs of both storages should be extended to handle partitioning: they should provide methods for getting data from a specified partition and for saving data to a specified partition. The partition should be passed as an additional parameter:

_xssConnection.getDocument(Id, Partition);
_xssConnection.addOrUpdateDocument(Id, Document, Partition);
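A hedged in-memory sketch of such a partition-aware API, with each partition keeping its own copy of a record Document as proposed for the first implementation (`PartitionedXmlStore` is illustrative, not the actual xml storage connection):

```java
import java.util.HashMap;
import java.util.Map;

// In-memory model mirroring the extended calls getDocument(id, partition) and
// addOrUpdateDocument(id, document, partition). Each partition keeps its own
// copy of the document; documents are plain strings here for simplicity.
public class PartitionedXmlStore {

    private final Map<String, Map<String, String>> partitions = new HashMap<>();

    public void addOrUpdateDocument(String id, String document, String partition) {
        partitions.computeIfAbsent(partition, p -> new HashMap<>()).put(id, document);
    }

    public String getDocument(String id, String partition) {
        return partitions.getOrDefault(partition, Map.of()).get(id);
    }

    public static void main(String[] args) {
        PartitionedXmlStore store = new PartitionedXmlStore();
        store.addOrUpdateDocument("rec1", "<record v='1'/>", "A");
        store.addOrUpdateDocument("rec1", "<record v='2'/>", "B");
        // The same record id resolves to a different version per partition.
        System.out.println(store.getDocument("rec1", "A"));
    }
}
```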

For the first implementation, storages will work with data in partitions in the following way:

  • With xml storage each partition will contain its own copy of the record Document.
  • With binary storage each partition will contain its own copy of the attachment.

This behavior can be improved for better performance in later versions. For more information, see the next section.

Alternative implementation using OSGi services

If we use the implementation of storage points via OSGi service properties described above in the section "Alternative: Storage Point ID as OSGi service property", we can use it to hide partitions completely from clients. A storage service that wants to provide different partitions could register one "partition proxy" OSGi service for each partition. Each proxy has its own storage point ID and provides the correct storage interface (binary/record metadata/XML), but does not store data on its own; it just forwards requests to the "master" storage service, adding the partition name. The proxy services can be created programmatically and dynamically by the master service when a new partition is created (via service configuration or a management console), so it is not necessary to create a DS component description for each partition.

The following figure illustrates this setup:

SMILA-storagepoint-partition-proxies.png

This way no client would ever need to use additional partition IDs when communicating with a storage service, and storages that cannot provide partitions do not need to implement methods with partition parameters that cannot be used anyway.
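The forwarding behavior of such a partition proxy can be sketched as follows. The interfaces are simplified stand-ins for the real storage API, and in a real setup each proxy would additionally be registered as an OSGi service with its own `smila.storage.point.id` property:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the "partition proxy" idea: one proxy per partition implements
// the plain storage interface and forwards calls to the master storage
// service, adding its partition name. Not the actual SMILA storage API.
public class PartitionProxyDemo {

    // The interface a client sees: no partition parameter at all.
    public interface BinaryStorage {
        void store(String id, byte[] data);
        byte[] fetch(String id);
    }

    // The master storage service that actually understands partitions.
    public interface PartitionedBinaryStorage {
        void store(String id, byte[] data, String partition);
        byte[] fetch(String id, String partition);
    }

    // A proxy bound to one partition; it holds no data of its own.
    public static BinaryStorage proxyFor(PartitionedBinaryStorage master, String partition) {
        return new BinaryStorage() {
            public void store(String id, byte[] data) {
                master.store(id, data, partition);
            }
            public byte[] fetch(String id) {
                return master.fetch(id, partition);
            }
        };
    }

    // Simple in-memory master used for demonstration.
    public static PartitionedBinaryStorage inMemoryMaster() {
        Map<String, byte[]> data = new HashMap<>();
        return new PartitionedBinaryStorage() {
            public void store(String id, byte[] bytes, String partition) {
                data.put(partition + "/" + id, bytes);
            }
            public byte[] fetch(String id, String partition) {
                return data.get(partition + "/" + id);
            }
        };
    }

    public static void main(String[] args) {
        PartitionedBinaryStorage master = inMemoryMaster();
        BinaryStorage partitionA = proxyFor(master, "A");
        partitionA.store("att1", new byte[] {1, 2, 3});
        // The client never names a partition; the proxy added it.
        System.out.println(master.fetch("att1", "A").length);
    }
}
```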


Proposed further changes

With binary storage, attachments can be large (for example, when crawling video files), so creating an actual copy for each partition can be inefficient and cause serious performance issues. As a solution to this problem, binary storage should not create an actual attachment copy for each partition, but rather keep a reference to the actual attachment when the attachment has not changed from one partition to another.

However, this solution can cause some problems too:

  1. Problems can occur if a backup job is done with some external tool that is not aware of the references. This should not generally happen, because backups will rather be done with a properly configured tool;
  2. Some pipelet can change Attachment1 in Partition 1, while Partition 2 should still keep the old version of the attachment. In this case there should be a service that monitors reference consistency.
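The reference-keeping idea can be sketched as a content-addressed store: attachment bytes are stored once under a content key, and each partition only records a reference to that key. This is a toy model; a real implementation would use a proper content hash (e.g. SHA-256) rather than `Arrays.hashCode`, and would track references so unused blobs can be cleaned up:

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

// Toy model of the proposed optimization: a blob is stored once, and each
// partition only keeps a reference to it, so storing an unchanged attachment
// to a second partition does not copy the data.
public class AttachmentRefStore {

    private final Map<String, byte[]> blobs = new HashMap<>();          // content key -> bytes
    private final Map<String, String> partitionIndex = new HashMap<>(); // "partition/id" -> content key

    public void store(String id, byte[] data, String partition) {
        String key = Integer.toHexString(Arrays.hashCode(data)); // toy content key
        blobs.putIfAbsent(key, data); // unchanged data is stored only once
        partitionIndex.put(partition + "/" + id, key);
    }

    public byte[] fetch(String id, String partition) {
        String key = partitionIndex.get(partition + "/" + id);
        return key == null ? null : blobs.get(key);
    }

    public int blobCount() {
        return blobs.size();
    }

    public static void main(String[] args) {
        AttachmentRefStore store = new AttachmentRefStore();
        byte[] attachment = new byte[] {42};
        store.store("att1", attachment, "Partition1");
        store.store("att1", attachment, "Partition2"); // unchanged: only a reference is added
        System.out.println(store.blobCount());
    }
}
```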