SMILA/Specifications/ProcessingMessageResequencer

Status: this page is very much a work in progress and discussion is still happening on the dev list.
  • 2009-10-02: major changes to reflect the newest insights

As the concept matures during the discussion, this page will be updated at regular intervals.

This enhancement is tracked through bug 289995.

For the development I opened a new branch at https://dev.eclipse.org/svnroot/rt/org.eclipse.smila/branches/2009-09-23_r608_resequencer


The Core Problem

When listening with more than one listener on a Q, or with selectors, there is no guarantee that the order of processing requests (PRs) is maintained as intended. However, at the end of processing we need to be sure that the final processing target reflects the correct state of the data source at any given time.

Final processing targets might differ in their requirements. At this time we will only treat the case of full-text retrieval engines, like Lucene.


Indexing Requirements

The requirements for indexing are a little more relaxed than in the general case. These are the simplifications:

  • the order only needs to be maintained on a per-record basis
  • older PRs are always superseded by a newer PR for a given resource. The outcome of the superseded operations can be discarded -- or even better: their processing could be suppressed entirely (see the sketch below).
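
A minimal sketch of the supersede rule, assuming a simple per-resource sequence number; the class and method names are hypothetical and not part of the SMILA API:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical illustration: track only the newest PR per resource ID so that
// superseded PRs can be discarded or their processing suppressed.
public class LatestPrTracker {

    // Maps resource ID to the highest sequence number seen so far.
    private final Map<String, Long> latestSequence = new ConcurrentHashMap<>();

    // Registers a PR; returns false if a newer PR for the resource is already known.
    public boolean register(String resourceId, long sequenceNumber) {
        long newest = latestSequence.merge(resourceId, sequenceNumber, Math::max);
        return newest == sequenceNumber;
    }

    // True if the given PR is still the newest one known for its resource.
    public boolean isCurrent(String resourceId, long sequenceNumber) {
        Long latest = latestSequence.get(resourceId);
        return latest != null && latest == sequenceNumber;
    }
}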

The following requirements are simply a complete list of possible demands an application may impose. No statement is implied about how likely it is that a particular requirement is actually requested by an application, although such differences in likelihood exist. The intent of the list is to have a complete enumeration; the qualification for an item is merely: may such a case, however unlikely, exist?

The solutions are to outline how a specific requirement may be implemented or covered. A solution may also choose not to cover it. An application may then choose a solution that matches its needs. As usual, the requirements are split into functional and non-functional ones.

Functional

Basic Operations
Operation N     Operation N+1     Expected index state after N+1
ADD A,t1        ADD A,t2          A,t2
ADD A,t1        DELETE A,t2       A doesn't exist
DELETE A,t1     ADD A,t2          A exists
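
A rough sketch of the rule behind the table above, assuming each operation carries a timestamp (the types below are illustrative only, not the SMILA record model): whichever operation has the later timestamp determines the final index state, independent of the order of processing.

public final class IndexStateExample {

    enum OpType { ADD, DELETE }

    // A resource operation with a timestamp; purely illustrative.
    record Operation(OpType type, String resourceId, long timestamp) { }

    // Returns the expected index state after both operations have been processed.
    static String expectedState(Operation a, Operation b) {
        Operation last = (a.timestamp() >= b.timestamp()) ? a : b;
        return (last.type() == OpType.ADD)
                ? last.resourceId() + " exists (state of t" + last.timestamp() + ")"
                : last.resourceId() + " doesn't exist";
    }

    public static void main(String[] args) {
        Operation addT1 = new Operation(OpType.ADD, "A", 1);
        Operation deleteT2 = new Operation(OpType.DELETE, "A", 2);
        System.out.println(expectedState(addT1, deleteT2)); // A doesn't exist
    }
}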

The following sections name the cases/scenarios that need to be covered which don't come to mind immediately but need to be considered nonetheless:

compound management, splitting of records

Two cases of compounds need to be distinguished here: aggregations and compositions.

As in UML, aggregation means that the parent has a dependency on the child, but the child may exist (as a child or even distinctly on its own) elsewhere. Composition, in contrast, owns the descendants, meaning that they can't be accessed or created independently of the parent; the life cycle of the child is controlled by the parent. The two cases will now be discussed in the context of processing:

Composition

This is the easier case for resequencing, because only the one processing step working on the root item can create PRs for child items. The ID of a child item will always include the parent ID in some way; thus resequencing the parent and its children as a whole is sufficient. (An internal ordering of parent and descendants may be required and is discussed below; a sketch of such an ID scheme follows.)
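
A small sketch of the grouping idea, assuming a hypothetical ID scheme in which a child ID embeds its root ID as a prefix (this is not the actual SMILA ID format):

import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Hypothetical illustration: if a child ID embeds its root ID as a prefix,
// the parent and all of its descendants can be resequenced as one group
// keyed by the root ID.
public final class CompositionGrouping {

    // Extracts the root part of a hierarchical ID such as "rootId/child/1".
    static String rootOf(String id) {
        int slash = id.indexOf('/');
        return slash < 0 ? id : id.substring(0, slash);
    }

    public static void main(String[] args) {
        List<String> ids = List.of("zipA", "zipA/doc1", "zipA/doc2", "zipB/doc1");
        Map<String, List<String>> byRoot = ids.stream()
                .collect(Collectors.groupingBy(CompositionGrouping::rootOf));
        System.out.println(byRoot); // e.g. {zipA=[zipA, zipA/doc1, zipA/doc2], zipB=[zipB/doc1]}
    }
}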

Aggregation

A referenced item may

  • be referenced by some other item OR
  • exist on its own.

As a consequence,

  • several different root items may hold a reference to it OR
  • the child item may be processed as a root itself.

An application may require these cases to be handled in one of these ways:

  1. Referenced items are to be seen only in the context of their parent or on their own.
    E.g. it is not made apparent that the child belonging to A is in fact the same as the one belonging to B or to the same root item C.
    This leads to handling identical to compositions, and in each case a record is added to the index.
  2. Referenced items are to be seen as distinct items, making the relationships apparent.
    In this case the ID is always generated in the same way, independently of the parent. Only one record for the child is added to the index. The child record will either contain no reference to the parent(s) or list all of them.
  3. The third way of handling this is to do both.

If the application requires

  • references to be handled as distinct or shared items (2nd and 3rd case) AND
  • child items to be processed at the time the parent is processed (this can happen if no change event is ever fired for the child, or none is accessible),

... then aggregation poses the more challenging case with regard to resequencing. Because new items are created during processing by possibly different items, the parent item cannot be used as a means of ordering the child items -- even less so if the item may also be added on its own without a parent. Instead, there must be some means by which created (split) records are ordered in their own realm.

Parent/Descendants Ordering Requirement

This requirement applies to the case

  • where the child is handled in the context of the parent (composition and the 1st aggregation case) AND
  • where the order of processing of descendants matters.

Depending on the application's needs, the descendants (i.e. all records created from one record) must be processed in a certain order.

Parent/child associations usually result in a tree structure. There are 4 basic ways to traverse a tree, namely (see the sketch below):

  • root to leaf, breadth first
  • root to leaf, depth first
  • leaf to root, breadth first
  • leaf to root, depth first

Apart from this, applications may have special processing needs, and as such a custom implementation of their own must be supported.
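
A sketch of the traversal orders over a hypothetical record tree (the Node type is a stand-in, not the SMILA data model); a custom order would simply be a further method of the same shape:

import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Queue;

public final class TreeOrdering {

    // A minimal tree node; a stand-in for a record with split child records.
    record Node(String id, List<Node> children) { }

    // Root to leaf, breadth first.
    static List<String> rootToLeafBreadthFirst(Node root) {
        List<String> order = new ArrayList<>();
        Queue<Node> queue = new ArrayDeque<>(List.of(root));
        while (!queue.isEmpty()) {
            Node n = queue.poll();
            order.add(n.id());
            queue.addAll(n.children());
        }
        return order;
    }

    // Root to leaf, depth first.
    static List<String> rootToLeafDepthFirst(Node root) {
        List<String> order = new ArrayList<>();
        order.add(root.id());
        for (Node child : root.children()) {
            order.addAll(rootToLeafDepthFirst(child));
        }
        return order;
    }

    // A leaf-to-root order can be obtained by reversing the corresponding
    // root-to-leaf order: every descendant then precedes its ancestors.
    static List<String> leafToRootBreadthFirst(Node root) {
        List<String> order = rootToLeafBreadthFirst(root);
        Collections.reverse(order);
        return order;
    }
}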

support >1 processing targets

The same record is processed and added to more than one processing target (PT), such as 2 different search indexes having different structures for different tasks.

  • it is possible and likely that the records for the same resource will look different.
  • different pipelines and branches may be executed to get to the PT
  • some PRs of the same data source may only be added to one PT, while others are added to several, and yet others are chosen not to be processed at all.

complex processing chains

The processing chain (or workflow) may be arbitrarily complex, with forks and joins, consisting of several pipelines which may contain any number of pipelets. The path a PR travels is controlled by the rules of the pipeline listeners and the conditions on their pipelets.
In some setups, and due to the nature of concurrency, the same PR may undergo completely different processing steps and it is not foreseeable which route it takes (though such a case is likely a misconfiguration).

parallel processing branches

In particular, a workflow may also contain parallel processing branches where the same PR is sent several times (i.e. creating copies of the same PR) to different Qs and/or with different JMS properties for consumption by different workflows.
A use case for such a scenario is when the items shall be indexed or stored by completely different PTs and where the pre-processing steps differ between the two branches.

In this case it is inherent in the parallel workflow design that several PRs for the same item exist in the workflow for some period of time. This automatically results in write conflicts and bugs when using a shared record, as is currently the case with a persisting BB. Therefore, in such a case only a transient BB is allowed!

Workarounds lifting this limitation are (the ID-based variant is sketched below):

  • have a persisting BB per processing branch. An out-of-the-box working setup for this is to execute the parallel processing branches on different nodes in a cluster setup, or to run several SMILA instances on the same box.
  • modify the ID such that it becomes unique for each parallel processing branch
  • implement the partition concept for the storages, where each parallel branch will have its own partition.
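
A minimal sketch of the ID-based workaround; the suffix format and helper names are assumptions for illustration, not SMILA API:

// Hypothetical illustration: making a record ID unique per parallel processing
// branch so that each branch works on its own record instance instead of a shared one.
public final class BranchIds {

    // Derives a branch-local ID by appending a branch suffix to the original ID.
    static String branchLocalId(String originalId, String branchName) {
        return originalId + "#branch=" + branchName;
    }

    // Recovers the original ID, e.g. before the record reaches its processing target.
    static String originalId(String branchLocalId) {
        int marker = branchLocalId.indexOf("#branch=");
        return marker < 0 ? branchLocalId : branchLocalId.substring(0, marker);
    }

    public static void main(String[] args) {
        String id = branchLocalId("file://share/doc1.txt", "fulltext");
        System.out.println(id);             // file://share/doc1.txt#branch=fulltext
        System.out.println(originalId(id)); // file://share/doc1.txt
    }
}
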
clustering

This means the setup where processing is spread across different nodes in a cluster. It also includes the usage of several MQs and/or pipelines.

Assumption: there is just one instance on just one node to handle all access to the processing target.

oscillating items

These are items that constantly change and where the update interval usually is shorter than the time it takes to process them.

Non-Functional

single point of failure

The solution (ideally) doesn't pose an SPOF.

scalability and performance

This is a general requirement; under this section each solution shall outline its impact on performance and where possible bottlenecks lie.

Solution Proposals

  • Connectivity Consolidation Buffer (CCB)
  • Full Resequencer (FRS)
  • Smart Resequencer (SRS)
  • Skip Pipelet (SP)
  • Record Version Number (RVN)

General Problems

Shared Record Instance via Blackboard

Sharing the records via the BB across all processing steps introduces a grave concurrency bug. This is outlined in my mail at [RE: Message Resequencer :: concept bug detected and general SMILA concurrency problem].

For the time being, using a transient BB should solve the problem, but this is really not the ideal solution. In the end we need partitions for the BB to solve this issue, IMO. (A small illustration of the conflict follows.)
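
For illustration only (the Record class below is a stand-in, not the SMILA data model): two parallel branches mutating one shared record can interleave or lose updates, while per-branch copies -- which a transient BB or a per-branch partition effectively provides -- avoid the conflict.

import java.util.HashMap;
import java.util.Map;

public final class SharedRecordConflict {

    // Minimal mutable record stand-in with an attribute map.
    static final class Record {
        final Map<String, String> attributes = new HashMap<>();
    }

    public static void main(String[] args) throws InterruptedException {
        Record shared = new Record();

        // Two branches write to the SAME record concurrently: the surviving value is
        // nondeterministic, and HashMap is not thread-safe, so updates may be lost.
        Thread branchA = new Thread(() -> shared.attributes.put("Title", "value from branch A"));
        Thread branchB = new Thread(() -> shared.attributes.put("Title", "value from branch B"));
        branchA.start();
        branchB.start();
        branchA.join();
        branchB.join();

        // With one copy per branch, each branch owns its record and there is no conflict.
        Record copyForA = new Record();
        Record copyForB = new Record();
        copyForA.attributes.put("Title", "value from branch A");
        copyForB.attributes.put("Title", "value from branch B");
    }
}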

Appendix

Abbreviations

Abbrev.   Meaning
SN        Sequence Number
RS        Resequencer Service
FRS       Full Resequencer Service
SRS       Smart Resequencer Service
Q         the Queue as used in a Message Queue
PR        processing request, i.e. to either add or delete a resource and do the needed processing for that. The PR is the combination of JMS message and record.

NOTE: it is legal to have >1 PR for the same resource in the processing chain. This concept's goal is to bring the PRs into proper order, not necessarily to have just one PR per resource in the processing chain.
NOTE: the term "message" is often used interchangeably for this, albeit not quite correctly.

PT        processing target, basically any pipelet that stores some information on the record other than in Bin- or record storage and where the processing order matters. A search index is an example of this.
CA        Config Annotation. A specially named annotation attached to the record, holding all the information the RS needs to do its work.

Ideas

  • replace the SN with a more general ComparableObject (a possible shape is sketched below)
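
A rough sketch of this idea, assuming nothing about the SMILA API: the resequencer would compare arbitrary Comparable values instead of numeric SNs, so that e.g. timestamps or composite keys could serve as the ordering.

public final class ComparableSequencing {

    // Returns true if candidate supersedes the currently known latest value.
    static <T extends Comparable<T>> boolean supersedes(T candidate, T latest) {
        return latest == null || candidate.compareTo(latest) > 0;
    }

    public static void main(String[] args) {
        System.out.println(supersedes(42L, 41L));                         // true: plain sequence numbers
        System.out.println(supersedes("2009-10-09T08:09", "2009-10-02")); // true: ISO timestamps compare lexicographically
    }
}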
