SMILA/Specifications/ProcessingMessageResequencer

 
* [[SMILA/Specifications/Processing_Message_Resequencer/Smart_Resequencer | Smart Resequencer]]

* [[SMILA/Specifications/Processing_Message_Resequencer/Skip_Pipelet|Skip Pipelet]]
 
 
 
===  Idea 3 - Smart Resequencer (SRS) ===
 
 
Synopsis: The Smart Resequencer will resequence operations on a '''per record''' basis only.<br>
 
Rationale: in most cases records are independent of each other, so there is no need to order all records.
 
 
====  working principle ====
 
 
the following applies to all PRs in need of resequencing: 
 
# the first thing a PR needs to do is to be REGISTERED with the SRS. for the SRS to work, the PR must declare a sequence number (SN) that reflects the order of PRs as created by the agent/crawler.
 
# before it is added to the PT it must be RESEQUENCED by the SRS, which simply checks whether the PR's SN equals the SN of the latest REGISTERED PR. if so, it is added to the output; if not, it is not added to the output.
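
to illustrate the working principle, here is a minimal sketch in plain Java. the class and method names are made up for illustration and are '''not''' the SMILA API; it only shows the REGISTER/RESEQUENCE bookkeeping per record ID:

<source lang="java">
import java.util.HashMap;
import java.util.Map;

// illustrative only -- not the actual SMILA ProcessingService API
public class SmartResequencerSketch {

  // latest REGISTERed sequence number per record ID
  private final Map<String, Long> latestSn = new HashMap<String, Long>();

  /** REGISTER: remember the newest SN seen so far for this record ID. */
  public synchronized void register(String recordId, long sn) {
    Long current = latestSn.get(recordId);
    if (current == null || current < sn) {
      latestSn.put(recordId, sn);
    }
  }

  /** RESEQUENCE: let the PR pass only if its SN still is the latest REGISTERed one. */
  public synchronized boolean resequence(String recordId, long sn) {
    Long latest = latestSn.get(recordId);
    return latest != null && latest == sn;
  }
}
</source>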
 
 
===== non-concurrent REGISTERING setup =====
 
# the router will deliver all PRs to Q1 where the SRS has a sole listener that registers all incoming messages
 
# the output of the SRS can then be processed as normal by multiple pipelines   
 
# each pipeline that calls the PT must inject a call to the SRS with the RESEQUENCE command '''before''' the call to the PT.
 
 
a downside to this setup is that REGISTERING adds execution time to the critical path for all records and can only be processed by one thread.
 
question: is there a setup that is better w/o introducing an error? 
 
 
===== concurrent REGISTERING setup with call in pipelines =====
 
instead of creating a dedicated pipeline for the SRS, this idea proposes to add the REGISTER call to each pipeline at the very beginning.
 
this way, concurrent processing of all PRs is fostered from the start.
 
 
this will work safely:
 
# listeners L1 and L2 work concurrently on the same item but with different SNs: L1 on PR1.SN1 and L2 on PR2.SN2.
 
# the critical case is when the SRS doesn't know of the more recent PR2 when being asked to RESEQUENCE PR1, like so:
 
## if L1 calls the SRS with RESEQUENCE before L2 gets to call REGISTER, the SRS doesn't know that an SN2 is out there and lets the PR with SN1 pass through.
 
## this is not an error b/c the order of processing is maintained. it has a drawback however, which is that the SRS is unable to suppress PR1.
 
# i cannot think of another case where this setup leads to an error, b/c all PRs are first registered before any further processing may take place. this setup works in all cases IMO. (please prove me wrong)
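
a small worked example of the two interleavings above, reusing the illustrative SmartResequencerSketch from the working principle section (again hypothetical code, not SMILA API):

<source lang="java">
public class InterleavingExample {

  public static void main(String[] args) {
    // case 1: L2 REGISTERs PR2.SN2 before L1 gets to RESEQUENCE PR1.SN1
    SmartResequencerSketch srs = new SmartResequencerSketch();
    srs.register("itemA", 1);                        // L1 registers PR1.SN1
    srs.register("itemA", 2);                        // L2 registers PR2.SN2
    System.out.println(srs.resequence("itemA", 1));  // false -> PR1 is suppressed
    System.out.println(srs.resequence("itemA", 2));  // true  -> PR2 passes to the PT

    // case 2: L1 RESEQUENCEs PR1.SN1 before L2 gets to REGISTER PR2.SN2
    SmartResequencerSketch srs2 = new SmartResequencerSketch();
    srs2.register("itemB", 1);
    System.out.println(srs2.resequence("itemB", 1)); // true -> PR1 passes; not an error,
                                                     // the order of processing is maintained
    srs2.register("itemB", 2);
    System.out.println(srs2.resequence("itemB", 2)); // true -> PR2 passes later
  }
}
</source>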
 
 
'''pro:''' reduced overhead due to one less Q and pipeline
 
 
'''con:'''  it might be easy to forget to call the SRS in each pipeline.
 
 
 
===== concurrent REGISTERING setup with mirror Queue =====
 
 
in this setup a copy of the PR with its SN is sent to an additional Q2 in parallel to Q1 (two send tasks in the router). the SRS is the only listener on Q2 and REGISTERS all the PRs.
 
 
this will introduce an error in the case where a PR is RESEQUENCEd while a newer PR for the same item is still waiting in Q2.
 
 
the general problem here is that REGISTERing happens asynchronously to processing and hence cannot be safe.
 
 
'''fix:''' the error introduced by this setup can be fixed by demanding that the SRS first processes all PRs on Q2 before RESEQUENCing PRs. however, this may mean that PRs only get added to the index once the agent stops producing PRs. in that case it won't be better than the non-concurrent setup.
 
 
==== Basic Impl. Ideas ====
 
 
* implemented as ProcessingService
 
* records are sent to it with the command/process modes REGISTER and SEQUENCE
 
* SN and process mode are given as an annotation on the record, called the Config Annotation (CA); a sketch of its possible content follows below this list. this is the same way as with the Lucene service. (i first wanted to do this as JMS props but they are not accessible in a ProcessingService)
 
* the map may be in memory or a persisting solution may be implemented/chosen. <br> IMO the amount of records held in memory should be relatively small, only to the amount of what is in the processing chain. (hm, that can be a lot, since connectivity is not pausing crawlers and agents (yet) if there is much in the MQ)
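
for illustration, the CA could carry something like the following values. this is just a sketch; the key names and the plain map representation are made up and not the actual record annotation API:

<source lang="java">
import java.util.HashMap;
import java.util.Map;

// hypothetical content of the Config Annotation (CA) on a record;
// key names are made up for illustration, not the real annotation layout
public class ConfigAnnotationExample {

  public static Map<String, String> exampleCa() {
    Map<String, String> ca = new HashMap<String, String>();
    ca.put("sequenceNumber", "4711");   // SN as created by the agent/crawler
    ca.put("processMode", "REGISTER");  // or "RESEQUENCE" / "UNREGISTER"
    return ca;
  }
}
</source>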
 
 
 
==== Meeting Requirements ====
 
 
The general requirements should be sufficiently clear from the functional description of the SRS. here come the further ones:
 
 
===== split records =====
 
 
composition and aggregation are handled the same way, like so:
 
 
the processing step splitting the record is responsible for the following:
 
* all descendants inherit the SN from their root
 
* if internal ordering is required: <br>the order of PRs for descendants is noted in their respective ConfigAnnotation (e.g. a link to the ID of the preceding or succeeding resource or such)
 
* register the split records with the SRS
 
* possibly deregister the root and/or intermediate PRs if these are not processed further
 
 
the SRS will
 
* collect all PRs belonging to the tree of split PRs until it is complete
 
** missing PRs:
 
*** timeout 
 
*** config on how to continue with non-complete trees: {all or nothing, sequence incomplete}
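
for illustration, the timeout and the policy for non-complete trees could look like this in config terms. this is only a sketch, the names and values are made up:

<source lang="java">
// hypothetical configuration values for handling incomplete split-record trees;
// names are illustrative only
public class SplitTreeConfigExample {

  /** what to do when the tree of split PRs is still incomplete after the timeout */
  public enum IncompleteTreePolicy {
    ALL_OR_NOTHING,       // drop the whole tree if any descendant is missing
    SEQUENCE_INCOMPLETE   // resequence and forward the PRs that did arrive
  }

  // how long to wait for missing descendants before applying the policy
  public static final long COLLECT_TIMEOUT_MILLIS = 60 * 1000L;

  public static final IncompleteTreePolicy POLICY = IncompleteTreePolicy.SEQUENCE_INCOMPLETE;
}
</source>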
 
 
===== >1 Processing Targets =====
 
 
this can be supported in diff. ways. both ideas below have the following in common:
 
# sending the PR to any of the PTs is done thru the SRS by calling it with the RESEQUENCE command (this is just a generalization of the basic concept and repeated here for clarity)
 
# the SRS needs to know how many (potential) PTs there are for a resource (determined by the ID) and when processing really has finished for a given ID.
 
# each RESEQUENCE and UNREGISTER command will reduce the count; when it reaches 0, all PRs have reached their PT and the ID can be removed from the map (see the sketch below).
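
a minimal sketch (plain Java, illustrative names, not SMILA API) of the counting just described: REGISTER sets the expected number of PTs for an ID, each RESEQUENCE/UNREGISTER decrements it, and at 0 the ID is dropped from the map:

<source lang="java">
import java.util.HashMap;
import java.util.Map;

// illustrative sketch of the per-ID counting described above; not SMILA API
public class TargetCounterSketch {

  // how many PTs a given ID still has to reach
  private final Map<String, Integer> remainingTargets = new HashMap<String, Integer>();

  /** REGISTER: record how many (potential) PTs exist for this ID. */
  public synchronized void register(String id, int expectedTargets) {
    remainingTargets.put(id, expectedTargets);
  }

  /** RESEQUENCE or UNREGISTER: decrement; remove the ID once the count reaches 0. */
  public synchronized void decrement(String id) {
    Integer remaining = remainingTargets.get(id);
    if (remaining == null) {
      return; // unregistered ID, see "Cases introduced thru this solution"
    }
    if (remaining <= 1) {
      remainingTargets.remove(id);
    } else {
      remainingTargets.put(id, remaining - 1);
    }
  }
}
</source>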
 
 
====== idea 1 - SRS ID rules ======
 
* the config of the SRS contains rules or conditions that determine the count.
 
* it starts with that count which is computed on REGISTRATION.
 
 
====== idea 2 - processing steps control counter ======
 
* processing steps take care of in- and decrementing the counter in the normal processing chain by using the REGISTER and UNREGISTER commands to reflect additional or obsolete PTs
 
 
====== Opinion ======
 
i like idea 2 better b/c it puts the config of the SRS in the same place that also controls the flow of PRs anyhow. it is just a matter of including an SRS call with the respective command.<br>
 
in contrast, idea 1 would mean that we have to:
 
* duplicate the processing chain logic in some other place
 
* implement a rule/condition engine and config.
 
 
===== clustering, complex processing chain =====
 
* complex processing chains are possible as already described in other places. the SRS just needs to be placed in front of the PT and called in the flow of things
 
* resequencing in a cluster scenario works OOB; just the setup/config changes.
 
** the SRS is run on several nodes and shares a cluster-capable map OR
 
** if the router sends PRs for the same item always to the same processing node, then the SRS can be local to the processing nodes and setup as normal OR
 
** SRS runs on just one node. then
 
*** all messages from the router need to be sent to the SRS node first for REGISTERING<br>it makes sense to have the SRS and router on the same node to avoid changing nodes for the first step.
 
*** all processing nodes don't call the PT directly in their pipeline but send their result to the SRS node
 
*** the SRS RESEQUENCE pipeline will call the PT.
 
 
'''Note'''<br>
 
i think the SRS will even work if the assumption that each PT has only one instance and one node that solely accesses the PT does not hold true.
 
a setup like this will then segment all PRs by some scheme that depends on the ID; the SRS then only has to resequence its own segment and thus: all is well.
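
for illustration, a simple (hypothetical) segmentation scheme could derive the segment from a hash of the ID, so that all PRs for the same ID always end up at the same SRS instance:

<source lang="java">
// hypothetical: route PRs for the same ID always to the same SRS instance/segment
public final class SegmentById {

  public static int segmentFor(String recordId, int numberOfSegments) {
    // same ID -> same segment, so each SRS instance resequences a disjoint subset
    return Math.abs(recordId.hashCode() % numberOfSegments);
  }
}
</source>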
 
 
===== single point of failure =====
 
hm. this is a tough one as the single-ness is inherent. i have no clue yet how to solve this, other than to use a fail-over solution.
 
i guess, just as the PT itself, it needs to be monitored closely to detect malfunctions.
 
 
===== scalability and performance =====
 
there is some performance degradation to be expected because
 
# SRS increases the number of threads
 
# SRS is inserted at least 2 times into the processing flow, namely at the beginning and end.
 
## when it registers an item
 
## when it resequences it
 
 
the internal workings of these steps are fairly simple and should not take much time compared to the rest of the processing, albeit in a highly concurrent scenario the synchronization will take its toll.
 
 
==== Cases introduced thru this solution  ====
 
this section lists cases and problems that are introduced thru the solution itself and need to be covered.
 
some of the items listed here will also apply to the FRS!!
 
 
* handling of unregistered records<br>what happens when SEQUENCING a PR that is already @ count 0 or does not exist?
 
* what to do with records that miss needed config data? <br> handling depends on the process mode:
 
** SEQUENCE: error as default, but the outcome could be config'able, such as: DLQ, any other Q
 
** REGISTER: error
 
* overflow of the SN<br>a reset signal must be sent to the SRS (a sketch of these cases follows below the list)
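
the following sketch only illustrates where decisions for these cases would have to be made; the class, the DLQ handling and the reset signal are made-up placeholders, not existing SMILA features:

<source lang="java">
// illustrative sketch of where the cases above would be handled
public class EdgeCaseHandlingSketch {

  public enum MissingConfigOutcome { ERROR, DEAD_LETTER_QUEUE, OTHER_QUEUE }

  // configurable outcome for SEQUENCE requests that miss their config data
  private final MissingConfigOutcome missingConfigOutcome;

  public EdgeCaseHandlingSketch(MissingConfigOutcome missingConfigOutcome) {
    this.missingConfigOutcome = missingConfigOutcome;
  }

  /** SEQUENCING a PR whose ID is unknown / already at count 0. */
  public boolean onUnregisteredRecord(String id) {
    // to be decided: treat as an error or just let the PR pass; here: let it pass
    return true;
  }

  /** a PR misses SN or process mode in its Config Annotation. */
  public MissingConfigOutcome onMissingConfig(String processMode) {
    if ("REGISTER".equals(processMode)) {
      return MissingConfigOutcome.ERROR; // REGISTER without config data is always an error
    }
    return missingConfigOutcome; // SEQUENCE: configurable (error, DLQ, any other Q)
  }

  /** the agent/crawler signals an SN overflow: forget all remembered SNs and start over. */
  public void onSequenceNumberReset(java.util.Map<String, Long> latestSnByRecordId) {
    latestSnByRecordId.clear();
  }
}
</source>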
 
 
 
==== PRO ====
 
* smarter ;) than FRS
 
* no changes to APIs are needed; the implementation of agents/crawlers (controller) needs to add the SN as an annotation to each record, which can be turned on or off via config.
 
* unobtrusive, the SRS can be used or not.
 
 
==== CON ====
 
* see also almost all CONS @ FRS
 
* oscillating resources (that constantly change) will never make it into the index.
 
  
 



Status
this page is very much a WIP and discussion is still happening on the dev list.
  • 2009 10 02 major changes to reflect newest insights

as the concept matures during the discussion this page will be updated at certain intervals.

this enhancement is tracked thru bug 289995

for the development i opened a new branch @ https://dev.eclipse.org/svnroot/rt/org.eclipse.smila/branches/2009-09-23_r608_resequencer


The Core Problem

When listening with >1 listener on a Q or with selectors there is no guarantee that the order of processing requests (PR) is maintained as intended. However, at the end of processing we need to be sure that the final processing target reflects the correct state of the data source at any given time.

Final processing targets might differ in their requirements. At this time we will only treat the case of full text retrieval engines, like Lucene.


Indexing Requirements

the requirements for indexing are a little relaxed compared to the general case. These are the simplifications:

  • the order needs only to be maintained on a per record basis
  • older PRs are always superseded by newer PRs for a given resource. the outcome of these operations can be discarded -- or even better: processing of these could be suppressed.

The following requirements are just a complete list of possible demands an application may impose. There is no implicit statement attached about how likely it is that a particular requirement is requested by an application, although there might be such. The intent of the list is to have a complete enumeration. Qualification for an item is merely: may such a case, however unlikely, exist?

The solutions are to outline how a specific requirement may be implemented or covered. A solution may also choose to not cover it. An application may then choose a solution that matches its needs. As usual the requirements are split into functional and non-functional.

Functional

Basic Operations
Operation N   | Operation N+1 | expected index state after N+1
ADD A,t1      | ADD A,t2      | A,t2
ADD A,t1      | DELETE A,t2   | A doesn't exist
DELETE A,t1   | ADD A,t2      | A exists

the following sections name the cases/scenarios that don't come to mind immediately but need to be considered nonetheless:

compound management, splitting of records

two cases of compounds need to be distinguished here: aggregations and compositions.

as in UML, aggregation means that the parent has a dependency to the child but the child may exist (as a child or even distinctly on its own) elsewhere. composition in contrast owns the descendants, meaning that they can't be accessed or created independently of the parent. the life cycle of the child is controlled by the parent. the two cases will be discussed in the context of processing now:

Composition

this is the easier case for resequencing b/c only the processing step working on the root item can create PRs for child items. the ID of a child item will always include the parent ID in some sort of way. thus resequencing the parent and its children as a whole is sufficient. (an internal ordering of parent and descendants may be required and is discussed below.)

Aggregation

the referenced item may be referenced

  • by some other item OR
  • may exist on its own.

as a consequence

  • several diff. root items may hold a reference to it OR
  • the child item is being processed as a root itself.

an application may require these cases to be handled in one of these ways:

  1. referenced items are to be seen only in the context of the parent or on their own.
    e.g. it is not made apparent that the child belonging to A is in fact the same as the one belonging to B OR the same root item C.
    this leads to an identical handling as with compositions and in each case a record is added to the index.
  2. referenced items are to be seen as distinct items, making the relationships apparent.
    in this case the ID is always generated in the same way, independently of the parent. Only one record for the child is added to the index. the child record will either contain no reference to the parent(s) or list all of them.
  3. the third way of handling this is to do both.

if the application requires

  • to handle references as distinct or shared items (2nd and 3rd case) AND
  • child items must be processed at the time of the parent (this can happen if no change event is ever fired for the child or none is accessible),

... then aggregation poses the more challenging case in regard to resequencing. B/c new items are created during processing by possibly diff. parent items, the parent item cannot be used as a means of ordering the child items. Even less so if the item may also be added on its own w/o a parent. instead there must be some means by which created (split) records are ordered in their own realm.

Parent/Descendants Ordering Requirement

this requirement applies to the case

  • where the child is handled in the context of the parent (composition and 1st case aggregation) AND
  • the order of processing of descendants matters.

depending on the application's needs the descendants (ie. all records created from one record) must be processed in a certain order.

parent/child associations usually result in a tree structure. there are 4 basic ways to traverse a tree (see the sketch after the list), namely:

  • root to leaf, breadth first
  • root to leaf, depth first
  • leaf to root, breadth first
  • leaf to root, depth first
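
for illustration, a small sketch of how the two root-to-leaf orders could be computed from a tree of split records. the Node structure is made up for the example; the leaf-to-root variants can be obtained by reversing the resulting lists:

<source lang="java">
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// illustrative only: derive a processing order for split records from their tree
public class TraversalOrderExample {

  static class Node {
    final String id;
    final List<Node> children = new ArrayList<Node>();
    Node(String id) { this.id = id; }
  }

  /** root to leaf, breadth first */
  static List<String> breadthFirst(Node root) {
    List<String> order = new ArrayList<String>();
    Deque<Node> queue = new ArrayDeque<Node>();
    queue.add(root);
    while (!queue.isEmpty()) {
      Node n = queue.removeFirst();
      order.add(n.id);
      queue.addAll(n.children);
    }
    return order;
  }

  /** root to leaf, depth first (pre-order) */
  static List<String> depthFirst(Node root) {
    List<String> order = new ArrayList<String>();
    order.add(root.id);
    for (Node child : root.children) {
      order.addAll(depthFirst(child));
    }
    return order;
  }
}
</source>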

apart from this, applications may have special processing needs and as such a custom implementation must be supported.

support >1 processing targets

the same record is processed and added to more than one processing target (PT), such as 2 diff. search indexes having diff. structures for diff. tasks.

  • it is possible and likely that the records for the same resource will look different.
  • diff. pipelines and branches may be executed to get to the PT
  • some PRs of the same data source may only be added to one PT while others are added to several and others are chosen not to be processed at all.

clustering, complex processing chain

this means the scenario where processing is spread to diff. nodes in a cluster. it also includes the usage of several MQs and/or pipelines.

Assumption: there is just one instance on just one node to handle all access to the processing target.

oscillating items

these are items that constantly change and where the update interval usually is smaller than the time it takes to process them.

Non-Functional

single point of failure

the solution (ideally) doesn't pose an SPOF.

scalability and performance

this is a general requirement; under this section the solution shall outline its impact on performance and where possible bottlenecks are.

Solution Pages

General Problems

Shared Record Instance via Blackboard

sharing the records via the BB for all processing steps introduces a grave concurrency bug. this is outlined in my mail @ [RE: Message Resequencer :: concept bug detected and general SMILA concurrency problem]

for the time being using a transient BB should solve the problem but is really not the ideal solution. in the end we need partitions for the BB that solve this issue IMO.

Appendix

Abbreviations

Abbrev | Meaning
SN     | Sequence Number
RS     | Resequencer Service
FRS    | Full Resequencer Service
SRS    | Smart Resequencer Service
Q      | the Queue as used in a Message Queue
PR     | processing request, ie. to either add or delete a resource and do the needed processing for that. the PR is the combination of JMS message and record.

NOTE: it is legal to have >1 PRs for the same resource on the processing chain. this concept's goal is to bring the PRs into proper order and not necessarily have just one PR per resource in the processing chain.
NOTE: the term "message" is often used interchangeably for this, albeit not quite correct.

PT     | processing target, eg. a full text index
CA     | Config Annotation. A specially named annotation that is attached to the record holding all needed information the RS needs to do its work.


Ideas

  • replace the SN with a more general ComparableObject
