SMILA/Specifications/DeltaIndexingAndConnectivtyDiscussion09
[[Category:SMILA]]
== Motivation for this page and usage ==
  
The current implementation of the [[SMILA/Documentation/DeltaIndexingManager | DeltaIndexingManager]] has several problems and shortcomings, which are listed under [[#Ideas and Problems (under discussion)|Ideas and Problems (under discussion)]]. If an idea is rather large, a page of its own is usually better and should be created as a child of this page. Such an idea should still have its own section here that at least contains a link to that page.

The initiating authors should edit only their own sections and not those of others.

Each subsection/page should state:
* context, such as: author, date, based on SVN revision
* the motivation/problem
* a solution proposal

Ideas that have been implemented are moved to their own page and referenced in [[#Implemented Changes|Implemented Changes]].

== Ideas and Problems (under discussion) ==
=== DeltaIndexing reflects crawl state rather than index state ===
  
One problem at the moment is that, because SMILA's processing of incoming Records is asynchronous, DeltaIndexing does NOT really reflect the state of a Record in the index: there is no guarantee that a Record is indexed after it was successfully added to the Queue. This could be achieved by implementing Notifications that update the DeltaIndexing state. If this is done, the computation of DeltaIndexing-Delete has to wait for all Queue entries to pass the workflow, which is a complex and seemingly error-prone process. Is it really necessary to reflect the index state, or is it enough to reflect the last crawl state?

=== Extract Session Interface from DeltaIndexingManager ===

For a better separation of tasks and easy handling of locks on data sources during a delta indexing run, we could introduce the following interfaces. The implementations should only be proxies for the same DeltaIndexingManager service implementation, so that a DeltaIndexingSession may internally use another service if the initial one becomes unavailable.

<source lang="java">
interface DeltaIndexingManager
{
    /**
     * Initializes a new DeltaIndexingSession if the data source is not locked.
     */
    DeltaIndexingSession init(String dataSourceID) throws DeltaIndexingException;

    /**
     * Clears all data sources that are not locked.
     */
    void clear() throws DeltaIndexingException;

    /**
     * Clears the data source if it is not locked.
     */
    void clear(String dataSourceID) throws DeltaIndexingException;

    /**
     * Unlocks all data sources by force.
     */
    void unlockDatasources() throws DeltaIndexingException;

    /**
     * Checks if a data source exists.
     */
    boolean exists(String dataSourceId);
}
</source>

<source lang="java">
interface DeltaIndexingSession
{
    /**
     * Checks if the id needs to be updated.
     */
    boolean checkForUpdate(Id id, String hash) throws DeltaIndexingException;

    /**
     * Marks the id as visited.
     */
    void visit(Id id, String hash) throws DeltaIndexingException;

    /**
     * Returns an iterator over all unvisited ids of the data source.
     */
    Iterator<Id> obsoleteIdIterator(String dataSourceID) throws DeltaIndexingException;

    /**
     * Returns an iterator over all unvisited ids of a parent id (compound objects).
     */
    Iterator<Id> obsoleteIdIterator(Id id) throws DeltaIndexingException;

    /**
     * Deletes the id.
     */
    void delete(Id id) throws DeltaIndexingException;

    /**
     * Finishes the delta indexing run and unlocks the data source.
     */
    void finish(String dataSourceID) throws DeltaIndexingException;
}
</source>
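To make the intended protocol concrete, here is a minimal, self-contained in-memory sketch of the contract (plain Java, not SMILA code; ids are simplified to strings and the manager/session split is collapsed into one class):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Minimal in-memory model of the proposed session contract (not SMILA code).
public class DeltaIndexingSketch {

    // per-id state: the stored hash and a visited flag for the current run
    private static class Entry {
        String hash;
        boolean visited;
        Entry(String hash) { this.hash = hash; }
    }

    private final Map<String, Entry> state = new HashMap<>();

    // checkForUpdate: true if the id is unknown or its hash changed
    public boolean checkForUpdate(String id, String hash) {
        Entry e = state.get(id);
        return e == null || !e.hash.equals(hash);
    }

    // visit: remember the current hash and mark the id as seen in this run
    public void visit(String id, String hash) {
        Entry e = state.computeIfAbsent(id, k -> new Entry(hash));
        e.hash = hash;
        e.visited = true;
    }

    // obsoleteIdIterator: ids known from earlier runs but not visited now
    public List<String> obsoleteIds() {
        List<String> obsolete = new ArrayList<>();
        for (Map.Entry<String, Entry> e : state.entrySet()) {
            if (!e.getValue().visited) {
                obsolete.add(e.getKey());
            }
        }
        return obsolete;
    }

    // finish: delete the obsolete entries and reset the visited flags
    public void finish() {
        state.keySet().removeAll(obsoleteIds());
        for (Entry e : state.values()) {
            e.visited = false;
        }
    }

    public static void main(String[] args) {
        DeltaIndexingSketch di = new DeltaIndexingSketch();
        // first run: the source contains "a" and "b", both new
        di.visit("a", "h1");
        di.visit("b", "h1");
        di.finish();
        // second run: "a" is unchanged, "c" is new, "b" vanished from the source
        System.out.println("update a? " + di.checkForUpdate("a", "h1")); // false
        di.visit("a", "h1");
        System.out.println("update c? " + di.checkForUpdate("c", "h9")); // true
        di.visit("c", "h9");
        System.out.println("obsolete: " + di.obsoleteIds()); // [b]
        di.finish();
    }
}
```

A real implementation would keep this state persistent and per data source; the sketch only shows the check/visit/obsolete/finish cycle.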
  
<b>This approach was not realized.</b> But a sessionId was introduced to distinguish between different sessions without relying on thread ids. See [https://bugs.eclipse.org/bugs/show_bug.cgi?id=279243 bug 279243].
  
==== Discussion ====

===== modifications to the interfaces =====

TM 2009 10 15: I second the notion to extract a session interface, but I would also do a few renames and changes, like so:

<source lang="java">
public interface IDeltaIndexingManager {

  /**
   * Initializes the internal state for an import of a dataSourceID and creates a session wherein it establishes a lock
   * to avoid that the same dataSourceID is initialized multiple times concurrently. It returns an object for the
   * session that a client has to use to gain access to the locked data source.
   *
   * @param dataSourceID
   *          the data source id
   *
   * @return the delta indexing session
   *
   * @throws DeltaIndexingException
   *           the delta indexing exception
   */
  IDeltaIndexingSession createSession(final String dataSourceID) throws DeltaIndexingException;

  /* methods that don't need a session */

  /**
   * Clears all entries of the DeltaIndexingManager, including sessions.
   *
   * @throws DeltaIndexingException
   *           the delta indexing exception
   */
  void clear() throws DeltaIndexingException;

  /**
   * Unlocks the given data source and removes its sessions.
   *
   * @param dataSourceID
   *          the data source id
   * @throws DeltaIndexingException
   *           the delta indexing exception
   */
  void unlockDatasource(final String dataSourceID) throws DeltaIndexingException;

  /**
   * Unlocks all data sources and removes all sessions.
   *
   * @throws DeltaIndexingException
   *           the delta indexing exception
   */
  void unlockDatasources() throws DeltaIndexingException;

  /**
   * Gets an overview of which data sources are locked or unlocked.
   *
   * @return a map containing the dataSourceId and the LockState
   */
  Map<String, LockState> getLockStates();

  /**
   * Checks if entries for the given dataSourceId exist.
   *
   * @param dataSourceId
   *          the data source id
   *
   * @return true, if entries exist
   */
  boolean dataSourceExists(final String dataSourceId);

  /**
   * Gets the number of delta indexing entries for the given dataSourceID.
   *
   * @param dataSourceID
   *          the data source id
   * @return the number of entries
   */
  long getEntryCount(final String dataSourceID);

  /**
   * Gets the number of delta indexing entries for all data sources.
   *
   * @return a map of dataSourceIds and the entry counts
   */
  Map<String, Long> getEntryCounts();

  /**
   * An enumeration defining the lock states of a data source in the DeltaIndexingManager.
   */
  public enum LockState {
    /**
     * The lock states.
     */
    LOCKED, UNLOCKED;
  }
}

/**
 * The Interface IDeltaIndexingSession.
 *
 * @author tmenzel
 */
public interface IDeltaIndexingSession {

  /**
   * Clears all entries of this session.
   *
   * @throws DeltaIndexingSessionException
   *           if the session is invalid
   * @throws DeltaIndexingException
   *           the delta indexing exception
   */
  void clear() throws DeltaIndexingSessionException, DeltaIndexingException;

  /**
   * Finishes this delta indexing session and removes the lock.
   *
   * @throws DeltaIndexingSessionException
   *           if the session is invalid
   * @throws DeltaIndexingException
   *           the delta indexing exception
   */
  void commit() throws DeltaIndexingSessionException, DeltaIndexingException;

  /**
   * Deletes the entry for the given id.
   *
   * @param id
   *          the id
   *
   * @throws DeltaIndexingSessionException
   *           if the session is invalid
   * @throws DeltaIndexingException
   *           the delta indexing exception
   */
  void delete(final Id id) throws DeltaIndexingSessionException, DeltaIndexingException;

  /**
   * Deletes all untouched ids. Rather than the controller calling {@link #delete(Id)} while iterating over the ids,
   * the implementation may do so internally for all untouched ids in one go, more efficiently.
   *
   * @return the number of deleted ids
   *
   * @throws DeltaIndexingSessionException
   *           the delta indexing session exception
   * @throws DeltaIndexingException
   *           the delta indexing exception
   */
  long deleteUntouchedIds() throws DeltaIndexingSessionException, DeltaIndexingException;

  /**
   * Returns an iterator over all untouched ids of the data source.
   *
   * @return the iterator of ids
   *
   * @throws DeltaIndexingSessionException
   *           if the session is invalid
   * @throws DeltaIndexingException
   *           the delta indexing exception
   */
  Iterator<Id> getUntouchedIds() throws DeltaIndexingSessionException, DeltaIndexingException;

  /**
   * Returns an iterator over all untouched id fragments of the given id.
   *
   * @param id
   *          the id
   *
   * @return the iterator of ids
   *
   * @throws DeltaIndexingSessionException
   *           if the session is invalid
   * @throws DeltaIndexingException
   *           the delta indexing exception
   */
  Iterator<Id> getUntouchedIds(final Id id) throws DeltaIndexingSessionException, DeltaIndexingException;

  /**
   * Checks if the hash of the given id is new or has changed (true) or not (false).
   * To reduce method calls, the entry is marked as visited when false is returned.
   *
   * @param id
   *          the id
   * @param hash
   *          the hash
   *
   * @return true, if the id is new or has changed
   *
   * @throws DeltaIndexingSessionException
   *           the delta indexing session exception
   * @throws DeltaIndexingException
   *           the delta indexing exception
   */
  boolean hasChanged(final Id id, final String hash) throws DeltaIndexingSessionException, DeltaIndexingException;

  /**
   * Rolls back the changes that were made in the current session between init() and finish(). It should be called
   * before finishing the process.
   *
   * @throws DeltaIndexingSessionException
   *           if the session is invalid
   * @throws DeltaIndexingException
   *           the delta indexing exception
   */
  void rollback() throws DeltaIndexingSessionException, DeltaIndexingException;

  /**
   * Creates or updates the delta indexing entry. This is THE method to make a record known to delta indexing. It sets
   * the hash and the isCompound flag and marks the id as visited.
   *
   * @param id
   *          the id
   * @param hash
   *          the hash
   * @param isCompound
   *          boolean flag whether the record identified by id is a compound record (true) or not (false)
   *
   * @throws DeltaIndexingSessionException
   *           if the session is invalid
   * @throws DeltaIndexingException
   *           the delta indexing exception
   */
  void touch(final Id id, final String hash, final boolean isCompound) throws DeltaIndexingSessionException,
    DeltaIndexingException;

  /**
   * A combination of {@link #hasChanged(Id, String)} and {@link #touch(Id, String, boolean)} in one step.
   * <p>
   * It has a performance gain over calling the methods separately, but has the drawback that the record is always
   * touched, independently of an exception that occurs before putting the record into the Queue. On the other hand,
   * this does not matter much, as the subsequent processing may also cause errors which are not reflected in the
   * "touch" state.
   *
   * @param id
   *          the id
   * @param hash
   *          the hash
   * @param isCompound
   *          the isCompound flag
   *
   * @return true, if the id is new or has changed
   *
   * @throws DeltaIndexingSessionException
   *           the delta indexing session exception
   * @throws DeltaIndexingException
   *           the delta indexing exception
   */
  boolean checkAndTouch(final Id id, final String hash, final boolean isCompound)
    throws DeltaIndexingSessionException, DeltaIndexingException;

}
</source>
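The session and locking contract described by createSession(), commit() and rollback() can be sketched as follows (an illustrative stand-in, not the SMILA implementation; persistence and the checked exception types are omitted):

```java
import java.util.HashSet;
import java.util.Set;

// Illustrative stand-in (not the SMILA implementation) for the locking contract:
// createSession locks the data source, commit/rollback release the lock.
public class LockingSketch {

    private final Set<String> locked = new HashSet<>();

    // createSession: establish the lock or refuse if the source is already locked
    public synchronized Session createSession(String dataSourceID) {
        if (!locked.add(dataSourceID)) {
            throw new IllegalStateException("data source is locked: " + dataSourceID);
        }
        return new Session(dataSourceID);
    }

    public class Session {
        private final String dataSourceID;

        private Session(String dataSourceID) { this.dataSourceID = dataSourceID; }

        // commit and rollback both end the session and release the lock;
        // a real implementation would also persist or undo the touched entries
        public void commit() { release(); }
        public void rollback() { release(); }

        private void release() {
            synchronized (LockingSketch.this) {
                locked.remove(dataSourceID);
            }
        }
    }
}
```

Calling createSession for an already locked source fails immediately; commit or rollback releases the lock so a later run can create a new session.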
==== Usage of DeltaIndexingManager by CrawlerControler alone ====

Here is another idea, based on the changes introduced with [[SMILA/Specifications/DeltaIndexingAndConnectivtyDiscussion09/Separate_Interfaces_for_ConnectivityManager_and_DeltaIndexingManager]] but taking them further, so that not the CrawlerController but each Crawler communicates with the DeltaIndexingManager:

This is a radical change, as it also affects the Crawler interface. Crawlers could communicate directly with the DeltaIndexingManager and provide only those Records that pass DeltaIndexing (that are new or need an update). CrawlerController and Crawler could implement a producer/consumer pattern, which should improve performance: no more sending of arrays with delta indexing information and retrieving the Record objects afterwards. DeltaIndexing-Delete information is computed in the Crawler and can be passed to the CrawlerController as regular Records (with only the ID set) plus a delete flag to notify the CrawlerController that this Record is to be deleted. This should reduce communication overhead, as the delta indexing information does not have to be passed between multiple components, and the whole process can work multithreaded. Of course, this adds a lot more logic to the Crawler and demands more knowledge from a Crawler developer. It would also mean that ID and HASH are generated in the Crawler. The downside is that each Crawler has to implement the DeltaIndexing workflow itself. <br>We could even move all execution logic to the Crawler; the CrawlerController would become obsolete. Crawlers would then handle everything themselves: communication with DeltaIndexingManager, CompoundHandlers and ConnectivityManager. I think the best performance can be achieved this way, as the setup is very simple and no data is passed unnecessarily between components. But a lot of logic has to be re-implemented in every Crawler, and I wonder if there is a chance to minimize this.

(an [[SMILA/Specifications/DeltaIndexingAndConnectivtyDiscussion09/Usage_of_DeltaIndexingManager_by_CrawlerControler_alone| empty page]] exists for this already)
== Implemented Changes ==

{{CTable}}
| Page || Date || Bug || Author(s)
|-
| [[SMILA/Specifications/DeltaIndexingAndConnectivtyDiscussion09/Turning_DeltaIndexing_On_or_Off |New Feature: DeltaIndexing On/Off ]] || 2009-06-10 || {{bug|279242}} || [[User:Daniel.stucky.empolis.com|Daniel Stucky]]
|-
| [[SMILA/Specifications/DeltaIndexingAndConnectivtyDiscussion09/Separate_Interfaces_for_ConnectivityManager_and_DeltaIndexingManager| Separate Interfaces for ConnectivityManager and DeltaIndexingManager ]] || 2008-06 ? || ? || [[User:Daniel.stucky.empolis.com|Daniel Stucky]]
|}

Latest revision as of 02:56, 17 October 2009


Motivation for this page and usage

the current implementation for the DeltaIndexingManager has several problems or short comings which are listed under the section Ideas (under discussion). if the idea is rather large, an own page is usually better and should be created as a child to this page. it still should have an own section that at least must contain a link to the page..

The initiating authors should edit only their own sections and not those of others.

each subsection/page should state:

  • context such as: author, data, based on SVN revision
  • motivation/problem
  • a solution proposal

ideas that have been implemented are moved to their own page and referenced in Implemented Changes.

Ideas and Problems (under discussion)

DeltaIndexing reflects crawl state rather than index state

One Problem at the moment is, that because SMILA's processing of incoming Records is asynchronous, DeltaIndexing does NOT really reflect the state of a Record in the index, as there is no guarantee that a Record is indexed after it was successfully added to the Queue. This could be achieved by implementing Notifications that update the DeltaIndexing state using this information. If this is done, then the computation of DeltaIndexing-Delete has to wait for all Queue entries to pass the workflow. This is a complex process which seems to be error-prone. Is it really necessary to reflect the index state or is it enough to reflect the last crawl state ?

Extract Session Interface from DeltaIndexingManager

For a better separation of tasks and an easy handling of locks on data sources during a delta indexing run, we could introduce the following interfaces. The implementations should only be proxies using the same DeltaIndexingManager service implementation, so that a DeltaIndexingSession may internally use another service if the initial one becomes unavailable.

interface DeltaIndexingManager
{
    /**
     * Initializes a new DeltaIndexingSession if the datasource is not locked.
     */
    DeltaIndexingSession init(String dataSourceID) throws DeltaIndexingException;
 
    /**
     * Clear all data sources that are not locked.
     */
    void clear() throws DeltaIndexingException;
 
    /**
     * Clears the data source if not locked.
     */
    void clear(String dataSourceID) throws DeltaIndexingException;
 
    /**
     * Unlocks all data sources by force.
     */
    void unlockDatasources() throws DeltaIndexingException;
 
    /**
     * Checks if a data source exists.
     */
    boolean exists(String dataSourceId);
}


interface DeltaIndexingSession
{
    /**
     * Checks if the id needs to be updated.
     */
    boolean checkForUpdate(Id id, String hash) throws DeltaIndexingException;
 
    /**
     * Maks the id as visited.
     */
    void visit(Id id, String hash) throws DeltaIndexingException;
 
    /**
     * Returns an iterator over all unvistied ids of the data source
     */
    Iterator<Id> obsoleteIdIterator(String dataSourceID) throws DeltaIndexingException;
 
    /**
     * Returns an iterator over all unvistied ids of a parent id (compound objects)
     */
    Iterator<Id> obsoleteIdIterator(Id id) throws DeltaIndexingException;
 
    /**
     * Deletes the id.
     */
    void delete(Id id) throws DeltaIndexingException;
 
    /**
    * Finishes the deltaindexing run and unlocks the data source.
    */
    void finish(String dataSourceID) throws DeltaIndexingException;
}

This approach was not realized. But a sessionId was introduced to distinguish between different sessions without relying on thread ids. See https://bugs.eclipse.org/bugs/show_bug.cgi?id=279243


Discussion

modifications to the interfaces

TM 2009 10 15: i second the notion to extract a session interface. but i also would do a few renames and changes like so:

public interface IDeltaIndexingManager {
 
  /**
   * Initializes the internal state for an import of a dataSourceID and creates a session wherein it establishes a lock
   * to avoid that the same dataSourceID is initialized multiple times concurrently. It returns an object for the session
   * that a client has to use to gain access to the locked data source.
   * 
   * @param dataSourceID
   *          dataSourceID
   * 
   * @return the i delta indexing session
   * 
   * @throws DeltaIndexingException
   *           the delta indexing exception
   */
  IDeltaIndexingSession createSession(final String dataSourceID) throws DeltaIndexingException;
 
  /* methods that don't need a session */
 
  /**
   * Clears all entries of the DeltaIndexingManager including sessions.
   * 
   * @throws DeltaIndexingException
   *           the delta indexing exception
   */
  void clear() throws DeltaIndexingException;
 
  /**
   * Unlock the given data source and removes the sessions.
   * 
   * @param dataSourceID
   *          the data source id
   * @throws DeltaIndexingException
   *           the delta indexing exception
   */
  void unlockDatasource(final String dataSourceID) throws DeltaIndexingException;
 
  /**
   * Unlock all data sources and removes all sessions.
   * 
   * @throws DeltaIndexingException
   *           the delta indexing exception
   */
  void unlockDatasources() throws DeltaIndexingException;
 
  /**
   * Gets an overview what data sources are locked or unlocked.
   * 
   * @return a map containing the dataSoureId and the LockState
   */
  Map<String, LockState> getLockStates();
 
  /**
   * Checks if the entries for the given dataSourceId exist.
   * 
   * @param dataSourceId
   *          the data source id
   * 
   * @return true, if successful
   */
  boolean dataSourceExists(final String dataSourceId);
 
  /**
   * Get the number of delta indexing entries for the given dataSourceID.
   * 
   * @param dataSourceID
   *          the data source id
   * @return the number of entries
   */
  long getEntryCount(final String dataSourceID);
 
  /**
   * Get the number of delta indexing entries for all data sources.
   * 
   * @return a map of dataSoureIds and the entry counts
   */
  Map<String, Long> getEntryCounts();
 
  /**
   * An enumeration defining the lock states a data source in the DeltaIndexingManager.
   */
  public enum LockState {
    /**
     * The lock states.
     */
    LOCKED, UNLOCKED;
  }
}
 
/**
 * The Interface IDeltaIndexingSession.
 * 
 * @author tmenzel
 */
public interface IDeltaIndexingSession {
 
  /**
   * Clear all entries of the given sessionId.
   * 
   * @throws DeltaIndexingSessionException
   *           if the sessionId is invalid
   * @throws DeltaIndexingException
   *           the delta indexing exception
   */
  void clear() throws DeltaIndexingSessionException, DeltaIndexingException;
 
  /**
   * Finish this delta indexing session and remove the lock.
   * 
   * @throws DeltaIndexingSessionException
   *           if the sessionId is invalid
   * @throws DeltaIndexingException
   *           the delta indexing exception
   */
  void commit() throws DeltaIndexingSessionException, DeltaIndexingException;
 
  /**
   * Delete.
   * 
   * @param id
   *          the id
   * 
   * @throws DeltaIndexingSessionException
   *           if the sessionId is invalid
   * @throws DeltaIndexingException
   *           the delta indexing exception
   */
  void delete(final Id id) throws DeltaIndexingSessionException, DeltaIndexingException;
 
  /**
   * Deletes all untouched ids. Rather than having the controller call {@link #delete(Id)} while iterating over the
   * ids, the implementation may delete all untouched ids internally in one go, which can be more efficient.
   * 
   * @return the number of deleted ids
   * 
   * @throws DeltaIndexingSessionException
   *           the delta indexing session exception
   * @throws DeltaIndexingException
   *           the delta indexing exception
   */
  long deleteUntouchedIds() throws DeltaIndexingSessionException, DeltaIndexingException;
 
  /**
   * Returns an iterator over all untouched (obsolete) ids of this session's data source.
   * 
   * @return an iterator over the untouched ids
   * 
   * @throws DeltaIndexingSessionException
   *           if the sessionId is invalid
   * @throws DeltaIndexingException
   *           the delta indexing exception
   */
  Iterator<Id> getUntouchedIds() throws DeltaIndexingSessionException, DeltaIndexingException;
 
  /**
   * Returns an iterator over all untouched id fragments of the given id.
   * 
   * @param id
   *          the id
   * 
   * @return an iterator over the untouched ids
   * 
   * @throws DeltaIndexingSessionException
   *           if the sessionId is invalid
   * @throws DeltaIndexingException
   *           the delta indexing exception
   */
  Iterator<Id> getUntouchedIds(final Id id) throws DeltaIndexingSessionException, DeltaIndexingException;
 
  /**
   * Checks whether the hash of the given id is new or has changed (true) or not (false).
   * <p>
   * To reduce method calls, the implementation marks the entry as visited when it returns false.
   * 
   * @param id
   *          the id
   * @param hash
   *          the hash
   * 
   * @return true if the id is new or its hash has changed
   * 
   * @throws DeltaIndexingSessionException
   *           the delta indexing session exception
   * @throws DeltaIndexingException
   *           the delta indexing exception
   */
  boolean hasChanged(final Id id, final String hash) throws DeltaIndexingSessionException, DeltaIndexingException;
 
  /**
   * Rolls back all changes that were made in the current session between init() and finish(). It should be called
   * before the session is finished.
   * 
   * @throws DeltaIndexingSessionException
   *           if the sessionId is invalid
   * @throws DeltaIndexingException
   *           the delta indexing exception
   */
  void rollback() throws DeltaIndexingSessionException, DeltaIndexingException;
 
  /**
   * Creates or updates the delta indexing entry. This is THE method to make a record known to delta indexing: it
   * sets the hash and the isCompound flag and marks the id as visited.
   * 
   * @param id
   *          the id
   * @param hash
   *          the hash
   * @param isCompound
   *          boolean flag if the record identified by id is a compound record (true) or not (false)
   * 
   * @throws DeltaIndexingSessionException
   *           if the sessionId is invalid
   * @throws DeltaIndexingException
   *           the delta indexing exception
   */
  void touch(final Id id, final String hash, final boolean isCompound) throws DeltaIndexingSessionException,
    DeltaIndexingException;
 
  /**
   * A combination of {@link #hasChanged(Id, String)} and {@link #touch(Id, String, boolean)} in one step.
   * <p>
   * It performs better than calling the two methods separately, but has the drawback that the record is always
   * touched, even if an exception occurs before the record is put into the queue. On the other hand, this matters
   * little, as subsequent processing may also cause errors that are not reflected in the "touch" state.
   * 
   * @param id
   *          the id
   * @param hash
   *          the hash
   * @param isCompound
   *          boolean flag if the record identified by id is a compound record (true) or not (false)
   * 
   * @return true if the id is new or its hash has changed
   * 
   * @throws DeltaIndexingSessionException
   *           the delta indexing session exception
   * @throws DeltaIndexingException
   *           the delta indexing exception
   */
  boolean checkAndTouch(final Id id, final String hash, final boolean isCompound)
    throws DeltaIndexingSessionException, DeltaIndexingException;
 
}
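To illustrate the session workflow defined above (hasChanged()/touch() while crawling, then deleteUntouchedIds() for everything no longer on the data source), here is a minimal, hypothetical in-memory sketch. It is not the SMILA implementation: ids are plain Strings instead of Id objects, and locking, exceptions and rollback are omitted.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

/** Hypothetical in-memory sketch of the IDeltaIndexingSession workflow. */
public class InMemoryDeltaIndexingSession {

  private final Map<String, String> hashes = new HashMap<>(); // id -> last known hash
  private final Set<String> visited = new HashSet<>(); // ids touched in the current run

  /** true if the id is new or its hash differs; marks unchanged entries as visited. */
  public boolean hasChanged(final String id, final String hash) {
    if (hash.equals(hashes.get(id))) {
      visited.add(id); // unchanged: mark visited to save a separate touch() call
      return false;
    }
    return true;
  }

  /** stores the hash and marks the id as visited. */
  public void touch(final String id, final String hash) {
    hashes.put(id, hash);
    visited.add(id);
  }

  /** hasChanged() and touch() in one step, as in checkAndTouch(). */
  public boolean checkAndTouch(final String id, final String hash) {
    final boolean changed = hasChanged(id, hash);
    if (changed) {
      touch(id, hash);
    }
    return changed;
  }

  /** removes all entries not visited in this run and returns how many were removed. */
  public long deleteUntouchedIds() {
    final long before = hashes.size();
    hashes.keySet().retainAll(visited);
    return before - hashes.size();
  }

  /** finishes the current run so the next run starts with an empty visited set. */
  public void commit() {
    visited.clear();
  }
}
```

A second crawl run over a shrunken data source then reports the vanished entries via deleteUntouchedIds(), which is exactly the "delete what was not visited" semantics described in the motivation.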


== Usage of DeltaIndexingManager by CrawlerController alone ==

Here is another idea, based on the changes introduced with [[SMILA/Specifications/DeltaIndexingAndConnectivtyDiscussion09/Separate_Interfaces_for_ConnectivityManager_and_DeltaIndexingManager|Separate Interfaces for ConnectivityManager and DeltaIndexingManager]], but taking them further: not the CrawlerController but each Crawler communicates with the DeltaIndexingManager.

This is a radical change, as it also affects the Crawler interface. Crawlers would communicate directly with the DeltaIndexingManager and provide only those Records that pass delta indexing (are new or need an update). CrawlerController and Crawler could implement a consumer/producer pattern, which should improve performance: no more sending of arrays with DIInformation and retrieving the Record objects afterwards. DeltaIndexing delete information is computed in the Crawler and can be passed to the CrawlerController as regular Records (only the ID is set) with a delete flag that notifies the CrawlerController that this Record is to be deleted. This should reduce communication overhead, as the DIInformation does not have to be passed between multiple components, and the whole process can work multithreaded. Of course, this adds a lot more logic to the Crawler and demands more knowledge from a Crawler developer. It would also mean that ID and HASH are generated in the Crawler. The downside is that each Crawler has to implement the delta indexing workflow itself.

We could even move all execution logic to the Crawler; the CrawlerController would become obsolete. Crawlers would then handle everything themselves: communication with the DeltaIndexingManager, CompoundHandlers and the ConnectivityManager. I think the best performance can be achieved this way, as the setup is very simple and no data is passed between components unnecessarily. But a lot of logic would have to be re-implemented in every Crawler. I wonder if there is a chance to minimize this.
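The consumer/producer split described above can be sketched as follows. This is a hypothetical illustration, not SMILA API: the Record type, the delete flag, the END marker and the method names are all invented for the example. The crawler thread has already applied delta indexing and hands only new/changed records plus delete notifications to the controller through a blocking queue.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

/** Hypothetical sketch of a crawler (producer) feeding a controller (consumer). */
public class CrawlerQueueSketch {

  /** a minimal record: just an id and a delete flag. */
  record Record(String id, boolean delete) { }

  /** poison pill signalling the end of the crawl. */
  static final Record END = new Record("<end>", false);

  /** runs one crawler thread and returns the actions the controller derived. */
  static List<String> demo() throws InterruptedException {
    final BlockingQueue<Record> queue = new ArrayBlockingQueue<>(16);

    final Thread crawler = new Thread(() -> {
      try {
        // the crawler has already filtered via delta indexing:
        queue.put(new Record("doc1", false)); // new or changed -> add/update
        queue.put(new Record("doc7", true));  // gone from the data source -> delete
        queue.put(END);
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
      }
    });
    crawler.start();

    // the controller (consumer) only routes records; no DI calls are needed here
    final List<String> actions = new ArrayList<>();
    for (Record r = queue.take(); r != END; r = queue.take()) {
      actions.add((r.delete() ? "delete " : "upsert ") + r.id());
    }
    crawler.join();
    return actions;
  }

  public static void main(final String[] args) throws InterruptedException {
    demo().forEach(System.out::println);
  }
}
```

The bounded queue also gives the back-pressure a multithreaded setup needs: a fast crawler blocks on put() until the controller catches up.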

(an empty page exists for this already)

== Implemented Changes ==

{| border="1"
! Page !! Date !! Bug !! Author(s)
|-
| New Feature: DeltaIndexing On/Off || 2009-06-10 || bug 279242 || Daniel Stucky
|-
| Separate Interfaces for ConnectivityManager and DeltaIndexingManager || 2008-06? || ? || Daniel Stucky
|}
