The chart arranging persistence solutions could benefit from further hammering.
First, these technologies seem out of place:
- Web 2.0
I'm not sure what technologies these terms reference:
- Managed Objects
Second, perhaps a second dimension is needed -- much of the Hibernate vs. iBatis discussion centers on this issue: whether the application is database-centric or object-centric.
When the database is a shared resource (legacy applications simultaneously access the database directly), the applications are more often than not database-centric. This database sharing introduces a lot of complexity and limitations. If the applications do any caching, it has to be write-through, and code has to manage dirty reads or provide read-through behavior. EJB2 addressed all of these requirements and is famous for bloat and sloth. Even if none of the applications cache, relying instead on the database for caching, the applications need to expect deadlocks and timeouts caused by the behavior of their peers.
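To make the write-through and read-through requirements concrete, here is a minimal sketch in Java. Everything here is hypothetical for illustration: the class names are made up, and a `Map` stands in for the shared database; a real implementation would write through to the database synchronously so peer applications never see stale data.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: a write-through cache over a shared store.
// The "store" Map stands in for the shared database.
class WriteThroughCache {
    private final Map<String, String> store;            // the shared database
    private final Map<String, String> cache = new HashMap<>();

    WriteThroughCache(Map<String, String> store) {
        this.store = store;
    }

    // Write-through: every write hits the database first, then the cache,
    // so peer applications sharing the database never read stale data.
    void put(String key, String value) {
        store.put(key, value);
        cache.put(key, value);
    }

    // Read-through: on a cache miss, fall back to the database,
    // picking up rows written directly by peer applications.
    String get(String key) {
        String v = cache.get(key);
        if (v == null) {
            v = store.get(key);
            if (v != null) {
                cache.put(key, v);
            }
        }
        return v;
    }
}
```

Note what the sketch does not solve: if a peer application *updates* a row this cache has already loaded, the cached copy is stale -- which is exactly the dirty-read problem described above.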
When the database is not shared (all accesses are done through a single application server), the applications can still be database-centric. Forces here might be a separation of powers between the Java developers and the database developers. Or, if the database's performance is a critical factor, the database schema may need to dictate the object design. (This is not to say that object-centric applications cannot perform.) Or, you might have a development team that is strong in SQL and less strong in Java.
Database-centric applications are generally developed in a database-platform-specific manner. They leverage vendor-specific syntax for stored procedures, functions, extensions, in-memory temporary tables, indexing tricks, etc.
When the database is not shared, and is managed by a single application server, there is an opportunity for object-centrism to prevail. Object-centric applications use object-relational mechanisms for persistence (or one of the few remaining object databases). They can perform well when they can leverage caching for throughput while safely managing cache coherency. Add the need for distributed caching, though, and you can end up with the onerous complexity required for distributed cache coherence.
Anyway, the question here is: Is Eclipse COSMOS a database-centric or an object-centric application?
Generally in the iBatis vs. Hibernate debate, the iBatis solution works best with database-centric applications, and Hibernate works best with object-centric applications.
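The split shows up directly in the mapping files. A minimal sketch (the `Event` class, table, and column names are made up for illustration): with iBatis you own the SQL and the framework maps result rows onto objects, while with Hibernate you map the class and the framework generates the SQL.

```xml
<!-- iBatis (database-centric): hand-written SQL; the framework maps rows.
     You are free to use vendor-specific syntax here. -->
<select id="getEvent" parameterClass="long" resultClass="Event">
  SELECT event_id AS id, msg AS message
  FROM wef_event
  WHERE event_id = #value#
</select>

<!-- Hibernate (object-centric): you map the class; the framework writes
     portable SQL, and can cache the mapped objects. -->
<class name="Event" table="wef_event">
  <id name="id" column="event_id"/>
  <property name="message" column="msg"/>
</class>
```

Neither fragment is better in the abstract; the first keeps control in the database, the second keeps control in the object model.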
<Joel> So what are we - database-centric or object-centric? I'm sure both scenarios can be argued well. Let's take a look at a specific example: WEF.
WEF events clearly come in series, and so look a lot like rows. This argues for a database-centric approach, with support for performant batch inserts, etc. However, the WEF format specifies relationships to Event Reporters and Event Sources. These Reporters and Sources are both WEF Components, which have ResourceIDs that (I believe) should follow the WSRF ResourceID contract and should correspond to a persistent entity in the 'real world'. Ideally, instances of these components should be represented by single entities within COSMOS, and would fit naturally into a more object-centric approach - with support for object caching, etc. Is it a valid assumption to say that all of the data within a WEF Source and/or Reporter should be constant across a capture session? Is it valid to keep a Reporter's ResourceID the same but change the Address?
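The "single entities within COSMOS" idea can be sketched as an identity map keyed by ResourceID. This is a hypothetical illustration, not WEF or COSMOS code: class and method names are made up, and it simply shows that two lookups with the same ResourceID yield one shared entity instance -- which also surfaces the Address question, since a later lookup with a different Address does not change the existing entity.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of a WEF Component entity with a stable identity.
class Component {
    final String resourceId;  // should follow the WSRF ResourceID contract
    String address;           // may or may not change across a capture session

    Component(String resourceId, String address) {
        this.resourceId = resourceId;
        this.address = address;
    }
}

// Identity map: one entity instance per ResourceID, so every event that
// references the same Reporter shares the same object.
class ComponentRegistry {
    private final Map<String, Component> byId = new HashMap<>();

    Component lookup(String resourceId, String address) {
        return byId.computeIfAbsent(resourceId,
                id -> new Component(id, address));
    }
}
```

An object-relational mapper's first-level cache gives you essentially this behavior for free; a row-centric batch-insert path would have to reimplement it.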
I think what we're going to find is that the best persistence approach depends on the data - so what we're really arguing for here is how to implement an exemplar. Our real task is to define our API in such a way that we preclude as little as possible.
Here's an interesting comparison of requirements and implementations from BEA: http://edocs.bea.com/kodo/docs40/full/html/jdo_overview_why.html The page includes this sentence: "JPA does not, however, support non-relational databases."