
Talk:EclipseLink/Development/Incubator/Extensions/DatabasePlatformPromotion


Thank you for putting this together.

While working on the Symfoware platform incubation, I wondered about the following points.

Functionality Check List

About the feature list: how about splitting it into categories, such as features that are mandatory or optional per the JPA spec, and EclipseLink-specific functionality? As EclipseLink is used in Java EE application servers, I think users might want to know which restrictions are relevant when running Java EE-compliant applications.

About "Default runs of Core SRG and LRG", as SRG is mandatory to run and LRG is not, their test status could be different. So maybe better to put them on separate lines?

By the way, the current JPA test wiki page doesn't explain how to run the SRG test set. The ant file has a 'test-srg' target, but I found no reference to it on the wiki page (EclipseLink/Test/JPA).

Are there any rules about who executes the final test run to confirm that a DatabasePlatform passes the basic test sets, or the LRGs for certification? Is it still the contributor, or someone from the EclipseLink team? I'd suggest the contributors do that and send their results to the EclipseLink team (maybe to be published online on a platform testing page with details like the date, the version used, JDK, OS, etc.?).

Maintenance

What is the procedure after a DatabasePlatform has been included in EclipseLink regarding continued regression testing and maintenance?
I suppose some test sets need to be run at certain stages:

  • Regression testing (RT) after changes in core code that could affect platforms.
  • RT before releases.
  • RT when new releases of the DB come out.

Also, will the EclipseLink developers update contributed platform classes if they make changes in core code that affect them, and if so, are the contributors required to review the changes and run the tests (RT) again?
Is it possible to provide the EclipseLink team with the database product so that the tests can be included in regular automated test runs (so contributors can be notified of issues soon after they are introduced)?

The latter would be ideal; however, I fear this will not always be possible. As long as the original contributor is able to actively maintain the platform, things are just fine. Yet, once that is no longer the case, it becomes a challenge to maintain the platform or even to figure out that it is no longer maintained. One might argue that's just the community way, and the first one who wants to use the platform and finds out it has become broken ought to fix it. However, it is not very encouraging for mere consumers to download an artefact and then discover its "archeological" nature.

Support

The word "supported" is used in several places. A supported function means to me a function works as expected (to the best knowledge of the contributor). But what could "supported by: FooCompany" mean to people? What is expected from Fujitsu if I include that in my DatabasePlatform?
I can (and am eager to) help anyone trying to run EclipseLink with Symfoware, or anyone trying to extend/improve the platform class or investigate test failures that arise after my successful runs due to introduced changes, etc. But my company won't provide 7 days a week, 24 hour support. I think that the use of that word in the template class comment needs clarification.

End Of Life

There might come a time when you'd want to prune database platforms, either because they are for a DB or DB version that is no longer available, or because there is no longer anyone in the EclipseLink community (including the original contributor) who can maintain them. You might want to include requirements that need to remain satisfied for a contributed platform not to be pruned in a future version.
A contributor might also want a platform class for a particular database version dropped, to reduce the RT load and focus on the current releases of the database product.

Comments to DatabasePlatformPromotion

Thank you very much for addressing this topic. Please allow me to share some thoughts on it.


Basic Test Suites

The proposal suggests some basic test suites that must run without failure on a database platform in order for it to qualify for EclipseLink. However, it also states that a failure may be avoided by assessing it, determining that the test cannot be made to pass because of limitations of the platform, and altering the test not to run for that platform.

While it is certainly true that there will always be tests that run on one database platform and fail on another, I sincerely believe that, to keep the platform promotion process from becoming meaningless, we have to define a core set of tests that simply must succeed on every database platform to be included in EclipseLink. Otherwise, taken to the extreme, I could provide a database platform for which all the tests of the basic test suites are documented to fail.

The core set of tests that need to succeed should probably be based on the load and store statements that are created and executed by EclipseLink (in contrast to JPQL queries). For this purpose, a minimal set of EclipseLink features has to be defined, such that one may say EclipseLink runs on a database platform if and only if EclipseLink configured to that set of features runs successfully on that platform. Then, from the load and store statements used by EclipseLink when configured to those features, the set of mandatory core tests may be deduced.
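Just to make the idea tangible, below is a rough sketch (not part of the proposal) of what such a mandatory core test could look like: it exercises only persist, find by primary key, update and remove, so that all SQL is generated by EclipseLink rather than written by hand or derived from JPQL. The Employee entity and the "core-test-pu" persistence unit are made-up names.

  import javax.persistence.Entity;
  import javax.persistence.EntityManager;
  import javax.persistence.EntityManagerFactory;
  import javax.persistence.GeneratedValue;
  import javax.persistence.Id;
  import javax.persistence.Persistence;

  public class BasicCrudCoreTest {

      // Hypothetical entity used only to exercise EclipseLink-generated SQL.
      @Entity
      public static class Employee {
          @Id
          @GeneratedValue
          private long id;
          private String name;

          public long getId() { return id; }
          public void setName(String name) { this.name = name; }
      }

      public static void main(String[] args) {
          // "core-test-pu" is a made-up persistence unit configured for the platform under test.
          EntityManagerFactory emf = Persistence.createEntityManagerFactory("core-test-pu");
          EntityManager em = emf.createEntityManager();

          em.getTransaction().begin();
          Employee emp = new Employee();
          emp.setName("Alice");
          em.persist(emp);                                       // INSERT generated by EclipseLink
          em.getTransaction().commit();

          em.getTransaction().begin();
          Employee found = em.find(Employee.class, emp.getId()); // SELECT by primary key
          found.setName("Bob");                                  // UPDATE flushed on commit
          em.getTransaction().commit();

          em.getTransaction().begin();
          em.remove(em.find(Employee.class, emp.getId()));       // DELETE
          em.getTransaction().commit();

          em.close();
          emf.close();
      }
  }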

  • The problem I see with 'mandatory tests' is that you assume the database supports the feature in the way EclipseLink drives it. The database might support the feature just fine, in a different way, but the contributor does not have the experience or resources to update core EclipseLink code to implement this for the platform, or the EclipseLink developers cannot accept such a major change for just one DB because of the time it would take to review and test that it does not break things for other DBs. If one of the tests in the 'mandatory' list is such a case, should the contribution be refused, even though its users might be fine with this restriction, or can easily work around it in their applications?
Addressing the above issue is, I believe, the art of defining the minimal set of features properly. Yet, so as not to let this discussion become too abstract, could you perhaps give an example of what you mean by "work around" in this context, so that we may clarify our views based on that example?
  • Okay. During the implementation of the Symfoware platform I ran into an issue where, for some tests, the SQL generated by EclipseLink uses the ANSI INNER JOIN syntax. Symfoware only supports the 'old' syntax. I was told that changing EclipseLink to generate the 'old' syntax for Symfoware would take an experienced EclipseLink developer a fairly large amount of time to implement correctly (I gave up after half a day). The work-around for users who run into this issue would be to override the query in orm.xml with the native equivalent using the old JOIN syntax (see the sketch below). If this occurred with a mandatory test, would my contribution have been refused?
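For illustration, the kind of override I mean could look roughly like the following. It is shown as the annotation equivalent of an orm.xml named-native-query override, with made-up entity, table and column names; the only point is that the native SQL expresses the join condition in the WHERE clause instead of using ANSI INNER JOIN ... ON.

  import javax.persistence.Column;
  import javax.persistence.Entity;
  import javax.persistence.Id;
  import javax.persistence.NamedNativeQuery;
  import javax.persistence.Table;

  // Hypothetical entity; in the real case the named query would replace the one
  // that would otherwise be executed with ANSI join syntax.
  @Entity
  @Table(name = "PURCHASE_ORDER")
  @NamedNativeQuery(
          name = "PurchaseOrder.findByCustomerName",
          query = "SELECT o.* FROM PURCHASE_ORDER o, CUSTOMER c "
                + "WHERE o.CUSTOMER_ID = c.ID AND c.NAME = ?1",   // 'old' join style
          resultClass = PurchaseOrder.class)
  public class PurchaseOrder {

      @Id
      private long id;

      @Column(name = "CUSTOMER_ID")
      private long customerId;
  }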

I don't know how to define a proper minimal set of features that will not prevent future contributions. I agree we could have some guideline about what features need to be supported before a contribution can be deemed complete enough to be useful for other users, but I don't think any particular function should be mandatory; each should be negotiable, and the whole set of supported functionality should be the basis for the decision whether or not to include a platform.

What is a Database Platform?

As can be seen from the already existing database platforms for EclipseLink, a database platform may not only be defined by the vendor and/or product name of a DBMS, but may also depend on the database software's major release. However, the functionality and behaviour of a database platform with respect to the EclipseLink test suites may also be influenced by minor releases or even applied patchsets of the respective database software. Even worse, test results may also depend on the JDBC driver in use, both on its vendor and its release.

So, if we want to avoid database platform mushrooming, from my point of view not only do the minor release of the database software and the JDBC driver used (along with its release) need to be documented together with the expected results, but some decisions also have to be made:

Would we want to say that only one database platform may be contributed per major release of a DBMS?

If so, who will feel responsible for watching out for new minor releases or important patchsets and observing the test behaviour for them?

If test results change, will the change simply be documented without further reflection, or, when indicated, are bug reports to be filed with the respective database vendor?

  • Or filed against EclipseLink, assuming the vendor is aware of the incompatibility and is telling its users to use a different approach, in which case the platform class will need to be updated.

Is the database platform to be marked as pending for the minor release/patchset in the latter case?

Do we need a vendor contact who, to some extent, feels responsible for that process, or do we trust that it would work out as an unsteered community effort?

  • I think such a requirement might prevent contributions from the community. Where does someone from the community get such a vendor contact? Would the vendor even listen if that community member is not paying for support?
Well, I am aware that establishing such a vendor contact is indeed a difficult challenge. What I actually wanted to draw attention to with my question is basically two things:
1. We have database platforms where the vendor is, to some extent, involved in the EclipseLink community (e.g. Oracle), and others where the vendor is not. This might result in different procedures and different handling of the platform's maintenance.
2. While it takes considerable effort to build and contribute a database platform, the major challenge is to keep up its maintenance. Otherwise an included platform might become useless surprisingly soon. That's why I think rules for platform maintenance ought to be considered carefully.

If we leave the maintenance of a database platform completely uncontrolled, would we then alternatively want to establish a rule that, let's say, a platform is to be pruned if its contributed documentation falls behind the vendor's most current minor release by more than some agreed number of releases?

  • I believe Oracle has multiple platforms to support added functionality; the others seem to have only one. I doubt platforms need updates with each minor version, or even with major ones. Should a platform be pruned even though it is likely still to work as-is? I was thinking platforms could be considered for pruning once the tested configuration is on a DB major version that is so old that there are no users, or that is no longer supported by the vendor. Or, to look at it from the other side, a platform is safe from pruning as long as it is clearly still in use and in working condition (as seen from bug reports and discussions on the mailing lists), with no regular reports from users that it does not work with the latest version of the DB. Then, part of the pruning process would be a (long) stage (of one minor/major version of EclipseLink, or of the actual DB) where the platform is a 'candidate for pruning', during which we ask the community whether someone would certify the platform again on the latest DB release and update the documentation/platform class.
Actually, I should have been more precise with what I was stating above: by "falls behind" I meant that the latest DB version on which the tests run with the expected results is some number of releases behind, and that on newer releases at least one test fails that succeeded before. However, it may also be an issue if it is simply unknown what results the tests produce on the current DB version, because no one in the community has checked it. I fully agree with putting an outdated (unmaintained) platform on a "red list of endangered species", though, and thus calling on the community to save the listed platform from extinction.

Is a database platform to be documented for multiple JDBC drivers (if available), or is a specific one to be picked out? How is that one chosen?

  • Good one, yes. Wouldn't it be up to the contributor to pick (I suppose we could ask for the reason to be documented if there are multiple drivers), while other contributors could add support/testing for other drivers?
Maybe a (short) phase of discussion in which interested parties in the community could voice their concerns might be helpful.
  • Yes, but again, I think it's up to the contributor. I assume a contributor is creating and contributing a platform because (s)he needs it for something, so it will be tested with the driver that was required at the time. The contributor could ask the community which driver's support is most in demand, but shouldn't that be at his/her discretion?
With respect to multiple drivers, things may become tricky if different JDBC drivers produce different test results or even require deviating behaviour from the database platform instance.
  • Would it be? Oracle seems to have different platform classes depending on the DB version; I assume it gets the version info from the driver's metadata. The metadata also contains the driver's details, so it can be queried in a similar way, and the platform class can change its behaviour accordingly (see the sketch below).
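As a rough sketch (the helper class and its use are hypothetical, but the calls are standard JDBC DatabaseMetaData methods), a platform could inspect both the database release and the driver like this:

  import java.sql.Connection;
  import java.sql.DatabaseMetaData;
  import java.sql.SQLException;

  // Illustrative helper: standard JDBC metadata calls a platform class could use
  // to adapt its behaviour to the database release and to the driver in use.
  public final class DriverInfo {

      private DriverInfo() {
      }

      public static String describe(Connection connection) throws SQLException {
          DatabaseMetaData meta = connection.getMetaData();
          return meta.getDatabaseProductName() + " "
                  + meta.getDatabaseMajorVersion() + "." + meta.getDatabaseMinorVersion()
                  + ", driver " + meta.getDriverName() + " " + meta.getDriverVersion();
      }
  }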


Technical Requirements

In order to make test results reproducible and comparable, it would probably also be necessary to describe the DBMS configuration required to achieve the documented test results (such as installed character sets, settings of system parameters, and so forth). This may also include specific client-side parameters that need to be set (e.g. as part of the connection URL) in order to connect to the database appropriately.
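To give an idea of the client-side part, the documented configuration could be captured along the following lines. Every value here, including the JDBC URL with its parameters and the platform class name, is a placeholder that would differ per database, driver and site.

  import java.util.HashMap;
  import java.util.Map;
  import javax.persistence.EntityManagerFactory;
  import javax.persistence.Persistence;

  public class PlatformTestSetup {

      // Builds the EntityManagerFactory with the documented, platform-specific settings.
      public static EntityManagerFactory createFactory() {
          Map<String, String> props = new HashMap<String, String>();
          // Placeholder driver, URL (with client-side parameters such as the character set)
          // and credentials; these would come from the platform's certification document.
          props.put("javax.persistence.jdbc.driver", "com.example.jdbc.Driver");
          props.put("javax.persistence.jdbc.url",
                    "jdbc:example://dbhost:1234/testdb;charSet=UTF-8");
          props.put("javax.persistence.jdbc.user", "scott");
          props.put("javax.persistence.jdbc.password", "tiger");
          // Pin the EclipseLink platform class under test (placeholder class name).
          props.put("eclipselink.target-database",
                    "org.example.persistence.platform.database.ExamplePlatform");
          return Persistence.createEntityManagerFactory("platform-test-pu", props);
      }
  }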

Additional Comments

Some of these overlap with the comments above, but I am noting them here anyway to show that they are shared by several people.

1- How to start: the links to procedures and downloads, and how to set up the environment.
I think a link on the Incubator page to where this information can be found would help.

2- When a test fails, you suggest that the test itself may be modified or bypassed.
Should there be any limit to how far a test can be changed?
If the test procedure was changed and the execution succeeds, will the status be "passed" or "passed with limitations"?

3- Is there any concern about performance?

4- Will someone else, besides the incubator, check the test results?
Is that necessary?

5- When should the test suite be run again?
For major releases, for instance?
Who will be responsible for that?
If a test suite is not executed by a deadline, should the platform be removed from EclipseLink?
