Kathey Marsden wrote:
We need users, especially those with complex or performance-sensitive queries, to try their existing applications and give us feedback. Army, can you please give an overview of the changes from a functional perspective and explain what types of usage you think could expose issues?

For "an overview of the changes from a functional perspective", users should see the problem descriptions attached to the relevant Jira issues:

https://issues.apache.org/jira/browse/DERBY-805
  --> DERBY-805_v5.html, sections I and II

https://issues.apache.org/jira/browse/DERBY-781
  --> DERBY-781_v1.html, sections I and II

As for the "types of usage", there are generally three areas that are most directly affected by the optimizer changes:

1. Queries with UNIONs in them (the more UNIONs, the more likely the query is to be affected).

2. Queries with subqueries in them, either explicitly or indirectly through views. The more deeply nested subqueries there are, the more likely the query is to be affected.

3. Queries which perform joins in which at least one of the result sets to be joined is a UNION, a subquery, or a combination of the two.
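To make those three shapes concrete, here's a minimal sketch. It uses Python's built-in sqlite3 module purely to illustrate the query shapes (the table names t1/t2/t3 are made up, and of course it's Derby's optimizer, not SQLite's, that's under discussion here; the same patterns can be run against a Derby database via JDBC):

```python
import sqlite3

# In-memory scratch database with a few hypothetical tables.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE t1 (id INTEGER, val TEXT);
    CREATE TABLE t2 (id INTEGER, val TEXT);
    CREATE TABLE t3 (id INTEGER, ref INTEGER);
    INSERT INTO t1 VALUES (1, 'a'), (2, 'b');
    INSERT INTO t2 VALUES (2, 'b'), (3, 'c');
    INSERT INTO t3 VALUES (10, 1), (11, 3);
""")

# Shape 1: a query with a UNION in it.
rows_union = cur.execute(
    "SELECT id FROM t1 UNION SELECT id FROM t2"
).fetchall()

# Shape 2: nested subqueries (a view referencing a view
# would produce the same nesting implicitly).
rows_nested = cur.execute("""
    SELECT id FROM
      (SELECT id, val FROM
        (SELECT * FROM t1) s1
       WHERE val <> 'x') s2
""").fetchall()

# Shape 3: a join where one side of the join is itself
# a UNION wrapped in a subquery.
rows_join = cur.execute("""
    SELECT u.id, t3.id
    FROM (SELECT id FROM t1 UNION SELECT id FROM t2) u
    JOIN t3 ON t3.ref = u.id
""").fetchall()
```

The more of these shapes a single query combines, and the more deeply they nest, the more work the optimizer has to do and the more likely the 10.2 changes are to alter the chosen plan.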

Thus far, all optimizer regressions have been discovered via queries posted by a single user whose application uses large, deeply nested queries involving all three of the above areas. See in particular DERBY-1205, DERBY-1633, and DERBY-1777.

That said, it's possible that other queries will also be affected, since additional optimizer-related bug fixes have been made, especially DERBY-1007 and DERBY-1357. So, as Kathey said, anyone with a query-intensive application might find it beneficial to do some testing with 10.2.

Also, I would like your opinion on the value of such testing from the user community.

Very valuable, no doubt there. If users don't test it beforehand, they run the risk of finding problems the hard way. I know it can take a lot of time and effort to test a beta candidate, so it's not too surprising that we've had little response to Derby's multiple requests for more user testing. I guess it's up to the user to decide when and how the time and effort is going to be spent: early on in beta, or later when regressions are (presumably) more critical.

Users may also want to remember the policy of "scratch your own itch" or "fry your own fish": as a developer, I tend to have more time and inclination to address issues with contributed code closer to the time I actually made the contribution. If the code sits untested for several months and a user then hits a regression at release time, what will I be doing at that point? Will I have the time and means to resolve the problem right then? Will that regression be my "itch" or my "fish"? I would of course hope the answer is an immediate "yes", but there are no guarantees in the world of open source, and a lot can happen in a couple of months.

So yes, more testing is better. Thanks to Kathey for continuing to push for more user feedback. It can only help.

Army