Note: unlike my previous emails, this one really isn't _that_ long. It's just a lot of cut-and-paste from the emails I missed last week...
I) On the issue of changing JDBC types:
Jack Klebanoff wrote:
> I think that it is best to remain strictly compatible with JDBC. There
> always seem to be a few programs that depend on corner cases in the
> spec.
Lance J. Andersen wrote:
> You do not want to violate what the JDBC spec indicates as the expected
> columns to be returned as then your implementation is not compatible nor
> would it be compliant.
Daniel John Debrunner wrote:
> And as Lance pointed out, we need to maintain the JDBC metadata queries
> in line with the spec, so no changing from INT to SMALLINT. Changes to
> VARCHAR from CHAR I think are in line with the JDBC spec, as it defines
> 'String' as the type.
**** THEREFORE: No changing types from INT to SMALLINT, nor any other kind of type change, for JDBC clients; any such changes will happen for ODBC clients ONLY. The one exception is CHAR to VARCHAR, which is in line with the JDBC spec (it defines the type as 'String') and so can apply to both.
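To make the compliance concern concrete, here's a minimal sketch of the kind of client code that breaks if a metadata column's type changes out from under it. (The connection URL and class name are just placeholders; per the JDBC spec, column 3 of getTypeInfo is PRECISION and is defined as an int column.)

    import java.sql.*;

    public class TypeInfoCheck {
        public static void main(String[] args) throws SQLException {
            // Placeholder URL; any JDBC-compliant driver should behave the same.
            try (Connection conn =
                    DriverManager.getConnection("jdbc:derby:sampleDB")) {
                ResultSet rs = conn.getMetaData().getTypeInfo();
                ResultSetMetaData rsmd = rs.getMetaData();

                // The JDBC spec defines PRECISION (column 3 of getTypeInfo)
                // as an int column. A client that hard-codes that expectation
                // breaks if the driver quietly returns SMALLINT instead.
                if (rsmd.getColumnType(3) != Types.INTEGER) {
                    throw new SQLException(
                        "PRECISION is not INTEGER; driver is not compliant");
                }

                while (rs.next()) {
                    System.out.println(rs.getString("TYPE_NAME")
                            + " precision=" + rs.getInt("PRECISION"));
                }
            }
        }
    }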
II) On the issue of adding columns to JDBC result sets:
Jack Klebanoff wrote:
> I don't think that we should add extra columns to the JDBC metadata
> ResultSets. It may cause a problem for a few programs now and it will
> put us in a real bind later if new columns are added to getTypeInfo in
> later versions of JDBC.
Kathey Marsden wrote:
> Well, I figure as long as we have to deal with item name changes too for
> getTypeInfo, might as well make the resultset right.
**** THEREFORE: No adding extra columns to result sets for JDBC clients. This will happen ONLY for ODBC clients, and will occur alongside the column renaming that has to happen for the statements in question.
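The same concern in code form: plenty of tools sanity-check or positionally map the metadata result sets, so even an "extra" column is a visible change. A tiny hypothetical example (JDBC defines getTypeInfo as exactly 18 columns, TYPE_NAME through NUM_PREC_RADIX):

    import java.sql.*;

    public class TypeInfoShapeCheck {
        // A client that validates a driver against the spec-defined shape
        // of getTypeInfo. A driver that appends extra columns would fail
        // this check even though its data is a superset of the spec's.
        static void validateShape(Connection conn) throws SQLException {
            ResultSet rs = conn.getMetaData().getTypeInfo();
            int count = rs.getMetaData().getColumnCount();
            if (count != 18) {
                throw new SQLException("expected 18 getTypeInfo columns "
                        + "per the JDBC spec, found " + count);
            }
        }
    }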
III) On VTIs vs. Auto-generated Subqueries:
Jack Klebanoff wrote:
> Automatically generating the ODBC metadata queries from the JDBC queries
> (or generating both from a common source) is a clever idea [...]
Daniel John Debrunner wrote:
> I think VTI's are the wrong way to go. They are overkill I think for
> what is required and will increase the static code footprint.

[ snip ]

> I think that static modified queries based upon the JDBC queries are the
> way to go. Since these can be created at derby compile time the
> subsequent handling can be just like the existing JDBC meta-data
> queries, rather than creating a new mechanism.
**** THEREFORE: We won't use VTIs for this; we'll have a compile-time process that automatically generates the ODBC version of the queries, and then we'll handle the new queries in a way similar to the existing JDBC metadata queries.
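In case it helps to picture the compile-time step, here's a rough sketch of the kind of transform I have in mind. The rename pairs are real JDBC-vs-ODBC differences (ODBC 3.0's SQLGetTypeInfo uses COLUMN_SIZE and AUTO_UNIQUE_VALUE where JDBC's getTypeInfo uses PRECISION and AUTO_INCREMENT), but the generator itself is illustrative, not a description of the actual patch:

    public class OdbcQueryGenerator {
        // Column names on which the JDBC and ODBC specs disagree for
        // getTypeInfo/SQLGetTypeInfo; the generated ODBC query uses the
        // ODBC names as its result column aliases.
        private static final String[][] JDBC_TO_ODBC = {
            { "PRECISION",      "COLUMN_SIZE" },
            { "AUTO_INCREMENT", "AUTO_UNIQUE_VALUE" },
        };

        // Derive the ODBC metadata query from the JDBC query text at
        // build time; everything else about the query is left untouched.
        static String toOdbcQuery(String jdbcQuery) {
            String odbcQuery = jdbcQuery;
            for (String[] rename : JDBC_TO_ODBC) {
                odbcQuery = odbcQuery.replace("AS " + rename[0],
                                              "AS " + rename[1]);
            }
            return odbcQuery;
        }
    }

Since this runs at compile time, the generated ODBC query text can be stored and executed through the same path as the existing JDBC metadata queries, which is exactly the point of Dan's suggestion.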
IV) Miscellaneous Comments:
Jack Klebanoff wrote:
> [...] but it might not be worth the trouble. We still have to
> maintain two things: the query source and the transformation
> process.
>
> I am not convinced that maintaining separate metadata queries for JDBC
> and ODBC is worse than the alternatives.
For major metadata changes, I think you're right. Any such change will have to pass the current JDBC metadata tests as well as the ODBC ones (to be submitted with the ODBC-related patch(es)), and whoever makes the change will have to verify that both JDBC and ODBC have been updated accordingly, so we don't gain much there. However, for cases where we're fixing bugs in the metadata or otherwise altering the _value_ of a metadata column (as opposed to its type/name/existence), we _do_ save time by automatically generating the ODBC set of queries.
For example, suppose the existing JDBC queries call "getScale" and we later find out it's supposed to be "getPrecision" (note: there's a reason I'm using this example! More to come on that in a different email...): we can just make the change in the JDBC query, and it will automatically be propagated to the ODBC query at build time. We don't have to track it down and make the change twice. This is a change to the _value_ of the result set, not to the shape of the result set itself, and thus should be safe with respect to both JDBC and ODBC.
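To tie that back to the hypothetical generator sketched in section III (the table and function names below are invented for illustration; they're not Derby's actual metadata query text):

    public class ValueFixDemo {
        public static void main(String[] args) {
            // The getScale -> getPrecision style of fix is made here,
            // once, in the shared JDBC query source...
            String jdbcQuery = "SELECT TYPE_NAME, GETPRECISION(TYPEID) "
                    + "AS PRECISION FROM SYS.SOMETABLE";

            // ...and the build-time transform carries the corrected value
            // expression into the ODBC variant for free; only the column
            // alias changes:
            System.out.println(OdbcQueryGenerator.toOdbcQuery(jdbcQuery));
            // prints: SELECT TYPE_NAME, GETPRECISION(TYPEID)
            //         AS COLUMN_SIZE FROM SYS.SOMETABLE
        }
    }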
Any additional feedback/comments?

Army
