> 1) Changes that could be made to our metadata.properties that would make
> it continue to be JDBC compliant but would satisfy the ODBC
> requirements. [ snip ] For these I think we change the metadata.properties
> as much as possible to conform to both.
I'll plan to do that. For example, if JDBC says a column is INT and ODBC says it should be SMALLINT, I'll change the metadata function in question to return SMALLINT for _BOTH_ ODBC and JDBC clients, since doing so will 1) satisfy the ODBC requirement and 2) still be compatible with JDBC apps, since a call to "ResultSet.getInt()" on a SMALLINT column will still return the correct value.
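To make the compatibility claim concrete, here's a minimal sketch (plain Java, no database required) of why the change is safe: a SMALLINT maps to Java's 16-bit short, and every short value widens losslessly to int, so a JDBC app calling ResultSet.getInt() on the column still gets the correct value.

```java
// Sketch: why reporting SMALLINT instead of INT stays JDBC-safe.
// Any 16-bit SMALLINT value fits in a 32-bit int, so the implicit
// widening that getInt() performs can never lose information.
public class SmallIntWidening {
    public static void main(String[] args) {
        short smallintValue = Short.MAX_VALUE; // largest SMALLINT: 32767
        int widened = smallintValue;           // implicit, lossless widening
        System.out.println(widened == 32767);  // true
    }
}
```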
> 3) Extra columns in ODBC.
> For getProcedureColumns, the spec explicitly says it's ok to have extra
> columns. I wonder if that would be ok for other methods too.
It turns out that, aside from "getProcedureColumns", the only other metadata function for which ODBC specifies columns that JDBC does not have is "getTypeInfo". That said, do people think it's okay to add an extra column to the result set of this function, or not? The Java spec doesn't explicitly say that it's okay, so it's not as clean as it is with getProcedureColumns...
That's one option (add the columns to both the JDBC and ODBC metadata result sets). The other is to use whatever means are deemed best (by the Derby community) to resolve the following issue:
> 2) Column Name changes. e.g JDBC returns RADIX for getProcedureColumns
> whereas ODBC uses NUM_PREC_RADIX and rearranging of columns. For these
> we could either make a VTI or perhaps there is some more direct internal
> way to hack at the metadata. [ snip ]
At this point, it seems like there are two possibilities for handling this. It's clear that we do NOT want to have 2 sets of manually maintained metadata functions, one for JDBC and another, _slightly_ different one for ODBC. That would lead to a maintenance headache if/when metadata processing changes in the future. That said, we could either:
1) Use VTIs (Virtual Table Interfaces), which are internal to the Derby engine and allow representation of ResultSets as virtual tables that can then be used in other table operations.
With this approach, we would execute the regular, JDBC version of the metadata functions, and then we would take the result set and "massage" it into a virtual table that has ODBC-compliant column names. If we decide to go this route for extra ODBC columns, we would also use the VTI to add those extra columns. Then we would just execute a statement like "SELECT * FROM ODBCColumnsVTI" and return the result set to the client.
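As a rough illustration of the "massaging" step (this is NOT Derby's actual VTI API; the class and method names here are purely hypothetical), the renaming part of such a VTI boils down to a JDBC-name-to-ODBC-name mapping consulted when the virtual table reports its column names:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch of the renaming logic an ODBC metadata VTI would
// apply; a real Derby VTI would wrap the JDBC ResultSet itself.
public class OdbcColumnRename {
    // JDBC column name -> ODBC column name (getProcedureColumns case)
    private static final Map<String, String> RENAMES = new LinkedHashMap<>();
    static {
        RENAMES.put("RADIX", "NUM_PREC_RADIX");
    }

    // Columns without an ODBC-specific name pass through unchanged.
    static String odbcName(String jdbcName) {
        return RENAMES.getOrDefault(jdbcName, jdbcName);
    }

    public static void main(String[] args) {
        System.out.println(odbcName("RADIX"));     // NUM_PREC_RADIX
        System.out.println(odbcName("PRECISION")); // PRECISION
    }
}
```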
Upsides: 1) Can base it on existing VTI structure in the engine, which makes for quicker development (I already have a VTI working for getProcedureColumns, which is probably the most complicated of the metadata functions, so it's a good proof-of-concept); 2) upgrade/downgrade, when it is implemented, should be easy: just drop the statements in metadata.properties and re-load them from the metadata.properties file of the target version of Derby.
Downside: We're doing all the work in Java, when SQL and the Derby engine have the ability to do it for us.
2) Do some "under-the-covers" work to automatically generate ODBC-compliant SQL statements based on the existing JDBC statements, write the new queries out to file, and use the statements in that file for ODBC clients. This statement generation work could be done either at build time (via some kind of Ant process) or else at database creation time. The end result would in fact be two copies of the metadata functions, but only ONE of them would be manually maintained; the other would be generated automatically based on the first. For example, if we had some JDBC query "SELECT A FROM MYTABLE" (I'm simplifying, obviously ;) and the ODBC version required the column to be called "B" instead of "A", we would automatically generate a new query "SELECT VT.A AS B FROM (SELECT A FROM MYTABLE) VT".
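The generation step for that simplified example could be sketched like this (hypothetical helper names; a real implementation would parse the statements from metadata.properties rather than take strings directly):

```java
// Hypothetical sketch of the build-time rewrite: wrap the JDBC query in
// a derived table and alias the column that ODBC names differently.
public class OdbcQueryGen {
    static String wrapRename(String jdbcQuery, String oldCol, String newCol) {
        return "SELECT VT." + oldCol + " AS " + newCol
             + " FROM (" + jdbcQuery + ") VT";
    }

    public static void main(String[] args) {
        // Prints: SELECT VT.A AS B FROM (SELECT A FROM MYTABLE) VT
        System.out.println(wrapRename("SELECT A FROM MYTABLE", "A", "B"));
    }
}
```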
Upside: 1) Exploits functionality that already exists in the Derby engine--namely, uses the SQL engine to do the ODBC column renaming for us.
Downsides: 1) Seems less intuitive than using VTIs, and requires further digging into the code to figure out just how it would work; 2) upgrade/downgrade becomes more tricky, since we'd have to figure out how to manage the existing metadata.properties file alongside the newly generated one.
Anyone have any preferences/feedback/knowledge to throw in?
Thanks! Army
