Daniel John Debrunner wrote:

Rick Hillegas wrote:

I like Lance's suggestion and would like to propose it as a general
policy. I think that this will handle Army's XML work as well as the
other new 10.2 datatypes:

If you add a new datatype to a server release (e.g., BOOLEAN or XML),
then you must specify the following:

Maybe replace 'you must' with 'you can'. If someone has the itch to add
a type for embedded only, then I don't think they should be forced to
make it work with old or new clients.

Also for something like XML datatype, is there any requirement to make
it be available (in any form) with old clients? Is it not sufficient to
say if you want to use the XML data type you must use the 10.2 client.
If someone wants to do that work, that's fine, but I don't see it as a
requirement.

I'm concerned that if you can create a column, you ought to be able to poke values into it and then peek at them. In addition, some code has to go into the network layer, if only to raise exceptions when new datavalues try to leak across the wire. I suspect that detecting incompatibilities is the hard part.


1) A legacy datatype to which the new type maps when the server talks to
old clients. The legacy type is some datatype in JDBC 2.0
java.sql.Types. This is the type which old clients see in
DatabaseMetaData and ResultSetMetaData.
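The legacy mapping in (1) could be sketched as a simple lookup table keyed on java.sql.Types constants. This is only an illustration; the particular pairings below (BOOLEAN to SMALLINT, XML to CLOB) are assumptions for the sake of the example, not settled Derby choices:

```java
import java.sql.Types;
import java.util.Map;

// Hypothetical sketch: each new type is paired with the JDBC 2.0
// java.sql.Types constant that old clients see in DatabaseMetaData
// and ResultSetMetaData. The pairings here are assumptions.
public class LegacyTypeMap {
    // new JDBC type -> legacy JDBC 2.0 type reported to old clients
    static final Map<Integer, Integer> LEGACY = Map.of(
        Types.BOOLEAN, Types.SMALLINT, // assumed mapping for BOOLEAN
        Types.SQLXML,  Types.CLOB      // assumed mapping for XML
    );

    /** Type an old client sees for a column of the given new type. */
    static int legacyTypeFor(int newType) {
        // Types that predate the mapping pass through unchanged.
        return LEGACY.getOrDefault(newType, newType);
    }
}
```

Old clients would then see, say, an XML column reported as CLOB, while existing types pass through unchanged.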

Can old clients that are running as JDBC 3.0 see types from JDBC 3.0?

I'm not sure I understand the question. Are you thinking about BOOLEAN and TINYINT? The Derby network layer seems to tightly couple JDBC type with transport format. Your question makes me think of another issue I have not addressed: what happens if a 10.2 client running at JDBC 3.0 selects an XML value?

I am struggling to describe datatype behavior without a matrix of release and VM levels, which would confuse product support. Here's another attempt to summarize the behavior:

1) The new datatype has an associated legacy 2.0 datatype for use with old clients and old JDBC levels.

2) The new datatype's "server level" is the Derby release which introduces the datatype. Similarly, the new datatype's "JDBC level" is the JDBC version which introduced the type.

3) To see the new datatype, the client must run at or above the datatype's server and JDBC levels.

4) Otherwise, the client sees the legacy datatype.
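Rules (1)-(4) amount to a small decision function. Here's a hedged sketch; the method name and the way levels are encoded as integers (e.g. 102 for release 10.2, 3 for JDBC 3.0) are assumptions made up for the example:

```java
import java.sql.Types;

// Sketch of rules (1)-(4): a client sees the new type only if it runs
// at or above both the type's server level and its JDBC level;
// otherwise it sees the legacy type. Names and level encodings are
// illustrative assumptions.
public class VisibleType {
    static int visibleType(int newType, int legacyType,
                           int typeServerLevel, int typeJdbcLevel,
                           int clientServerLevel, int clientJdbcLevel) {
        boolean clientNewEnough = clientServerLevel >= typeServerLevel
                               && clientJdbcLevel  >= typeJdbcLevel;
        return clientNewEnough ? newType : legacyType;
    }

    public static void main(String[] args) {
        // Hypothetical XML type introduced at server level 102 (10.2),
        // requiring JDBC 4. A 10.2 client running at JDBC 3.0 falls
        // back to the legacy CLOB type.
        int seen = visibleType(Types.SQLXML, Types.CLOB,
                               102, 4,   // type's server and JDBC levels
                               102, 3);  // client's server and JDBC levels
        System.out.println(seen == Types.CLOB); // prints true
    }
}
```

This also answers the earlier question about a 10.2 client at JDBC 3.0 selecting an XML value: under these rules it would see the legacy type.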

I'm not sure that's simpler than a matrix, but there it is. :)

Is this just for the network server, how about embedded, e.g. running
Derby on JDK 1.3/1.4?

Thanks for raising this case. Let's keep it simple and apply the same rules. Fortunately, in the embedded case the client runs at the server's rev level, so it's just a question of JDBC level. Imagine a 10.2 embedded server running at JDBC 3.0: the customer creates BOOLEAN columns and peeks/pokes them as BOOLEAN, but creates SQLXML columns and peeks/pokes them as CLOB.

2) A pair of ResultSet.get() and PreparedStatement.set() methods which
old clients can use to access the new datavalues. These must be get()
and set() methods which appear in JDBC 2.0 ResultSet and
PreparedStatement. They should be the get() and set() methods most
natural to the legacy datatype in (1). These methods determine how the
datavalues flow across DRDA.

Just curious as to why specifying the getXXX and setXXX method is
required, doesn't it follow from the legacy JDBC type specified? Or is
there some deeper thought here I am missing? For example, in your
example, with NCLOB can the client use setClob, setString etc?
Nothing deep; I'm just being pedantic. As you note, the mapping determines the legacy datatype and datavalue and therefore the transport format. The customer should be able to use any getXXX and setXXX method that works for that transport format. We could leave this as an exercise for the reader.
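To make the "natural pair plus anything that works" idea concrete, here's an illustrative lookup of accessor pairs per legacy type. The specific pairings are assumptions for illustration; as noted above, any accessor valid for the transport format (e.g. getString on a CLOB) would also be legal:

```java
import java.sql.Types;
import java.util.Map;

// Illustrative sketch: the get/set pair "most natural" to each legacy
// type. Pairings are assumptions; clients may also use any other
// accessor that works for the same transport format.
public class NaturalAccessors {
    static final Map<Integer, String[]> PAIRS = Map.of(
        Types.CLOB,     new String[] {"getClob", "setClob"},
        Types.SMALLINT, new String[] {"getShort", "setShort"}
    );

    /** Returns {getter, setter} names for a legacy type. */
    static String[] accessorsFor(int legacyType) {
        // Fall back to the generic pair for unlisted types.
        return PAIRS.getOrDefault(legacyType,
                new String[] {"getObject", "setObject"});
    }
}
```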

Dan.



