[ https://issues.apache.org/jira/browse/DERBY-5235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13034838#comment-13034838 ]

Rick Hillegas commented on DERBY-5235:
--------------------------------------

Thanks for the additional comments, Bryan and Kathey. Some responses follow:

> From a practical point of view, if this limit were lifted, what criteria 
> would a designer use to choose between VARCHAR and CLOB as the datatype for 
> a column?

One difference is that you can use VARCHAR in ordering contexts (ORDER BY, 
GROUP BY, >, <, =). To use CLOBs in ordering contexts, you would need to cast 
them to VARCHAR( 2147483647 ).
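A quick sketch of the difference, assuming a hypothetical table t with a VARCHAR column v and a CLOB column c (2147483647 is java.lang.Integer.MAX_VALUE):

```sql
-- A VARCHAR column works directly in ordering contexts:
SELECT * FROM t ORDER BY v;
SELECT v FROM t GROUP BY v;
SELECT * FROM t WHERE v > 'abc';

-- A CLOB column would first have to be cast to VARCHAR:
SELECT * FROM t ORDER BY CAST( c AS VARCHAR( 2147483647 ) );
```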

I suspect that we have more performance optimizations for CLOBs, but we'd have 
to run some experiments to figure that out. CLOBs may perform better if you are 
just passing them around to routines or copying them into other columns.

I don't know why someone would use LONG VARCHAR rather than 
VARCHAR( 2147483647 ).

> Does the sql spec say anything about limits on these types?

Upper limits are vendor-specific. The Standard places no upper limit on the 
length of the String types.


> What would be the soft upgrade behavior?

I think that the simplest soft upgrade behavior would be to preserve the old 
limitations. The longer lengths would only be supported after hard upgrade.


> I think there may be more complications regarding implementing this for 
> client/server than embedded. It may be less onerous now than at the time I 
> looked at it years ago, because at the time I was working with the JCC client 
> and strict adherence to DRDA. I think it will be important to keep the limits 
> the same for both client and server.

Good point. We would need to make sure that we could chunk/stream the long 
Strings as we do for CLOBs.


> How would older clients behave with larger columns?

We might be able to make this work transparently. But maybe not. I think the 
worst scenario would be that the client app would have to cast the long column 
to a CLOB when SELECTing it. If that were the case, we would want to make sure 
that we failed gracefully when trying to SELECT an uncast beefy VARCHAR; we 
don't want to raise an indecipherable protocol exception in this case.
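A sketch of the cast an older client might need in that worst-case scenario (again assuming a hypothetical table t whose column v holds a VARCHAR longer than the old limit):

```sql
-- Selecting the long column directly might fail on an old client:
SELECT v FROM t;

-- The workaround would be to cast it to a CLOB, which old clients
-- already know how to chunk/stream:
SELECT CAST( v AS CLOB ) FROM t;
```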

Going in the opposite direction: it looks like Knut has found a wrong-results 
bug with our current behavior; it seems that we truncate the data. Using 
setClob() rather than setString() might be a workaround. Again, for clients at 
an advanced rev level we would want to chunk/stream the data as we do for 
CLOBs.

We would need to test and document the behavior. A release note is probably 
needed.


> Remove the artificial limit on the length of VARCHAR values, allowing them to 
> be java.lang.Integer.MAX_VALUE long
> -----------------------------------------------------------------------------------------------------------------
>
>                 Key: DERBY-5235
>                 URL: https://issues.apache.org/jira/browse/DERBY-5235
>             Project: Derby
>          Issue Type: Improvement
>          Components: SQL
>    Affects Versions: 10.9.0.0
>            Reporter: Rick Hillegas
>
> The original Cloudscape limit for the length of VARCHAR values was 
> java.lang.Integer.MAX_VALUE. That is the limit in Cloudscape 5.1. Nothing in 
> Derby should break if we restore the original limit. The current limit is an 
> artificial bound introduced to make Derby agree with DB2. 32672 is the upper 
> bound on the length of a DB2 VARCHAR: 
> http://publib.boulder.ibm.com/infocenter/db2luw/v8/index.jsp?topic=/com.ibm.db2.udb.doc/admin/r0001029.htm

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
