[ https://issues.apache.org/jira/browse/CALCITE-1254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15308769#comment-15308769 ]

Josh Elser commented on CALCITE-1254:
-------------------------------------

Thanks again, Julian!

bq. Yes, I suppose that was a little contentious. I figured that if someone was 
asking for at most N rows then clearly there could be no more than N rows in 
the first frame. Given that cap, I can't imagine a scenario where an existing 
client would see a change in behavior. My vote would be to clarify the 
documentation of prepareAndExecute to state that maxRowsInFirstFrame is capped 
by maxRowCount.

Great. I can do that.
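
To make the intended semantics concrete, this is roughly the capping rule I'd 
document (just a sketch; the helper and the treatment of negative values as 
"no limit" are my assumptions, not existing Avatica API):

{code:java}
/** Sketch only: illustrates the documented capping rule, not actual Avatica code. */
public class FirstFrameCapSketch {
  /**
   * Returns the effective limit for the first frame, assuming a negative
   * value means "no limit" for either argument.
   */
  public static int firstFrameMaxSize(long maxRowCount, int maxRowsInFirstFrame) {
    if (maxRowCount < 0) {
      // No statement-level limit: the first-frame setting applies as given.
      return maxRowsInFirstFrame;
    }
    if (maxRowsInFirstFrame < 0) {
      // An "unlimited" first frame still cannot exceed the statement limit;
      // saturate rather than truncate if maxRowCount does not fit in an int.
      return maxRowCount > Integer.MAX_VALUE ? Integer.MAX_VALUE : (int) maxRowCount;
    }
    // Both limits set: the first frame is capped by the smaller of the two.
    return (int) Math.min((long) maxRowsInFirstFrame, maxRowCount);
  }
}
{code}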

bq. And by the way, don't use (int) to cast to int, use saturated cast, so that 
2^32 becomes Integer.MAX_VALUE rather than 0, and importantly -1 remains -1.

Right you are again. I'll make a pass over the codebase. I think there might 
have been some other instances of this.
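
For the record, this is the kind of cast being suggested (Guava's 
{{Ints.saturatedCast}} does the same thing; the class below is just an 
illustrative sketch):

{code:java}
/** Sketch of a saturated long-to-int cast. */
public class SaturatedCastSketch {
  public static int saturatedCast(long value) {
    if (value > Integer.MAX_VALUE) {
      return Integer.MAX_VALUE; // e.g. 1L << 32 saturates instead of wrapping to 0
    }
    if (value < Integer.MIN_VALUE) {
      return Integer.MIN_VALUE;
    }
    return (int) value; // in-range values such as -1 pass through unchanged
  }

  public static void main(String[] args) {
    System.out.println((int) (1L << 32));        // 0 with a plain (int) cast
    System.out.println(saturatedCast(1L << 32)); // 2147483647 (Integer.MAX_VALUE)
    System.out.println(saturatedCast(-1L));      // -1
  }
}
{code}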

> Support PreparedStatement.executeLargeBatch
> -------------------------------------------
>
>                 Key: CALCITE-1254
>                 URL: https://issues.apache.org/jira/browse/CALCITE-1254
>             Project: Calcite
>          Issue Type: Bug
>          Components: avatica
>            Reporter: Julian Hyde
>            Assignee: Josh Elser
>            Priority: Blocker
>             Fix For: avatica-1.8.0
>
>
> In CALCITE-1128 we added support for PreparedStatement.executeBatch. This 
> added ExecuteBatchResult with a field {{int[] updateCounts}}.
> I think that field should have been {{long[]}} instead. Elsewhere we have 
> been converting update counts from {{int}} to {{long}}, in line with changes 
> to the JDBC API.
> If changing this field from {{int[]}} to {{long[]}} would be a breaking change, 
> we should consider holding 1.8 to get this in.
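
For context on the JDBC change referenced above: JDBC 4.2 added 
{{Statement.executeLargeBatch()}}, which returns {{long[]}} rather than the 
{{int[]}} of {{executeBatch()}}. A minimal usage sketch (the connection URL and 
table are hypothetical):

{code:java}
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class LargeBatchSketch {
  public static void main(String[] args) throws Exception {
    // Hypothetical Avatica remote-driver URL; adjust for a real server.
    try (Connection conn = DriverManager.getConnection(
             "jdbc:avatica:remote:url=http://localhost:8765");
         PreparedStatement stmt = conn.prepareStatement(
             "INSERT INTO t (id, name) VALUES (?, ?)")) {
      for (int i = 0; i < 3; i++) {
        stmt.setInt(1, i);
        stmt.setString(2, "row-" + i);
        stmt.addBatch();
      }
      // JDBC 4.2: update counts come back as long[] instead of int[].
      long[] updateCounts = stmt.executeLargeBatch();
      for (long count : updateCounts) {
        System.out.println("updated: " + count);
      }
    }
  }
}
{code}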


