Egor,

I don’t know whether your hypothesis is correct, but it sounds plausible and 
well-researched. Therefore I think you should put it into a Jira case, so the 
issue and discussion are on the record. If there is a flaw in your argument and 
the Jira case is subsequently closed, there’s no shame in that.

Julian


> On Nov 27, 2023, at 10:59 AM, Egor Ryashin <[email protected]> wrote:
> 
> Hi all,
> 
> This exception happens when the Apache Druid Avatica protobuf endpoint is used: 
> "Druid can only fetch forward. Requested offset".
> I tried to debug this and saw that any simple query fails if it triggers 
> fetching of multiple frames. I speculate that when the first ExecuteRequest 
> is sent
> 
>       msg := &message.ExecuteRequest{
>               StatementHandle:    s.handle,
>               ParameterValues:    s.parametersToTypedValues(args),
>               FirstFrameMaxSize:  s.conn.config.frameMaxSize,
>               HasParameterValues: true,
>       }
> 
> and the result set is created from the first frame that comes back
> 
>               rsets = append(rsets, &resultSet{
>                       columns: columns,
>                       done:    frame.Done,
>                       offset:  frame.Offset,
>                       data:    data,
>               })
> 
> then for the next frame a FetchRequest is sent, but it uses the same offset 0 
> (the data for that offset was already returned with the ExecuteRequest):
> 
>               res, err := r.conn.httpClient.post(context.Background(), &message.FetchRequest{
>                       ConnectionId: r.conn.connectionId,
>                       StatementId:  r.statementID,
>                       Offset:       resultSet.offset,
>                       FrameMaxSize: r.conn.config.frameMaxSize,
>               })
> 
> So, in short, I think Avatica-Go sends two requests with the same offset, 
> which makes Druid fail.
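> 
> To illustrate, here is a rough sketch of the sequence I would expect instead 
> (nextOffset is just an illustrative name of my own; frame.Offset and data are 
> the values from the snippet above, and the FetchRequest fields are the same as 
> in the current code): the fetch for the second frame should start where the 
> first frame ended, not at the offset the first frame started at.
> 
>       // Sketch only: advance past the rows already delivered in the first
>       // frame before asking the server for the next one.
>       nextOffset := frame.Offset + uint64(len(data))
> 
>       res, err := r.conn.httpClient.post(context.Background(), &message.FetchRequest{
>               ConnectionId: r.conn.connectionId,
>               StatementId:  r.statementID,
>               Offset:       nextOffset, // instead of reusing the first frame's offset
>               FrameMaxSize: r.conn.config.frameMaxSize,
>       })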
> 
> I wonder if anyone with more experience could confirm that?
> 
> The implementation I mentioned is here: 
> go/pkg/mod/github.com/rilldata/calcite-avatica-go/[email protected]/rows.go
> 
> Thanks
