Hi Sudheesh,

I've filed a JIRA issue for this (DRILL-4912
<https://issues.apache.org/jira/browse/DRILL-4912>) and linked it to
DRILL-1248 <https://issues.apache.org/jira/browse/DRILL-1248>.

Thanks,
Kathir



On Mon, Sep 26, 2016 at 4:49 PM, Sudheesh Katkam <[email protected]>
wrote:

> Hi Kathir,
>
> I tried simple filter conditions with aliases.
>
> This query did not return any result:
> select city[0] as cityalias from dfs.tmp.`data.json` where cityalias = 1;
>
> But, this query works fine:
> select city[0] as cityalias from dfs.tmp.`data.json` where city[0] = 1;
>
> So I suppose aliases are not supported in join or filter conditions. There
> is an enhancement request for aliases in group by conditions [1]; please
> open an enhancement ticket for this issue and link it to [1].
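>
> If you need to filter on the derived value in the meantime, a possible
> workaround (untested sketch) is to wrap the projection in a subquery, so
> the outer filter references an ordinary column rather than the alias:
>
> select t.cityalias
> from (select city[0] as cityalias from dfs.tmp.`data.json`) t
> where t.cityalias = 1;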
>
> Thank you,
> Sudheesh
>
> [1] https://issues.apache.org/jira/browse/DRILL-1248
>
> > On Sep 21, 2016, at 2:24 PM, Kathiresan S <[email protected]>
> wrote:
> >
> > Hi Sudheesh,
> >
> > There is another related issue around this.
> >
> > For the same data I've used for DRILL-4890
> > <https://issues.apache.org/jira/browse/DRILL-4890>, the query below doesn't
> > return any result (it should return one row):
> >
> > select city[0] as cityalias from dfs.tmp.`data.json` a join (select id as
> > idalias from dfs.tmp.`cities.json`) b on a.cityalias = b.idalias
> >
> > However, the query below works fine:
> >
> > select city[0] as cityalias from dfs.tmp.`data.json` a join (select id as
> > idalias from dfs.tmp.`cities.json`) b on a.city[0] = b.idalias
> >
> > Using an alias for city[0] in the join condition makes it return no
> > result.
> >
> > Is this a known issue (is there already a JIRA issue tracking it), or
> > should a separate JIRA issue be filed for this?
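> >
> > Pushing the aliased projection into a derived table, so the join
> > condition references an ordinary column, might be a workaround (untested
> > sketch; it assumes the problem is only the direct alias reference):
> >
> > select t.cityalias
> > from (select city[0] as cityalias from dfs.tmp.`data.json`) t
> > join (select id as idalias from dfs.tmp.`cities.json`) b
> > on t.cityalias = b.idalias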
> >
> > *Files used for testing:*
> >
> > *Json file 1: data.json*
> >
> > { "name": "Jim","city" : [1,2]}
> >
> > *Json file 2: cities.json*
> >
> > {id:1,name:"Sendurai"}
> > {id:2,name:"NYC"}
> >
> > Thanks,
> > Kathir
> >
> > On Wed, Sep 14, 2016 at 8:23 AM, Kathiresan S <
> [email protected]>
> > wrote:
> >
> >> Hi Sudheesh,
> >>
> >> I've filed a JIRA for this:
> >>
> >> https://issues.apache.org/jira/browse/DRILL-4890
> >>
> >> Thanks,
> >> Kathir
> >>
> >> On Wed, Sep 14, 2016 at 8:09 AM, Kathiresan S <
> >> [email protected]> wrote:
> >>
> >>> Hi Sudheesh,
> >>>
> >>> Thanks for checking this out.
> >>> I get the same error you get when I run a Drillbit in Eclipse and run
> >>> the same query from the Web UI against my local instance, and on top of
> >>> that error I also get the "QueryDataBatch was released twice" error.
> >>>
> >>> But in drillbit.log on the cluster node where this failed, I don't see
> >>> the IndexOutOfBoundsException. Somehow that exception is suppressed and
> >>> only the QueryDataBatch error is logged. That's a separate issue,
> >>> though.
> >>>
> >>> I did run it from the Web UI and it stays in the RUNNING state forever
> >>> (I started one yesterday and left the tab open; it is still RUNNING).
> >>>
> >>> Sure, I'll file a JIRA and will provide the details here.
> >>>
> >>> Thanks again!
> >>>
> >>> Regards,
> >>> Kathir
> >>>
> >>>
> >>> On Tue, Sep 13, 2016 at 8:17 PM, Sudheesh Katkam <[email protected]
> >
> >>> wrote:
> >>>
> >>>> Hi Kathir,
> >>>>
> >>>> I tried the same query in embedded mode, and I got a different error.
> >>>>
> >>>> java.lang.IndexOutOfBoundsException: index: 0, length: 8 (expected: range(0, 0))
> >>>>        at io.netty.buffer.DrillBuf.checkIndexD(DrillBuf.java:123)
> >>>>        at io.netty.buffer.DrillBuf.chk(DrillBuf.java:147)
> >>>>        at io.netty.buffer.DrillBuf.getLong(DrillBuf.java:493)
> >>>>        at org.apache.drill.exec.vector.BigIntVector$Accessor.get(BigIntVector.java:353)
> >>>>        at org.apache.drill.exec.vector.BigIntVector$Accessor.getObject(BigIntVector.java:359)
> >>>>        at org.apache.drill.exec.vector.RepeatedBigIntVector$Accessor.getObject(RepeatedBigIntVector.java:297)
> >>>>        at org.apache.drill.exec.vector.RepeatedBigIntVector$Accessor.getObject(RepeatedBigIntVector.java:288)
> >>>>        at org.apache.drill.exec.vector.accessor.GenericAccessor.getObject(GenericAccessor.java:44)
> >>>>        at org.apache.drill.exec.vector.accessor.BoundCheckingAccessor.getObject(BoundCheckingAccessor.java:148)
> >>>>        at org.apache.drill.jdbc.impl.TypeConvertingSqlAccessor.getObject(TypeConvertingSqlAccessor.java:795)
> >>>>        at org.apache.drill.jdbc.impl.AvaticaDrillSqlAccessor.getObject(AvaticaDrillSqlAccessor.java:179)
> >>>>        ...
> >>>>
> >>>> In this case, the Java client library is not able to consume the
> results
> >>>> sent from the server, and the query was CANCELLED (as seen in the
> query
> >>>> profile, on the web UI). Are you seeing the same?
> >>>>
> >>>> I am not aware of any workarounds; this seems like a bug to me. Can
> you
> >>>> open a ticket <https://issues.apache.org/jira/browse/DRILL>?
> >>>>
> >>>> Thank you,
> >>>> Sudheesh
> >>>>
> >>>>> On Sep 13, 2016, at 7:10 AM, Kathiresan S <
> >>>> [email protected]> wrote:
> >>>>>
> >>>>> Hi,
> >>>>>
> >>>>> Some additional info on this: the array column ('city' in the example)
> >>>>> is what triggers the issue.
> >>>>>
> >>>>> 1. When I select just the first element of the array column, the
> >>>>> query works fine:
> >>>>>
> >>>>> select a.name,a.city[0],b.name from dfs.tmp.`data.json` a right join
> >>>>> dfs.tmp.`cities.json` b on a.city[0]=b.id
> >>>>>
> >>>>> Result
> >>>>> Jim 1 Sendurai
> >>>>> null null NYC
> >>>>>
> >>>>>
> >>>>> 2. When I do a repeated_count on the array column, it returns -2 on
> >>>>> the second row:
> >>>>>
> >>>>> select a.name,repeated_count(a.city),b.name from
> dfs.tmp.`data.json` a
> >>>>> right join dfs.tmp.`cities.json` b on a.city[0]=b.id
> >>>>>
> >>>>> Result
> >>>>> Jim 2 Sendurai
> >>>>> null -2 NYC
> >>>>>
> >>>>> Any idea or workaround for this issue would be highly appreciated.
> >>>>>
> >>>>> Thanks,
> >>>>> Kathir
> >>>>>
> >>>>> On Sat, Sep 10, 2016 at 9:56 PM, Kathiresan S <
> >>>> [email protected]>
> >>>>> wrote:
> >>>>>
> >>>>>> Hi,
> >>>>>>
> >>>>>> A query with a right outer join fails while inner and left outer
> >>>>>> joins work on the same data. I've reproduced the issue with the
> >>>>>> simple data below; it happens in both 1.6.0 and 1.8.0.
> >>>>>>
> >>>>>> *Json file 1: data.json*
> >>>>>>
> >>>>>> { "name": "Jim","city" : [1,2]}
> >>>>>>
> >>>>>> *Json file 2: cities.json*
> >>>>>>
> >>>>>> {id:1,name:"Sendurai"}
> >>>>>> {id:2,name:"NYC"}
> >>>>>>
> >>>>>> *Queries that work:*
> >>>>>> 1.  select a.name,a.city,b.id,b.name from dfs.tmp.`data.json` a
> left
> >>>>>> outer join dfs.tmp.`cities.json` b on a.city[0]=b.id
> >>>>>>
> >>>>>> 2. select a.name,a.city,b.id,b.name from dfs.tmp.`data.json` a join
> >>>>>> dfs.tmp.`cities.json` b on a.city[0]=b.id
> >>>>>>
> >>>>>> *Query that fails:*
> >>>>>>
> >>>>>> select a.name,a.city,b.id,b.name from dfs.tmp.`data.json` a right
> >>>> outer
> >>>>>> join dfs.tmp.`cities.json` b on a.city[0]=b.id
> >>>>>>
> >>>>>> *On the server side, I see the error trace below:*
> >>>>>>
> >>>>>> java.lang.IllegalStateException: QueryDataBatch was released twice.
> >>>>>>       at org.apache.drill.exec.rpc.user.QueryDataBatch.release(QueryDataBatch.java:56) [drill-java-exec-1.6.0.jar:1.6.0]
> >>>>>>       at org.apache.drill.exec.rpc.user.QueryResultHandler.batchArrived(QueryResultHandler.java:167) [drill-java-exec-1.6.0.jar:1.6.0]
> >>>>>>       at org.apache.drill.exec.rpc.user.UserClient.handleReponse(UserClient.java:110) ~[drill-java-exec-1.6.0.jar:1.6.0]
> >>>>>>       at org.apache.drill.exec.rpc.BasicClientWithConnection.handle(BasicClientWithConnection.java:46) ~[drill-rpc-1.6.0.jar:1.6.0]
> >>>>>>       at org.apache.drill.exec.rpc.BasicClientWithConnection.handle(BasicClientWithConnection.java:31) ~[drill-rpc-1.6.0.jar:1.6.0]
> >>>>>>       at org.apache.drill.exec.rpc.RpcBus.handle(RpcBus.java:67) ~[drill-rpc-1.6.0.jar:1.6.0]
> >>>>>>       at org.apache.drill.exec.rpc.RpcBus$RequestEvent.run(RpcBus.java:374) ~[drill-rpc-1.6.0.jar:1.6.0]
> >>>>>>       at org.apache.drill.common.SerializedExecutor$RunnableProcessor.run(SerializedExecutor.java:89) [drill-rpc-1.6.0.jar:1.6.0]
> >>>>>>       at org.apache.drill.exec.rpc.RpcBus$SameExecutor.execute(RpcBus.java:252) [drill-rpc-1.6.0.jar:1.6.0]
> >>>>>>       at org.apache.drill.common.SerializedExecutor.execute(SerializedExecutor.java:123) [drill-rpc-1.6.0.jar:1.6.0]
> >>>>>>       at org.apache.drill.exec.rpc.RpcBus$InboundHandler.decode(RpcBus.java:285) [drill-rpc-1.6.0.jar:1.6.0]
> >>>>>>       at org.apache.drill.exec.rpc.RpcBus$InboundHandler.decode(RpcBus.java:257) [drill-rpc-1.6.0.jar:1.6.0]
> >>>>>>       at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:89) [netty-codec-4.0.27.Final.jar:4.0.27.Final]
> >>>>>>       at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339) [netty-transport-4.0.27.Final.jar:4.0.27.Final]
> >>>>>>       at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324) [netty-transport-4.0.27.Final.jar:4.0.27.Final]
> >>>>>>       at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:254) [netty-handler-4.0.27.Final.jar:4.0.27.Final]
> >>>>>>       at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339) [netty-transport-4.0.27.Final.jar:4.0.27.Final]
> >>>>>>       at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324) [netty-transport-4.0.27.Final.jar:4.0.27.Final]
> >>>>>>       at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) [netty-codec-4.0.27.Final.jar:4.0.27.Final]
> >>>>>>       at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339) [netty-transport-4.0.27.Final.jar:4.0.27.Final]
> >>>>>>       at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324) [netty-transport-4.0.27.Final.jar:4.0.27.Final]
> >>>>>>       at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:242) [netty-codec-4.0.27.Final.jar:4.0.27.Final]
> >>>>>>       at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339) [netty-transport-4.0.27.Final.jar:4.0.27.Final]
> >>>>>>       at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324) [netty-transport-4.0.27.Final.jar:4.0.27.Final]
> >>>>>>       at io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86) [netty-transport-4.0.27.Final.jar:4.0.27.Final]
> >>>>>>       at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339) [netty-transport-4.0.27.Final.jar:4.0.27.Final]
> >>>>>>       at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324) [netty-transport-4.0.27.Final.jar:4.0.27.Final]
> >>>>>>       at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:847) [netty-transport-4.0.27.Final.jar:4.0.27.Final]
> >>>>>>
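> >>>>>> For what it's worth, the logically equivalent rewrite as a left outer
> >>>>>> join with the table order swapped (untested sketch; the equivalence
> >>>>>> assumes no additional predicates) might behave differently:
> >>>>>>
> >>>>>> select a.name,a.city,b.id,b.name from dfs.tmp.`cities.json` b left
> >>>>>> outer join dfs.tmp.`data.json` a on a.city[0]=b.id
> >>>>>>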
> >>>>>> Thanks,
> >>>>>> Kathir
> >>>>>>
> >>>>
> >>>>
> >>>
> >>
>
>
