Thanks Tugdual. I'm not sure that is the case; the sharding status below shows
both the database and the collection sharded.

Sharding status:

        {  "_id" : "FARM",  "partitioned" : true,  "primary" : "Shard_A" }
                FARM.DAILY
                        shard key: { "_id" : "hashed" }
                        chunks:
                                Shard_A 2
                                Shard_B 2
                        { "_id" : { "$minKey" : 1 } } -->> { "_id" : NumberLong("-4611686018427387902") } on : Shard_A Timestamp(2, 2)
                        { "_id" : NumberLong("-4611686018427387902") } -->> { "_id" : NumberLong(0) } on : Shard_A Timestamp(2, 3)
                        { "_id" : NumberLong(0) } -->> { "_id" : NumberLong("4611686018427387902") } on : Shard_B Timestamp(2, 4)
                        { "_id" : NumberLong("4611686018427387902") } -->> { "_id" : { "$maxKey" : 1 } } on : Shard_B Timestamp(2, 5)
        {  "_id" : "zips",  "partitioned" : false,  "primary" : "Shard_B" }
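
As an aside, those boundary values are what mongos produces when it pre-splits a
hashed shard key into four initial chunks (two per shard here). Below is a small
runnable sketch of that split-point arithmetic; it is my reading of the
behavior, not MongoDB's actual code, and `hashedSplitPoints` is a made-up name:

```javascript
// Sketch: derive initial chunk split points for a hashed shard key.
// Hashed key values are signed 64-bit longs, and the split points are
// spread symmetrically around 0 using truncating integer arithmetic,
// which is why the boundaries end in ...902 instead of a power of two.
const LLONG_MAX = 2n ** 63n - 1n;

function hashedSplitPoints(numChunks) {
  // integer spacing between consecutive split points
  const intervalSize = (LLONG_MAX / BigInt(numChunks)) * 2n;
  const points = [];
  let current = 0n;
  if (numChunks % 2 === 0) {
    points.push(0n);            // even chunk count: 0 is itself a boundary
    current += intervalSize;
  } else {
    current += intervalSize / 2n;
  }
  for (let i = 0; i < Math.floor((numChunks - 1) / 2); i++) {
    points.push(current);       // mirror each point around 0
    points.push(-current);
    current += intervalSize;
  }
  return points.sort((a, b) => (a < b ? -1 : a > b ? 1 : 0));
}

console.log(hashedSplitPoints(4).join(", "));
// -4611686018427387902, 0, 4611686018427387902
```

Two chunks then land below 0 (Shard_A) and two at or above 0 (Shard_B), which
matches the chunk distribution in the status output above.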

On Wed, Aug 26, 2015 at 11:33 PM, Tugdual Grall <[email protected]> wrote:

> If you are not sure about the steps look at this tutorial:
>
> https://docs.mongodb.org/manual/tutorial/deploy-shard-cluster/#enable-sharding-for-a-database
>
> So you have to:
>  - enable sharding at the database level
>  - then enable sharding for each collection that you want to shard
>
> Based on Kamesh's comment (which I have not tested), it looks like you must
> enable sharding for all databases & collections (solved by
> https://issues.apache.org/jira/browse/DRILL-1752 )
>
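
(Inline, for concreteness: the two steps above in mongo shell form, run against
mongos. The names below just mirror the sh.status() output from this thread;
substitute your own database and collection.)

```javascript
// Run in the mongo shell connected to mongos.
sh.enableSharding("FARM")                              // 1. the database level
sh.shardCollection("FARM.DAILY", { "_id": "hashed" })  // 2. each collection to shard
```
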
> On Wed, Aug 26, 2015 at 1:26 PM, Kamesh <[email protected]> wrote:
>
> > You have to enable sharding on a per-collection basis.
> >
> > On Wed, Aug 26, 2015 at 2:39 PM, Xiao Yang <[email protected]> wrote:
> >
> > > Thanks Kamesh. To my understanding, if a database is sharded, then the
> > > collection is also sharded. The data of the collections within that
> > > database is spread across the two shards.
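
(Inline note: enabling sharding on a database only makes its collections
eligible; each collection stays unsharded, on the database's primary shard,
until sh.shardCollection() is run for it. One quick check from the mongo shell,
with an illustrative collection name:)

```javascript
use FARM
db.DAILY.stats().sharded  // true only if sh.shardCollection() was run for it
```
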
> > >
> > > On Wed, Aug 26, 2015 at 6:00 PM, Kamesh <[email protected]> wrote:
> > >
> > > > Hi,
> > > > ... 4 common frames omitted
> > > > Caused by: java.lang.IllegalArgumentException: Incoming endpoints 1 is
> > > > greater than number of chunks 0
> > > >
> > > > The exception is the same as the earlier one. Are you sure both the
> > > > database and the collection that you are using in the query are
> > > > sharded? Or is it an old log?
> > > >
> > > > > On Wed, Aug 26, 2015 at 1:07 PM, Xiao Yang <[email protected]> wrote:
> > > >
> > > > > Thanks Kamesh.
> > > > >
> > > > > 2015-08-26 15:04:02,483 [2a22b73c-e452-2ca1-a468-00f2dea3436b:foreman] ERROR o.a.drill.exec.work.foreman.Foreman - SYSTEM ERROR: IllegalArgumentException: Incoming endpoints 1 is greater than number of chunks 0
> > > > >
> > > > > [Error Id: 4d8b145f-08bc-43a1-8156-b60b7c91fbc3 on cluster-server:31010]
> > > > > org.apache.drill.common.exceptions.UserException: SYSTEM ERROR: IllegalArgumentException: Incoming endpoints 1 is greater than number of chunks 0
> > > > >
> > > > > [Error Id: 4d8b145f-08bc-43a1-8156-b60b7c91fbc3 on cluster-server:31010]
> > > > > at org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:523) ~[drill-common-1.1.0.jar:1.1.0]
> > > > > at org.apache.drill.exec.work.foreman.Foreman$ForemanResult.close(Foreman.java:737) [drill-java-exec-1.1.0.jar:1.1.0]
> > > > > at org.apache.drill.exec.work.foreman.Foreman$StateSwitch.processEvent(Foreman.java:839) [drill-java-exec-1.1.0.jar:1.1.0]
> > > > > at org.apache.drill.exec.work.foreman.Foreman$StateSwitch.processEvent(Foreman.java:781) [drill-java-exec-1.1.0.jar:1.1.0]
> > > > > at org.apache.drill.common.EventProcessor.sendEvent(EventProcessor.java:73) [drill-common-1.1.0.jar:1.1.0]
> > > > > at org.apache.drill.exec.work.foreman.Foreman$StateSwitch.moveToState(Foreman.java:783) [drill-java-exec-1.1.0.jar:1.1.0]
> > > > > at org.apache.drill.exec.work.foreman.Foreman.moveToState(Foreman.java:892) [drill-java-exec-1.1.0.jar:1.1.0]
> > > > > at org.apache.drill.exec.work.foreman.Foreman.run(Foreman.java:253) [drill-java-exec-1.1.0.jar:1.1.0]
> > > > > at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) [na:1.7.0_79]
> > > > > at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) [na:1.7.0_79]
> > > > > at java.lang.Thread.run(Thread.java:745) [na:1.7.0_79]
> > > > > Caused by: org.apache.drill.exec.work.foreman.ForemanException: Unexpected exception during fragment initialization: Incoming endpoints 1 is greater than number of chunks 0
> > > > > ... 4 common frames omitted
> > > > > Caused by: java.lang.IllegalArgumentException: Incoming endpoints 1 is greater than number of chunks 0
> > > > > at com.google.common.base.Preconditions.checkArgument(Preconditions.java:92) ~[guava-14.0.1.jar:na]
> > > > > at org.apache.drill.exec.store.mongo.MongoGroupScan.applyAssignments(MongoGroupScan.java:352) ~[drill-mongo-storage-1.1.0.jar:1.1.0]
> > > > > at org.apache.drill.exec.planner.fragment.Wrapper$AssignEndpointsToScanAndStore.visitGroupScan(Wrapper.java:116) ~[drill-java-exec-1.1.0.jar:1.1.0]
> > > > > at org.apache.drill.exec.planner.fragment.Wrapper$AssignEndpointsToScanAndStore.visitGroupScan(Wrapper.java:103) ~[drill-java-exec-1.1.0.jar:1.1.0]
> > > > > at org.apache.drill.exec.physical.base.AbstractGroupScan.accept(AbstractGroupScan.java:60) ~[drill-java-exec-1.1.0.jar:1.1.0]
> > > > > at org.apache.drill.exec.physical.base.AbstractPhysicalVisitor.visitChildren(AbstractPhysicalVisitor.java:138) ~[drill-java-exec-1.1.0.jar:1.1.0]
> > > > > at org.apache.drill.exec.planner.fragment.Wrapper$AssignEndpointsToScanAndStore.visitOp(Wrapper.java:134) ~[drill-java-exec-1.1.0.jar:1.1.0]
> > > > > at org.apache.drill.exec.planner.fragment.Wrapper$AssignEndpointsToScanAndStore.visitOp(Wrapper.java:103) ~[drill-java-exec-1.1.0.jar:1.1.0]
> > > > > at org.apache.drill.exec.physical.base.AbstractPhysicalVisitor.visitLimit(AbstractPhysicalVisitor.java:92) ~[drill-java-exec-1.1.0.jar:1.1.0]
> > > > > at org.apache.drill.exec.physical.config.Limit.accept(Limit.java:57) ~[drill-java-exec-1.1.0.jar:1.1.0]
> > > > > at org.apache.drill.exec.physical.base.AbstractPhysicalVisitor.visitChildren(AbstractPhysicalVisitor.java:138) ~[drill-java-exec-1.1.0.jar:1.1.0]
> > > > > at org.apache.drill.exec.planner.fragment.Wrapper$AssignEndpointsToScanAndStore.visitOp(Wrapper.java:134) ~[drill-java-exec-1.1.0.jar:1.1.0]
> > > > > at org.apache.drill.exec.planner.fragment.Wrapper$AssignEndpointsToScanAndStore.visitOp(Wrapper.java:103) ~[drill-java-exec-1.1.0.jar:1.1.0]
> > > > > at org.apache.drill.exec.physical.config.SelectionVectorRemover.accept(SelectionVectorRemover.java:42) ~[drill-java-exec-1.1.0.jar:1.1.0]
> > > > > at org.apache.drill.exec.physical.base.AbstractPhysicalVisitor.visitChildren(AbstractPhysicalVisitor.java:138) ~[drill-java-exec-1.1.0.jar:1.1.0]
> > > > > at org.apache.drill.exec.planner.fragment.Wrapper$AssignEndpointsToScanAndStore.visitOp(Wrapper.java:134) ~[drill-java-exec-1.1.0.jar:1.1.0]
> > > > > at org.apache.drill.exec.planner.fragment.Wrapper$AssignEndpointsToScanAndStore.visitOp(Wrapper.java:103) ~[drill-java-exec-1.1.0.jar:1.1.0]
> > > > > at org.apache.drill.exec.physical.base.AbstractPhysicalVisitor.visitStore(AbstractPhysicalVisitor.java:132) ~[drill-java-exec-1.1.0.jar:1.1.0]
> > > > > at org.apache.drill.exec.planner.fragment.Wrapper$AssignEndpointsToScanAndStore.visitStore(Wrapper.java:129) ~[drill-java-exec-1.1.0.jar:1.1.0]
> > > > > at org.apache.drill.exec.planner.fragment.Wrapper$AssignEndpointsToScanAndStore.visitStore(Wrapper.java:103) ~[drill-java-exec-1.1.0.jar:1.1.0]
> > > > > at org.apache.drill.exec.physical.base.AbstractPhysicalVisitor.visitScreen(AbstractPhysicalVisitor.java:195) ~[drill-java-exec-1.1.0.jar:1.1.0]
> > > > > at org.apache.drill.exec.physical.config.Screen.accept(Screen.java:97) ~[drill-java-exec-1.1.0.jar:1.1.0]
> > > > > at org.apache.drill.exec.planner.fragment.Wrapper.assignEndpoints(Wrapper.java:148) ~[drill-java-exec-1.1.0.jar:1.1.0]
> > > > > at org.apache.drill.exec.planner.fragment.SimpleParallelizer.parallelizeFragment(SimpleParallelizer.java:247) ~[drill-java-exec-1.1.0.jar:1.1.0]
> > > > > at org.apache.drill.exec.planner.fragment.SimpleParallelizer.getFragments(SimpleParallelizer.java:131) ~[drill-java-exec-1.1.0.jar:1.1.0]
> > > > > at org.apache.drill.exec.work.foreman.Foreman.getQueryWorkUnit(Foreman.java:512) [drill-java-exec-1.1.0.jar:1.1.0]
> > > > > at org.apache.drill.exec.work.foreman.Foreman.runPhysicalPlan(Foreman.java:394) [drill-java-exec-1.1.0.jar:1.1.0]
> > > > > at org.apache.drill.exec.work.foreman.Foreman.runSQL(Foreman.java:905) [drill-java-exec-1.1.0.jar:1.1.0]
> > > > > at org.apache.drill.exec.work.foreman.Foreman.run(Foreman.java:242) [drill-java-exec-1.1.0.jar:1.1.0]
> > > > > ... 3 common frames omitted
> > > > > 2015-08-26 15:04:02,488 [Client-1] INFO  o.a.d.j.i.DrillResultSetImpl$ResultsListener - [#28] Query failed:
> > > > > org.apache.drill.common.exceptions.UserRemoteException: SYSTEM ERROR: IllegalArgumentException: Incoming endpoints 1 is greater than number of chunks 0
> > > > >
> > > > > [Error Id: 4d8b145f-08bc-43a1-8156-b60b7c91fbc3 on cluster-server:31010]
> > > > > at org.apache.drill.exec.rpc.user.QueryResultHandler.resultArrived(QueryResultHandler.java:118) [drill-java-exec-1.1.0.jar:1.1.0]
> > > > > at org.apache.drill.exec.rpc.user.UserClient.handleReponse(UserClient.java:111) [drill-java-exec-1.1.0.jar:1.1.0]
> > > > > at org.apache.drill.exec.rpc.BasicClientWithConnection.handle(BasicClientWithConnection.java:47) [drill-java-exec-1.1.0.jar:1.1.0]
> > > > > at org.apache.drill.exec.rpc.BasicClientWithConnection.handle(BasicClientWithConnection.java:32) [drill-java-exec-1.1.0.jar:1.1.0]
> > > > > at org.apache.drill.exec.rpc.RpcBus.handle(RpcBus.java:61) [drill-java-exec-1.1.0.jar:1.1.0]
> > > > > at org.apache.drill.exec.rpc.RpcBus$InboundHandler.decode(RpcBus.java:233) [drill-java-exec-1.1.0.jar:1.1.0]
> > > > > at org.apache.drill.exec.rpc.RpcBus$InboundHandler.decode(RpcBus.java:205) [drill-java-exec-1.1.0.jar:1.1.0]
> > > > > at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:89) [netty-codec-4.0.27.Final.jar:4.0.27.Final]
> > > > > at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339) [netty-transport-4.0.27.Final.jar:4.0.27.Final]
> > > > > at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324) [netty-transport-4.0.27.Final.jar:4.0.27.Final]
> > > > > at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:254) [netty-handler-4.0.27.Final.jar:4.0.27.Final]
> > > > > at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339) [netty-transport-4.0.27.Final.jar:4.0.27.Final]
> > > > > at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324) [netty-transport-4.0.27.Final.jar:4.0.27.Final]
> > > > > at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103) [netty-codec-4.0.27.Final.jar:4.0.27.Final]
> > > > > at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339) [netty-transport-4.0.27.Final.jar:4.0.27.Final]
> > > > > at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324) [netty-transport-4.0.27.Final.jar:4.0.27.Final]
> > > > > at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:242) [netty-codec-4.0.27.Final.jar:4.0.27.Final]
> > > > > at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339) [netty-transport-4.0.27.Final.jar:4.0.27.Final]
> > > > > at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324) [netty-transport-4.0.27.Final.jar:4.0.27.Final]
> > > > > at io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86) [netty-transport-4.0.27.Final.jar:4.0.27.Final]
> > > > > at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339) [netty-transport-4.0.27.Final.jar:4.0.27.Final]
> > > > > at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324) [netty-transport-4.0.27.Final.jar:4.0.27.Final]
> > > > > at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:847) [netty-transport-4.0.27.Final.jar:4.0.27.Final]
> > > > > at io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:618) [netty-transport-native-epoll-4.0.27.Final-linux-x86_64.jar:na]
> > > > > at io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:329) [netty-transport-native-epoll-4.0.27.Final-linux-x86_64.jar:na]
> > > > > at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:250) [netty-transport-native-epoll-4.0.27.Final-linux-x86_64.jar:na]
> > > > > at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111) [netty-common-4.0.27.Final.jar:4.0.27.Final]
> > > > > at java.lang.Thread.run(Thread.java:745) [na:1.7.0_79]
> > > > > 2015-08-26 15:04:12,600 [2a22b732-b234-318f-cf92-fe071d85b950:foreman] INFO  o.a.drill.exec.work.foreman.Foreman - State change requested. PENDING --> RUNNING
> > > > > 2015-08-26 15:04:12,600 [2a22b732-b234-318f-cf92-fe071d85b950:frag:0:0] INFO  o.a.d.e.w.fragment.FragmentExecutor - 2a22b732-b234-318f-cf92-fe071d85b950:0:0: State change requested from AWAITING_ALLOCATION --> RUNNING for
> > > > > 2015-08-26 15:04:12,601 [2a22b732-b234-318f-cf92-fe071d85b950:frag:0:0] INFO  o.a.d.e.w.f.AbstractStatusReporter - State changed for 2a22b732-b234-318f-cf92-fe071d85b950:0:0. New state: RUNNING
> > > > > 2015-08-26 15:04:12,604 [2a22b732-b234-318f-cf92-fe071d85b950:frag:0:0] INFO  o.a.d.e.w.fragment.FragmentExecutor - 2a22b732-b234-318f-cf92-fe071d85b950:0:0: State change requested from RUNNING --> FINISHED for
> > > > > 2015-08-26 15:04:12,604 [2a22b732-b234-318f-cf92-fe071d85b950:frag:0:0] INFO  o.a.d.e.w.f.AbstractStatusReporter - State changed for 2a22b732-b234-318f-cf92-fe071d85b950:0:0. New state: FINISHED
> > > > > 2015-08-26 15:04:12,605 [BitServer-4] INFO  o.a.drill.exec.work.foreman.Foreman - State change requested. RUNNING --> COMPLETED
> > > > > 2015-08-26 15:04:12,605 [BitServer-4] INFO  o.a.drill.exec.work.foreman.Foreman - foreman cleaning up.
> > > > > 2015-08-26 16:49:06,183 [2a229e9d-62c9-1976-3a45-8a1c7777608b:foreman] INFO  o.a.drill.exec.work.foreman.Foreman - State change requested. PENDING --> RUNNING
> > > > > 2015-08-26 16:49:06,190 [2a229e9d-62c9-1976-3a45-8a1c7777608b:frag:0:0] INFO  o.a.d.e.w.fragment.FragmentExecutor - 2a229e9d-62c9-1976-3a45-8a1c7777608b:0:0: State change requested from AWAITING_ALLOCATION --> RUNNING for
> > > > > 2015-08-26 16:49:06,190 [2a229e9d-62c9-1976-3a45-8a1c7777608b:frag:0:0] INFO  o.a.d.e.w.f.AbstractStatusReporter - State changed for 2a229e9d-62c9-1976-3a45-8a1c7777608b:0:0. New state: RUNNING
> > > > > 2015-08-26 16:49:06,206 [2a229e9d-62c9-1976-3a45-8a1c7777608b:frag:0:0] INFO  o.a.d.e.w.fragment.FragmentExecutor - 2a229e9d-62c9-1976-3a45-8a1c7777608b:0:0: State change requested from RUNNING --> FINISHED for
> > > > > 2015-08-26 16:49:06,207 [2a229e9d-62c9-1976-3a45-8a1c7777608b:frag:0:0] INFO  o.a.d.e.w.f.AbstractStatusReporter - State changed for 2a229e9d-62c9-1976-3a45-8a1c7777608b:0:0. New state: FINISHED
> > > > > 2015-08-26 16:49:06,208 [BitServer-4] INFO  o.a.drill.exec.work.foreman.Foreman - State change requested. RUNNING --> COMPLETED
> > > > > 2015-08-26 16:49:06,208 [BitServer-4] INFO  o.a.drill.exec.work.foreman.Foreman - foreman cleaning up.
> > > > > 2015-08-26 16:50:11,688 [2a229e5c-4ced-5630-a624-dd7ef15e8dec:foreman] INFO  o.a.drill.exec.work.foreman.Foreman - State change requested. PENDING --> RUNNING
> > > > > 2015-08-26 16:50:11,688 [2a229e5c-4ced-5630-a624-dd7ef15e8dec:frag:0:0] INFO  o.a.d.e.w.fragment.FragmentExecutor - 2a229e5c-4ced-5630-a624-dd7ef15e8dec:0:0: State change requested from AWAITING_ALLOCATION --> RUNNING for
> > > > > 2015-08-26 16:50:11,689 [2a229e5c-4ced-5630-a624-dd7ef15e8dec:frag:0:0] INFO  o.a.d.e.w.f.AbstractStatusReporter - State changed for 2a229e5c-4ced-5630-a624-dd7ef15e8dec:0:0. New state: RUNNING
> > > > > 2015-08-26 16:50:11,693 [2a229e5c-4ced-5630-a624-dd7ef15e8dec:frag:0:0] INFO  o.a.d.e.w.fragment.FragmentExecutor - 2a229e5c-4ced-5630-a624-dd7ef15e8dec:0:0: State change requested from RUNNING --> FINISHED for
> > > > > 2015-08-26 16:50:11,693 [2a229e5c-4ced-5630-a624-dd7ef15e8dec:frag:0:0] INFO  o.a.d.e.w.f.AbstractStatusReporter - State changed for 2a229e5c-4ced-5630-a624-dd7ef15e8dec:0:0. New state: FINISHED
> > > > > 2015-08-26 16:50:11,695 [BitServer-4] INFO  o.a.drill.exec.work.foreman.Foreman - State change requested. RUNNING --> COMPLETED
> > > > > 2015-08-26 16:50:11,695 [BitServer-4] INFO  o.a.drill.exec.work.foreman.Foreman - foreman cleaning up.
> > > > >
> > > > >
> > > > > On Wed, Aug 26, 2015 at 5:26 PM, Kamesh <[email protected]> wrote:
> > > > >
> > > > > > So, both the database and the collection are sharded and you are
> > > > > > still having issues. I think that, in embedded mode, all the logs
> > > > > > are stored in sqlline.log. Could you paste the errors that appear
> > > > > > in that file?
> > > > > >
> > > > > > On Wed, Aug 26, 2015 at 11:53 AM, Xiao Yang <[email protected]> wrote:
> > > > > >
> > > > > > > Thanks Kamesh. One is sharded and the other one is not. In both
> > > > > > > cases, I experienced errors when running simple select queries
> > > > > > > (except that count was working).
> > > > > > >
> > > > > > > In the log folder, I can only see sqlline.log and
> > > > > > > sqlline_queries.json. When I invoke drill-embedded, do I need to
> > > > > > > specify any argument to output error logs?
> > > > > > >
> > > > > > > Thank you.
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > > > On Wed, Aug 26, 2015 at 4:14 PM, Kamesh <[email protected]> wrote:
> > > > > > >
> > > > > > > > Hi Xiao,
> > > > > > > >  Are both the database and the collection sharded? If not, you
> > > > > > > > will get the same errors. If both of them are sharded, could you
> > > > > > > > send the error logs, if possible?
> > > > > > > >
> > > > > > > > > On Wed, Aug 26, 2015 at 11:39 AM, Xiao Yang <[email protected]> wrote:
> > > > > > > >
> > > > > > > > > Thanks Kamesh. I tested it on another database that is
> > > > > > > > > sharded and still experienced errors. Are you sure that is
> > > > > > > > > the case?
> > > > > > > > >
> > > > > > > > > > On Wed, Aug 26, 2015 at 3:44 PM, Kamesh <[email protected]> wrote:
> > > > > > > > >
> > > > > > > > > > Hi Xiao,
> > > > > > > > > >
> > > > > > > > > > There is an issue when a database or a collection is
> > > > > > > > > > unsharded in the MongoDB cluster. I also submitted a patch
> > > > > > > > > > for this issue: DRILL-1752
> > > > > > > > > > <https://issues.apache.org/jira/browse/DRILL-1752>
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > On Wed, Aug 26, 2015 at 11:09 AM, Xiao Yang <[email protected]> wrote:
> > > > > > > > > >
> > > > > > > > > > > Hi Kamesh,
> > > > > > > > > > >
> > > > > > > > > > > The 'zips' database is not sharded. The MongoDB setup is
> > > > > > > > > > > configured with two shards. There are other databases that
> > > > > > > > > > > are sharded.
> > > > > > > > > > >
> > > > > > > > > > > Thank you.
> > > > > > > > > > > Xiao
> > > > > > > > > > >
> > > > > > > > > > > On Wed, Aug 26, 2015 at 3:33 PM, Kamesh <[email protected]> wrote:
> > > > > > > > > > >
> > > > > > > > > > > > Hi Xiao,
> > > > > > > > > > > >  Is the zips collection sharded or unsharded in the
> > > > > > > > > > > > Mongo cluster?
> > > > > > > > > > > >
> > > > > > > > > > > >
> > > > > > > > > > > > On Wed, Aug 26, 2015 at 10:34 AM, Xiao Yang <[email protected]> wrote:
> > > > > > > > > > > >
> > > > > > > > > > > > > Hi,
> > > > > > > > > > > > >
> > > > > > > > > > > > > I set up a test environment for using Drill with
> > > > > > > > > > > > > MongoDB. I had problems getting queries to work.
> > > > > > > > > > > > >
> > > > > > > > > > > > > ======A brief description of my setup=========
> > > > > > > > > > > > >
> > > > > > > > > > > > > Mongo cluster configuration:
> > > > > > > > > > > > > 1 MongoDB router
> > > > > > > > > > > > > 1 config server
> > > > > > > > > > > > > 2 shards
> > > > > > > > > > > > >
> > > > > > > > > > > > > MongoDB version:
> > > > > > > > > > > > > 3.0.4
> > > > > > > > > > > > > Configured to use WiredTiger as the storage engine
> > > > > > > > > > > > >
> > > > > > > > > > > > > The Linux version that I am using is:
> > > > > > > > > > > > > Ubuntu 14.04.2 LTS (GNU/Linux 3.13.0-54-generic x86_64)
> > > > > > > > > > > > >
> > > > > > > > > > > > > The Java version is:
> > > > > > > > > > > > > OpenJDK 64-bit Server VM (build 24.79-b02, mixed mode)
> > > > > > > > > > > > >
> > > > > > > > > > > > > Drill version: 1.1.0
> > > > > > > > > > > > >
> > > > > > > > > > > > > ======A brief description of the problems that I encountered=========
> > > > > > > > > > > > >
> > > > > > > > > > > > > 1. I imported the zips.json dataset
> > > > > > > > > > > > > 2. I started the Drill console with 'drill-embedded'
> > > > > > > > > > > > > 3. I then used the browser admin interface to enable the MongoDB storage plugin
> > > > > > > > > > > > > 4. I followed the steps and was able to use the zips collection
> > > > > > > > > > > > > 5. I was able to run count queries
> > > > > > > > > > > > > 6. I can't run other queries for some reason:
> > > > > > > > > > > > >
> > > > > > > > > > > > > 0: jdbc:drill:zk=local> show Tables;
> > > > > > > > > > > > > +---------------+-------------+
> > > > > > > > > > > > > | TABLE_SCHEMA  | TABLE_NAME  |
> > > > > > > > > > > > > +---------------+-------------+
> > > > > > > > > > > > > | mongo.zips    | zips        |
> > > > > > > > > > > > > +---------------+-------------+
> > > > > > > > > > > > > 1 row selected (0.131 seconds)
> > > > > > > > > > > > > 0: jdbc:drill:zk=local> alter system set `store.mongo.read_numbers_as_double` = true;
> > > > > > > > > > > > > +-------+----------------------------------------------+
> > > > > > > > > > > > > |  ok   |                   summary                    |
> > > > > > > > > > > > > +-------+----------------------------------------------+
> > > > > > > > > > > > > | true  | store.mongo.read_numbers_as_double updated.  |
> > > > > > > > > > > > > +-------+----------------------------------------------+
> > > > > > > > > > > > > 1 row selected (0.088 seconds)
> > > > > > > > > > > > > 0: jdbc:drill:zk=local> select * from zips limit 10;
> > > > > > > > > > > > > Error: SYSTEM ERROR: IllegalArgumentException: Incoming endpoints 1 is greater than number of chunks 0
> > > > > > > > > > > > >
> > > > > > > > > > > > > [Error Id: 4d8b145f-08bc-43a1-8156-b60b7c91fbc3 on cluster-server:31010]
> > > > > > > > > > > > > (state=,code=0)
> > > > > > > > > > > > > 0: jdbc:drill:zk=local> select count(*) from zips;
> > > > > > > > > > > > > +---------+
> > > > > > > > > > > > > | EXPR$0  |
> > > > > > > > > > > > > +---------+
> > > > > > > > > > > > > | 29353   |
> > > > > > > > > > > > > +---------+
> > > > > > > > > > > > > 1 row selected (0.119 seconds)
> > > > > > > > > > > > > 0: jdbc:drill:zk=local>
> > > > > > > > > > > > >
> > > > > > > > > > > > > Please help.
> > > > > > > > > > > > >
> > > > > > > > > > > > > Thank you.
> > > > > > > > > > > > > Xiao
> > > > > > > > > > > > >
> > > > > > > > > > > >
> > > > > > > > > > > >
> > > > > > > > > > > >
> > > > > > > > > > > > --
> > > > > > > > > > > > Kamesh.
> > > > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > --
> > > > > > > > > > Kamesh.
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > > --
> > > > > > > > Kamesh.
> > > > > > > >
> > > > > > >
> > > > > >
> > > > > >
> > > > > >
> > > > > > --
> > > > > > Kamesh.
> > > > > >
> > > > >
> > > >
> > > >
> > > >
> > > > --
> > > > Kamesh.
> > > >
> > >
> >
> >
> >
> > --
> > Kamesh.
> >
>
