You mentioned you were able to run queries successfully now, right? I tried
on the latest build and on the MapR Drill RPM - I don't see any issues. So
this might have been a one-off issue. If you happen to reproduce it again,
we could investigate further.
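
For reference, the "config" field named in the error is the new map that
DRILL-4383 added to the dfs (file system) storage plugin JSON. A minimal
sketch of a plugin definition that uses it - the connection string and
property values here are illustrative assumptions, not taken from Khurram's
cluster:

```json
{
  "type": "file",
  "enabled": true,
  "connection": "maprfs:///",
  "config": {
    "fs.s3a.access.key": "<access-key>",
    "fs.s3a.secret.key": "<secret-key>"
  },
  "workspaces": {
    "tmp": {
      "location": "/tmp",
      "writable": true,
      "defaultInputFormat": null
    }
  },
  "formats": {
    "csv": {
      "type": "text",
      "extensions": ["csv"],
      "delimiter": ","
    }
  }
}
```

A pre-DRILL-4383 drillbit deserializing this into FileSystemConfig only
knows "enabled", "formats", "connection", and "workspaces", which is why it
fails with the UnrecognizedPropertyException quoted in the thread below.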

On Tue, Mar 8, 2016 at 10:11 AM, Khurram Faraaz <kfar...@maprtech.com>
wrote:

> I am running against the MapR rpm. I did not build from source. This is
> the RPM that was used: mapr-drill-1.6.0.201603072015-1.noarch.rpm
>
> On Tue, Mar 8, 2016 at 11:34 PM, Abhishek Girish <agir...@mapr.com> wrote:
>
> > Khurram,
> >
> > Can you confirm whether this issue is specific to the MapR RPMs, or is
> > seen with the latest builds as well?
> >
> > -Abhishek
> >
> > On Tue, Mar 8, 2016 at 9:22 AM, Khurram Faraaz <kfar...@maprtech.com>
> > wrote:
> >
> > > Thanks, Jason. Here is what I did before I hit the Exception:
> > >
> > > clush -g khurram rpm -e mapr-drill --noscripts
> > > clush -g khurram wget http://yum.qa.lab/opensource/mapr-drill-1.6.0.201603072015-1.noarch.rpm
> > > clush -g khurram rpm -i mapr-drill-1.6.0.201603072015-1.noarch.rpm
> > >
> > > cd /opt/mapr/zookeeper/zookeeper-3.4.5/bin
> > >
> > > ./zkCli.sh
> > >
> > > connect localhost:5181
> > >
> > > ls /drill
> > >
> > > rmr /drill/sys.options
> > >
> > > cd /opt/mapr/drill/drill-1.6.0/bin
> > > ./sqlline -u "jdbc:drill:schema=dfs.tmp -n mapr -p mapr"
> > >
> > > Any query on sqlline would give that Exception.
> > >
> > > I then restarted warden (clush -g khurram service mapr-warden stop,
> > > then start), and I am able to run queries now.
> > >
> > > Do we need a JIRA to track this problem?
> > >
> > > - Khurram
> > >
> > > On Tue, Mar 8, 2016 at 9:40 PM, Jason Altekruse <altekruseja...@gmail.com>
> > > wrote:
> > >
> > > > This exception should only occur if you start an older version of
> > > > Drill using a configuration (stored in ZooKeeper or your local temp
> > > > directory) that was created by starting a version of Drill from after
> > > > DRILL-4383 was merged (0842851c854595f140779e9ed09331dbb63f6623).
> > > >
> > > > This change added a new "config" property to the filesystem storage
> > > > plugin configuration, which allows passing custom options through to
> > > > the filesystem. It can be used in place of core-site.xml to set things
> > > > like your AWS private keys, as well as any other properties normally
> > > > provided to an implementation of the Hadoop FileSystem API.
> > > >
> > > > Removing the new configuration should allow it to start up, but you
> > > > shouldn't be seeing this if you are running the build you mentioned.
> > > > Can you verify that this version successfully built and that you are
> > > > not running an older version?
> > > >
> > > > - Jason
> > > >
> > > > P.S. I will be trying to get in a change soon that gives a better
> > > > error in this case. It should only happen with downgrades, which we
> > > > generally don't test thoroughly, but it would still be good to fix.
> > > > I'm sure there are several bugs filed about these kinds of issues;
> > > > this is one of them, and I've assigned it to myself, hoping to post a
> > > > fix soon.
> > > >
> > > > https://issues.apache.org/jira/browse/DRILL-2048
> > > >
> > > >
> > > > On Tue, Mar 8, 2016 at 2:33 AM, Khurram Faraaz <kfar...@maprtech.com>
> > > > wrote:
> > > >
> > > > > Hi All,
> > > > >
> > > > > I am seeing an Exception on Drill 1.6.0 commit ID 447b093c (I am
> > > > > using the RPM).
> > > > >
> > > > > I did not see this Exception on an earlier version of Drill 1.6.0,
> > > > > commit ID 6d5f4983.
> > > > >
> > > > > Could this be related to DRILL-4383
> > > > > <https://issues.apache.org/jira/browse/DRILL-4383>?
> > > > >
> > > > > The Drill version where we see the Exception is
> > > > >
> > > > > git.commit.id=447b093cd2b05bfeae001844a7e3573935e84389
> > > > > git.commit.message.short=DRILL-4332\: Makes vector comparison order
> > > > > stable in test framework
> > > > >
> > > > > oadd.org.apache.drill.common.exceptions.UserRemoteException: SYSTEM ERROR:
> > > > > UnrecognizedPropertyException: Unrecognized field "config" (class
> > > > > org.apache.drill.exec.store.dfs.FileSystemConfig), not marked as ignorable
> > > > > (4 known properties: "enabled", "formats", "connection", "workspaces"])
> > > > >  at [Source: [B@2b88d9b2; line: 5, column: 18] (through reference chain:
> > > > > org.apache.drill.exec.store.dfs.FileSystemConfig["config"])
> > > > >
> > > > >
> > > > > [Error Id: 7fdc89ac-91ac-46eb-8201-8fe5e1acf278 on centos-02.qa.lab:31010]
> > > > > at oadd.org.apache.drill.exec.rpc.user.QueryResultHandler.resultArrived(QueryResultHandler.java:119)
> > > > > at oadd.org.apache.drill.exec.rpc.user.UserClient.handleReponse(UserClient.java:113)
> > > > > at oadd.org.apache.drill.exec.rpc.BasicClientWithConnection.handle(BasicClientWithConnection.java:46)
> > > > > at oadd.org.apache.drill.exec.rpc.BasicClientWithConnection.handle(BasicClientWithConnection.java:31)
> > > > > at oadd.org.apache.drill.exec.rpc.RpcBus.handle(RpcBus.java:67)
> > > > > at oadd.org.apache.drill.exec.rpc.RpcBus$RequestEvent.run(RpcBus.java:374)
> > > > > at oadd.org.apache.drill.common.SerializedExecutor$RunnableProcessor.run(SerializedExecutor.java:89)
> > > > > at oadd.org.apache.drill.exec.rpc.RpcBus$SameExecutor.execute(RpcBus.java:252)
> > > > > at oadd.org.apache.drill.common.SerializedExecutor.execute(SerializedExecutor.java:123)
> > > > > at oadd.org.apache.drill.exec.rpc.RpcBus$InboundHandler.decode(RpcBus.java:285)
> > > > > at oadd.org.apache.drill.exec.rpc.RpcBus$InboundHandler.decode(RpcBus.java:257)
> > > > > at oadd.io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:89)
> > > > > at oadd.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339)
> > > > > at oadd.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324)
> > > > > at oadd.io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:254)
> > > > > at oadd.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339)
> > > > > at oadd.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324)
> > > > > at oadd.io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
> > > > > at oadd.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339)
> > > > > at oadd.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324)
> > > > > at oadd.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:242)
> > > > > at oadd.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339)
> > > > > at oadd.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324)
> > > > > at oadd.io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86)
> > > > > at oadd.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339)
> > > > > at oadd.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324)
> > > > > at oadd.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:847)
> > > > > at oadd.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
> > > > > at oadd.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
> > > > > at oadd.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
> > > > > at oadd.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
> > > > > at oadd.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
> > > > > at oadd.io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
> > > > > at java.lang.Thread.run(Thread.java:745)
> > > > > at oadd.io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:242)
> > > > > at oadd.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339)
> > > > > at oadd.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324)
> > > > > at oadd.io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86)
> > > > > at oadd.io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:339)
> > > > > at oadd.io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:324)
> > > > > at oadd.io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:847)
> > > > > at oadd.io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:131)
> > > > > at oadd.io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
> > > > > at oadd.io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
> > > > > at oadd.io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
> > > > > at oadd.io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
> > > > > at oadd.io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
> > > > > ... 1 more
> > > > >
> > > >
> > >
> >
>
