Jacques,
I couldn't find Hadoop 1.0 on Hadoop's official website, only Hadoop 1.2. Will
that work?
Cheers,
Guo Ying
-----Original Message-----
From: [email protected] [mailto:[email protected]] On Behalf Of
Jacques Nadeau
Sent: Friday, December 06, 2013 11:22 PM
To: [email protected]
Subject: Re: How connect Drill with HDFS
The message isn't DrillClient > Drillbit, it's Drillbit > HDFS. It looks like
you're trying to connect to an HDFS cluster that is incompatible with the HDFS
version that comes prepackaged with Drill. I believe the current Drill package
bundles Hadoop ~1.1.0. If you're running something like Hadoop 2, you can try
swapping out the Hadoop jars in the Drill lib directory and see what happens.
Since the first milestone of Drill came out before the GA release of Hadoop 2
(2.2.0, I believe), we didn't include those jars in the libs.
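[Editor's note: the jar swap suggested above can be sketched roughly as follows. Every path and version number here is an assumption for illustration, and the commands run in a throwaway directory with stand-in files so they are safe to try; in a real install, DRILL_HOME would be your Drill directory and the Hadoop 2 jars would come from your cluster's own distribution.]

```shell
# Purely illustrative jar swap in a throwaway layout (paths/versions assumed).
DRILL_HOME=$(mktemp -d)/drill
HADOOP2_LIB=$(mktemp -d)/hadoop2-lib
mkdir -p "$DRILL_HOME/lib" "$HADOOP2_LIB"
touch "$DRILL_HOME/lib/hadoop-core-1.2.1.jar"   # stand-in for the bundled Hadoop 1 jar
touch "$HADOOP2_LIB/hadoop-common-2.2.0.jar"    # stand-ins for Hadoop 2 client jars
touch "$HADOOP2_LIB/hadoop-hdfs-2.2.0.jar"

# The swap itself: move the Hadoop 1 jar aside, copy in the Hadoop 2 jars.
mkdir -p "$DRILL_HOME/lib/hadoop1-backup"
mv "$DRILL_HOME"/lib/hadoop-core-*.jar "$DRILL_HOME/lib/hadoop1-backup/"
cp "$HADOOP2_LIB"/*.jar "$DRILL_HOME/lib/"
ls "$DRILL_HOME/lib"
```

Keeping the original jars in a backup directory makes the experiment easy to revert if the Drillbit fails to start with the swapped jars.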
Additionally, it would be good if you filed a JIRA so that Drill can support a
Hadoop2 build profile. For future reference, what version of HDFS are you
running?
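[Editor's note: for future readers, one generic way to answer the version question above is the stock hadoop CLI; this is not Drill-specific, and the sketch degrades gracefully if the CLI isn't on the PATH.]

```shell
# Print the Hadoop/HDFS version on a cluster node, if the CLI is available.
if command -v hadoop >/dev/null 2>&1; then
  hadoop version | head -n 1
else
  echo "hadoop CLI not on PATH"
fi
```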
Jacques
On Fri, Dec 6, 2013 at 7:10 AM, Timothy Chen <[email protected]> wrote:
> Have you tried to run without your changes?
>
> It seems like it can't even connect to the drillbit in the first place.
>
> Tim
>
> Sent from my iPhone
>
> > On Dec 6, 2013, at 1:38 AM, Rajika Kumarasiri <[email protected]> wrote:
> >
> > According to the log, it's a client compatibility issue.
> >
> > Rajika
> >
> >
> > On Fri, Dec 6, 2013 at 4:32 AM, Michael Hausenblas <[email protected]> wrote:
> >
> >>
> >> Thank you, Guo Ying. I must admit that I’ve not seen this one before,
> >> but I’d expect that Jason would have an idea … let’s see when the West
> >> coast of the US and A wakes up ;)
> >>
> >> Cheers,
> >> Michael
> >>
> >> --
> >> Michael Hausenblas
> >> Ireland, Europe
> >> http://mhausenblas.info/
> >>
> >>> On 6 Dec 2013, at 09:27, Guo, Ying Y <[email protected]> wrote:
> >>>
> >>> Hi Michael,
> >>> Thanks for your reply!
> >>> The errors are:
> >>> |-ERROR in ch.qos.logback.core.joran.spi.Interpreter@33:27 - no applicable action for [level], current ElementPath is [[configuration][appender][level]]
> >>>
> >>> Error: Failure trying to connect to Drill. (state=,code=0)
> >>> java.sql.SQLException: Failure trying to connect to Drill.
> >>>   at org.apache.drill.jdbc.DrillHandler.onConnectionInit(DrillHandler.java:131)
> >>>   at net.hydromatic.optiq.jdbc.UnregisteredDriver.connect(UnregisteredDriver.java:127)
> >>>   at sqlline.SqlLine$DatabaseConnection.connect(SqlLine.java:4802)
> >>>   at sqlline.SqlLine$DatabaseConnection.getConnection(SqlLine.java:4853)
> >>>   at sqlline.SqlLine$Commands.connect(SqlLine.java:4094)
> >>>   at sqlline.SqlLine$Commands.connect(SqlLine.java:4003)
> >>>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> >>>   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> >>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> >>>   at java.lang.reflect.Method.invoke(Method.java:606)
> >>>   at sqlline.SqlLine$ReflectiveCommandHandler.execute(SqlLine.java:2964)
> >>>   at sqlline.SqlLine.dispatch(SqlLine.java:878)
> >>>   at sqlline.SqlLine.initArgs(SqlLine.java:652)
> >>>   at sqlline.SqlLine.begin(SqlLine.java:699)
> >>>   at sqlline.SqlLine.mainWithInputRedirection(SqlLine.java:460)
> >>>   at sqlline.SqlLine.main(SqlLine.java:443)
> >>> Caused by: org.apache.drill.exec.exception.SetupException: Failure setting up new storage engine configuration for config org.apache.drill.exec.store.parquet.ParquetStorageEngineConfig@617e8cc0
> >>>   at org.apache.drill.exec.store.SchemaProviderRegistry.getSchemaProvider(SchemaProviderRegistry.java:76)
> >>>   at org.apache.drill.jdbc.DrillHandler.onConnectionInit(DrillHandler.java:116)
> >>>   ... 15 more
> >>> Caused by: java.lang.RuntimeException: Error setting up filesystem
> >>>   at org.apache.drill.exec.store.parquet.ParquetSchemaProvider.<init>(ParquetSchemaProvider.java:49)
> >>>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
> >>>   at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
> >>>   at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
> >>>   at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
> >>>   at org.apache.drill.exec.store.SchemaProviderRegistry.getSchemaProvider(SchemaProviderRegistry.java:72)
> >>>   ... 16 more
> >>> Caused by: org.apache.hadoop.ipc.RemoteException: Server IPC version 9 cannot communicate with client version 4
> >>>   at org.apache.hadoop.ipc.Client.call(Client.java:1113)
> >>>   at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229)
> >>>   at com.sun.proxy.$Proxy18.getProtocolVersion(Unknown Source)
> >>>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> >>>   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> >>>   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> >>>   at java.lang.reflect.Method.invoke(Method.java:606)
> >>>   at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:85)
> >>>   at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:62)
> >>>   at com.sun.proxy.$Proxy18.getProtocolVersion(Unknown Source)
> >>>   at org.apache.hadoop.ipc.RPC.checkVersion(RPC.java:422)
> >>>   at org.apache.hadoop.hdfs.DFSClient.createNamenode(DFSClient.java:183)
> >>>   at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:281)
> >>>   at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:245)
> >>>   at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:100)
> >>>   at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1446)
> >>>   at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:67)
> >>>   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1464)
> >>>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:263)
> >>>   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:124)
> >>>   at org.apache.drill.exec.store.parquet.ParquetSchemaProvider.<init>(ParquetSchemaProvider.java:47)
> >>>   ... 21 more
> >>>
> >>> B.R.
> >>> Guo Ying
> >>>
> >>>
> >>>
> >>> -----Original Message-----
> >>> From: Michael Hausenblas [mailto:[email protected]]
> >>> Sent: Friday, December 06, 2013 5:16 PM
> >>> To: Apache Drill User
> >>> Subject: Re: How connect Drill with HDFS
> >>>
> >>>
> >>>> But when we run “./sqlline -u jdbc:drill:schema=parquet -n admin -p
> >>>> admin” there are some ERRORs and Failure trying to connect to Drill.
> >>>
> >>> In order to help you, it would certainly help if you share these
> >>> errors, either here via copy and paste or put it on pastebin/gist
> >>> and link to it.
> >>>
> >>> Cheers,
> >>> Michael
> >>>
> >>> --
> >>> Michael Hausenblas
> >>> Ireland, Europe
> >>> http://mhausenblas.info/
> >>>
> >>>> On 6 Dec 2013, at 09:09, Guo, Ying Y <[email protected]> wrote:
> >>>>
> >>>> Hi all,
> >>>> We have modified ./sqlparser/target/classes/storage-engines.json:
> >>>>   "parquet" :
> >>>>     {
> >>>>       "type" : "parquet",
> >>>>       "dfsName" : "hdfs://localhost:9000"
> >>>>     }
> >>>> We also recompiled and generated a new
> >>>> drill-sqlparser-1.0.0-m2-incubating-SNAPSHOT.jar.
> >>>> But when we run “./sqlline -u jdbc:drill:schema=parquet -n admin -p
> >>>> admin” there are some ERRORs and Failure trying to connect to Drill.
> >>>> I don't know why. Do you know what else needs to be done?
> >>>>