I'm not sure how this is resolved, since Drill normally accesses Hive directly
using the Hive storage plugin rather than via the JDBC storage plugin.
Perhaps you can share the parameters of the JDBC storage plugin you used,
so that folks more familiar with the JDBC storage plugin can help.
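For reference, a JDBC storage plugin configuration pointing at HiveServer2
generally looks something like the following; the host, port, and credentials
below are placeholders, not values from any particular setup:

```json
{
  "type": "jdbc",
  "driver": "org.apache.hive.jdbc.HiveDriver",
  "url": "jdbc:hive2://hive-host:10000/default",
  "username": "drill_user",
  "password": "drill_pass",
  "enabled": true
}
```

Comparing your values against this shape should make it easier to spot what
differs in your environment.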
I'll see what I can find out in the meantime.
~ Kunal

On Sat, Mar 10, 2018 at 7:23 PM, Asim Kanungo <asim...@gmail.com> wrote:

> Hi Kunal,
>
> I have tried the steps and getting the below error:-
>
> 2018-03-08 22:39:59,234 [qtp433826182-75] ERROR o.a.d.e.server.rest.StorageResources - Unable to create/ update plugin: test
> org.apache.drill.common.exceptions.ExecutionSetupException: Failure setting up new storage plugin configuration for config org.apache.drill.exec.store.jdbc.JdbcStorageConfig@8ef5d26f
>         at org.apache.drill.exec.store.StoragePluginRegistryImpl.create(StoragePluginRegistryImpl.java:355) ~[drill-java-exec-1.12.0.jar:1.12.0]
>         at org.apache.drill.exec.store.StoragePluginRegistryImpl.createOrUpdate(StoragePluginRegistryImpl.java:239) ~[drill-java-exec-1.12.0.jar:1.12.0]
>         at org.apache.drill.exec.server.rest.PluginConfigWrapper.createOrUpdateInStorage(PluginConfigWrapper.java:57) ~[drill-java-exec-1.12.0.jar:1.12.0]
>         at org.apache.drill.exec.server.rest.StorageResources.createOrUpdatePluginJSON(StorageResources.java:162) [drill-java-exec-1.12.0.jar:1.12.0]
>         at org.apache.drill.exec.server.rest.StorageResources.createOrUpdatePlugin(StorageResources.java:177) [drill-java-exec-1.12.0.jar:1.12.0]
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_73]
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_73]
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_73]
>         at java.lang.reflect.Method.invoke(Method.java:497) ~[na:1.8.0_73]
>         at org.glassfish.jersey.server.model.internal.ResourceMethodInvocationHandlerFactory$1.invoke(ResourceMethodInvocationHandlerFactory.java:81) [jersey-server-2.8.jar:na]
>         at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher$1.run(AbstractJavaResourceMethodDispatcher.java:151) [jersey-server-2.8.jar:na]
>         at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.invoke(AbstractJavaResourceMethodDispatcher.java:171) [jersey-server-2.8.jar:na]
>         at org.glassfish.jersey.server.model.internal.JavaResourceMethodDispatcherProvider$TypeOutInvoker.doDispatch(JavaResourceMethodDispatcherProvider.java:195) [jersey-server-2.8.jar:na]
>         at org.glassfish.jersey.server.model.internal.AbstractJavaResourceMethodDispatcher.dispatch(AbstractJavaResourceMethodDispatcher.java:104) [jersey-server-2.8.jar:na]
>         at org.glassfish.jersey.server.model.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:387) [jersey-server-2.8.jar:na]
>         at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:331) [jersey-server-2.8.jar:na]
>         at org.glassfish.jersey.server.model.ResourceMethodInvoker.apply(ResourceMethodInvoker.java:103) [jersey-server-2.8.jar:na]
>         at org.glassfish.jersey.server.ServerRuntime$1.run(ServerRuntime.java:269) [jersey-server-2.8.jar:na]
>         at org.glassfish.jersey.internal.Errors$1.call(Errors.java:271) [jersey-common-2.8.jar:na]
>         at org.glassfish.jersey.internal.Errors$1.call(Errors.java:267) [jersey-common-2.8.jar:na]
>         at org.glassfish.jersey.internal.Errors.process(Errors.java:315) [jersey-common-2.8.jar:na]
>         at org.glassfish.jersey.internal.Errors.process(Errors.java:297) [jersey-common-2.8.jar:na]
>         at org.glassfish.jersey.internal.Errors.process(Errors.java:267) [jersey-common-2.8.jar:na]
>         at org.glassfish.jersey.process.internal.RequestScope.runInScope(RequestScope.java:297) [jersey-common-2.8.jar:na]
>         at org.glassfish.jersey.server.ServerRuntime.process(ServerRuntime.java:252) [jersey-server-2.8.jar:na]
>         at org.glassfish.jersey.server.ApplicationHandler.handle(ApplicationHandler.java:1023) [jersey-server-2.8.jar:na]
>         at org.glassfish.jersey.servlet.WebComponent.service(WebComponent.java:372) [jersey-container-servlet-core-2.8.jar:na]
>         at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:382) [jersey-container-servlet-core-2.8.jar:na]
>         at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:345) [jersey-container-servlet-core-2.8.jar:na]
>         at org.glassfish.jersey.servlet.ServletContainer.service(ServletContainer.java:220) [jersey-container-servlet-core-2.8.jar:na]
>         at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:738) [jetty-servlet-9.1.5.v20140505.jar:9.1.5.v20140505]
>         at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:551) [jetty-servlet-9.1.5.v20140505.jar:9.1.5.v20140505]
>         at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:219) [jetty-server-9.1.5.v20140505.jar:9.1.5.v20140505]
>         at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1111) [jetty-server-9.1.5.v20140505.jar:9.1.5.v20140505]
>         at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:478) [jetty-servlet-9.1.5.v20140505.jar:9.1.5.v20140505]
>         at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:183) [jetty-server-9.1.5.v20140505.jar:9.1.5.v20140505]
>         at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1045) [jetty-server-9.1.5.v20140505.jar:9.1.5.v20140505]
>         at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141) [jetty-server-9.1.5.v20140505.jar:9.1.5.v20140505]
>         at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97) [jetty-server-9.1.5.v20140505.jar:9.1.5.v20140505]
>         at org.eclipse.jetty.server.Server.handle(Server.java:462) [jetty-server-9.1.5.v20140505.jar:9.1.5.v20140505]
>         at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:279) [jetty-server-9.1.5.v20140505.jar:9.1.5.v20140505]
>         at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:232) [jetty-server-9.1.5.v20140505.jar:9.1.5.v20140505]
>         at org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:534) [jetty-io-9.1.5.v20140505.jar:9.1.5.v20140505]
>         at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:607) [jetty-util-9.1.5.v20140505.jar:9.1.5.v20140505]
>         at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:536) [jetty-util-9.1.5.v20140505.jar:9.1.5.v20140505]
>         at java.lang.Thread.run(Thread.java:745) [na:1.8.0_73]
> *Caused by: java.lang.NoSuchFieldError: INSTANCE*
>         at org.apache.http.conn.ssl.SSLConnectionSocketFactory.<clinit>(SSLConnectionSocketFactory.java:144) ~[hive-jdbc.jar:1.2.1000.2.5.3.0-37]
>         at org.apache.hive.jdbc.HiveConnection.getHttpClient(HiveConnection.java:365) ~[hive-jdbc.jar:1.2.1000.2.5.3.0-37]
>         at org.apache.hive.jdbc.HiveConnection.createHttpTransport(HiveConnection.java:244) ~[hive-jdbc.jar:1.2.1000.2.5.3.0-37]
>         at org.apache.hive.jdbc.HiveConnection.openTransport(HiveConnection.java:191) ~[hive-jdbc.jar:1.2.1000.2.5.3.0-37]
>         at org.apache.hive.jdbc.HiveConnection.<init>(HiveConnection.java:155) ~[hive-jdbc.jar:1.2.1000.2.5.3.0-37]
>         at org.apache.hive.jdbc.HiveDriver.connect(HiveDriver.java:105) ~[hive-jdbc.jar:1.2.1000.2.5.3.0-37]
>         at org.apache.commons.dbcp.DriverConnectionFactory.createConnection(DriverConnectionFactory.java:38) ~[commons-dbcp-1.4.jar:1.4]
>         at org.apache.commons.dbcp.PoolableConnectionFactory.makeObject(PoolableConnectionFactory.java:582) ~[commons-dbcp-1.4.jar:1.4]
>         at org.apache.commons.dbcp.BasicDataSource.validateConnectionFactory(BasicDataSource.java:1556) ~[commons-dbcp-1.4.jar:1.4]
>         at org.apache.commons.dbcp.BasicDataSource.createPoolableConnectionFactory(BasicDataSource.java:1545) ~[commons-dbcp-1.4.jar:1.4]
>         at org.apache.commons.dbcp.BasicDataSource.createDataSource(BasicDataSource.java:1388) ~[commons-dbcp-1.4.jar:1.4]
>         at org.apache.commons.dbcp.BasicDataSource.getConnection(BasicDataSource.java:1044) ~[commons-dbcp-1.4.jar:1.4]
>         at org.apache.calcite.adapter.jdbc.JdbcUtils$DialectPool.get(JdbcUtils.java:73) ~[calcite-core-1.4.0-drill-r23.jar:1.4.0-drill-r23]
>         at org.apache.calcite.adapter.jdbc.JdbcSchema.createDialect(JdbcSchema.java:138) ~[calcite-core-1.4.0-drill-r23.jar:1.4.0-drill-r23]
>         at org.apache.drill.exec.store.jdbc.JdbcStoragePlugin.<init>(JdbcStoragePlugin.java:103) ~[drill-jdbc-storage-1.12.0.jar:1.12.0]
>         at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[na:1.8.0_73]
>         at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) ~[na:1.8.0_73]
>         at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) ~[na:1.8.0_73]
>         at java.lang.reflect.Constructor.newInstance(Constructor.java:422) ~[na:1.8.0_73]
>         at org.apache.drill.exec.store.StoragePluginRegistryImpl.create(StoragePluginRegistryImpl.java:346) ~[drill-java-exec-1.12.0.jar:1.12.0]
>         ... 45 common frames omitted
>
> The JAR files for Hive and HTTP are added, and
> "drill.exec.sys.store.provider.local.path" is set in
> the drill-override.conf file. After searching on Google, I found that this
> issue occurs when there is a version mismatch or when one class is present
> in two or more JAR files. I do not have much experience with Java, so can
> you please let me know which particular JAR has to be removed or added to
> resolve this issue?
> Thanks again for your time and help!
>
> Thanks
> Asim
>
> On Wed, Mar 7, 2018 at 10:24 PM, Asim Kanungo <asim...@gmail.com> wrote:
>
> > Thanks, Kunal, for clarifying.
> > I am still learning, so for my first project I am trying to establish a
> > successful connection.
> > I will work on optimization in the second phase.
> > Thanks for your valuable tips; let me try to create the Hive connection
> > through JDBC, then.
> > I suppose I need to put the Hive JDBC drivers in the 3rd-party directory;
> > please let me know if you have the list of the drivers or JARs I need to
> > put in that directory.
> >
> > Thanks
> > Asim
> >
> > On Wed, Mar 7, 2018 at 6:06 PM, Kunal Khatua <kunalkha...@gmail.com>
> > wrote:
> >
> >> You should be able to connect to a Hive cluster via JDBC. However, the
> >> benefit of using Drill co-located on the same cluster is that Drill can
> >> directly access the data based on locality information from Hive and
> >> process across the distributed FS cluster.
> >>
> >> With JDBC, any filters you have will (most probably) not be pushed down
> >> to Hive. So you'll end up loading the unfiltered data through a single
> >> data channel from the Hive cluster into your Drill cluster before it can
> >> start processing.
> >>
> >> If using JDBC is the only option, it might be worth using 'create
> >> table as' (or the temporary table variant) to offload that data into
> >> your Drill cluster and then execute your analytical queries against this
> >> offloaded dataset.
> >>
> >> On 3/7/2018 2:46:55 PM, Asim Kanungo <asim...@gmail.com> wrote:
> >> Hi Team,
> >>
> >> Can I connect to a Hive database over a generic JDBC protocol, as I do
> >> for other RDBMSs? Or can Drill only connect to a Hive instance residing
> >> in the same cluster where Hadoop is installed?
> >>
> >> I am talking about the case where Drill is installed in one cluster and
> >> my Hadoop cluster is a different one. Can you please advise whether I
> >> can connect to that Hive using the JDBC protocol?
> >> I have tried, but it failed with the error "unable to add/update
> >> storage". I am not sure why it failed, as it does not give any other
> >> messages, and I understand version 1.13 will contain more detailed
> >> messages.
> >>
> >> Please advise.
> >>
> >> Thanks
> >> Asim
> >>
> >
> >
>
