0: jdbc:drill:zk=local> SELECT * FROM cp.`employee.json` LIMIT 3;
12:04:30.671 [2823c018-9538-18cd-80ba-82b49039d58b:foreman] ERROR hive.log - Got exception: org.apache.thrift.transport.TTransportException java.net.SocketTimeoutException: Read timed out
org.apache.thrift.transport.TTransportException: java.net.SocketTimeoutException: Read timed out
        at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:129) ~[drill-hive-exec-shaded-1.8.0.jar:1.8.0]
        at org.apache.thrift.transport.TTransport.readAll(TTransport.java:86) ~[drill-hive-exec-shaded-1.8.0.jar:1.8.0]
        at org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:429) ~[drill-hive-exec-shaded-1.8.0.jar:1.8.0]
        at org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:318) ~[drill-hive-exec-shaded-1.8.0.jar:1.8.0]
        at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:219) ~[drill-hive-exec-shaded-1.8.0.jar:1.8.0]
        at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:69) ~[drill-hive-exec-shaded-1.8.0.jar:1.8.0]
        at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_get_all_databases(ThriftHiveMetastore.java:739) ~[hive-metastore-1.2.1.jar:1.2.1]
        at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.get_all_databases(ThriftHiveMetastore.java:727) ~[hive-metastore-1.2.1.jar:1.2.1]
        at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getAllDatabases(HiveMetaStoreClient.java:1031) ~[hive-metastore-1.2.1.jar:1.2.1]
        at org.apache.drill.exec.store.hive.DrillHiveMetaStoreClient.getDatabasesHelper(DrillHiveMetaStoreClient.java:202) [drill-storage-hive-core-1.8.0.jar:1.8.0]
        at org.apache.drill.exec.store.hive.DrillHiveMetaStoreClient$DatabaseLoader.load(DrillHiveMetaStoreClient.java:455) [drill-storage-hive-core-1.8.0.jar:1.8.0]
        at org.apache.drill.exec.store.hive.DrillHiveMetaStoreClient$DatabaseLoader.load(DrillHiveMetaStoreClient.java:448) [drill-storage-hive-core-1.8.0.jar:1.8.0]
        at com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3527) [guava-18.0.jar:na]
        at com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2319) [guava-18.0.jar:na]
        at com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2282) [guava-18.0.jar:na]
        at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2197) [guava-18.0.jar:na]
        at com.google.common.cache.LocalCache.get(LocalCache.java:3937) [guava-18.0.jar:na]
        at com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3941) [guava-18.0.jar:na]
        at com.google.common.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:4824) [guava-18.0.jar:na]
        at org.apache.drill.exec.store.hive.DrillHiveMetaStoreClient$HiveClientWithCaching.getDatabases(DrillHiveMetaStoreClient.java:415) [drill-storage-hive-core-1.8.0.jar:1.8.0]
        at org.apache.drill.exec.store.hive.schema.HiveSchemaFactory$HiveSchema.getSubSchema(HiveSchemaFactory.java:139) [drill-storage-hive-core-1.8.0.jar:1.8.0]
        at org.apache.drill.exec.store.hive.schema.HiveSchemaFactory$HiveSchema.<init>(HiveSchemaFactory.java:133) [drill-storage-hive-core-1.8.0.jar:1.8.0]
        at org.apache.drill.exec.store.hive.schema.HiveSchemaFactory.registerSchemas(HiveSchemaFactory.java:118) [drill-storage-hive-core-1.8.0.jar:1.8.0]
        at org.apache.drill.exec.store.hive.HiveStoragePlugin.registerSchemas(HiveStoragePlugin.java:100) [drill-storage-hive-core-1.8.0.jar:1.8.0]
        at org.apache.drill.exec.store.StoragePluginRegistryImpl$DrillSchemaFactory.registerSchemas(StoragePluginRegistryImpl.java:365) [drill-java-exec-1.8.0.jar:1.8.0]
        at org.apache.drill.exec.store.SchemaTreeProvider.createRootSchema(SchemaTreeProvider.java:72) [drill-java-exec-1.8.0.jar:1.8.0]
        at org.apache.drill.exec.store.SchemaTreeProvider.createRootSchema(SchemaTreeProvider.java:61) [drill-java-exec-1.8.0.jar:1.8.0]
        at org.apache.drill.exec.ops.QueryContext.getRootSchema(QueryContext.java:146) [drill-java-exec-1.8.0.jar:1.8.0]
        at org.apache.drill.exec.ops.QueryContext.getRootSchema(QueryContext.java:136) [drill-java-exec-1.8.0.jar:1.8.0]
        at org.apache.drill.exec.ops.QueryContext.getNewDefaultSchema(QueryContext.java:122) [drill-java-exec-1.8.0.jar:1.8.0]
        at org.apache.drill.exec.planner.sql.DrillSqlWorker.getPlan(DrillSqlWorker.java:59) [drill-java-exec-1.8.0.jar:1.8.0]
        at org.apache.drill.exec.work.foreman.Foreman.runSQL(Foreman.java:1008) [drill-java-exec-1.8.0.jar:1.8.0]
        at org.apache.drill.exec.work.foreman.Foreman.run(Foreman.java:264) [drill-java-exec-1.8.0.jar:1.8.0]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) [na:1.8.0_77]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) [na:1.8.0_77]
        at java.lang.Thread.run(Unknown Source) [na:1.8.0_77]
Caused by: java.net.SocketTimeoutException: Read timed out
        at java.net.SocketInputStream.socketRead0(Native Method) ~[na:1.8.0_77]
        at java.net.SocketInputStream.socketRead(Unknown Source) ~[na:1.8.0_77]
        at java.net.SocketInputStream.read(Unknown Source) ~[na:1.8.0_77]
        at java.net.SocketInputStream.read(Unknown Source) ~[na:1.8.0_77]
        at java.io.BufferedInputStream.fill(Unknown Source) ~[na:1.8.0_77]
        at java.io.BufferedInputStream.read1(Unknown Source) ~[na:1.8.0_77]
        at java.io.BufferedInputStream.read(Unknown Source) ~[na:1.8.0_77]
        at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:127) ~[drill-hive-exec-shaded-1.8.0.jar:1.8.0]
        ... 35 common frames omitted
12:04:30.684 [2823c018-9538-18cd-80ba-82b49039d58b:foreman] ERROR hive.log - Converting exception to MetaException
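If I'm reading the trace right, the read timeout happens while Drill's Hive storage plugin calls get_all_databases on the Hive metastore during schema-tree registration, which is why even a query against the classpath (cp) plugin fails before planning starts. A quick check, assuming the Hive plugin is registered under its default name `hive`, is that any statement needing the root schema should fail the same way:

0: jdbc:drill:zk=local> SHOW SCHEMAS;
0: jdbc:drill:zk=local> USE hive;

If both return the same RESOURCE ERROR, the likely place to look is the `hive` plugin configuration (Web UI, Storage page): point hive.metastore.uris at a reachable metastore, or raise hive.metastore.client.socket.timeout if the metastore is reachable but slow to respond.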
-----Original Message-----
From: Sudheesh Katkam [mailto:[email protected]]
Sent: Friday, September 16, 2016 9:01 AM
To: [email protected]
Subject: Re: Drill 1.8.0 Error: RESOURCE ERROR: Failed to create schema tree.
This is how to get a verbose error:
First set the option:
> SET `exec.errors.verbose` = true;
And then run the query. The detailed output will point us to where the error
occurred.
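For example, rerunning the failing query in the same session after setting the option:

> SET `exec.errors.verbose` = true;
> SELECT * FROM cp.`employee.json` LIMIT 3;

With the option enabled, the error returned to the client includes the full stack trace rather than just the one-line RESOURCE ERROR summary.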
Thank you,
Sudheesh
> On Sep 15, 2016, at 9:12 PM, Abhishek Girish <[email protected]>
> wrote:
>
> Hi Kartik,
>
> Can you take a look at the logs (or turn on verbose errors) and share
> the relevant stack trace? Also what platform is this on?
>
> -Abhishek
>
> On Thu, Sep 15, 2016 at 4:26 PM, Kartik Bhatia <[email protected]> wrote:
>
>> When I run the following:
>> 0: jdbc:drill:zk=local> SELECT * FROM cp.`employee.json` LIMIT 5;
>> it gives me a Java exception with Error: RESOURCE ERROR: Failed to
>> create schema tree.
>>