Try /etc/alternatives/java_sdk_1.8.0/bin/jinfo <pid> or use jconsole to get the classpath.
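If attaching with jinfo fails (as it does in the output below), the same information can be printed from inside a JVM, since the classpath is just the java.class.path system property; a minimal sketch (class name is illustrative):

```java
public class ShowClasspath {
    public static void main(String[] args) {
        // jinfo reports this same "java.class.path" property for a live
        // process; printing it from inside the JVM avoids the attach step.
        System.out.println(System.getProperty("java.class.path"));
    }
}
```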

Thank you,

Vlad

On 8/24/17 09:38, Zubair, Muhammad wrote:
$ yarn version
Hadoop 2.7.1.2.4.2.0-258
Subversion [email protected]:hortonworks/hadoop.git -r 13debf893a605e8a88df18a7d8d214f571e05289
Compiled by jenkins on 2016-04-25T05:46Z
Compiled with protoc 2.5.0
From source with checksum 2a2d95f05ec6c3ac547ed58cab713ac
This command was run using /usr/hdp/2.4.2.0-258/hadoop/hadoop-common-2.7.1.2.4.2.0-258.jar

Drill process:

10961 pts/0    Sl+    0:18 /etc/alternatives/java_sdk_1.8.0/bin/java -XX:MaxPermSize=512M -Dlog.path=/app /tools/drill/apache-drill

$ jinfo 10961
Attaching to process ID 10961, please wait...
Debugger attached successfully.
Server compiler detected.
JVM version is 25.71-b15
Java System Properties:

Exception in thread "main" java.lang.reflect.InvocationTargetException
         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
         at java.lang.reflect.Method.invoke(Method.java:497)
         at sun.tools.jinfo.JInfo.runTool(JInfo.java:108)
         at sun.tools.jinfo.JInfo.main(JInfo.java:76)
Caused by: java.lang.InternalError: Metadata does not appear to be polymorphic
         at sun.jvm.hotspot.types.basic.BasicTypeDataBase.findDynamicTypeForAddress(BasicTypeDataBase.java:278)
         at sun.jvm.hotspot.runtime.VirtualBaseConstructor.instantiateWrapperFor(VirtualBaseConstructor.java:102)
         at sun.jvm.hotspot.oops.Metadata.instantiateWrapperFor(Metadata.java:68)
         at sun.jvm.hotspot.memory.SystemDictionary.getSystemKlass(SystemDictionary.java:127)
         at sun.jvm.hotspot.runtime.VM.readSystemProperties(VM.java:879)
         at sun.jvm.hotspot.runtime.VM.getSystemProperties(VM.java:873)
         at sun.jvm.hotspot.tools.SysPropsDumper.run(SysPropsDumper.java:44)
         at sun.jvm.hotspot.tools.JInfo$1.run(JInfo.java:79)
         at sun.jvm.hotspot.tools.JInfo.run(JInfo.java:94)
         at sun.jvm.hotspot.tools.Tool.startInternal(Tool.java:260)
         at sun.jvm.hotspot.tools.Tool.start(Tool.java:223)
         at sun.jvm.hotspot.tools.Tool.execute(Tool.java:118)
         at sun.jvm.hotspot.tools.JInfo.main(JInfo.java:138)
         ... 6 more



-----Original Message-----
From: Vlad Rozov [mailto:[email protected]]
Sent: August 24, 17 12:17 PM
To: [email protected]
Subject: Re: Apache Drill unable to read files from HDFS (Resource error: Failed to create schema tree)

One possible problem is a mismatch of the YARN libraries on the edge node. What Hadoop distro and version do you have on the edge node? Can you provide the output (classpath) of "jinfo <pid>", where pid is the Foreman/Drillbit process id?

Caused By (java.lang.NoClassDefFoundError) 
org/apache/hadoop/yarn/api/ApplicationClientProtocolPB 
org.apache.hadoop.yarn.security.client.ClientRMSecurityInfo.getTokenInfo():65 
org.apache.hadoop.security.SecurityUtil.getTokenInfo():331 
org.apache.hadoop.security.SaslRpcClient.getServerToken():263 
org.apache.hadoop.security.SaslRpcClient.createSaslClient():219 
org.apache.hadoop.security.SaslRpcClient.selectSaslClient():159 
org.apache.hadoop.security.SaslRpcClient.saslConnect():396 
org.apache.hadoop.ipc.Client$Connection.setupSaslConnection():555 
org.apache.hadoop.ipc.Client$Connection.access$1800():370 
org.apache.hadoop.ipc.Client$Connection$2.run():724
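The NoClassDefFoundError above suggests the YARN API jar is missing from, or shadowed on, the Drillbit classpath. As a quick check (a sketch, not a definitive diagnosis; class and variable names are illustrative), any JVM started with the same classpath can report where it would resolve the class:

```java
public class FindClassOrigin {
    public static void main(String[] args) {
        // The class named in the NoClassDefFoundError, as a resource path.
        String resource = "org/apache/hadoop/yarn/api/ApplicationClientProtocolPB.class";
        java.net.URL url = FindClassOrigin.class.getClassLoader().getResource(resource);
        // null: the class is missing from the classpath entirely;
        // otherwise the URL shows which jar it would be loaded from.
        System.out.println(resource + " -> " + url);
    }
}
```

If the URL is null, the hadoop-yarn-api jar is not on the classpath at all; if it points at an unexpected jar, the bundled and installed YARN versions are mismatched.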

Thank you,

Vlad

On 8/24/17 08:39, Zubair, Muhammad wrote:
Padma,
I've already modified the configuration as specified, but the error is still 
there.

Running hdfs dfs -ls /folder does return a list of files.

I enabled verbose error logging; here is the full error message:

org.apache.drill.common.exceptions.UserRemoteException: RESOURCE ERROR: Failed to create schema tree.
[Error Id: 28c0c9a2-460d-460e-b93b-1d34e341cc65 on server:31010]
(java.io.IOException) Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "server/10.61.60.113"; destination host is: "hdfs-server":8020;
org.apache.hadoop.net.NetUtils.wrapException():776
org.apache.hadoop.ipc.Client.call():1480
org.apache.hadoop.ipc.Client.call():1407
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke():229
com.sun.proxy.$Proxy63.getListing():-1
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getListing():573
sun.reflect.GeneratedMethodAccessor3.invoke():-1
sun.reflect.DelegatingMethodAccessorImpl.invoke():43
java.lang.reflect.Method.invoke():497
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod():187
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke():102
com.sun.proxy.$Proxy64.getListing():-1
org.apache.hadoop.hdfs.DFSClient.listPaths():2094
org.apache.hadoop.hdfs.DFSClient.listPaths():2077
org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal():791
org.apache.hadoop.hdfs.DistributedFileSystem.access$700():106
org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall():853
org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall():849
org.apache.hadoop.fs.FileSystemLinkResolver.resolve():81
org.apache.hadoop.hdfs.DistributedFileSystem.listStatus():860
org.apache.drill.exec.store.dfs.DrillFileSystem.listStatus():522
org.apache.drill.exec.store.dfs.WorkspaceSchemaFactory.accessible():160
org.apache.drill.exec.store.dfs.FileSystemSchemaFactory$FileSystemSchema.<init>():77
org.apache.drill.exec.store.dfs.FileSystemSchemaFactory.registerSchemas():64
org.apache.drill.exec.store.dfs.FileSystemPlugin.registerSchemas():149
org.apache.drill.exec.store.StoragePluginRegistryImpl$DrillSchemaFactory.registerSchemas():396
org.apache.drill.exec.store.SchemaTreeProvider.createRootSchema():110
org.apache.drill.exec.store.SchemaTreeProvider.createRootSchema():99
org.apache.drill.exec.ops.QueryContext.getRootSchema():164
org.apache.drill.exec.ops.QueryContext.getRootSchema():153
org.apache.drill.exec.ops.QueryContext.getNewDefaultSchema():139
org.apache.drill.exec.planner.sql.SqlConverter.<init>():111
org.apache.drill.exec.planner.sql.DrillSqlWorker.getQueryPlan():101
org.apache.drill.exec.planner.sql.DrillSqlWorker.getPlan():79
org.apache.drill.exec.work.foreman.Foreman.runSQL():1050
org.apache.drill.exec.work.foreman.Foreman.run():280
java.util.concurrent.ThreadPoolExecutor.runWorker():1142
java.util.concurrent.ThreadPoolExecutor$Worker.run():617
java.lang.Thread.run():745
Caused By (java.io.IOException) Couldn't set up IO streams
org.apache.hadoop.ipc.Client$Connection.setupIOstreams():788
org.apache.hadoop.ipc.Client$Connection.access$2800():370
org.apache.hadoop.ipc.Client.getConnection():1529
org.apache.hadoop.ipc.Client.call():1446
org.apache.hadoop.ipc.Client.call():1407
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke():229
com.sun.proxy.$Proxy63.getListing():-1
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getListing():573
sun.reflect.GeneratedMethodAccessor3.invoke():-1
sun.reflect.DelegatingMethodAccessorImpl.invoke():43
java.lang.reflect.Method.invoke():497
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod():187
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke():102
com.sun.proxy.$Proxy64.getListing():-1
org.apache.hadoop.hdfs.DFSClient.listPaths():2094
org.apache.hadoop.hdfs.DFSClient.listPaths():2077
org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal():791
org.apache.hadoop.hdfs.DistributedFileSystem.access$700():106
org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall():853
org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall():849
org.apache.hadoop.fs.FileSystemLinkResolver.resolve():81
org.apache.hadoop.hdfs.DistributedFileSystem.listStatus():860
org.apache.drill.exec.store.dfs.DrillFileSystem.listStatus():522
org.apache.drill.exec.store.dfs.WorkspaceSchemaFactory.accessible():160
org.apache.drill.exec.store.dfs.FileSystemSchemaFactory$FileSystemSchema.<init>():77
org.apache.drill.exec.store.dfs.FileSystemSchemaFactory.registerSchemas():64
org.apache.drill.exec.store.dfs.FileSystemPlugin.registerSchemas():149
org.apache.drill.exec.store.StoragePluginRegistryImpl$DrillSchemaFactory.registerSchemas():396
org.apache.drill.exec.store.SchemaTreeProvider.createRootSchema():110
org.apache.drill.exec.store.SchemaTreeProvider.createRootSchema():99
org.apache.drill.exec.ops.QueryContext.getRootSchema():164
org.apache.drill.exec.ops.QueryContext.getRootSchema():153
org.apache.drill.exec.ops.QueryContext.getNewDefaultSchema():139
org.apache.drill.exec.planner.sql.SqlConverter.<init>():111
org.apache.drill.exec.planner.sql.DrillSqlWorker.getQueryPlan():101
org.apache.drill.exec.planner.sql.DrillSqlWorker.getPlan():79
org.apache.drill.exec.work.foreman.Foreman.runSQL():1050
org.apache.drill.exec.work.foreman.Foreman.run():280
java.util.concurrent.ThreadPoolExecutor.runWorker():1142
java.util.concurrent.ThreadPoolExecutor$Worker.run():617
java.lang.Thread.run():745
Caused By (java.lang.NoClassDefFoundError) org/apache/hadoop/yarn/api/ApplicationClientProtocolPB
org.apache.hadoop.yarn.security.client.ClientRMSecurityInfo.getTokenInfo():65
org.apache.hadoop.security.SecurityUtil.getTokenInfo():331
org.apache.hadoop.security.SaslRpcClient.getServerToken():263
org.apache.hadoop.security.SaslRpcClient.createSaslClient():219
org.apache.hadoop.security.SaslRpcClient.selectSaslClient():159
org.apache.hadoop.security.SaslRpcClient.saslConnect():396
org.apache.hadoop.ipc.Client$Connection.setupSaslConnection():555
org.apache.hadoop.ipc.Client$Connection.access$1800():370
org.apache.hadoop.ipc.Client$Connection$2.run():724
org.apache.hadoop.ipc.Client$Connection$2.run():720
java.security.AccessController.doPrivileged():-2
javax.security.auth.Subject.doAs():422
org.apache.hadoop.security.UserGroupInformation.doAs():1657
org.apache.hadoop.ipc.Client$Connection.setupIOstreams():720
org.apache.hadoop.ipc.Client$Connection.access$2800():370
org.apache.hadoop.ipc.Client.getConnection():1529
org.apache.hadoop.ipc.Client.call():1446
org.apache.hadoop.ipc.Client.call():1407
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke():229
com.sun.proxy.$Proxy63.getListing():-1
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getListing():573
sun.reflect.GeneratedMethodAccessor3.invoke():-1
sun.reflect.DelegatingMethodAccessorImpl.invoke():43
java.lang.reflect.Method.invoke():497
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod():187
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke():102
com.sun.proxy.$Proxy64.getListing():-1
org.apache.hadoop.hdfs.DFSClient.listPaths():2094
org.apache.hadoop.hdfs.DFSClient.listPaths():2077
org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal():791
org.apache.hadoop.hdfs.DistributedFileSystem.access$700():106
org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall():853
org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall():849
org.apache.hadoop.fs.FileSystemLinkResolver.resolve():81
org.apache.hadoop.hdfs.DistributedFileSystem.listStatus():860
org.apache.drill.exec.store.dfs.DrillFileSystem.listStatus():522
org.apache.drill.exec.store.dfs.WorkspaceSchemaFactory.accessible():160
org.apache.drill.exec.store.dfs.FileSystemSchemaFactory$FileSystemSchema.<init>():77
org.apache.drill.exec.store.dfs.FileSystemSchemaFactory.registerSchemas():64
org.apache.drill.exec.store.dfs.FileSystemPlugin.registerSchemas():149
org.apache.drill.exec.store.StoragePluginRegistryImpl$DrillSchemaFactory.registerSchemas():396
org.apache.drill.exec.store.SchemaTreeProvider.createRootSchema():110
org.apache.drill.exec.store.SchemaTreeProvider.createRootSchema():99
org.apache.drill.exec.ops.QueryContext.getRootSchema():164
org.apache.drill.exec.ops.QueryContext.getRootSchema():153
org.apache.drill.exec.ops.QueryContext.getNewDefaultSchema():139
org.apache.drill.exec.planner.sql.SqlConverter.<init>():111
org.apache.drill.exec.planner.sql.DrillSqlWorker.getQueryPlan():101
org.apache.drill.exec.planner.sql.DrillSqlWorker.getPlan():79
org.apache.drill.exec.work.foreman.Foreman.runSQL():1050
org.apache.drill.exec.work.foreman.Foreman.run():280
java.util.concurrent.ThreadPoolExecutor.runWorker():1142
java.util.concurrent.ThreadPoolExecutor$Worker.run():617
java.lang.Thread.run():745



-----Original Message-----
From: Padma Penumarthy [mailto:[email protected]]
Sent: August 23, 17 9:09 PM
To: [email protected]
Subject: Re: Apache Drill unable to read files from HDFS (Resource error: Failed to create schema tree)

For HDFS, your storage plugin configuration should be something like this, where <IP Address> and <Port> point at the name node metadata service:

{
    "type": "file",
    "enabled": true,
    "connection": "hdfs://<IP Address>:<Port>",
    "config": null,
    "workspaces": {
      "root": {
        "location": "/",
        "writable": true,
        "defaultInputFormat": null
      },
      "tmp": {
        "location": "/tmp",
        "writable": true,
        "defaultInputFormat": null
      }
    }
}

Also, try the hadoop dfs -ls command to see if you can list the files.

Thanks,
Padma


On Aug 23, 2017, at 12:18 PM, Lee, David <[email protected]> wrote:

The HDFS storage plugin connection should be set to your HDFS name node URL.
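For example, using the name node host and port that appear in the error message earlier in this thread (an assumption; substitute your own name node address), the plugin's connection entry would be:

```
"connection": "hdfs://hdfs-server:8020"
```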

-----Original Message-----
From: Zubair, Muhammad [mailto:[email protected]]
Sent: Wednesday, August 23, 2017 11:33 AM
To: [email protected]
Subject: Apache Drill unable to read files from HDFS (Resource error: Failed to create schema tree)

Hello,
After setting up Drill on one of the edge nodes of our HDFS cluster, I am unable to read any HDFS files. I can query data from local files (as long as they are in a folder that has 777 permissions), but querying data from HDFS fails with the following error:

Error: RESOURCE ERROR: Failed to create schema tree.
[Error Id: d9f7908c-6c3b-49c0-a11e-71c004d27f46 on server-name:31010]
(state=,code=0)

Query:
0: jdbc:drill:zk=local> select * from hdfs.`/names/city.parquet` limit 2;

Querying from a local file works fine:
0: jdbc:drill:zk=local> select * from dfs.`/tmp/city.parquet` limit 2;

My HDFS settings are similar to the DFS settings, except that the connection URL is the server address instead of file:///. I can't find anything online regarding this error for Drill.
_______________________________________________________________________
If you received this email in error, please advise the sender (by return email or otherwise) immediately. You have consented to receive the attached electronically at the above-noted email address; please retain a copy of this confirmation for future reference.

Si vous recevez ce courriel par erreur, veuillez en aviser l'expéditeur 
immédiatement, par retour de courriel ou par un autre moyen. Vous avez accepté 
de recevoir le(s) document(s) ci-joint(s) par voie électronique à l'adresse 
courriel indiquée ci-dessus; veuillez conserver une copie de cette confirmation 
pour les fins de reference future.


This message may contain information that is confidential or privileged. If you 
are not the intended recipient, please advise the sender immediately and delete 
this message. See 
http://www.blackrock.com/corporate/en-us/compliance/email-disclaimers for 
further information.  Please refer to 
http://www.blackrock.com/corporate/en-us/compliance/privacy-policy for more 
information about BlackRock’s Privacy Policy.

For a list of BlackRock's office addresses worldwide, see 
http://www.blackrock.com/corporate/en-us/about-us/contacts-locations.

© 2017 BlackRock, Inc. All rights reserved.
