Padma,
I've already modified the configuration as specified, but the error is still 
there.

Running hdfs dfs -ls /folder does return a list of files.

I enabled verbose error logging; the full error message is below.
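For reference, verbose errors in Drill are toggled with the exec.errors.verbose session option:

ALTER SESSION SET `exec.errors.verbose` = true;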

org.apache.drill.common.exceptions.UserRemoteException: RESOURCE ERROR: Failed to create schema tree.
[Error Id: 28c0c9a2-460d-460e-b93b-1d34e341cc65 on server:31010]
(java.io.IOException) Failed on local exception: java.io.IOException: Couldn't set up IO streams; Host Details : local host is: "server/10.61.60.113"; destination host is: "hdfs-server":8020;
org.apache.hadoop.net.NetUtils.wrapException():776
org.apache.hadoop.ipc.Client.call():1480 
org.apache.hadoop.ipc.Client.call():1407 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke():229 
com.sun.proxy.$Proxy63.getListing():-1 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getListing():573
sun.reflect.GeneratedMethodAccessor3.invoke():-1
sun.reflect.DelegatingMethodAccessorImpl.invoke():43 
java.lang.reflect.Method.invoke():497 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod():187 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke():102 
com.sun.proxy.$Proxy64.getListing():-1 
org.apache.hadoop.hdfs.DFSClient.listPaths():2094 
org.apache.hadoop.hdfs.DFSClient.listPaths():2077 
org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal():791 
org.apache.hadoop.hdfs.DistributedFileSystem.access$700():106 
org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall():853 
org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall():849 
org.apache.hadoop.fs.FileSystemLinkResolver.resolve():81 
org.apache.hadoop.hdfs.DistributedFileSystem.listStatus():860 
org.apache.drill.exec.store.dfs.DrillFileSystem.listStatus():522 
org.apache.drill.exec.store.dfs.WorkspaceSchemaFactory.accessible():160 
org.apache.drill.exec.store.dfs.FileSystemSchemaFactory$FileSystemSchema.<init>():77
org.apache.drill.exec.store.dfs.FileSystemSchemaFactory.registerSchemas():64
org.apache.drill.exec.store.dfs.FileSystemPlugin.registerSchemas():149
org.apache.drill.exec.store.StoragePluginRegistryImpl$DrillSchemaFactory.registerSchemas():396
org.apache.drill.exec.store.SchemaTreeProvider.createRootSchema():110
org.apache.drill.exec.store.SchemaTreeProvider.createRootSchema():99
org.apache.drill.exec.ops.QueryContext.getRootSchema():164
org.apache.drill.exec.ops.QueryContext.getRootSchema():153
org.apache.drill.exec.ops.QueryContext.getNewDefaultSchema():139
org.apache.drill.exec.planner.sql.SqlConverter.<init>():111
org.apache.drill.exec.planner.sql.DrillSqlWorker.getQueryPlan():101 
org.apache.drill.exec.planner.sql.DrillSqlWorker.getPlan():79 
org.apache.drill.exec.work.foreman.Foreman.runSQL():1050 
org.apache.drill.exec.work.foreman.Foreman.run():280 
java.util.concurrent.ThreadPoolExecutor.runWorker():1142 
java.util.concurrent.ThreadPoolExecutor$Worker.run():617 
java.lang.Thread.run():745
Caused By (java.io.IOException) Couldn't set up IO streams
org.apache.hadoop.ipc.Client$Connection.setupIOstreams():788
org.apache.hadoop.ipc.Client$Connection.access$2800():370 
org.apache.hadoop.ipc.Client.getConnection():1529 
org.apache.hadoop.ipc.Client.call():1446 
org.apache.hadoop.ipc.Client.call():1407 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke():229 
com.sun.proxy.$Proxy63.getListing():-1 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getListing():573
sun.reflect.GeneratedMethodAccessor3.invoke():-1
sun.reflect.DelegatingMethodAccessorImpl.invoke():43 
java.lang.reflect.Method.invoke():497 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod():187 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke():102 
com.sun.proxy.$Proxy64.getListing():-1 
org.apache.hadoop.hdfs.DFSClient.listPaths():2094 
org.apache.hadoop.hdfs.DFSClient.listPaths():2077 
org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal():791 
org.apache.hadoop.hdfs.DistributedFileSystem.access$700():106 
org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall():853 
org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall():849 
org.apache.hadoop.fs.FileSystemLinkResolver.resolve():81 
org.apache.hadoop.hdfs.DistributedFileSystem.listStatus():860 
org.apache.drill.exec.store.dfs.DrillFileSystem.listStatus():522 
org.apache.drill.exec.store.dfs.WorkspaceSchemaFactory.accessible():160 
org.apache.drill.exec.store.dfs.FileSystemSchemaFactory$FileSystemSchema.<init>():77
org.apache.drill.exec.store.dfs.FileSystemSchemaFactory.registerSchemas():64
org.apache.drill.exec.store.dfs.FileSystemPlugin.registerSchemas():149
org.apache.drill.exec.store.StoragePluginRegistryImpl$DrillSchemaFactory.registerSchemas():396
org.apache.drill.exec.store.SchemaTreeProvider.createRootSchema():110
org.apache.drill.exec.store.SchemaTreeProvider.createRootSchema():99
org.apache.drill.exec.ops.QueryContext.getRootSchema():164
org.apache.drill.exec.ops.QueryContext.getRootSchema():153
org.apache.drill.exec.ops.QueryContext.getNewDefaultSchema():139
org.apache.drill.exec.planner.sql.SqlConverter.<init>():111
org.apache.drill.exec.planner.sql.DrillSqlWorker.getQueryPlan():101 
org.apache.drill.exec.planner.sql.DrillSqlWorker.getPlan():79 
org.apache.drill.exec.work.foreman.Foreman.runSQL():1050 
org.apache.drill.exec.work.foreman.Foreman.run():280 
java.util.concurrent.ThreadPoolExecutor.runWorker():1142 
java.util.concurrent.ThreadPoolExecutor$Worker.run():617 
java.lang.Thread.run():745
Caused By (java.lang.NoClassDefFoundError) org/apache/hadoop/yarn/api/ApplicationClientProtocolPB
org.apache.hadoop.yarn.security.client.ClientRMSecurityInfo.getTokenInfo():65 
org.apache.hadoop.security.SecurityUtil.getTokenInfo():331 
org.apache.hadoop.security.SaslRpcClient.getServerToken():263 
org.apache.hadoop.security.SaslRpcClient.createSaslClient():219 
org.apache.hadoop.security.SaslRpcClient.selectSaslClient():159 
org.apache.hadoop.security.SaslRpcClient.saslConnect():396 
org.apache.hadoop.ipc.Client$Connection.setupSaslConnection():555 
org.apache.hadoop.ipc.Client$Connection.access$1800():370 
org.apache.hadoop.ipc.Client$Connection$2.run():724 
org.apache.hadoop.ipc.Client$Connection$2.run():720 
java.security.AccessController.doPrivileged():-2 
javax.security.auth.Subject.doAs():422 
org.apache.hadoop.security.UserGroupInformation.doAs():1657 
org.apache.hadoop.ipc.Client$Connection.setupIOstreams():720 
org.apache.hadoop.ipc.Client$Connection.access$2800():370 
org.apache.hadoop.ipc.Client.getConnection():1529 
org.apache.hadoop.ipc.Client.call():1446 
org.apache.hadoop.ipc.Client.call():1407 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke():229 
com.sun.proxy.$Proxy63.getListing():-1 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getListing():573
sun.reflect.GeneratedMethodAccessor3.invoke():-1
sun.reflect.DelegatingMethodAccessorImpl.invoke():43 
java.lang.reflect.Method.invoke():497 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod():187 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke():102 
com.sun.proxy.$Proxy64.getListing():-1 
org.apache.hadoop.hdfs.DFSClient.listPaths():2094 
org.apache.hadoop.hdfs.DFSClient.listPaths():2077 
org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal():791 
org.apache.hadoop.hdfs.DistributedFileSystem.access$700():106 
org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall():853 
org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall():849 
org.apache.hadoop.fs.FileSystemLinkResolver.resolve():81 
org.apache.hadoop.hdfs.DistributedFileSystem.listStatus():860 
org.apache.drill.exec.store.dfs.DrillFileSystem.listStatus():522 
org.apache.drill.exec.store.dfs.WorkspaceSchemaFactory.accessible():160 
org.apache.drill.exec.store.dfs.FileSystemSchemaFactory$FileSystemSchema.<init>():77
org.apache.drill.exec.store.dfs.FileSystemSchemaFactory.registerSchemas():64
org.apache.drill.exec.store.dfs.FileSystemPlugin.registerSchemas():149
org.apache.drill.exec.store.StoragePluginRegistryImpl$DrillSchemaFactory.registerSchemas():396
org.apache.drill.exec.store.SchemaTreeProvider.createRootSchema():110
org.apache.drill.exec.store.SchemaTreeProvider.createRootSchema():99
org.apache.drill.exec.ops.QueryContext.getRootSchema():164
org.apache.drill.exec.ops.QueryContext.getRootSchema():153
org.apache.drill.exec.ops.QueryContext.getNewDefaultSchema():139
org.apache.drill.exec.planner.sql.SqlConverter.<init>():111
org.apache.drill.exec.planner.sql.DrillSqlWorker.getQueryPlan():101 
org.apache.drill.exec.planner.sql.DrillSqlWorker.getPlan():79 
org.apache.drill.exec.work.foreman.Foreman.runSQL():1050 
org.apache.drill.exec.work.foreman.Foreman.run():280 
java.util.concurrent.ThreadPoolExecutor.runWorker():1142 
java.util.concurrent.ThreadPoolExecutor$Worker.run():617 
java.lang.Thread.run():745



-----Original Message-----
From: Padma Penumarthy [mailto:[email protected]] 
Sent: August 23, 2017 9:09 PM
To: [email protected]
Subject: Re: Apache Drill unable to read files from HDFS (Resource error: 
Failed to create schema tree)

For HDFS, your storage plugin configuration should be something like this, where the connection is the IP address and port number of the name node metadata service:

{
  "type": "file",
  "enabled": true,
  "connection": "hdfs://<IP Address>:<Port>",
  "config": null,
  "workspaces": {
    "root": {
      "location": "/",
      "writable": true,
      "defaultInputFormat": null
    },
    "tmp": {
      "location": "/tmp",
      "writable": true,
      "defaultInputFormat": null
    }
  }
}
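Once you save and enable the plugin (assuming it is registered under the hdfs name you are using), a quick test would be the same query from your mail:

select * from hdfs.`/names/city.parquet` limit 2;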

Also, try the hadoop dfs -ls command to see if you can list the files.
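For example, pointing at the name node directly (substitute your own host and port):

hadoop dfs -ls hdfs://<IP Address>:<Port>/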

Thanks,
Padma


On Aug 23, 2017, at 12:18 PM, Lee, David <[email protected]> wrote:

The HDFS storage plugin connection should be set to your HDFS name node URL.
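For example, in the storage plugin config (placeholder host and port):

"connection": "hdfs://<namenode-host>:8020"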

-----Original Message-----
From: Zubair, Muhammad [mailto:[email protected]]
Sent: Wednesday, August 23, 2017 11:33 AM
To: [email protected]
Subject: Apache Drill unable to read files from HDFS (Resource error: Failed to 
create schema tree)

Hello,
After setting up Drill on one of the edge nodes of our HDFS cluster, I am unable to read any HDFS files. I can query data from local files (as long as they are in a folder that has 777 permissions), but querying data from HDFS fails with the following error:

Error: RESOURCE ERROR: Failed to create schema tree.
[Error Id: d9f7908c-6c3b-49c0-a11e-71c004d27f46 on server-name:31010] (state=,code=0)

Query:

0: jdbc:drill:zk=local> select * from hdfs.`/names/city.parquet` limit 2;

Querying from a local file works fine:

0: jdbc:drill:zk=local> select * from dfs.`/tmp/city.parquet` limit 2;

My HDFS settings are similar to the DFS settings, except for the connection URL being the server address instead of file:///. I can't find anything online regarding this error for Drill.