[ https://issues.apache.org/jira/browse/DRILL-5733?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Arina Ielchiieva reassigned DRILL-5733:
---------------------------------------

    Assignee:     (was: Arina Ielchiieva)

> Unable to SELECT from parquet file with Hadoop 2.7.4
> ----------------------------------------------------
>
>                 Key: DRILL-5733
>                 URL: https://issues.apache.org/jira/browse/DRILL-5733
>             Project: Apache Drill
>          Issue Type: Bug
>    Affects Versions: 1.11.0
>            Reporter: Michele Lamarca
>
> {{SELECT * FROM hdfs.`/user/drill/nation.parquet`;}} fails with Hadoop 2.7.4 with:
> {noformat}
> 1/2          SELECT * FROM hdfs.`/user/drill/nation.parquet`;
> Error: SYSTEM ERROR: RemoteException: /user/drill/nation.parquet (is not a directory)
>         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:272)
>         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:215)
>         at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:199)
>         at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1752)
>         at org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getFileInfo(FSDirStatAndListingOp.java:100)
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:3820)
>         at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:1012)
>         at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:855)
>         at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>         at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2217)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2213)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Subject.java:422)
>         at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1746)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2213)
> {noformat}
> The query executes correctly with Hadoop 2.7.3, but fails with:
> - Hadoop 2.7.4 with Drill 1.11 (default pom.xml)
> - Hadoop 2.7.4 with Drill 1.11 (built with -Dhadoop.version=2.7.4)
> - Hadoop 2.8.0 with Drill 1.11 (default pom.xml)
> - Hadoop 3.0.0-alpha4 with Drill 1.11 (default pom.xml)
> so the problem looks related to https://issues.apache.org/jira/browse/HDFS-10673.
> A temporary workaround is to query an enclosing directory instead, as suggested by [~kkhatua] on the drill-user mailing list.
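> For example (paths as in this report; this assumes the enclosing directory holds only the parquet data you want to read, since Drill will scan everything under it):
> {noformat}
> -- fails against Hadoop 2.7.4:
> SELECT * FROM hdfs.`/user/drill/nation.parquet`;
> -- workaround: query the enclosing directory instead
> SELECT * FROM hdfs.`/user/drill/`;
> {noformat}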
> Relevant stack trace from the drillbit log:
> {noformat}
> 2017-08-19 09:00:45,570 [26681de9-2b48-2c3a-cc7c-2c7ceeb1beae:foreman] INFO  o.a.drill.exec.work.foreman.Foreman - Query text for query id 26681de9-2b48-2c3a-cc7c-2c7ceeb1beae: SELECT * FROM hdfs.`/user/drill/nation.parquet`
> 2017-08-19 09:00:45,571 [UserServer-1] WARN  o.a.drill.exec.rpc.user.UserServer - Message of mode REQUEST of rpc type 3 took longer than 500ms.  Actual duration was 7137ms.
> 2017-08-19 09:00:45,617 [26681de9-2b48-2c3a-cc7c-2c7ceeb1beae:foreman] INFO  o.a.d.c.s.persistence.ScanResult - loading 7 classes for org.apache.drill.exec.store.dfs.FormatPlugin took 0ms
> 2017-08-19 09:00:45,618 [26681de9-2b48-2c3a-cc7c-2c7ceeb1beae:foreman] INFO  o.a.d.c.s.persistence.ScanResult - loading 8 classes for org.apache.drill.common.logical.FormatPluginConfig took 0ms
> 2017-08-19 09:00:45,619 [26681de9-2b48-2c3a-cc7c-2c7ceeb1beae:foreman] INFO  o.a.d.c.s.persistence.ScanResult - loading 8 classes for org.apache.drill.common.logical.FormatPluginConfig took 0ms
> 2017-08-19 09:00:45,619 [26681de9-2b48-2c3a-cc7c-2c7ceeb1beae:foreman] INFO  o.a.d.c.s.persistence.ScanResult - loading 8 classes for org.apache.drill.common.logical.FormatPluginConfig took 0ms
> 2017-08-19 09:00:45,648 [26681de9-2b48-2c3a-cc7c-2c7ceeb1beae:foreman] INFO  o.a.d.c.s.persistence.ScanResult - loading 7 classes for org.apache.drill.exec.store.dfs.FormatPlugin took 0ms
> 2017-08-19 09:00:45,649 [26681de9-2b48-2c3a-cc7c-2c7ceeb1beae:foreman] INFO  o.a.d.c.s.persistence.ScanResult - loading 8 classes for org.apache.drill.common.logical.FormatPluginConfig took 0ms
> 2017-08-19 09:00:45,649 [26681de9-2b48-2c3a-cc7c-2c7ceeb1beae:foreman] INFO  o.a.d.c.s.persistence.ScanResult - loading 8 classes for org.apache.drill.common.logical.FormatPluginConfig took 0ms
> 2017-08-19 09:00:45,650 [26681de9-2b48-2c3a-cc7c-2c7ceeb1beae:foreman] INFO  o.a.d.c.s.persistence.ScanResult - loading 8 classes for org.apache.drill.common.logical.FormatPluginConfig took 0ms
> 2017-08-19 09:00:45,726 [26681de9-2b48-2c3a-cc7c-2c7ceeb1beae:foreman] INFO  o.a.d.exec.store.dfs.FileSelection - FileSelection.getStatuses() took 0 ms, numFiles: 1
> 2017-08-19 09:00:45,726 [26681de9-2b48-2c3a-cc7c-2c7ceeb1beae:foreman] INFO  o.a.d.exec.store.dfs.FileSelection - FileSelection.getStatuses() took 0 ms, numFiles: 1
> 2017-08-19 09:00:45,726 [26681de9-2b48-2c3a-cc7c-2c7ceeb1beae:foreman] INFO  o.a.d.exec.store.dfs.FileSelection - FileSelection.getStatuses() took 0 ms, numFiles: 1
> 2017-08-19 09:00:45,726 [26681de9-2b48-2c3a-cc7c-2c7ceeb1beae:foreman] INFO  o.a.d.exec.store.dfs.FileSelection - FileSelection.getStatuses() took 0 ms, numFiles: 1
> 2017-08-19 09:00:45,726 [26681de9-2b48-2c3a-cc7c-2c7ceeb1beae:foreman] INFO  o.a.d.exec.store.dfs.FileSelection - FileSelection.getStatuses() took 0 ms, numFiles: 1
> 2017-08-19 09:00:45,726 [26681de9-2b48-2c3a-cc7c-2c7ceeb1beae:foreman] INFO  o.a.d.exec.store.dfs.FileSelection - FileSelection.getStatuses() took 0 ms, numFiles: 1
> 2017-08-19 09:00:45,726 [26681de9-2b48-2c3a-cc7c-2c7ceeb1beae:foreman] INFO  o.a.d.exec.store.dfs.FileSelection - FileSelection.getStatuses() took 0 ms, numFiles: 1
> 2017-08-19 09:00:45,726 [26681de9-2b48-2c3a-cc7c-2c7ceeb1beae:foreman] INFO  o.a.d.exec.store.dfs.FileSelection - FileSelection.getStatuses() took 0 ms, numFiles: 1
> 2017-08-19 09:00:45,726 [26681de9-2b48-2c3a-cc7c-2c7ceeb1beae:foreman] INFO  o.a.d.exec.store.dfs.FileSelection - FileSelection.getStatuses() took 0 ms, numFiles: 1
> 2017-08-19 09:00:45,775 [26681de9-2b48-2c3a-cc7c-2c7ceeb1beae:foreman] ERROR o.a.drill.exec.work.foreman.Foreman - SYSTEM ERROR: RemoteException: /user/drill/nation.parquet (is not a directory)
>     at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:272)
>     at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:215)
>     at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:199)
>     at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1752)
>     at org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getFileInfo(FSDirStatAndListingOp.java:100)
>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:3820)
>     at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:1012)
>     at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:855)
>     at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>     at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
>     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2217)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2213)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:422)
>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1746)
>     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2213)
> [Error Id: 8f351c63-d3f7-4b61-a5e6-1a09c6c2ba8d on node001.cm.cluster:31010]
> org.apache.drill.common.exceptions.UserException: SYSTEM ERROR: RemoteException: /user/drill/nation.parquet (is not a directory)
>     at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:272)
>     at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:215)
>     at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:199)
>     at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1752)
>     at org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getFileInfo(FSDirStatAndListingOp.java:100)
>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:3820)
>     at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:1012)
>     at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:855)
>     at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>     at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
>     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2217)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2213)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:422)
>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1746)
>     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2213)
> [Error Id: 8f351c63-d3f7-4b61-a5e6-1a09c6c2ba8d on node001.cm.cluster:31010]
>     at org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:550) ~[drill-common-1.11.0.jar:1.11.0]
>     at org.apache.drill.exec.work.foreman.Foreman$ForemanResult.close(Foreman.java:847) [drill-java-exec-1.11.0.jar:1.11.0]
>     at org.apache.drill.exec.work.foreman.Foreman.moveToState(Foreman.java:977) [drill-java-exec-1.11.0.jar:1.11.0]
>     at org.apache.drill.exec.work.foreman.Foreman.run(Foreman.java:297) [drill-java-exec-1.11.0.jar:1.11.0]
>     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [na:1.8.0_141]
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [na:1.8.0_141]
>     at java.lang.Thread.run(Thread.java:748) [na:1.8.0_141]
> Caused by: org.apache.drill.exec.work.foreman.ForemanException: Unexpected exception during fragment initialization: Internal error: Error while applying rule DrillTableRule, args [rel#152:EnumerableTableScan.ENUMERABLE.ANY([]).[](table=[hdfs, /user/drill/nation.parquet])]
>     ... 4 common frames omitted
> Caused by: java.lang.AssertionError: Internal error: Error while applying rule DrillTableRule, args [rel#152:EnumerableTableScan.ENUMERABLE.ANY([]).[](table=[hdfs, /user/drill/nation.parquet])]
>     at org.apache.calcite.util.Util.newInternal(Util.java:792) ~[calcite-core-1.4.0-drill-r21.jar:1.4.0-drill-r21]
>     at org.apache.calcite.plan.volcano.VolcanoRuleCall.onMatch(VolcanoRuleCall.java:251) ~[calcite-core-1.4.0-drill-r21.jar:1.4.0-drill-r21]
>     at org.apache.calcite.plan.volcano.VolcanoPlanner.findBestExp(VolcanoPlanner.java:811) ~[calcite-core-1.4.0-drill-r21.jar:1.4.0-drill-r21]
>     at org.apache.calcite.tools.Programs$RuleSetProgram.run(Programs.java:310) ~[calcite-core-1.4.0-drill-r21.jar:1.4.0-drill-r21]
>     at org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.transform(DefaultSqlHandler.java:401) ~[drill-java-exec-1.11.0.jar:1.11.0]
>     at org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.transform(DefaultSqlHandler.java:343) ~[drill-java-exec-1.11.0.jar:1.11.0]
>     at org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.convertToRawDrel(DefaultSqlHandler.java:242) ~[drill-java-exec-1.11.0.jar:1.11.0]
>     at org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.convertToDrel(DefaultSqlHandler.java:292) ~[drill-java-exec-1.11.0.jar:1.11.0]
>     at org.apache.drill.exec.planner.sql.handlers.DefaultSqlHandler.getPlan(DefaultSqlHandler.java:169) ~[drill-java-exec-1.11.0.jar:1.11.0]
>     at org.apache.drill.exec.planner.sql.DrillSqlWorker.getQueryPlan(DrillSqlWorker.java:131) ~[drill-java-exec-1.11.0.jar:1.11.0]
>     at org.apache.drill.exec.planner.sql.DrillSqlWorker.getPlan(DrillSqlWorker.java:79) ~[drill-java-exec-1.11.0.jar:1.11.0]
>     at org.apache.drill.exec.work.foreman.Foreman.runSQL(Foreman.java:1050) [drill-java-exec-1.11.0.jar:1.11.0]
>     at org.apache.drill.exec.work.foreman.Foreman.run(Foreman.java:280) [drill-java-exec-1.11.0.jar:1.11.0]
>     ... 3 common frames omitted
> Caused by: org.apache.drill.common.exceptions.DrillRuntimeException: Failure creating scan.
>     at org.apache.drill.exec.planner.logical.DrillScanRel.<init>(DrillScanRel.java:92) ~[drill-java-exec-1.11.0.jar:1.11.0]
>     at org.apache.drill.exec.planner.logical.DrillScanRel.<init>(DrillScanRel.java:70) ~[drill-java-exec-1.11.0.jar:1.11.0]
>     at org.apache.drill.exec.planner.logical.DrillScanRel.<init>(DrillScanRel.java:63) ~[drill-java-exec-1.11.0.jar:1.11.0]
>     at org.apache.drill.exec.planner.logical.DrillScanRule.onMatch(DrillScanRule.java:37) ~[drill-java-exec-1.11.0.jar:1.11.0]
>     at org.apache.calcite.plan.volcano.VolcanoRuleCall.onMatch(VolcanoRuleCall.java:228) ~[calcite-core-1.4.0-drill-r21.jar:1.4.0-drill-r21]
>     ... 14 common frames omitted
> Caused by: org.apache.hadoop.security.AccessControlException: /user/drill/nation.parquet (is not a directory)
>     at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:272)
>     at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:215)
>     at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:199)
>     at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1752)
>     at org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getFileInfo(FSDirStatAndListingOp.java:100)
>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:3820)
>     at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:1012)
>     at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:855)
>     at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>     at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
>     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2217)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2213)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:422)
>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1746)
>     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2213)
>     at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[na:1.8.0_141]
>     at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) ~[na:1.8.0_141]
>     at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) ~[na:1.8.0_141]
>     at java.lang.reflect.Constructor.newInstance(Constructor.java:423) ~[na:1.8.0_141]
>     at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106) ~[hadoop-common-2.7.1.jar:na]
>     at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73) ~[hadoop-common-2.7.1.jar:na]
>     at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:2118) ~[hadoop-hdfs-2.7.1.jar:na]
>     at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1305) ~[hadoop-hdfs-2.7.1.jar:na]
>     at org.apache.hadoop.hdfs.DistributedFileSystem$22.doCall(DistributedFileSystem.java:1301) ~[hadoop-hdfs-2.7.1.jar:na]
>     at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) ~[hadoop-common-2.7.1.jar:na]
>     at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1317) ~[hadoop-hdfs-2.7.1.jar:na]
>     at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1424) ~[hadoop-common-2.7.1.jar:na]
>     at org.apache.drill.exec.store.dfs.DrillFileSystem.exists(DrillFileSystem.java:603) ~[drill-java-exec-1.11.0.jar:1.11.0]
>     at org.apache.drill.exec.store.parquet.ParquetGroupScan.expandIfNecessary(ParquetGroupScan.java:270) ~[drill-java-exec-1.11.0.jar:1.11.0]
>     at org.apache.drill.exec.store.parquet.ParquetGroupScan.<init>(ParquetGroupScan.java:207) ~[drill-java-exec-1.11.0.jar:1.11.0]
>     at org.apache.drill.exec.store.parquet.ParquetGroupScan.<init>(ParquetGroupScan.java:186) ~[drill-java-exec-1.11.0.jar:1.11.0]
>     at org.apache.drill.exec.store.parquet.ParquetFormatPlugin.getGroupScan(ParquetFormatPlugin.java:170) ~[drill-java-exec-1.11.0.jar:1.11.0]
>     at org.apache.drill.exec.store.parquet.ParquetFormatPlugin.getGroupScan(ParquetFormatPlugin.java:66) ~[drill-java-exec-1.11.0.jar:1.11.0]
>     at org.apache.drill.exec.store.dfs.FileSystemPlugin.getPhysicalScan(FileSystemPlugin.java:144) ~[drill-java-exec-1.11.0.jar:1.11.0]
>     at org.apache.drill.exec.store.AbstractStoragePlugin.getPhysicalScan(AbstractStoragePlugin.java:100) ~[drill-java-exec-1.11.0.jar:1.11.0]
>     at org.apache.drill.exec.planner.logical.DrillTable.getGroupScan(DrillTable.java:85) ~[drill-java-exec-1.11.0.jar:1.11.0]
>     at org.apache.drill.exec.planner.logical.DrillScanRel.<init>(DrillScanRel.java:90) ~[drill-java-exec-1.11.0.jar:1.11.0]
>     ... 18 common frames omitted
> Caused by: org.apache.hadoop.ipc.RemoteException: /user/drill/nation.parquet (is not a directory)
>     at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkTraverse(FSPermissionChecker.java:272)
>     at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:215)
>     at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:199)
>     at org.apache.hadoop.hdfs.server.namenode.FSDirectory.checkPermission(FSDirectory.java:1752)
>     at org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp.getFileInfo(FSDirStatAndListingOp.java:100)
>     at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:3820)
>     at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:1012)
>     at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:855)
>     at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
>     at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
>     at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2217)
>     at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2213)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:422)
>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1746)
>     at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2213)
>     at org.apache.hadoop.ipc.Client.call(Client.java:1476) ~[hadoop-common-2.7.1.jar:na]
>     at org.apache.hadoop.ipc.Client.call(Client.java:1407) ~[hadoop-common-2.7.1.jar:na]
>     at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:229) ~[hadoop-common-2.7.1.jar:na]
>     at com.sun.proxy.$Proxy65.getFileInfo(Unknown Source) ~[na:na]
>     at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:771) ~[hadoop-hdfs-2.7.1.jar:na]
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_141]
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_141]
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_141]
>     at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_141]
>     at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187) ~[hadoop-common-2.7.1.jar:na]
>     at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102) ~[hadoop-common-2.7.1.jar:na]
>     at com.sun.proxy.$Proxy66.getFileInfo(Unknown Source) ~[na:na]
>     at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:2116) ~[hadoop-hdfs-2.7.1.jar:na]
>     ... 33 common frames omitted
> {noformat}
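> The trace bottoms out in ParquetGroupScan.expandIfNecessary -> DrillFileSystem.exists -> DistributedFileSystem.getFileStatus, so the failure can presumably be reproduced without Drill by issuing the same getFileInfo call from a bare HDFS client against a 2.7.4 NameNode. A minimal sketch, assuming (per HDFS-10673's tightened FSPermissionChecker.checkTraverse) that the rejected probe traverses through the parquet file as a non-final path component; the NameNode URI and the child name below are illustrative, not taken from this cluster:
> {noformat}
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.fs.Path;
>
> public class Drill5733Repro {
>   public static void main(String[] args) throws Exception {
>     Configuration conf = new Configuration();
>     // Assumed NameNode address; substitute your cluster's fs.defaultFS.
>     conf.set("fs.defaultFS", "hdfs://node001.cm.cluster:8020");
>     FileSystem fs = FileSystem.get(conf);
>
>     // exists() delegates to getFileStatus()/getFileInfo(). Probing a
>     // hypothetical child path *under* the parquet file forces the NameNode
>     // to traverse through a non-directory component: 2.7.3 reports the path
>     // as absent (exists() returns false), while 2.7.4 throws
>     // AccessControlException "/user/drill/nation.parquet (is not a
>     // directory)", matching the error surfaced in the trace above.
>     System.out.println(fs.exists(new Path("/user/drill/nation.parquet/child")));
>   }
> }
> {noformat}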



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
