Are you using wildcard values while configuring proxy users in
core-site.xml? Some Hadoop versions don't allow you to do so.
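
If impersonation is the problem, the usual fix is to add proxy-user entries for the account the Drillbit runs as (here, root) to core-site.xml on the NameNode and restart HDFS. A minimal sketch, assuming Drill runs as root — the host and group values below are illustrative (prod7/prod9 taken from your setup), and explicit values are shown in case your Hadoop version rejects the "*" wildcard:

```xml
<!-- core-site.xml on the NameNode: allow the "root" user to impersonate
     other users. Host/group values are illustrative; adjust to your cluster. -->
<property>
  <name>hadoop.proxyuser.root.hosts</name>
  <value>prod7,prod9</value>
</property>
<property>
  <name>hadoop.proxyuser.root.groups</name>
  <value>supergroup</value>
</property>
```

After changing this, the NameNode must pick up the new configuration (restart it, or refresh proxy-user settings if your Hadoop version supports that) before retrying the query.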

On Sun, May 31, 2015 at 5:25 PM, George Lu <[email protected]> wrote:

> I created a storage plugin on hdfs,
>
> curl -H "Content-type: application/json" -X POST -d '{"name":"myhdfs",
> "config": {"type" : "file","enabled" : true,"connection" :
> "hdfs://prod7:9000/","workspaces" : {"dw" : {"location" :
> "/drill/datawarehouse","writable" : true,"defaultInputFormat" :
> "parquet"}},"formats" : {"parquet" : {"type" : "parquet"}}}}'
> http://localhost:8047/storage/myhdfs.json
>
> After that, I ran "use myhdfs.dw", then "show files" and "CREATE TABLE AS";
> the CTAS got stuck and never proceeded. I checked the log:
>
> ======================================================
>
> [29365f51-4d27-4b23-9d76-6f5ec0ab7ffc on prod9:31010]
> at org.apache.drill.common.exceptions.UserException$Builder.build(UserException.java:465) ~[drill-common-0.9.0-rebuffed.jar:0.9.0]
> at org.apache.drill.exec.work.foreman.Foreman$ForemanResult.close(Foreman.java:620) [drill-java-exec-0.9.0-rebuffed.jar:0.9.0]
> at org.apache.drill.exec.work.foreman.Foreman$StateSwitch.processEvent(Foreman.java:717) [drill-java-exec-0.9.0-rebuffed.jar:0.9.0]
> at org.apache.drill.exec.work.foreman.Foreman$StateSwitch.processEvent(Foreman.java:659) [drill-java-exec-0.9.0-rebuffed.jar:0.9.0]
> at org.apache.drill.common.EventProcessor.sendEvent(EventProcessor.java:73) [drill-common-0.9.0-rebuffed.jar:0.9.0]
> at org.apache.drill.exec.work.foreman.Foreman$StateSwitch.moveToState(Foreman.java:661) [drill-java-exec-0.9.0-rebuffed.jar:0.9.0]
> at org.apache.drill.exec.work.foreman.Foreman.moveToState(Foreman.java:762) [drill-java-exec-0.9.0-rebuffed.jar:0.9.0]
> at org.apache.drill.exec.work.foreman.Foreman.run(Foreman.java:210) [drill-java-exec-0.9.0-rebuffed.jar:0.9.0]
> at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_25]
> at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_25]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_25]
> Caused by: org.apache.drill.exec.planner.sql.QueryInputException: Failure handling SQL.
> at org.apache.drill.exec.planner.sql.DrillSqlWorker.getPlan(DrillSqlWorker.java:174) ~[drill-java-exec-0.9.0-rebuffed.jar:0.9.0]
> at org.apache.drill.exec.work.foreman.Foreman.runSQL(Foreman.java:773) [drill-java-exec-0.9.0-rebuffed.jar:0.9.0]
> at org.apache.drill.exec.work.foreman.Foreman.run(Foreman.java:203) [drill-java-exec-0.9.0-rebuffed.jar:0.9.0]
> ... 3 common frames omitted
> *Caused by: org.apache.hadoop.ipc.RemoteException: User: root is not
> allowed to impersonate root*
> at org.apache.hadoop.ipc.Client.call(Client.java:1410) ~[hadoop-common-2.4.1.jar:na]
> at org.apache.hadoop.ipc.Client.call(Client.java:1363) ~[hadoop-common-2.4.1.jar:na]
> at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206) ~[hadoop-common-2.4.1.jar:na]
> at com.sun.proxy.$Proxy87.getListing(Unknown Source) ~[na:na]
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_25]
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_25]
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_25]
> at java.lang.reflect.Method.invoke(Method.java:483) ~[na:1.8.0_25]
> at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:190) ~[hadoop-common-2.4.1.jar:na]
> at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:103) ~[hadoop-common-2.4.1.jar:na]
> at com.sun.proxy.$Proxy87.getListing(Unknown Source) ~[na:na]
> at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getListing(ClientNamenodeProtocolTranslatorPB.java:515) ~[hadoop-hdfs-2.4.1.jar:na]
> at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:1743) ~[hadoop-hdfs-2.4.1.jar:na]
> at org.apache.hadoop.hdfs.DFSClient.listPaths(DFSClient.java:1726) ~[hadoop-hdfs-2.4.1.jar:na]
> at org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:650) ~[hadoop-hdfs-2.4.1.jar:na]
> at org.apache.hadoop.hdfs.DistributedFileSystem.access$600(DistributedFileSystem.java:102) ~[hadoop-hdfs-2.4.1.jar:na]
> at org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:712) ~[hadoop-hdfs-2.4.1.jar:na]
> at org.apache.hadoop.hdfs.DistributedFileSystem$14.doCall(DistributedFileSystem.java:708) ~[hadoop-hdfs-2.4.1.jar:na]
> at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81) ~[hadoop-common-2.4.1.jar:na]
> at org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:708) ~[hadoop-hdfs-2.4.1.jar:na]
> at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1483) ~[hadoop-common-2.4.1.jar:na]
> at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1560) ~[hadoop-common-2.4.1.jar:na]
> at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1540) ~[hadoop-common-2.4.1.jar:na]
> at org.apache.drill.exec.store.dfs.DrillFileSystem.list(DrillFileSystem.java:699) ~[drill-java-exec-0.9.0-rebuffed.jar:0.9.0]
> at org.apache.drill.exec.planner.sql.handlers.ShowFileHandler.getPlan(ShowFileHandler.java:95) ~[drill-java-exec-0.9.0-rebuffed.jar:0.9.0]
> at org.apache.drill.exec.planner.sql.DrillSqlWorker.getPlan(DrillSqlWorker.java:167) ~[drill-java-exec-0.9.0-rebuffed.jar:0.9.0]
> ... 5 common frames omitted
> ==========================================================
>
> I run the commands as root, and "hdfs dfs -ls /drill/" shows:
> drwxr-xr-x   - root supergroup          0 2015-05-31 18:58
> /drill/datawarehouse
>
> Thanks!
>
> Regards,
> George Lu
>



-- 
Rajkumar Singh
MapR Technologies
