[jira] [Created] (HDFS-15089) RBF: Small fix for RBFMetrics in doc

2019-12-31 Thread luhuachao (Jira)
luhuachao created HDFS-15089:


 Summary: RBF: Small fix for RBFMetrics in doc
 Key: HDFS-15089
 URL: https://issues.apache.org/jira/browse/HDFS-15089
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: luhuachao
Assignee: luhuachao


Small fix for RBFMetrics in doc






[jira] [Created] (HDDS-2534) scmcli container delete not working

2019-11-18 Thread luhuachao (Jira)
luhuachao created HDDS-2534:
---

 Summary: scmcli container delete not working
 Key: HDDS-2534
 URL: https://issues.apache.org/jira/browse/HDDS-2534
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: SCM
Reporter: luhuachao
Assignee: luhuachao
 Fix For: 0.5.0


{code:java}
java.lang.IllegalArgumentException: Unknown command type: DeleteContainer
    at org.apache.hadoop.hdds.scm.protocol.StorageContainerLocationProtocolServerSideTranslatorPB.processRequest(StorageContainerLocationProtocolServerSideTranslatorPB.java:219)
    at org.apache.hadoop.hdds.server.OzoneProtocolMessageDispatcher.processRequest(OzoneProtocolMessageDispatcher.java:72)
    at org.apache.hadoop.hdds.scm.protocol.StorageContainerLocationProtocolServerSideTranslatorPB.submitRequest(StorageContainerLocationProtocolServerSideTranslatorPB.java:112)
    at org.apache.hadoop.hdds.protocol.proto.StorageContainerLocationProtocolProtos$StorageContainerLocationProtocolService$2.callBlockingMethod(StorageContainerLocationProtocolProtos.java:30454)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682)
{code}
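
For context, the server-side translator dispatches on the request's command
type, and a type with no matching branch falls through to a default that
throws the exception above. A minimal, hypothetical sketch of that dispatch
pattern (simplified names, not the actual Ozone source):
{code:java}
// Hypothetical, simplified dispatch: a command type that no case handles
// falls through to the default branch and raises the exception in the trace.
public final class DispatchSketch {
  enum CmdType { GetContainer, DeleteContainer }

  static String processRequest(CmdType cmdType) {
    switch (cmdType) {
      case GetContainer:
        return "handled";
      // no case for DeleteContainer, so the delete RPC cannot be dispatched
      default:
        throw new IllegalArgumentException("Unknown command type: " + cmdType);
    }
  }

  public static void main(String[] args) {
    processRequest(CmdType.DeleteContainer); // throws IllegalArgumentException
  }
}
{code}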






[jira] [Comment Edited] (HDDS-529) Some Ozone DataNode logs go to a separate ozone.log file

2019-11-04 Thread luhuachao (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16967133#comment-16967133
 ] 

luhuachao edited comment on HDDS-529 at 11/5/19 1:19 AM:
-

https://issues.apache.org/jira/browse/HDDS-2348 

[~cxorm] [~arp], in this issue the logger OZONE,FILE is removed; the FILE 
logger will write the logs of the package org.apache.hadoop.ozone to ozone.log.


was (Author: huachao):
https://issues.apache.org/jira/browse/HDDS-2348 

[~cxorm] [~arp], in this issue the logger OZONE,FILE is cut off; the FILE 
logger will write the logs of the package org.apache.hadoop.ozone to ozone.log.

> Some Ozone DataNode logs go to a separate ozone.log file
> 
>
> Key: HDDS-529
> URL: https://issues.apache.org/jira/browse/HDDS-529
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Reporter: Arpit Agarwal
>Assignee: YiSheng Lien
>Priority: Blocker
>  Labels: beta1
>
> Some, but not all, DataNode logs go to a separate ozone.log file. A couple of 
> things to fix here:
> # The behavior should be consistent. All log messages should go to the new 
> log file.
> # The new log file name should follow the Hadoop log file convention, e.g. 
> _hadoop-<user>-<daemon>-<hostname>.log_






[jira] [Commented] (HDDS-529) Some Ozone DataNode logs go to a separate ozone.log file

2019-11-04 Thread luhuachao (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-529?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16967133#comment-16967133
 ] 

luhuachao commented on HDDS-529:


https://issues.apache.org/jira/browse/HDDS-2348 

[~cxorm] [~arp], in this issue the logger OZONE,FILE is cut off; the FILE 
logger will write the logs of the package org.apache.hadoop.ozone to ozone.log.

> Some Ozone DataNode logs go to a separate ozone.log file
> 
>
> Key: HDDS-529
> URL: https://issues.apache.org/jira/browse/HDDS-529
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Datanode
>Reporter: Arpit Agarwal
>Assignee: YiSheng Lien
>Priority: Blocker
>  Labels: beta1
>
> Some, but not all, DataNode logs go to a separate ozone.log file. A couple of 
> things to fix here:
> # The behavior should be consistent. All log messages should go to the new 
> log file.
> # The new log file name should follow the Hadoop log file convention, e.g. 
> _hadoop-<user>-<daemon>-<hostname>.log_






[jira] [Updated] (HDDS-2370) Support Ozone HddsDatanodeService run as plugin with HDFS Datanode

2019-11-04 Thread luhuachao (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

luhuachao updated HDDS-2370:

Attachment: (was: HDDS-2370.1.patch)

> Support Ozone HddsDatanodeService run as plugin with HDFS Datanode
> --
>
> Key: HDDS-2370
> URL: https://issues.apache.org/jira/browse/HDDS-2370
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: documentation
>Reporter: luhuachao
>Assignee: luhuachao
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> In RunningWithHDFS.md:
> {code:java}
> export HADOOP_CLASSPATH=/opt/ozone/share/hadoop/ozoneplugin/hadoop-ozone-datanode-plugin.jar
> {code}
> ozone-hdfs/docker-compose.yaml:
> {code:java}
>   environment:
>  HADOOP_CLASSPATH: /opt/ozone/share/hadoop/ozoneplugin/*.jar
> {code}
> When I run HddsDatanodeService as a plugin in the HDFS DataNode, it fails with 
> the error below; HddsDatanodeService has no constructor without parameters.
> {code:java}
> 2019-10-21 21:38:56,391 ERROR datanode.DataNode 
> (DataNode.java:startPlugins(972)) - Unable to load DataNode plugins. 
> Specified list of plugins: org.apache.hadoop.ozone.HddsDatanodeService
> java.lang.RuntimeException: java.lang.NoSuchMethodException: 
> org.apache.hadoop.ozone.HddsDatanodeService.<init>()
> {code}
> What I suspect is that ozone-0.5 no longer supports running as a plugin in the 
> HDFS DataNode. If so, why don't we remove the RunningWithHDFS.md doc?






[jira] [Commented] (HDDS-2370) Support Ozone HddsDatanodeService run as plugin with HDFS Datanode

2019-11-04 Thread luhuachao (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16966642#comment-16966642
 ] 

luhuachao commented on HDDS-2370:
-

[~elek] [~adoroszlai] [~aengineer] thanks for the comments and review. I have 
opened a sub-task to add test.sh for ozone-hdfs.

> Support Ozone HddsDatanodeService run as plugin with HDFS Datanode
> --
>
> Key: HDDS-2370
> URL: https://issues.apache.org/jira/browse/HDDS-2370
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: documentation
>Reporter: luhuachao
>Assignee: luhuachao
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> In RunningWithHDFS.md:
> {code:java}
> export HADOOP_CLASSPATH=/opt/ozone/share/hadoop/ozoneplugin/hadoop-ozone-datanode-plugin.jar
> {code}
> ozone-hdfs/docker-compose.yaml:
> {code:java}
>   environment:
>  HADOOP_CLASSPATH: /opt/ozone/share/hadoop/ozoneplugin/*.jar
> {code}
> When I run HddsDatanodeService as a plugin in the HDFS DataNode, it fails with 
> the error below; HddsDatanodeService has no constructor without parameters.
> {code:java}
> 2019-10-21 21:38:56,391 ERROR datanode.DataNode 
> (DataNode.java:startPlugins(972)) - Unable to load DataNode plugins. 
> Specified list of plugins: org.apache.hadoop.ozone.HddsDatanodeService
> java.lang.RuntimeException: java.lang.NoSuchMethodException: 
> org.apache.hadoop.ozone.HddsDatanodeService.<init>()
> {code}
> What I suspect is that ozone-0.5 no longer supports running as a plugin in the 
> HDFS DataNode. If so, why don't we remove the RunningWithHDFS.md doc?






[jira] [Created] (HDDS-2401) Add robot test file for ozone-hdfs

2019-11-04 Thread luhuachao (Jira)
luhuachao created HDDS-2401:
---

 Summary: Add robot test file for ozone-hdfs
 Key: HDDS-2401
 URL: https://issues.apache.org/jira/browse/HDDS-2401
 Project: Hadoop Distributed Data Store
  Issue Type: Sub-task
Reporter: luhuachao









[jira] [Updated] (HDDS-2370) Support Ozone HddsDatanodeService run as plugin with HDFS Datanode

2019-11-04 Thread luhuachao (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

luhuachao updated HDDS-2370:

Summary: Support Ozone HddsDatanodeService run as plugin with HDFS Datanode 
 (was: Remove classpath in RunningWithHDFS.md ozone-hdfs/docker-compose as dir 
'ozoneplugin' does not exist anymore)

> Support Ozone HddsDatanodeService run as plugin with HDFS Datanode
> --
>
> Key: HDDS-2370
> URL: https://issues.apache.org/jira/browse/HDDS-2370
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: documentation
>Reporter: luhuachao
>Assignee: luhuachao
>Priority: Major
>  Labels: pull-request-available
> Attachments: HDDS-2370.1.patch
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> In RunningWithHDFS.md:
> {code:java}
> export HADOOP_CLASSPATH=/opt/ozone/share/hadoop/ozoneplugin/hadoop-ozone-datanode-plugin.jar
> {code}
> ozone-hdfs/docker-compose.yaml:
> {code:java}
>   environment:
>  HADOOP_CLASSPATH: /opt/ozone/share/hadoop/ozoneplugin/*.jar
> {code}
> When I run HddsDatanodeService as a plugin in the HDFS DataNode, it fails with 
> the error below; HddsDatanodeService has no constructor without parameters.
> {code:java}
> 2019-10-21 21:38:56,391 ERROR datanode.DataNode 
> (DataNode.java:startPlugins(972)) - Unable to load DataNode plugins. 
> Specified list of plugins: org.apache.hadoop.ozone.HddsDatanodeService
> java.lang.RuntimeException: java.lang.NoSuchMethodException: 
> org.apache.hadoop.ozone.HddsDatanodeService.<init>()
> {code}
> What I suspect is that ozone-0.5 no longer supports running as a plugin in the 
> HDFS DataNode. If so, why don't we remove the RunningWithHDFS.md doc?






[jira] [Assigned] (HDDS-2348) remove log4j properties for org.apache.hadoop.ozone

2019-11-04 Thread luhuachao (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

luhuachao reassigned HDDS-2348:
---

Assignee: luhuachao

> remove log4j properties for org.apache.hadoop.ozone
> ---
>
> Key: HDDS-2348
> URL: https://issues.apache.org/jira/browse/HDDS-2348
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Manager
>Affects Versions: 0.5.0
>Reporter: luhuachao
>Assignee: luhuachao
>Priority: Major
>  Labels: pull-request-available
> Fix For: 0.5.0
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> Properties in log4j.properties prevent loggers in the package 
> org.apache.hadoop.ozone from writing to the .log file, such as the OM 
> startup_msg.






[jira] [Commented] (HDDS-2370) Remove classpath in RunningWithHDFS.md ozone-hdfs/docker-compose as dir 'ozoneplugin' does not exist anymore

2019-10-28 Thread luhuachao (Jira)


[ 
https://issues.apache.org/jira/browse/HDDS-2370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16960844#comment-16960844
 ] 

luhuachao commented on HDDS-2370:
-

[~adoroszlai] thanks for the reply; I would like to work on this.

 

 

> Remove classpath in RunningWithHDFS.md ozone-hdfs/docker-compose as dir 
> 'ozoneplugin' does not exist anymore
> --
>
> Key: HDDS-2370
> URL: https://issues.apache.org/jira/browse/HDDS-2370
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: documentation
>Reporter: luhuachao
>Priority: Major
> Attachments: HDDS-2370.1.patch
>
>
> In RunningWithHDFS.md:
> {code:java}
> export HADOOP_CLASSPATH=/opt/ozone/share/hadoop/ozoneplugin/hadoop-ozone-datanode-plugin.jar
> {code}
> ozone-hdfs/docker-compose.yaml:
> {code:java}
>   environment:
>  HADOOP_CLASSPATH: /opt/ozone/share/hadoop/ozoneplugin/*.jar
> {code}
> When I run HddsDatanodeService as a plugin in the HDFS DataNode, it fails with 
> the error below; HddsDatanodeService has no constructor without parameters.
> {code:java}
> 2019-10-21 21:38:56,391 ERROR datanode.DataNode 
> (DataNode.java:startPlugins(972)) - Unable to load DataNode plugins. 
> Specified list of plugins: org.apache.hadoop.ozone.HddsDatanodeService
> java.lang.RuntimeException: java.lang.NoSuchMethodException: 
> org.apache.hadoop.ozone.HddsDatanodeService.<init>()
> {code}
> What I suspect is that ozone-0.5 no longer supports running as a plugin in the 
> HDFS DataNode. If so, why don't we remove the RunningWithHDFS.md doc?






[jira] [Updated] (HDDS-2370) Remove classpath in RunningWithHDFS.md ozone-hdfs/docker-compose as dir 'ozoneplugin' does not exist anymore

2019-10-27 Thread luhuachao (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

luhuachao updated HDDS-2370:

Status: Patch Available  (was: Open)

> Remove classpath in RunningWithHDFS.md ozone-hdfs/docker-compose as dir 
> 'ozoneplugin' does not exist anymore
> --
>
> Key: HDDS-2370
> URL: https://issues.apache.org/jira/browse/HDDS-2370
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: documentation
>Reporter: luhuachao
>Priority: Major
> Attachments: HDDS-2370.1.patch
>
>
> In RunningWithHDFS.md:
> {code:java}
> export HADOOP_CLASSPATH=/opt/ozone/share/hadoop/ozoneplugin/hadoop-ozone-datanode-plugin.jar
> {code}
> ozone-hdfs/docker-compose.yaml:
> {code:java}
>   environment:
>  HADOOP_CLASSPATH: /opt/ozone/share/hadoop/ozoneplugin/*.jar
> {code}
> When I run HddsDatanodeService as a plugin in the HDFS DataNode, it fails with 
> the error below; HddsDatanodeService has no constructor without parameters.
> {code:java}
> 2019-10-21 21:38:56,391 ERROR datanode.DataNode 
> (DataNode.java:startPlugins(972)) - Unable to load DataNode plugins. 
> Specified list of plugins: org.apache.hadoop.ozone.HddsDatanodeService
> java.lang.RuntimeException: java.lang.NoSuchMethodException: 
> org.apache.hadoop.ozone.HddsDatanodeService.<init>()
> {code}
> What I suspect is that ozone-0.5 no longer supports running as a plugin in the 
> HDFS DataNode. If so, why don't we remove the RunningWithHDFS.md doc?






[jira] [Updated] (HDDS-2370) Remove classpath in RunningWithHDFS.md ozone-hdfs/docker-compose as dir 'ozoneplugin' does not exist anymore

2019-10-27 Thread luhuachao (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

luhuachao updated HDDS-2370:

Attachment: HDDS-2370.1.patch

> Remove classpath in RunningWithHDFS.md ozone-hdfs/docker-compose as dir 
> 'ozoneplugin' does not exist anymore
> --
>
> Key: HDDS-2370
> URL: https://issues.apache.org/jira/browse/HDDS-2370
> Project: Hadoop Distributed Data Store
>  Issue Type: Task
>  Components: documentation
>Reporter: luhuachao
>Priority: Major
> Attachments: HDDS-2370.1.patch
>
>
> In RunningWithHDFS.md:
> {code:java}
> export HADOOP_CLASSPATH=/opt/ozone/share/hadoop/ozoneplugin/hadoop-ozone-datanode-plugin.jar
> {code}
> ozone-hdfs/docker-compose.yaml:
> {code:java}
>   environment:
>  HADOOP_CLASSPATH: /opt/ozone/share/hadoop/ozoneplugin/*.jar
> {code}
> When I run HddsDatanodeService as a plugin in the HDFS DataNode, it fails with 
> the error below; HddsDatanodeService has no constructor without parameters.
> {code:java}
> 2019-10-21 21:38:56,391 ERROR datanode.DataNode 
> (DataNode.java:startPlugins(972)) - Unable to load DataNode plugins. 
> Specified list of plugins: org.apache.hadoop.ozone.HddsDatanodeService
> java.lang.RuntimeException: java.lang.NoSuchMethodException: 
> org.apache.hadoop.ozone.HddsDatanodeService.<init>()
> {code}
> What I suspect is that ozone-0.5 no longer supports running as a plugin in the 
> HDFS DataNode. If so, why don't we remove the RunningWithHDFS.md doc?






[jira] [Created] (HDDS-2370) Remove classpath in RunningWithHDFS.md ozone-hdfs/docker-compose as dir 'ozoneplugin' does not exist anymore

2019-10-27 Thread luhuachao (Jira)
luhuachao created HDDS-2370:
---

 Summary: Remove classpath in RunningWithHDFS.md 
ozone-hdfs/docker-compose as dir 'ozoneplugin' does not exist anymore
 Key: HDDS-2370
 URL: https://issues.apache.org/jira/browse/HDDS-2370
 Project: Hadoop Distributed Data Store
  Issue Type: Task
  Components: documentation
Reporter: luhuachao


In RunningWithHDFS.md:
{code:java}
export HADOOP_CLASSPATH=/opt/ozone/share/hadoop/ozoneplugin/hadoop-ozone-datanode-plugin.jar
{code}
ozone-hdfs/docker-compose.yaml:
{code:java}
  environment:
 HADOOP_CLASSPATH: /opt/ozone/share/hadoop/ozoneplugin/*.jar
{code}
When I run HddsDatanodeService as a plugin in the HDFS DataNode, it fails with 
the error below; HddsDatanodeService has no constructor without parameters.
{code:java}
2019-10-21 21:38:56,391 ERROR datanode.DataNode 
(DataNode.java:startPlugins(972)) - Unable to load DataNode plugins. Specified 
list of plugins: org.apache.hadoop.ozone.HddsDatanodeService
java.lang.RuntimeException: java.lang.NoSuchMethodException: 
org.apache.hadoop.ozone.HddsDatanodeService.<init>()
{code}
What I suspect is that ozone-0.5 no longer supports running as a plugin in the 
HDFS DataNode. If so, why don't we remove the RunningWithHDFS.md doc?
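
For background, DataNode plugins are instantiated reflectively, and reflective
instantiation needs a no-argument constructor. A minimal sketch of that
loading pattern (hypothetical loader class, not the actual DataNode code),
which reproduces the wrapped NoSuchMethodException:
{code:java}
// Sketch: Class.getDeclaredConstructor() with no arguments throws
// NoSuchMethodException when the target class only declares parameterized
// constructors; the loader wraps it in a RuntimeException like the one above.
public final class PluginLoaderSketch {
  static Object loadPlugin(String className) {
    try {
      Class<?> clazz = Class.forName(className);
      return clazz.getDeclaredConstructor().newInstance();
    } catch (ReflectiveOperationException e) {
      throw new RuntimeException(e);
    }
  }

  public static void main(String[] args) {
    // With the ozone jar on the classpath this fails, because
    // HddsDatanodeService declares no no-argument constructor.
    loadPlugin("org.apache.hadoop.ozone.HddsDatanodeService");
  }
}
{code}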






[jira] [Updated] (HDDS-2348) remove log4j properties for org.apache.hadoop.ozone

2019-10-22 Thread luhuachao (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

luhuachao updated HDDS-2348:

Description: Properties in log4j.properties prevent loggers in the package 
org.apache.hadoop.ozone from writing to the .log file, such as the OM 
startup_msg.
 (was: Logs from the package org.apache.hadoop.ozone cannot be written to the 
.log file, such as the OM startup_msg.)

> remove log4j properties for org.apache.hadoop.ozone
> ---
>
> Key: HDDS-2348
> URL: https://issues.apache.org/jira/browse/HDDS-2348
> Project: Hadoop Distributed Data Store
>  Issue Type: Improvement
>  Components: Ozone Manager
>Affects Versions: 0.5.0
>Reporter: luhuachao
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Properties in log4j.properties prevent loggers in the package 
> org.apache.hadoop.ozone from writing to the .log file, such as the OM 
> startup_msg.






[jira] [Created] (HDDS-2348) remove log4j properties for org.apache.hadoop.ozone

2019-10-22 Thread luhuachao (Jira)
luhuachao created HDDS-2348:
---

 Summary: remove log4j properties for org.apache.hadoop.ozone
 Key: HDDS-2348
 URL: https://issues.apache.org/jira/browse/HDDS-2348
 Project: Hadoop Distributed Data Store
  Issue Type: Improvement
  Components: Ozone Manager
Affects Versions: 0.5.0
Reporter: luhuachao


Logs from the package org.apache.hadoop.ozone cannot be written to the .log 
file, such as the OM startup_msg.
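
For reference, this is the usual log4j 1.x mechanism at play: a logger for the
package with its own appender and with additivity disabled diverts events away
from the appenders of the root logger. A minimal programmatic sketch of such a
setup (an assumption about the shape of the shipped log4j.properties,
expressed through the log4j API):
{code:java}
import java.io.IOException;

import org.apache.log4j.FileAppender;
import org.apache.log4j.Logger;
import org.apache.log4j.PatternLayout;

public final class OzoneLogRoutingSketch {
  public static void main(String[] args) throws IOException {
    // Dedicated appender for the ozone package, mirroring an OZONE,FILE setup.
    Logger ozone = Logger.getLogger("org.apache.hadoop.ozone");
    ozone.addAppender(new FileAppender(
        new PatternLayout("%d{ISO8601} [%t] %-5p %c{2} - %m%n"), "ozone.log"));
    // With additivity off, these events never reach the root logger's
    // appenders, which is how messages such as the OM startup_msg can miss
    // the main .log file.
    ozone.setAdditivity(false);
    ozone.info("this line goes to ozone.log only");
  }
}
{code}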






[jira] [Updated] (HDDS-2095) Submit MR job to YARN failed, Error message is "Provider org.apache.hadoop.fs.ozone.OzoneClientAdapterImpl$Renewer not found"

2019-09-06 Thread luhuachao (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2095?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

luhuachao updated HDDS-2095:

Labels: kerberos  (was: )

> Submit MR job to YARN failed,   Error message is "Provider 
> org.apache.hadoop.fs.ozone.OzoneClientAdapterImpl$Renewer not found"
> ---
>
> Key: HDDS-2095
> URL: https://issues.apache.org/jira/browse/HDDS-2095
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Filesystem
>Affects Versions: 0.4.1
>Reporter: luhuachao
>Priority: Major
>  Labels: kerberos
> Attachments: HDDS-2095.001.patch
>
>
> Below is the submit command:
> {code:java}
> hadoop jar hadoop-mapreduce-client-jobclient-3.2.0-tests.jar nnbench -Dfs.defaultFS=o3fs://buc.volume-test -maps 3 -bytesToWrite 1 -numberOfFiles 1000 -blockSize 16 -operation create_write
> {code}
> The client fails with this message:
> {code:java}
> 19/09/06 15:26:52 INFO mapreduce.JobSubmitter: Cleaning up the staging area /user/hdfs/.staging/job_1567754782562_0001
> 19/09/06 15:26:52 INFO mapreduce.JobSubmitter: Cleaning up the staging area /user/hdfs/.staging/job_1567754782562_0001
> java.io.IOException: org.apache.hadoop.yarn.exceptions.YarnException: Failed to submit application_1567754782562_0001 to YARN : org.apache.hadoop.security.token.TokenRenewer: Provider org.apache.hadoop.fs.ozone.OzoneClientAdapterImpl$Renewer not found
>     at org.apache.hadoop.mapred.YARNRunner.submitJob(YARNRunner.java:345)
>     at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:254)
>     at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1570)
>     at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1567)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:422)
>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1685)
>     at org.apache.hadoop.mapreduce.Job.submit(Job.java:1567)
>     at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:576)
>     at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:571)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:422)
>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1685)
>     at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:571)
>     at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:562)
>     at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:873)
>     at org.apache.hadoop.hdfs.NNBench.runTests(NNBench.java:487)
>     at org.apache.hadoop.hdfs.NNBench.run(NNBench.java:604)
>     at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>     at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
>     at org.apache.hadoop.hdfs.NNBench.main(NNBench.java:579)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:498)
>     at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
>     at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
>     at org.apache.hadoop.test.MapredTestDriver.run(MapredTestDriver.java:144)
>     at org.apache.hadoop.test.MapredTestDriver.main(MapredTestDriver.java:152)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:498)
>     at org.apache.hadoop.util.RunJar.run(RunJar.java:308)
>     at org.apache.hadoop.util.RunJar.main(RunJar.java:222)
> Caused by: org.apache.hadoop.yarn.exceptions.YarnException: Failed to submit application_1567754782562_0001 to YARN : org.apache.hadoop.security.token.TokenRenewer: Provider org.apache.hadoop.fs.ozone.OzoneClientAdapterImpl$Renewer not found
>     at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.submitApplication(YarnClientImpl.java:304)
>     at org.apache.hadoop.mapred.ResourceMgrDelegate.submitApplication(ResourceMgrDelegate.java:299)
>     at org.apache.hadoop.mapred.YARNRunner.submitJob(YARNRunner.java:330)
>     ... 34 more
> {code}
> The log in the ResourceManager:
> {code:java}
> 2019-09-06 15:26:51,836 WARN  security.DelegationTokenRenewer (DelegationTokenRenewer.java:handleDTRenewerAppSubmitEvent(923)) - Unable to add the application to the delegation token renewer.
> java.util.ServiceConfigurationError: 
> 

[jira] [Updated] (HDDS-2095) Submit MR job to YARN failed, Error message is "Provider org.apache.hadoop.fs.ozone.OzoneClientAdapterImpl$Renewer not found"

2019-09-06 Thread luhuachao (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2095?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

luhuachao updated HDDS-2095:

Status: Patch Available  (was: Open)

> Submit MR job to YARN failed,   Error message is "Provider 
> org.apache.hadoop.fs.ozone.OzoneClientAdapterImpl$Renewer not found"
> ---
>
> Key: HDDS-2095
> URL: https://issues.apache.org/jira/browse/HDDS-2095
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Filesystem
>Affects Versions: 0.4.1
>Reporter: luhuachao
>Priority: Major
> Attachments: HDDS-2095.001.patch
>
>
> Below is the submit command:
> {code:java}
> hadoop jar hadoop-mapreduce-client-jobclient-3.2.0-tests.jar nnbench -Dfs.defaultFS=o3fs://buc.volume-test -maps 3 -bytesToWrite 1 -numberOfFiles 1000 -blockSize 16 -operation create_write
> {code}
> The client fails with this message:
> {code:java}
> 19/09/06 15:26:52 INFO mapreduce.JobSubmitter: Cleaning up the staging area /user/hdfs/.staging/job_1567754782562_0001
> 19/09/06 15:26:52 INFO mapreduce.JobSubmitter: Cleaning up the staging area /user/hdfs/.staging/job_1567754782562_0001
> java.io.IOException: org.apache.hadoop.yarn.exceptions.YarnException: Failed to submit application_1567754782562_0001 to YARN : org.apache.hadoop.security.token.TokenRenewer: Provider org.apache.hadoop.fs.ozone.OzoneClientAdapterImpl$Renewer not found
>     at org.apache.hadoop.mapred.YARNRunner.submitJob(YARNRunner.java:345)
>     at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:254)
>     at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1570)
>     at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1567)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:422)
>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1685)
>     at org.apache.hadoop.mapreduce.Job.submit(Job.java:1567)
>     at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:576)
>     at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:571)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:422)
>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1685)
>     at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:571)
>     at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:562)
>     at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:873)
>     at org.apache.hadoop.hdfs.NNBench.runTests(NNBench.java:487)
>     at org.apache.hadoop.hdfs.NNBench.run(NNBench.java:604)
>     at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>     at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
>     at org.apache.hadoop.hdfs.NNBench.main(NNBench.java:579)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:498)
>     at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
>     at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
>     at org.apache.hadoop.test.MapredTestDriver.run(MapredTestDriver.java:144)
>     at org.apache.hadoop.test.MapredTestDriver.main(MapredTestDriver.java:152)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:498)
>     at org.apache.hadoop.util.RunJar.run(RunJar.java:308)
>     at org.apache.hadoop.util.RunJar.main(RunJar.java:222)
> Caused by: org.apache.hadoop.yarn.exceptions.YarnException: Failed to submit application_1567754782562_0001 to YARN : org.apache.hadoop.security.token.TokenRenewer: Provider org.apache.hadoop.fs.ozone.OzoneClientAdapterImpl$Renewer not found
>     at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.submitApplication(YarnClientImpl.java:304)
>     at org.apache.hadoop.mapred.ResourceMgrDelegate.submitApplication(ResourceMgrDelegate.java:299)
>     at org.apache.hadoop.mapred.YARNRunner.submitJob(YARNRunner.java:330)
>     ... 34 more
> {code}
> The log in the ResourceManager:
> {code:java}
> 2019-09-06 15:26:51,836 WARN  security.DelegationTokenRenewer (DelegationTokenRenewer.java:handleDTRenewerAppSubmitEvent(923)) - Unable to add the application to the delegation token renewer.
> java.util.ServiceConfigurationError: 
> 

[jira] [Updated] (HDDS-2095) Submit MR job to YARN failed, Error message is "Provider org.apache.hadoop.fs.ozone.OzoneClientAdapterImpl$Renewer not found"

2019-09-06 Thread luhuachao (Jira)


 [ 
https://issues.apache.org/jira/browse/HDDS-2095?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

luhuachao updated HDDS-2095:

Attachment: HDDS-2095.001.patch

> Submit MR job to YARN failed,   Error message is "Provider 
> org.apache.hadoop.fs.ozone.OzoneClientAdapterImpl$Renewer not found"
> ---
>
> Key: HDDS-2095
> URL: https://issues.apache.org/jira/browse/HDDS-2095
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Filesystem
>Affects Versions: 0.4.1
>Reporter: luhuachao
>Priority: Major
> Attachments: HDDS-2095.001.patch
>
>
> Below is the submit command:
> {code:java}
> hadoop jar hadoop-mapreduce-client-jobclient-3.2.0-tests.jar nnbench -Dfs.defaultFS=o3fs://buc.volume-test -maps 3 -bytesToWrite 1 -numberOfFiles 1000 -blockSize 16 -operation create_write
> {code}
> The client fails with this message:
> {code:java}
> 19/09/06 15:26:52 INFO mapreduce.JobSubmitter: Cleaning up the staging area /user/hdfs/.staging/job_1567754782562_0001
> 19/09/06 15:26:52 INFO mapreduce.JobSubmitter: Cleaning up the staging area /user/hdfs/.staging/job_1567754782562_0001
> java.io.IOException: org.apache.hadoop.yarn.exceptions.YarnException: Failed to submit application_1567754782562_0001 to YARN : org.apache.hadoop.security.token.TokenRenewer: Provider org.apache.hadoop.fs.ozone.OzoneClientAdapterImpl$Renewer not found
>     at org.apache.hadoop.mapred.YARNRunner.submitJob(YARNRunner.java:345)
>     at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:254)
>     at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1570)
>     at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1567)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:422)
>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1685)
>     at org.apache.hadoop.mapreduce.Job.submit(Job.java:1567)
>     at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:576)
>     at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:571)
>     at java.security.AccessController.doPrivileged(Native Method)
>     at javax.security.auth.Subject.doAs(Subject.java:422)
>     at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1685)
>     at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:571)
>     at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:562)
>     at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:873)
>     at org.apache.hadoop.hdfs.NNBench.runTests(NNBench.java:487)
>     at org.apache.hadoop.hdfs.NNBench.run(NNBench.java:604)
>     at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
>     at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
>     at org.apache.hadoop.hdfs.NNBench.main(NNBench.java:579)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:498)
>     at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
>     at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
>     at org.apache.hadoop.test.MapredTestDriver.run(MapredTestDriver.java:144)
>     at org.apache.hadoop.test.MapredTestDriver.main(MapredTestDriver.java:152)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>     at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:498)
>     at org.apache.hadoop.util.RunJar.run(RunJar.java:308)
>     at org.apache.hadoop.util.RunJar.main(RunJar.java:222)
> Caused by: org.apache.hadoop.yarn.exceptions.YarnException: Failed to submit application_1567754782562_0001 to YARN : org.apache.hadoop.security.token.TokenRenewer: Provider org.apache.hadoop.fs.ozone.OzoneClientAdapterImpl$Renewer not found
>     at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.submitApplication(YarnClientImpl.java:304)
>     at org.apache.hadoop.mapred.ResourceMgrDelegate.submitApplication(ResourceMgrDelegate.java:299)
>     at org.apache.hadoop.mapred.YARNRunner.submitJob(YARNRunner.java:330)
>     ... 34 more
> {code}
> The log in the ResourceManager:
> {code:java}
> 2019-09-06 15:26:51,836 WARN  security.DelegationTokenRenewer (DelegationTokenRenewer.java:handleDTRenewerAppSubmitEvent(923)) - Unable to add the application to the delegation token renewer.
> java.util.ServiceConfigurationError: 
> 

[jira] [Created] (HDDS-2095) Submit MR job to YARN failed, Error message is "Provider org.apache.hadoop.fs.ozone.OzoneClientAdapterImpl$Renewer not found"

2019-09-06 Thread luhuachao (Jira)
luhuachao created HDDS-2095:
---

 Summary: Submit MR job to YARN failed,   Error message is 
"Provider org.apache.hadoop.fs.ozone.OzoneClientAdapterImpl$Renewer not found"
 Key: HDDS-2095
 URL: https://issues.apache.org/jira/browse/HDDS-2095
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Filesystem
Affects Versions: 0.4.1
Reporter: luhuachao


Below is the submit command:
{code:java}
hadoop jar hadoop-mapreduce-client-jobclient-3.2.0-tests.jar nnbench -Dfs.defaultFS=o3fs://buc.volume-test -maps 3 -bytesToWrite 1 -numberOfFiles 1000 -blockSize 16 -operation create_write
{code}
The client fails with this message:
{code:java}
19/09/06 15:26:52 INFO mapreduce.JobSubmitter: Cleaning up the staging area /user/hdfs/.staging/job_1567754782562_0001
19/09/06 15:26:52 INFO mapreduce.JobSubmitter: Cleaning up the staging area /user/hdfs/.staging/job_1567754782562_0001
java.io.IOException: org.apache.hadoop.yarn.exceptions.YarnException: Failed to submit application_1567754782562_0001 to YARN : org.apache.hadoop.security.token.TokenRenewer: Provider org.apache.hadoop.fs.ozone.OzoneClientAdapterImpl$Renewer not found
    at org.apache.hadoop.mapred.YARNRunner.submitJob(YARNRunner.java:345)
    at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:254)
    at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1570)
    at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1567)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1685)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:1567)
    at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:576)
    at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:571)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1685)
    at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:571)
    at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:562)
    at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:873)
    at org.apache.hadoop.hdfs.NNBench.runTests(NNBench.java:487)
    at org.apache.hadoop.hdfs.NNBench.run(NNBench.java:604)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90)
    at org.apache.hadoop.hdfs.NNBench.main(NNBench.java:579)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
    at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
    at org.apache.hadoop.test.MapredTestDriver.run(MapredTestDriver.java:144)
    at org.apache.hadoop.test.MapredTestDriver.main(MapredTestDriver.java:152)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.util.RunJar.run(RunJar.java:308)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:222)
Caused by: org.apache.hadoop.yarn.exceptions.YarnException: Failed to submit application_1567754782562_0001 to YARN : org.apache.hadoop.security.token.TokenRenewer: Provider org.apache.hadoop.fs.ozone.OzoneClientAdapterImpl$Renewer not found
    at org.apache.hadoop.yarn.client.api.impl.YarnClientImpl.submitApplication(YarnClientImpl.java:304)
    at org.apache.hadoop.mapred.ResourceMgrDelegate.submitApplication(ResourceMgrDelegate.java:299)
    at org.apache.hadoop.mapred.YARNRunner.submitJob(YARNRunner.java:330)
    ... 34 more
{code}
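
For background, YARN discovers TokenRenewer implementations through
java.util.ServiceLoader, so a META-INF/services entry that names a class
missing from the classpath fails while iterating providers. A minimal sketch
of that failure mode (hypothetical Renewer interface, not the Hadoop types):
{code:java}
import java.util.ServiceLoader;

public final class RenewerLookupSketch {
  // Hypothetical service interface standing in for
  // org.apache.hadoop.security.token.TokenRenewer.
  public interface Renewer { }

  public static void main(String[] args) {
    // If META-INF/services/RenewerLookupSketch$Renewer names a class that is
    // not on the classpath, iteration throws ServiceConfigurationError:
    // "Provider <class> not found", as in the ResourceManager log below.
    for (Renewer renewer : ServiceLoader.load(Renewer.class)) {
      System.out.println(renewer.getClass().getName());
    }
  }
}
{code}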
The log in the ResourceManager:
{code:java}
2019-09-06 15:26:51,836 WARN  security.DelegationTokenRenewer (DelegationTokenRenewer.java:handleDTRenewerAppSubmitEvent(923)) - Unable to add the application to the delegation token renewer.
java.util.ServiceConfigurationError: org.apache.hadoop.security.token.TokenRenewer: Provider org.apache.hadoop.fs.ozone.OzoneClientAdapterImpl$Renewer not found
    at java.util.ServiceLoader.fail(ServiceLoader.java:239)
    at java.util.ServiceLoader.access$300(ServiceLoader.java:185)
    at java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:372)
    at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:404)
    at java.util.ServiceLoader$1.next(ServiceLoader.java:480)
    at 

[jira] [Commented] (HDFS-14620) RBF: Fix 'not a super user' error when disabling a namespace in Kerberos with superuser principal

2019-07-04 Thread luhuachao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16878402#comment-16878402
 ] 

luhuachao commented on HDFS-14620:
--

thanks [~ayushtkn] [~hexiaoqiao] [~elgoiri] for the advice and review.

> RBF: Fix 'not a super user' error when disabling a namespace in Kerberos with 
> superuser principal
> -
>
> Key: HDFS-14620
> URL: https://issues.apache.org/jira/browse/HDFS-14620
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.30
>Reporter: luhuachao
>Assignee: luhuachao
>Priority: Major
> Attachments: HDFS-14620-HDFS-13891-01.patch, 
> HDFS-14620-HDFS-13891-02.patch, HDFS-14620-HDFS-13891-03.patch, 
> HDFS-14620-HDFS-13891-04.patch
>
>
> Using the superuser hdfs's principal hdfs-test@EXAMPLE, the namespace cannot 
> be disabled; it fails with the error below, because the code judges that the 
> principal does not equal hdfs, and hdfs does not belong to the supergroup either.
> {code:java}
> [hdfs@host1 ~]$ hdfs dfsrouteradmin -nameservice disable ns2
> nameservice: hdfs-test@EXAMPLE is not a super user
>     at org.apache.hadoop.hdfs.server.federation.router.RouterPermissionChecker.checkSuperuserPrivilege(RouterPermissionChecker.java:136)
> {code}
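
For reference, the distinction that bites here is
UserGroupInformation.getUserName(), which returns the full principal
(hdfs-test@EXAMPLE), versus getShortUserName(), which applies the
auth_to_local mapping first. A minimal sketch of a short-name-based check (a
hypothetical helper, not the actual RouterPermissionChecker code;
"supergroup" is an assumed group name):
{code:java}
import java.util.Arrays;

import org.apache.hadoop.security.UserGroupInformation;

public final class SuperuserCheckSketch {
  static boolean isSuperUser(UserGroupInformation ugi, String superUser) {
    // ugi.getUserName() would return the full principal, which never equals
    // the configured short superuser name; getShortUserName() maps the
    // principal through the auth_to_local rules before comparing.
    return ugi.getShortUserName().equals(superUser)
        || Arrays.asList(ugi.getGroupNames()).contains("supergroup");
  }
}
{code}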






[jira] [Commented] (HDFS-14620) RBF: Fix 'not a super user' error when disabling a namespace in Kerberos with superuser principal

2019-07-01 Thread luhuachao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16876683#comment-16876683
 ] 

luhuachao commented on HDFS-14620:
--

[~elgoiri] [~hexiaoqiao] Thanks for the review; all nits are fixed in patch 04.

> RBF: Fix 'not a super user' error when disabling a namespace in Kerberos with 
> superuser principal
> -
>
> Key: HDFS-14620
> URL: https://issues.apache.org/jira/browse/HDFS-14620
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.30
>Reporter: luhuachao
>Assignee: luhuachao
>Priority: Major
> Attachments: HDFS-14620-HDFS-13891-01.patch, 
> HDFS-14620-HDFS-13891-02.patch, HDFS-14620-HDFS-13891-03.patch, 
> HDFS-14620-HDFS-13891-04.patch
>
>
> Using the superuser hdfs's principal hdfs-test@EXAMPLE, the namespace cannot 
> be disabled; it fails with the error below, because the code judges that the 
> principal does not equal hdfs, and hdfs does not belong to the supergroup either.
> {code:java}
> [hdfs@host1 ~]$ hdfs dfsrouteradmin -nameservice disable ns2
> nameservice: hdfs-test@EXAMPLE is not a super user
>     at org.apache.hadoop.hdfs.server.federation.router.RouterPermissionChecker.checkSuperuserPrivilege(RouterPermissionChecker.java:136)
> {code}






[jira] [Updated] (HDFS-14620) RBF: Fix 'not a super user' error when disabling a namespace in Kerberos with superuser principal

2019-07-01 Thread luhuachao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

luhuachao updated HDFS-14620:
-
Attachment: HDFS-14620-HDFS-13891-04.patch

> RBF: Fix 'not a super user' error when disabling a namespace in Kerberos with 
> superuser principal
> -
>
> Key: HDFS-14620
> URL: https://issues.apache.org/jira/browse/HDFS-14620
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.30
>Reporter: luhuachao
>Assignee: luhuachao
>Priority: Major
> Attachments: HDFS-14620-HDFS-13891-01.patch, 
> HDFS-14620-HDFS-13891-02.patch, HDFS-14620-HDFS-13891-03.patch, 
> HDFS-14620-HDFS-13891-04.patch
>
>
> Using the superuser hdfs's principal hdfs-test@EXAMPLE, the namespace cannot 
> be disabled; it fails with the error below, because the code judges that the 
> principal does not equal hdfs, and hdfs does not belong to the supergroup either.
> {code:java}
> [hdfs@host1 ~]$ hdfs dfsrouteradmin -nameservice disable ns2
> nameservice: hdfs-test@EXAMPLE is not a super user
>     at org.apache.hadoop.hdfs.server.federation.router.RouterPermissionChecker.checkSuperuserPrivilege(RouterPermissionChecker.java:136)
> {code}






[jira] [Comment Edited] (HDFS-14620) RBF: when disabling a namespace in Kerberos with the superuser's principal, the ERROR 'not a super user' appears

2019-07-01 Thread luhuachao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16876261#comment-16876261
 ] 

luhuachao edited comment on HDFS-14620 at 7/1/19 4:09 PM:
--

[~hexiaoqiao] [~ayushtkn] thanks for the review. The test failure seems to have 
no relationship with the 02 patch; the 03 patch fixes the checkstyle error.


was (Author: huachao):
[~hexiaoqiao] [~ayushtkn] thanks for the review; the test failure seems to have 
no relationship with this patch

> RBF: when disabling a namespace in Kerberos with the superuser's principal, 
> the ERROR 'not a super user' appears 
> 
>
> Key: HDFS-14620
> URL: https://issues.apache.org/jira/browse/HDFS-14620
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.30
>Reporter: luhuachao
>Assignee: luhuachao
>Priority: Major
> Attachments: HDFS-14620-HDFS-13891-01.patch, 
> HDFS-14620-HDFS-13891-02.patch, HDFS-14620-HDFS-13891-03.patch
>
>
> Using the superuser hdfs's principal hdfs-test@EXAMPLE, the namespace cannot 
> be disabled; it fails with the error below, because the code judges that the 
> principal does not equal hdfs, and hdfs does not belong to the supergroup either.
> {code:java}
> [hdfs@host1 ~]$ hdfs dfsrouteradmin -nameservice disable ns2
> nameservice: hdfs-test@EXAMPLE is not a super user
>     at org.apache.hadoop.hdfs.server.federation.router.RouterPermissionChecker.checkSuperuserPrivilege(RouterPermissionChecker.java:136)
> {code}






[jira] [Updated] (HDFS-14620) RBF: when disabling a namespace in Kerberos with the superuser's principal, the ERROR 'not a super user' appears

2019-07-01 Thread luhuachao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

luhuachao updated HDFS-14620:
-
Attachment: HDFS-14620-HDFS-13891-03.patch

> RBF: when disabling a namespace in Kerberos with the superuser's principal, 
> the ERROR 'not a super user' appears 
> 
>
> Key: HDFS-14620
> URL: https://issues.apache.org/jira/browse/HDFS-14620
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Affects Versions: 3.30
>Reporter: luhuachao
>Assignee: luhuachao
>Priority: Major
> Attachments: HDFS-14620-HDFS-13891-01.patch, 
> HDFS-14620-HDFS-13891-02.patch, HDFS-14620-HDFS-13891-03.patch
>
>
> Using the superuser hdfs's principal hdfs-test@EXAMPLE, the namespace cannot 
> be disabled; it fails with the error below, because the code judges that the 
> principal does not equal hdfs, and hdfs does not belong to the supergroup either.
> {code:java}
> [hdfs@host1 ~]$ hdfs dfsrouteradmin -nameservice disable ns2
> nameservice: hdfs-test@EXAMPLE is not a super user
>     at org.apache.hadoop.hdfs.server.federation.router.RouterPermissionChecker.checkSuperuserPrivilege(RouterPermissionChecker.java:136)
> {code}






[jira] [Commented] (HDFS-14620) RBF: when disabling a namespace in Kerberos with the superuser's principal, the ERROR 'not a super user' appears

2019-07-01 Thread luhuachao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16876261#comment-16876261
 ] 

luhuachao commented on HDFS-14620:
--

[~hexiaoqiao] [~ayushtkn] thanks for the review; the test failure seems to have 
no relationship with this patch

> RBF: when disabling a namespace in Kerberos with the superuser's principal, 
> the ERROR 'not a super user' appears 
> 
>
> Key: HDFS-14620
> URL: https://issues.apache.org/jira/browse/HDFS-14620
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.30
>Reporter: luhuachao
>Assignee: luhuachao
>Priority: Major
> Attachments: HDFS-14620-HDFS-13891-01.patch, 
> HDFS-14620-HDFS-13891-02.patch
>
>
> Using the superuser hdfs's principal hdfs-test@EXAMPLE, the namespace cannot 
> be disabled; it fails with the error below, because the code judges that the 
> principal does not equal hdfs, and hdfs does not belong to the supergroup either.
> {code:java}
> [hdfs@host1 ~]$ hdfs dfsrouteradmin -nameservice disable ns2
> nameservice: hdfs-test@EXAMPLE is not a super user
>     at org.apache.hadoop.hdfs.server.federation.router.RouterPermissionChecker.checkSuperuserPrivilege(RouterPermissionChecker.java:136)
> {code}






[jira] [Comment Edited] (HDFS-14620) RBF: when disabling a namespace in Kerberos with the superuser's principal, the ERROR 'not a super user' appears

2019-07-01 Thread luhuachao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16875979#comment-16875979
 ] 

luhuachao edited comment on HDFS-14620 at 7/1/19 7:13 AM:
--

 

[~ayushtkn] Is the test below suitable for TestRouterAdmin?
{code:java}
@Test
public void testNameserviceManagerWithDefaultRules() throws Exception {
  final String username = RouterAdminServer.getSuperUser() + "@Example.com";
  UserGroupInformation user =
  UserGroupInformation.createRemoteUser(username);
  user.doAs(new PrivilegedExceptionAction<Void>() {
@Override
public Void run() throws Exception {
  RouterClient client = routerContext.getAdminClient();
  NameserviceManager nameservices = client.getNameserviceManager();
  DisableNameserviceRequest disableReq =
  DisableNameserviceRequest.newInstance("ns0");
  try {
DisableNameserviceResponse disableResp =
nameservices.disableNameservice(disableReq);
assertTrue(disableResp.getStatus());
  } catch (IOException ioe) {
fail(username + " is not a super user");
  }
  return null;
}
  });
}
{code}
 


was (Author: huachao):
 

[~ayushtkn] Is the test below suitable?
{code:java}
@Test
public void testNameserviceManagerWithDefaultRules() throws Exception {
  final String username = RouterAdminServer.getSuperUser() + "@Example.com";
  UserGroupInformation user =
  UserGroupInformation.createRemoteUser(username);
  user.doAs(new PrivilegedExceptionAction<Void>() {
@Override
public Void run() throws Exception {
  RouterClient client = routerContext.getAdminClient();
  NameserviceManager nameservices = client.getNameserviceManager();
  DisableNameserviceRequest disableReq =
  DisableNameserviceRequest.newInstance("ns0");
  try {
DisableNameserviceResponse disableResp =
nameservices.disableNameservice(disableReq);
assertTrue(disableResp.getStatus());
  } catch (IOException ioe) {
fail(username + " is not a super user");
  }
  return null;
}
  });
}
{code}
 

> RBF: when disabling a namespace in Kerberos with the superuser's principal, 
> the ERROR 'not a super user' appears 
> 
>
> Key: HDFS-14620
> URL: https://issues.apache.org/jira/browse/HDFS-14620
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.30
>Reporter: luhuachao
>Assignee: luhuachao
>Priority: Major
> Attachments: HDFS-14620-HDFS-13891-01.patch
>
>
> Using the superuser hdfs's principal hdfs-test@EXAMPLE, the namespace cannot 
> be disabled; it fails with the error below, because the code judges that the 
> principal does not equal hdfs, and hdfs does not belong to the supergroup either.
> {code:java}
> [hdfs@host1 ~]$ hdfs dfsrouteradmin -nameservice disable ns2
> nameservice: hdfs-test@EXAMPLE is not a super user
>     at org.apache.hadoop.hdfs.server.federation.router.RouterPermissionChecker.checkSuperuserPrivilege(RouterPermissionChecker.java:136)
> {code}






[jira] [Assigned] (HDFS-14620) RBF: when disabling a namespace in Kerberos with the superuser's principal, the ERROR 'not a super user' appears

2019-07-01 Thread luhuachao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

luhuachao reassigned HDFS-14620:


Assignee: luhuachao

> RBF: when disabling a namespace in Kerberos with the superuser's principal, 
> the ERROR 'not a super user' appears 
> 
>
> Key: HDFS-14620
> URL: https://issues.apache.org/jira/browse/HDFS-14620
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.30
>Reporter: luhuachao
>Assignee: luhuachao
>Priority: Major
> Attachments: HDFS-14620-HDFS-13891-01.patch
>
>
> Using the superuser hdfs's principal hdfs-test@EXAMPLE, the namespace cannot 
> be disabled; it fails with the error below, because the code judges that the 
> principal does not equal hdfs, and hdfs does not belong to the supergroup either.
> {code:java}
> [hdfs@host1 ~]$ hdfs dfsrouteradmin -nameservice disable ns2
> nameservice: hdfs-test@EXAMPLE is not a super user
>     at org.apache.hadoop.hdfs.server.federation.router.RouterPermissionChecker.checkSuperuserPrivilege(RouterPermissionChecker.java:136)
> {code}






[jira] [Commented] (HDFS-14620) RBF: when disabling a namespace in Kerberos with the superuser's principal, the ERROR 'not a super user' appears

2019-07-01 Thread luhuachao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16875979#comment-16875979
 ] 

luhuachao commented on HDFS-14620:
--

 

[~ayushtkn] Is the test below suitable?
{code:java}
@Test
public void testNameserviceManagerWithDefaultRules() throws Exception {
  final String username = RouterAdminServer.getSuperUser() + "@Example.com";
  UserGroupInformation user =
  UserGroupInformation.createRemoteUser(username);
  user.doAs(new PrivilegedExceptionAction<Void>() {
@Override
public Void run() throws Exception {
  RouterClient client = routerContext.getAdminClient();
  NameserviceManager nameservices = client.getNameserviceManager();
  DisableNameserviceRequest disableReq =
  DisableNameserviceRequest.newInstance("ns0");
  try {
DisableNameserviceResponse disableResp =
nameservices.disableNameservice(disableReq);
assertTrue(disableResp.getStatus());
  } catch (IOException ioe) {
fail(username + " is not a super user");
  }
  return null;
}
  });
}
{code}
 

> RBF: when disabling a namespace in Kerberos with the superuser's principal, 
> the ERROR 'not a super user' appears 
> 
>
> Key: HDFS-14620
> URL: https://issues.apache.org/jira/browse/HDFS-14620
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: 3.3.0
>Reporter: luhuachao
>Priority: Major
> Attachments: HDFS-14620-HDFS-13891-01.patch
>
>
> Using the superuser hdfs's principal hdfs-test@EXAMPLE, the namespace cannot 
> be disabled, failing with the error below: the code judges that the principal 
> does not equal 'hdfs', and the principal does not belong to the supergroup either.
> {code:java}
> [hdfs@host1 ~]$ hdfs dfsrouteradmin -nameservice disable ns2 nameservice: 
> hdfs-test@EXAMPLE is not a super user at 
> org.apache.hadoop.hdfs.server.federation.router.RouterPermissionChecker.checkSuperuserPrivilege(RouterPermissionChecker.java:136)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14620) RBF: when Disable namespace in kerberos with superuser's principal, ERROR appear 'not a super user'

2019-06-29 Thread luhuachao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

luhuachao updated HDFS-14620:
-
Status: Patch Available  (was: Open)

> RBF: when Disable namespace in kerberos with superuser's principal, ERROR 
> appear 'not a super user' 
> 
>
> Key: HDFS-14620
> URL: https://issues.apache.org/jira/browse/HDFS-14620
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: HDFS-13891
>Reporter: luhuachao
>Priority: Major
> Fix For: HDFS-13891
>
> Attachments: HDFS-14620-HDFS-13891-01.patch
>
>
> Using the superuser hdfs's principal hdfs-test@EXAMPLE, the namespace cannot 
> be disabled, failing with the error below: the code judges that the principal 
> does not equal 'hdfs', and the principal does not belong to the supergroup either.
> {code:java}
> [hdfs@host1 ~]$ hdfs dfsrouteradmin -nameservice disable ns2 nameservice: 
> hdfs-test@EXAMPLE is not a super user at 
> org.apache.hadoop.hdfs.server.federation.router.RouterPermissionChecker.checkSuperuserPrivilege(RouterPermissionChecker.java:136)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14620) RBF: when Disable namespace in kerberos with superuser's principal, ERROR appear 'not a super user'

2019-06-29 Thread luhuachao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14620?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

luhuachao updated HDFS-14620:
-
Attachment: HDFS-14620-HDFS-13891-01.patch

> RBF: when Disable namespace in kerberos with superuser's principal, ERROR 
> appear 'not a super user' 
> 
>
> Key: HDFS-14620
> URL: https://issues.apache.org/jira/browse/HDFS-14620
> Project: Hadoop HDFS
>  Issue Type: Bug
>Affects Versions: HDFS-13891
>Reporter: luhuachao
>Priority: Major
> Fix For: HDFS-13891
>
> Attachments: HDFS-14620-HDFS-13891-01.patch
>
>
> Using the superuser hdfs's principal hdfs-test@EXAMPLE, the namespace cannot 
> be disabled, failing with the error below: the code judges that the principal 
> does not equal 'hdfs', and the principal does not belong to the supergroup either.
> {code:java}
> [hdfs@host1 ~]$ hdfs dfsrouteradmin -nameservice disable ns2 nameservice: 
> hdfs-test@EXAMPLE is not a super user at 
> org.apache.hadoop.hdfs.server.federation.router.RouterPermissionChecker.checkSuperuserPrivilege(RouterPermissionChecker.java:136)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-14620) RBF: when Disable namespace in kerberos with superuser's principal, ERROR appear 'not a super user'

2019-06-29 Thread luhuachao (JIRA)
luhuachao created HDFS-14620:


 Summary: RBF: when Disable namespace in kerberos with superuser's 
principal, ERROR appear 'not a super user' 
 Key: HDFS-14620
 URL: https://issues.apache.org/jira/browse/HDFS-14620
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: HDFS-13891
Reporter: luhuachao
 Fix For: HDFS-13891


Using the superuser hdfs's principal hdfs-test@EXAMPLE, the namespace cannot be 
disabled, failing with the error below: the code judges that the principal does 
not equal 'hdfs', and the principal does not belong to the supergroup either.
{code:java}
[hdfs@host1 ~]$ hdfs dfsrouteradmin -nameservice disable ns2 nameservice: 
hdfs-test@EXAMPLE is not a super user at 
org.apache.hadoop.hdfs.server.federation.router.RouterPermissionChecker.checkSuperuserPrivilege(RouterPermissionChecker.java:136)
{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDDS-1714) When restart om with Kerberos, NPException happened at addPersistedDelegationToken

2019-06-21 Thread luhuachao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDDS-1714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16869277#comment-16869277
 ] 

luhuachao commented on HDDS-1714:
-

This error appears after I run *_ozone sh token get_* or an *mr job on ozone* 
and then restart the OM; the quick fix for me was to delete the om.db dir, but 
after doing this, the data written to Ozone before can no longer be accessed.

> When restart om with Kerberos, NPException happened at 
> addPersistedDelegationToken 
> ---
>
> Key: HDDS-1714
> URL: https://issues.apache.org/jira/browse/HDDS-1714
> Project: Hadoop Distributed Data Store
>  Issue Type: Bug
>  Components: Ozone Manager
>Affects Versions: 0.4.0
>Reporter: luhuachao
>Priority: Major
>
> the error stack:
> {code:java}
> 2019-06-21 15:17:41,744 [main] INFO - Loaded 11 tokens
> 2019-06-21 15:17:41,745 [main] INFO - Loading token state into token manager.
> 2019-06-21 15:17:41,748 [main] ERROR - Failed to start the OzoneManager.
> java.lang.NullPointerException
> at 
> org.apache.hadoop.ozone.security.OzoneDelegationTokenSecretManager.addPersistedDelegationToken(OzoneDelegationTokenSecretManager.java:371)
> at 
> org.apache.hadoop.ozone.security.OzoneDelegationTokenSecretManager.loadTokenSecretState(OzoneDelegationTokenSecretManager.java:358)
> at 
> org.apache.hadoop.ozone.security.OzoneDelegationTokenSecretManager.<init>(OzoneDelegationTokenSecretManager.java:96)
> at 
> org.apache.hadoop.ozone.om.OzoneManager.createDelegationTokenSecretManager(OzoneManager.java:608)
> at org.apache.hadoop.ozone.om.OzoneManager.<init>(OzoneManager.java:332)
> at org.apache.hadoop.ozone.om.OzoneManager.createOm(OzoneManager.java:941)
> at org.apache.hadoop.ozone.om.OzoneManager.main(OzoneManager.java:859)
> 2019-06-21 15:17:41,753 [pool-2-thread-1] INFO - SHUTDOWN_MSG:
> {code}
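
The stack points at token replay during OM startup. Below is a hedged, 
self-contained sketch of the likely failure mode, assuming a persisted token 
references a master key that is missing after the restart; the class and fields 
are illustrative, not the actual OzoneDelegationTokenSecretManager code.
{code:java}
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

class TokenReplaySketch {
  // Stand-in for the secret manager's in-memory master keys.
  private final Map<Integer, byte[]> masterKeys = new HashMap<>();

  void addPersistedToken(int masterKeyId) throws IOException {
    byte[] key = masterKeys.get(masterKeyId); // may be null after a restart
    if (key == null) {
      // Fail with context instead of dereferencing null and crashing
      // startup with a bare NullPointerException.
      throw new IOException("Can't recover persisted token: master key "
          + masterKeyId + " not found in the loaded token state");
    }
    // ... rebuild the token's password from the recovered key ...
  }
}
{code}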



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDDS-1714) When restart om with Kerberos, NPException happened at addPersistedDelegationToken

2019-06-21 Thread luhuachao (JIRA)
luhuachao created HDDS-1714:
---

 Summary: When restart om with Kerberos, NPException happened at 
addPersistedDelegationToken 
 Key: HDDS-1714
 URL: https://issues.apache.org/jira/browse/HDDS-1714
 Project: Hadoop Distributed Data Store
  Issue Type: Bug
  Components: Ozone Manager
Affects Versions: 0.4.0
Reporter: luhuachao


the error stack:
{code:java}
2019-06-21 15:17:41,744 [main] INFO - Loaded 11 tokens
2019-06-21 15:17:41,745 [main] INFO - Loading token state into token manager.
2019-06-21 15:17:41,748 [main] ERROR - Failed to start the OzoneManager.
java.lang.NullPointerException
at 
org.apache.hadoop.ozone.security.OzoneDelegationTokenSecretManager.addPersistedDelegationToken(OzoneDelegationTokenSecretManager.java:371)
at 
org.apache.hadoop.ozone.security.OzoneDelegationTokenSecretManager.loadTokenSecretState(OzoneDelegationTokenSecretManager.java:358)
at 
org.apache.hadoop.ozone.security.OzoneDelegationTokenSecretManager.<init>(OzoneDelegationTokenSecretManager.java:96)
at 
org.apache.hadoop.ozone.om.OzoneManager.createDelegationTokenSecretManager(OzoneManager.java:608)
at org.apache.hadoop.ozone.om.OzoneManager.<init>(OzoneManager.java:332)
at org.apache.hadoop.ozone.om.OzoneManager.createOm(OzoneManager.java:941)
at org.apache.hadoop.ozone.om.OzoneManager.main(OzoneManager.java:859)
2019-06-21 15:17:41,753 [pool-2-thread-1] INFO - SHUTDOWN_MSG:

{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14457) RBF: Add order text SPACE in CLI command 'hdfs dfsrouteradmin'

2019-04-24 Thread luhuachao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

luhuachao updated HDFS-14457:
-
Attachment: HDFS-14457-HDFS-13891-02.patch

> RBF: Add order text SPACE in CLI command 'hdfs dfsrouteradmin'
> --
>
> Key: HDFS-14457
> URL: https://issues.apache.org/jira/browse/HDFS-14457
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rbf
>Affects Versions: HDFS-13891
>Reporter: luhuachao
>Assignee: luhuachao
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-14457-HDFS-13891-01.patch, 
> HDFS-14457-HDFS-13891-02.patch, HDFS-14457.01.patch
>
>
> when executing the CLI command 'hdfs dfsrouteradmin', the help text for 
> -order does not contain SPACE
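
A sketch of what the help-text change amounts to: listing SPACE alongside the 
other mount-point ordering policies in the dfsrouteradmin usage string. The 
real text is assembled inside RouterAdmin; the constant below is illustrative.
{code:java}
final class RouterAdminUsageSketch {
  // Illustrative -add usage line with SPACE included in the -order choices.
  static final String ADD_USAGE =
      "[-add <source> <nameservice1, nameservice2, ...> <destination> "
          + "[-readonly] [-order HASH|LOCAL|RANDOM|HASH_ALL|SPACE] "
          + "-owner <owner> -group <group> -mode <mode>]";
}
{code}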



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-14457) RBF: Add order text SPACE in CLI command 'hdfs dfsrouteradmin'

2019-04-24 Thread luhuachao (JIRA)


[ 
https://issues.apache.org/jira/browse/HDFS-14457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16825643#comment-16825643
 ] 

luhuachao commented on HDFS-14457:
--

Thanks for the review :) [~ayushtkn]

> RBF: Add order text SPACE in CLI command 'hdfs dfsrouteradmin'
> --
>
> Key: HDFS-14457
> URL: https://issues.apache.org/jira/browse/HDFS-14457
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rbf
>Affects Versions: HDFS-13891
>Reporter: luhuachao
>Assignee: luhuachao
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-14457-HDFS-13891-01.patch, HDFS-14457.01.patch
>
>
> when executing the CLI command 'hdfs dfsrouteradmin', the help text for 
> -order does not contain SPACE



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14457) RBF: Add order text SPACE in CLI command 'hdfs dfsrouteradmin'

2019-04-24 Thread luhuachao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

luhuachao updated HDFS-14457:
-
Attachment: HDFS-14457-HDFS-13891-01.patch

> RBF: Add order text SPACE in CLI command 'hdfs dfsrouteradmin'
> --
>
> Key: HDFS-14457
> URL: https://issues.apache.org/jira/browse/HDFS-14457
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: rbf
>Affects Versions: HDFS-13891
>Reporter: luhuachao
>Assignee: luhuachao
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-14457-HDFS-13891-01.patch, HDFS-14457.01.patch
>
>
> when executing the CLI command 'hdfs dfsrouteradmin', the help text for 
> -order does not contain SPACE



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14457) Add order text SPACE in CLI command 'hdfs dfsrouteradmin'

2019-04-24 Thread luhuachao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

luhuachao updated HDFS-14457:
-
Status: Patch Available  (was: Open)

> Add order text SPACE in CLI command 'hdfs dfsrouteradmin'
> -
>
> Key: HDFS-14457
> URL: https://issues.apache.org/jira/browse/HDFS-14457
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rbf
>Affects Versions: HDFS-13891
>Reporter: luhuachao
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-14457.01.patch
>
>
> when executing the CLI command 'hdfs dfsrouteradmin', the help text for 
> -order does not contain SPACE



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14457) Add order text SPACE in CLI command 'hdfs dfsrouteradmin'

2019-04-24 Thread luhuachao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

luhuachao updated HDFS-14457:
-
Labels: RBF  (was: )

> Add order text SPACE in CLI command 'hdfs dfsrouteradmin'
> -
>
> Key: HDFS-14457
> URL: https://issues.apache.org/jira/browse/HDFS-14457
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rbf
>Affects Versions: HDFS-13891
>Reporter: luhuachao
>Priority: Major
>  Labels: RBF
> Attachments: HDFS-14457.01.patch
>
>
> when executing the CLI command 'hdfs dfsrouteradmin', the help text for 
> -order does not contain SPACE



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-14457) Add order text SPACE in CLI command 'hdfs dfsrouteradmin'

2019-04-24 Thread luhuachao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-14457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

luhuachao updated HDFS-14457:
-
Attachment: HDFS-14457.01.patch

> Add order text SPACE in CLI command 'hdfs dfsrouteradmin'
> -
>
> Key: HDFS-14457
> URL: https://issues.apache.org/jira/browse/HDFS-14457
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rbf
>Affects Versions: HDFS-13891
>Reporter: luhuachao
>Priority: Major
> Attachments: HDFS-14457.01.patch
>
>
> when executing the CLI command 'hdfs dfsrouteradmin', the help text for 
> -order does not contain SPACE



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-14457) Add order text SPACE in CLI command 'hdfs dfsrouteradmin'

2019-04-24 Thread luhuachao (JIRA)
luhuachao created HDFS-14457:


 Summary: Add order text SPACE in CLI command 'hdfs dfsrouteradmin'
 Key: HDFS-14457
 URL: https://issues.apache.org/jira/browse/HDFS-14457
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: rbf
Affects Versions: HDFS-13891
Reporter: luhuachao


when executing the CLI command 'hdfs dfsrouteradmin', the help text for -order 
does not contain SPACE



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13889) The hadoop3.x client have compatible problem with hadoop2.x cluster

2018-09-03 Thread luhuachao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

luhuachao updated HDFS-13889:
-
Component/s: hdfs

> The hadoop3.x client have compatible problem with hadoop2.x cluster
> ---
>
> Key: HDFS-13889
> URL: https://issues.apache.org/jira/browse/HDFS-13889
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs
>Reporter: luhuachao
>Priority: Critical
>
> when a hadoop3.1.0 client submits a mapreduce job to a hadoop2.8.2 cluster, 
> the appmaster fails with 'java.lang.NumberFormatException: For input string: 
> "30s"' on the config dfs.client.datanode-restart.timeout. In hadoop3.x, 
> hdfs-default.xml sets "dfs.client.datanode-restart.timeout" to the value 
> "30s", while in hadoop2.x DfsClientConf.java uses the method getLong to read 
> this value. Is it necessary to fix this problem?
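
A self-contained illustration of the mismatch, using only the public 
Configuration API: a duration-aware read (the Hadoop 3.x style) understands the 
"30s" suffix, while a plain getLong (the Hadoop 2.x DfsClientConf style) hands 
"30s" to Long.parseLong and throws.
{code:java}
import java.util.concurrent.TimeUnit;
import org.apache.hadoop.conf.Configuration;

public class RestartTimeoutSketch {
  public static void main(String[] args) {
    Configuration conf = new Configuration(false);
    conf.set("dfs.client.datanode-restart.timeout", "30s");

    // Duration-aware read: parses the "s" suffix and yields 30.
    long seconds = conf.getTimeDuration(
        "dfs.client.datanode-restart.timeout", 30L, TimeUnit.SECONDS);
    System.out.println("getTimeDuration -> " + seconds);

    // Plain numeric read: Long.parseLong("30s") ->
    // java.lang.NumberFormatException, as seen in the AppMaster.
    long raw = conf.getLong("dfs.client.datanode-restart.timeout", 30L);
    System.out.println("getLong -> " + raw); // never reached
  }
}
{code}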



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-13889) The hadoop3.x client have compatible problem with hadoop2.x cluster

2018-09-03 Thread luhuachao (JIRA)
luhuachao created HDFS-13889:


 Summary: The hadoop3.x client have compatible problem with 
hadoop2.x cluster
 Key: HDFS-13889
 URL: https://issues.apache.org/jira/browse/HDFS-13889
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: luhuachao


when a hadoop3.1.0 client submits a mapreduce job to a hadoop2.8.2 cluster, the 
appmaster fails with 'java.lang.NumberFormatException: For input string: "30s"' 
on the config dfs.client.datanode-restart.timeout. In hadoop3.x, 
hdfs-default.xml sets "dfs.client.datanode-restart.timeout" to the value "30s", 
while in hadoop2.x DfsClientConf.java uses the method getLong to read this 
value. Is it necessary to fix this problem?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13626) When the setOwner operation was denied,The logging username is not appropriate

2018-05-28 Thread luhuachao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

luhuachao updated HDFS-13626:
-
Description: 
when doing a chown operation on the target file /tmp/test as user 'root' to 
change the owner to user 'hive', the log displays 'User hive is not a super 
user'; the appropriate message here would be 'User root is not a super user'
{code:java}
[root@lhccmh1 ~]# hdfs dfs -ls /tmp/test
-rw-r--r-- 3 root hdfs 0 2018-05-28 10:33 /tmp/test
[root@lhccmh1 ~]# hdfs dfs -chown hive /tmp/test
chown: changing ownership of '/tmp/test': User hive is not a super user 
(non-super user cannot change owner).{code}
The latest patch on HDFS-10455 uses username rather than pc.getUser() in the 
log message;

 
{code:java}
 
   if (!pc.isSuperUser()) {
 if (username != null && !pc.getUser().equals(username)) {
-  throw new AccessControlException("Non-super user cannot change 
owner");
+  throw new AccessControlException("User " + username
+  + " is not a super user (non-super user cannot change owner).");
 }
 if (group != null && !pc.isMemberOfGroup(group)) {
-  throw new AccessControlException("User does not belong to " + group);
+  throw new AccessControlException(
+  "User " + username + " does not belong to " + group);
 }
   } {code}
 

  was:
when doing a chown operation on the target file /tmp/test as user 'root' to 
change the owner to user 'hive', the log displays 'User hive is not a super 
user'; the appropriate message here would be 'User root is not a super user'

[root@lhccmh1 ~]# hdfs dfs -ls /tmp/test
 -rw-r--r-- 3 root hdfs 0 2018-05-28 10:33 /tmp/test
 [root@lhccmh1 ~]# hdfs dfs -chown hive /tmp/test
 chown: changing ownership of '/tmp/test': User hive is not a super user 
(non-super user cannot change owner).

The latest patch on HDFS-10455 uses username rather than pc.getUser() in the 
log message;

if (!pc.isSuperUser()) {
 if (username != null && !pc.getUser().equals(username)) {
- throw new AccessControlException("Non-super user cannot change owner");
+ throw new AccessControlException("User " + username
+ + " is not a super user (non-super user cannot change owner).");
 }
 if (group != null && !pc.isMemberOfGroup(group)) {
- throw new AccessControlException("User does not belong to " + group);
+ throw new AccessControlException(
+ "User " + username + " does not belong to " + group);
 }
 }

 

 

 


> When the setOwner operation was denied,The logging username is not appropriate
> --
>
> Key: HDFS-13626
> URL: https://issues.apache.org/jira/browse/HDFS-13626
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.8.0, 2.7.4, 3.0.0-alpha2
> Environment: hadoop 2.8.2
>Reporter: luhuachao
>Priority: Minor
>
> when doing a chown operation on the target file /tmp/test as user 'root' to 
> change the owner to user 'hive', the log displays 'User hive is not a super 
> user'; the appropriate message here would be 'User root is not a super user'
> {code:java}
> [root@lhccmh1 ~]# hdfs dfs -ls /tmp/test
> -rw-r--r-- 3 root hdfs 0 2018-05-28 10:33 /tmp/test
> [root@lhccmh1 ~]# hdfs dfs -chown hive /tmp/test
> chown: changing ownership of '/tmp/test': User hive is not a super user 
> (non-super user cannot change owner).{code}
> The latest patch on HDFS-10455 uses username rather than pc.getUser() 
> in the log message;
>  
> {code:java}
>  
>if (!pc.isSuperUser()) {
>  if (username != null && !pc.getUser().equals(username)) {
> -  throw new AccessControlException("Non-super user cannot change 
> owner");
> +  throw new AccessControlException("User " + username
> +  + " is not a super user (non-super user cannot change 
> owner).");
>  }
>  if (group != null && !pc.isMemberOfGroup(group)) {
> -  throw new AccessControlException("User does not belong to " + 
> group);
> +  throw new AccessControlException(
> +  "User " + username + " does not belong to " + group);
>  }
>} {code}
>  
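
A self-contained sketch of the message fix the report suggests, with 
FSPermissionChecker stubbed out as an interface for illustration: the denial 
should name the caller (pc.getUser()), not the requested new owner.
{code:java}
import org.apache.hadoop.security.AccessControlException;

final class SetOwnerMessageSketch {
  // Stand-in for FSPermissionChecker; only what this check needs.
  interface PermissionChecker {
    boolean isSuperUser();
    String getUser(); // the caller, e.g. "root"
  }

  static void checkSetOwner(PermissionChecker pc, String newOwner)
      throws AccessControlException {
    if (!pc.isSuperUser() && newOwner != null
        && !pc.getUser().equals(newOwner)) {
      // Report the caller, not the requested owner (e.g. "hive").
      throw new AccessControlException("User " + pc.getUser()
          + " is not a super user (non-super user cannot change owner).");
    }
  }
}
{code}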



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13626) When the setOwner operation was denied,The logging username is not appropriate

2018-05-28 Thread luhuachao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

luhuachao updated HDFS-13626:
-
Description: 
when doing a chown operation on the target file /tmp/test as user 'root' to 
change the owner to user 'hive', the log displays 'User hive is not a super 
user'; the appropriate message here would be 'User root is not a super user'
{code:java}
[root@lhccmh1 ~]# hdfs dfs -ls /tmp/test
-rw-r--r-- 3 root hdfs 0 2018-05-28 10:33 /tmp/test
[root@lhccmh1 ~]# hdfs dfs -chown hive /tmp/test
chown: changing ownership of '/tmp/test': User hive is not a super user 
(non-super user cannot change owner).{code}
The latest patch on HDFS-10455 uses username rather than pc.getUser() in the 
log message; 
{code:java}
 
   if (!pc.isSuperUser()) {
 if (username != null && !pc.getUser().equals(username)) {
-  throw new AccessControlException("Non-super user cannot change 
owner");
+  throw new AccessControlException("User " + username
+  + " is not a super user (non-super user cannot change owner).");
 }
 if (group != null && !pc.isMemberOfGroup(group)) {
-  throw new AccessControlException("User does not belong to " + group);
+  throw new AccessControlException(
+  "User " + username + " does not belong to " + group);
 }
   } {code}
 

  was:
when doing a chown operation on the target file /tmp/test as user 'root' to 
change the owner to user 'hive', the log displays 'User hive is not a super 
user'; the appropriate message here would be 'User root is not a super user'
{code:java}
[root@lhccmh1 ~]# hdfs dfs -ls /tmp/test
-rw-r--r-- 3 root hdfs 0 2018-05-28 10:33 /tmp/test
[root@lhccmh1 ~]# hdfs dfs -chown hive /tmp/test
chown: changing ownership of '/tmp/test': User hive is not a super user 
(non-super user cannot change owner).{code}
The latest patch on HDFS-10455 uses username rather than pc.getUser() in the 
log message;

 
{code:java}
 
   if (!pc.isSuperUser()) {
 if (username != null && !pc.getUser().equals(username)) {
-  throw new AccessControlException("Non-super user cannot change 
owner");
+  throw new AccessControlException("User " + username
+  + " is not a super user (non-super user cannot change owner).");
 }
 if (group != null && !pc.isMemberOfGroup(group)) {
-  throw new AccessControlException("User does not belong to " + group);
+  throw new AccessControlException(
+  "User " + username + " does not belong to " + group);
 }
   } {code}
 


> When the setOwner operation was denied,The logging username is not appropriate
> --
>
> Key: HDFS-13626
> URL: https://issues.apache.org/jira/browse/HDFS-13626
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.8.0, 2.7.4, 3.0.0-alpha2
> Environment: hadoop 2.8.2
>Reporter: luhuachao
>Priority: Minor
>
> when doing a chown operation on the target file /tmp/test as user 'root' to 
> change the owner to user 'hive', the log displays 'User hive is not a super 
> user'; the appropriate message here would be 'User root is not a super user'
> {code:java}
> [root@lhccmh1 ~]# hdfs dfs -ls /tmp/test
> -rw-r--r-- 3 root hdfs 0 2018-05-28 10:33 /tmp/test
> [root@lhccmh1 ~]# hdfs dfs -chown hive /tmp/test
> chown: changing ownership of '/tmp/test': User hive is not a super user 
> (non-super user cannot change owner).{code}
> The latest patch on HDFS-10455 uses username rather than pc.getUser() 
> in the log message; 
> {code:java}
>  
>if (!pc.isSuperUser()) {
>  if (username != null && !pc.getUser().equals(username)) {
> -  throw new AccessControlException("Non-super user cannot change 
> owner");
> +  throw new AccessControlException("User " + username
> +  + " is not a super user (non-super user cannot change 
> owner).");
>  }
>  if (group != null && !pc.isMemberOfGroup(group)) {
> -  throw new AccessControlException("User does not belong to " + 
> group);
> +  throw new AccessControlException(
> +  "User " + username + " does not belong to " + group);
>  }
>} {code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-13626) When the setOwner operation was denied,The logging username is not appropriate

2018-05-28 Thread luhuachao (JIRA)


 [ 
https://issues.apache.org/jira/browse/HDFS-13626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

luhuachao updated HDFS-13626:
-
Description: 
when doing a chown operation on the target file /tmp/test as user 'root' to 
change the owner to user 'hive', the log displays 'User hive is not a super 
user'; the appropriate message here would be 'User root is not a super user'

[root@lhccmh1 ~]# hdfs dfs -ls /tmp/test
 -rw-r--r-- 3 root hdfs 0 2018-05-28 10:33 /tmp/test
 [root@lhccmh1 ~]# hdfs dfs -chown hive /tmp/test
 chown: changing ownership of '/tmp/test': User hive is not a super user 
(non-super user cannot change owner).

The latest patch on HDFS-10455 uses username rather than pc.getUser() in the 
log message;

if (!pc.isSuperUser()) {
 if (username != null && !pc.getUser().equals(username)) {
- throw new AccessControlException("Non-super user cannot change owner");
+ throw new AccessControlException("User " + username
+ + " is not a super user (non-super user cannot change owner).");
 }
 if (group != null && !pc.isMemberOfGroup(group)) {
- throw new AccessControlException("User does not belong to " + group);
+ throw new AccessControlException(
+ "User " + username + " does not belong to " + group);
 }
 }

 

 

 

  was:
when doing a chown operation on the target file /tmp/test as user 'root' to 
change the owner to user 'hive', the log displays 'User hive is not a super 
user'; the appropriate message here would be 'User root is not a super user'

[root@lhccmh1 ~]# hdfs dfs -ls /tmp/test
 -rw-r--r-- 3 root hdfs 0 2018-05-28 10:33 /tmp/test
 [root@lhccmh1 ~]# hdfs dfs -chown hive /tmp/test
 chown: changing ownership of '/tmp/test': User hive is not a super user 
(non-super user cannot change owner).

The latest patch on HDFS-10455 uses username rather than pc.getUser() in the 
log message;

   if (!pc.isSuperUser()) {
 if (username != null && !pc.getUser().equals(username)) {
-  throw new AccessControlException("Non-super user cannot change 
owner");
+  throw new AccessControlException("User " + *username*
+  + " is not a super user (non-super user cannot change owner).");
 }
 if (group != null && !pc.isMemberOfGroup(group)) {
-  throw new AccessControlException("User does not belong to " + group);
+  throw new AccessControlException(
+  "User " + username + " does not belong to " + group);
 }
   }
 

 

 


> When the setOwner operation was denied,The logging username is not appropriate
> --
>
> Key: HDFS-13626
> URL: https://issues.apache.org/jira/browse/HDFS-13626
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 2.8.0, 2.7.4, 3.0.0-alpha2
> Environment: hadoop 2.8.2
>Reporter: luhuachao
>Priority: Minor
>
> when doing a chown operation on the target file /tmp/test as user 'root' to 
> change the owner to user 'hive', the log displays 'User hive is not a super 
> user'; the appropriate message here would be 'User root is not a super user'
> [root@lhccmh1 ~]# hdfs dfs -ls /tmp/test
>  -rw-r--r-- 3 root hdfs 0 2018-05-28 10:33 /tmp/test
>  [root@lhccmh1 ~]# hdfs dfs -chown hive /tmp/test
>  chown: changing ownership of '/tmp/test': User hive is not a super user 
> (non-super user cannot change owner).
> The latest patch on HDFS-10455 uses username rather than pc.getUser() 
> in the log message;
> if (!pc.isSuperUser()) {
>  if (username != null && !pc.getUser().equals(username)) {
> - throw new AccessControlException("Non-super user cannot change owner");
> + throw new AccessControlException("User " + username
> + + " is not a super user (non-super user cannot change owner).");
>  }
>  if (group != null && !pc.isMemberOfGroup(group)) {
> - throw new AccessControlException("User does not belong to " + group);
> + throw new AccessControlException(
> + "User " + username + " does not belong to " + group);
>  }
>  }
>  
>  
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-13626) When the setOwner operation was denied,The logging username is not appropriate

2018-05-27 Thread luhuachao (JIRA)
luhuachao created HDFS-13626:


 Summary: When the setOwner operation was denied,The logging 
username is not appropriate
 Key: HDFS-13626
 URL: https://issues.apache.org/jira/browse/HDFS-13626
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: namenode
Affects Versions: 3.0.0-alpha2, 2.7.4, 2.8.0
 Environment: hadoop 2.8.2
Reporter: luhuachao


when doing a chown operation on the target file /tmp/test as user 'root' to 
change the owner to user 'hive', the log displays 'User hive is not a super 
user'; the appropriate message here would be 'User root is not a super user'

[root@lhccmh1 ~]# hdfs dfs -ls /tmp/test
 -rw-r--r-- 3 root hdfs 0 2018-05-28 10:33 /tmp/test
 [root@lhccmh1 ~]# hdfs dfs -chown hive /tmp/test
 chown: changing ownership of '/tmp/test': User hive is not a super user 
(non-super user cannot change owner).

The latest patch on HDFS-10455 uses username rather than pc.getUser() in the 
log message;

   if (!pc.isSuperUser()) {
 if (username != null && !pc.getUser().equals(username)) {
-  throw new AccessControlException("Non-super user cannot change 
owner");
+  throw new AccessControlException("User " + *username*
+  + " is not a super user (non-super user cannot change owner).");
 }
 if (group != null && !pc.isMemberOfGroup(group)) {
-  throw new AccessControlException("User does not belong to " + group);
+  throw new AccessControlException(
+  "User " + username + " does not belong to " + group);
 }
   }
 

 

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org