[jira] [Commented] (HDFS-15631) RBF: dfsadmin -report multiple capacity and used info
[ https://issues.apache.org/jira/browse/HDFS-15631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17244990#comment-17244990 ] Shen Yinjie commented on HDFS-15631: [~ayushtkn] Sorry, I was quite busy with work last month, and was confused by this UT: the clusterId with multiple NSs in MiniRouterDFSCluster is always the same. I'll find a way to set a different clusterId in test scope when the DNs are not shared. > RBF: dfsadmin -report multiple capacity and used info > -- > > Key: HDFS-15631 > URL: https://issues.apache.org/jira/browse/HDFS-15631 > Project: Hadoop HDFS > Issue Type: Bug > Components: rbf > Affects Versions: 3.0.1 > Reporter: Shen Yinjie > Assignee: Shen Yinjie > Priority: Major > Attachments: HDFS-15631_1.patch > > > When RBF is enabled and we execute `hdfs dfsadmin -report` against an RBF nameservice, the returned capacity is a multiple of the number of nameservices. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-15631) RBF: dfsadmin -report multiple capacity and used info
[ https://issues.apache.org/jira/browse/HDFS-15631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shen Yinjie updated HDFS-15631: --- Attachment: (was: HDFS-15631_2.patch)
[jira] [Updated] (HDFS-15631) RBF: dfsadmin -report multiple capacity and used info
[ https://issues.apache.org/jira/browse/HDFS-15631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shen Yinjie updated HDFS-15631: --- Status: Open (was: Patch Available)
[jira] [Updated] (HDFS-15631) RBF: dfsadmin -report multiple capacity and used info
[ https://issues.apache.org/jira/browse/HDFS-15631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shen Yinjie updated HDFS-15631: --- Status: Patch Available (was: Open)
[jira] [Updated] (HDFS-15631) RBF: dfsadmin -report multiple capacity and used info
[ https://issues.apache.org/jira/browse/HDFS-15631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shen Yinjie updated HDFS-15631: --- Attachment: HDFS-15631_2.patch
[jira] [Updated] (HDFS-15631) RBF: dfsadmin -report multiple capacity and used info
[ https://issues.apache.org/jira/browse/HDFS-15631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shen Yinjie updated HDFS-15631: --- Attachment: (was: HDFS-15631_2.patch)
[jira] [Updated] (HDFS-15631) RBF: dfsadmin -report multiple capacity and used info
[ https://issues.apache.org/jira/browse/HDFS-15631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shen Yinjie updated HDFS-15631: --- Attachment: HDFS-15631_2.patch
[jira] [Commented] (HDFS-15631) RBF: dfsadmin -report multiple capacity and used info
[ https://issues.apache.org/jira/browse/HDFS-15631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17216389#comment-17216389 ] Shen Yinjie commented on HDFS-15631: Sorry about that :( I will put up a new patch soon.
[jira] [Updated] (HDFS-15631) RBF: dfsadmin -report multiple capacity and used info
[ https://issues.apache.org/jira/browse/HDFS-15631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shen Yinjie updated HDFS-15631: --- Status: Patch Available (was: Open)
[jira] [Updated] (HDFS-15631) RBF: dfsadmin -report multiple capacity and used info
[ https://issues.apache.org/jira/browse/HDFS-15631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shen Yinjie updated HDFS-15631: --- Attachment: HDFS-15631_1.patch
[jira] [Commented] (HDFS-15631) dfsadmin -report with RBF returns multiple capacity and used info
[ https://issues.apache.org/jira/browse/HDFS-15631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17214364#comment-17214364 ] Shen Yinjie commented on HDFS-15631: Yes, all the datanodes are shared. As far as I know, a mixed deployment of federated and independent clusters is also allowed in RBF. I took a simple fix: when calling the getStats() API, remove duplicate FederationNamespaceInfo entries by clusterId, then invoke and merge the same way as before.
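The dedupe-and-merge idea described in the comment above can be sketched as follows. This is a hypothetical illustration, not the attached patch: `NsInfo` is a local stand-in for the real FederationNamespaceInfo class, and the aggregation keeps one namespace record per clusterId before summing, so DataNodes shared by federated nameservices are counted only once.

```java
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class DedupByClusterId {
    // Minimal stand-in for FederationNamespaceInfo (the real class lives in
    // org.apache.hadoop.hdfs.server.federation.resolver).
    static final class NsInfo {
        final String clusterId;
        final String nameserviceId;
        final long capacity;
        NsInfo(String clusterId, String nameserviceId, long capacity) {
            this.clusterId = clusterId;
            this.nameserviceId = nameserviceId;
            this.capacity = capacity;
        }
    }

    // Keep one namespace per clusterId, then sum the capacities, mirroring
    // the "remove duplicates, then invoke and merge as before" approach.
    static long aggregateCapacity(List<NsInfo> namespaces) {
        Map<String, NsInfo> byCluster = new LinkedHashMap<>();
        for (NsInfo ns : namespaces) {
            byCluster.putIfAbsent(ns.clusterId, ns);
        }
        long total = 0;
        for (NsInfo ns : byCluster.values()) {
            total += ns.capacity;
        }
        return total;
    }

    public static void main(String[] args) {
        // Two federated NSs sharing one cluster's DNs, plus an independent cluster.
        List<NsInfo> nss = Arrays.asList(
            new NsInfo("cluster-A", "ns1", 100L),
            new NsInfo("cluster-A", "ns2", 100L),
            new NsInfo("cluster-B", "ns3", 50L));
        System.out.println(aggregateCapacity(nss)); // 150, not the doubled 250
    }
}
```

Without the dedupe step the naive merge would report 250 for this example, which is the "multiple of the number of nameservices" symptom in the issue description.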
[jira] [Assigned] (HDFS-15631) dfsadmin -report with RBF returns multiple capacity and used info
[ https://issues.apache.org/jira/browse/HDFS-15631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shen Yinjie reassigned HDFS-15631: -- Assignee: Shen Yinjie
[jira] [Updated] (HDFS-15631) dfsadmin -report with RBF returns multiple capacity and used info
[ https://issues.apache.org/jira/browse/HDFS-15631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shen Yinjie updated HDFS-15631: --- Affects Version/s: 3.0.1
[jira] [Updated] (HDFS-15631) dfsadmin -report with RBF returns multiple capacity and used info
[ https://issues.apache.org/jira/browse/HDFS-15631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shen Yinjie updated HDFS-15631: --- Component/s: rbf
[jira] [Created] (HDFS-15631) dfsadmin -report with RBF returns multiple capacity and used info
Shen Yinjie created HDFS-15631: -- Summary: dfsadmin -report with RBF returns multiple capacity and used info Key: HDFS-15631 URL: https://issues.apache.org/jira/browse/HDFS-15631 Project: Hadoop HDFS Issue Type: Bug Reporter: Shen Yinjie When RBF is enabled and we execute `hdfs dfsadmin -report` against an RBF nameservice, the returned capacity is a multiple of the number of nameservices.
[jira] [Commented] (HDFS-9913) DistCp to add -useTrash to move deleted files to Trash
[ https://issues.apache.org/jira/browse/HDFS-9913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16967124#comment-16967124 ] Shen Yinjie commented on HDFS-9913: --- [~LiJinglun] We finished this a few weeks ago. [~ste...@apache.org] Please spare some time to give some suggestions; I will respond and update promptly. > DistCp to add -useTrash to move deleted files to Trash > -- > > Key: HDFS-9913 > URL: https://issues.apache.org/jira/browse/HDFS-9913 > Project: Hadoop HDFS > Issue Type: Bug > Components: distcp > Affects Versions: 2.6.0 > Reporter: Konstantin Shaposhnikov > Assignee: Shen Yinjie > Priority: Major > Attachments: HDFS-9913_1.patch, HDFS-9913_2.patch > > > Documentation for the DistCp -delete option says ([http://hadoop.apache.org/docs/current/hadoop-distcp/DistCp.html]): > | The deletion is done by FS Shell. So the trash will be used, if it is enabled. > However this seems to be no longer the case. The latest source code (https://svn.apache.org/repos/asf/hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/CopyCommitter.java) uses `FileSystem.delete`, and trash options seem not to be applied.
[jira] [Updated] (HDFS-14271) [SBN read] StandbyException is logged if Observer is the first NameNode
[ https://issues.apache.org/jira/browse/HDFS-14271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shen Yinjie updated HDFS-14271: --- Assignee: Shen Yinjie Status: Patch Available (was: Open) Attached a simple fix for the retry exception log. > [SBN read] StandbyException is logged if Observer is the first NameNode > --- > > Key: HDFS-14271 > URL: https://issues.apache.org/jira/browse/HDFS-14271 > Project: Hadoop HDFS > Issue Type: Improvement > Components: hdfs > Affects Versions: 3.3.0 > Reporter: Wei-Chiu Chuang > Assignee: Shen Yinjie > Priority: Minor > Attachments: HDFS-14271_1.patch > > > If I transition the first NameNode into Observer state, and then I create a > file from command line, it prints the following StandbyException log message, > as if the command failed. But it actually completed successfully: > {noformat} > [root@weichiu-sbsr-1 ~]# hdfs dfs -touchz /tmp/abf > 19/02/12 16:35:17 INFO retry.RetryInvocationHandler: > org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException): > Operation category WRITE is not supported in state observer.
Visit > https://s.apache.org/sbnn-error > at > org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.checkOperation(StandbyState.java:98) > at > org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.checkOperation(NameNode.java:1987) > at > org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOperation(FSNamesystem.java:1424) > at > org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:762) > at > org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:458) > at > org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java) > at > org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:530) > at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1070) > at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:918) > at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:853) > at java.security.AccessController.doPrivileged(Native Method) > at javax.security.auth.Subject.doAs(Subject.java:422) > at > org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1729) > at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2782) > , while invoking $Proxy4.create over > [weichiu-sbsr-1.gce.cloudera.com/172.31.121.145:8020,weichiu-sbsr-2.gce.cloudera.com/172.31.121.140:8020]. > Trying to failover immediately. > {noformat} > This is unlike the case when the first NameNode is the Standby, where this > StandbyException is suppressed.
[jira] [Updated] (HDFS-14271) [SBN read] StandbyException is logged if Observer is the first NameNode
[ https://issues.apache.org/jira/browse/HDFS-14271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shen Yinjie updated HDFS-14271: --- Attachment: HDFS-14271_1.patch
[jira] [Commented] (HDFS-14238) A log in NNThroughputBenchmark should change log level to "INFO" instead of "ERROR"
[ https://issues.apache.org/jira/browse/HDFS-14238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16949888#comment-16949888 ] Shen Yinjie commented on HDFS-14238: Thanks, [~ayushtkn]! > A log in NNThroughputBenchmark should change log level to "INFO" instead of "ERROR" > > > Key: HDFS-14238 > URL: https://issues.apache.org/jira/browse/HDFS-14238 > Project: Hadoop HDFS > Issue Type: Improvement > Reporter: Shen Yinjie > Assignee: Shen Yinjie > Priority: Major > Attachments: HDFS-14238.patch > > > In NNThroughputBenchmark#150, LOG.error("Log level = " + logLevel.toString()); > this log level should be changed to LOG.info(), since no error occurs here; it just tells us the namenode log level has changed.
[jira] [Created] (HDDS-2173) Change ozone default time-zone from GMT to system default time-zone.
Shen Yinjie created HDDS-2173: - Summary: Change ozone default time-zone from GMT to system default time-zone. Key: HDDS-2173 URL: https://issues.apache.org/jira/browse/HDDS-2173 Project: Hadoop Distributed Data Store Issue Type: Improvement Affects Versions: 0.4.0 Reporter: Shen Yinjie Currently Ozone uses GMT as the time zone. When we are in the +8 time zone, the modified time of Ozone buckets and keys is confusing.
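The confusion can be shown in a few lines of Java. This is an illustration only, not Ozone's actual formatting code: the instant and pattern are made up, and UTC+8 is pinned explicitly (rather than read from the system default) so the output is deterministic.

```java
import java.time.Instant;
import java.time.ZoneOffset;
import java.time.format.DateTimeFormatter;

public class TimeZoneDemo {
    public static void main(String[] args) {
        // One modification instant, rendered two ways.
        Instant modified = Instant.parse("2019-09-23T12:00:00Z");
        DateTimeFormatter fmt = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm");

        // GMT rendering, as a GMT-defaulting tool would show it.
        System.out.println(fmt.format(modified.atZone(ZoneOffset.UTC)));       // 2019-09-23 12:00
        // What a user sitting in the +8 zone expects to see for the same key.
        System.out.println(fmt.format(modified.atZone(ZoneOffset.ofHours(8)))); // 2019-09-23 20:00
    }
}
```

The eight-hour gap between the two lines is exactly the mismatch the issue reports; rendering in the system default zone (`ZoneId.systemDefault()`) would make the displayed time match the user's clock.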
[jira] [Commented] (HDFS-14271) [SBN read] StandbyException is logged if Observer is the first NameNode
[ https://issues.apache.org/jira/browse/HDFS-14271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16934009#comment-16934009 ] Shen Yinjie commented on HDFS-14271: [~xkrogen] Agreed with your analysis. I have prepared a simple fix that changes the StandbyException log level from INFO to DEBUG when the Exception param obtained by RetryInvocationHandler#log(..., Exception ex) wraps a StandbyException.
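A minimal sketch of the check proposed in the comment above, assuming the usual pattern of inspecting the wrapped server-side exception class. The `RemoteException` class here is a local stand-in that only mimics `getClassName()`; it is not Hadoop's real org.apache.hadoop.ipc.RemoteException, and this is not the attached patch.

```java
public class RetryLogLevelSketch {
    // Stand-in for the RPC-layer wrapper: it carries the class name of the
    // exception thrown on the server side.
    static class RemoteException extends Exception {
        private final String className;
        RemoteException(String className, String msg) {
            super(msg);
            this.className = className;
        }
        String getClassName() { return className; }
    }

    // True when the failure wraps a StandbyException: the retry is expected
    // during failover, so the log line should be demoted from INFO to DEBUG.
    static boolean isWrappedStandby(Exception ex) {
        return ex instanceof RemoteException
            && "org.apache.hadoop.ipc.StandbyException"
                .equals(((RemoteException) ex).getClassName());
    }

    public static void main(String[] args) {
        Exception standby = new RemoteException(
            "org.apache.hadoop.ipc.StandbyException",
            "Operation category WRITE is not supported in state observer.");
        Exception other = new Exception("unrelated failure");
        System.out.println(isWrappedStandby(standby)); // true -> log at DEBUG
        System.out.println(isWrappedStandby(other));   // false -> log at INFO
    }
}
```

In the real handler this boolean would simply select between LOG.debug and LOG.info before emitting the retry message, so successful observer failovers no longer print a scary stack trace.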
[jira] [Commented] (HDFS-14855) client always print standbyexception info with multi standby namenode
[ https://issues.apache.org/jira/browse/HDFS-14855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16933976#comment-16933976 ] Shen Yinjie commented on HDFS-14855: Thank you, [~ayushtkn]. I am glad to work on it. > client always print standbyexception info with multi standby namenode > - > > Key: HDFS-14855 > URL: https://issues.apache.org/jira/browse/HDFS-14855 > Project: Hadoop HDFS > Issue Type: Improvement > Reporter: Shen Yinjie > Assignee: Shen Yinjie > Priority: Major > Attachments: image-2019-09-19-20-04-54-591.png > > > When a cluster has more than two standby namenodes, client shell commands always print StandbyException info. Maybe we should change the log level from INFO to DEBUG. > !image-2019-09-19-20-04-54-591.png!
[jira] [Assigned] (HDFS-14855) client always print standbyexception info with multi standby namenode
[ https://issues.apache.org/jira/browse/HDFS-14855?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shen Yinjie reassigned HDFS-14855: -- Assignee: Shen Yinjie
[jira] [Created] (HDFS-14855) client always print standbyexception info with multi standby namenode
Shen Yinjie created HDFS-14855: -- Summary: client always print standbyexception info with multi standby namenode Key: HDFS-14855 URL: https://issues.apache.org/jira/browse/HDFS-14855 Project: Hadoop HDFS Issue Type: Improvement Reporter: Shen Yinjie Attachments: image-2019-09-19-20-04-54-591.png When a cluster has more than two standby namenodes, client shell commands always print StandbyException info. Maybe we should change the log level from INFO to DEBUG. !image-2019-09-19-20-04-54-591.png!
[jira] [Commented] (HDFS-9913) DistCp to add -useTrash to move deleted files to Trash
[ https://issues.apache.org/jira/browse/HDFS-9913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16887102#comment-16887102 ] Shen Yinjie commented on HDFS-9913: --- Uploaded an integrated patch.
[jira] [Updated] (HDFS-9913) DistCp to add -useTrash to move deleted files to Trash
[ https://issues.apache.org/jira/browse/HDFS-9913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shen Yinjie updated HDFS-9913: -- Attachment: HDFS-9913_2.patch
[jira] [Commented] (HDFS-14628) Improve information on scanAndCompactStorages in BlockManager and lower log level
[ https://issues.apache.org/jira/browse/HDFS-14628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16877504#comment-16877504 ] Shen Yinjie commented on HDFS-14628: [~ayushtkn] Just found that HDFS-13692 has fixed this. > Improve information on scanAndCompactStorages in BlockManager and lower log level > -- > > Key: HDFS-14628 > URL: https://issues.apache.org/jira/browse/HDFS-14628 > Project: Hadoop HDFS > Issue Type: Improvement > Affects Versions: 3.1.0 > Reporter: Shen Yinjie > Assignee: Shen Yinjie > Priority: Major > > BlockManager#scanAndCompactStorages is called every 600 seconds by default. > In a big cluster with thousands of datanodes, it prints tens of thousands of lines per second while scanAndCompactStorages() is running, which adds a lot of noise to the namenode logs. Also, these INFO logs are on the namenode side and do not provide much information.
[jira] [Resolved] (HDFS-14628) Improve information on scanAndCompactStorages in BlockManager and lower log level
[ https://issues.apache.org/jira/browse/HDFS-14628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shen Yinjie resolved HDFS-14628. Resolution: Duplicate Assignee: Shen Yinjie > Improve information on scanAndCompactStorages in BlockManager and lower log > level > -- > > Key: HDFS-14628 > URL: https://issues.apache.org/jira/browse/HDFS-14628 > Project: Hadoop HDFS > Issue Type: Improvement >Affects Versions: 3.1.0 >Reporter: Shen Yinjie >Assignee: Shen Yinjie >Priority: Major > > BlockManager#scanAndCompactStorages is called every 600sec by default. > In a big cluster with thousands of datanodes, it prints out tens of thousands of > log messages every second while scanAndCompactStorages() is running, which adds > much noise to the namenode logs. Moreover, these INFO logs on the namenode side > do not provide much information.
[jira] [Updated] (HDFS-14628) Improve information on scanAndCompactStorages in BlockManager and lower log level
[ https://issues.apache.org/jira/browse/HDFS-14628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shen Yinjie updated HDFS-14628: --- Affects Version/s: 3.1.0 > Improve information on scanAndCompactStorages in BlockManager and lower log > level > -- > > Key: HDFS-14628 > URL: https://issues.apache.org/jira/browse/HDFS-14628 > Project: Hadoop HDFS > Issue Type: Improvement >Affects Versions: 3.1.0 >Reporter: Shen Yinjie >Priority: Major > > BlockManager#scanAndCompactStorages is called every 600sec by default. > In a big cluster with thousands of datanodes, it prints out tens of thousands of > log messages every second while scanAndCompactStorages() is running, which adds > much noise to the namenode logs. Moreover, these INFO logs on the namenode side > do not provide much information.
[jira] [Created] (HDFS-14628) Improve information on scanAndCompactStorages in BlockManager and lower log level
Shen Yinjie created HDFS-14628: -- Summary: Improve information on scanAndCompactStorages in BlockManager and lower log level Key: HDFS-14628 URL: https://issues.apache.org/jira/browse/HDFS-14628 Project: Hadoop HDFS Issue Type: Improvement Reporter: Shen Yinjie BlockManager#scanAndCompactStorages is called every 600sec by default. In a big cluster with thousands of datanodes, it prints out tens of thousands of log messages every second while scanAndCompactStorages() is running, which adds much noise to the namenode logs. Moreover, these INFO logs on the namenode side do not provide much information.
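The improvement proposed above can be sketched as follows. This is a minimal, hypothetical illustration using java.util.logging (not the SLF4J/Log4j loggers Hadoop's BlockManager actually uses, and scanAndCompact below is not HDFS code): each per-storage message is demoted to DEBUG level (FINE), and a single INFO summary is emitted per scan instead of one INFO line per storage.

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class LogLevelSketch {
    private static final Logger LOG = Logger.getLogger("BlockManagerSketch");

    // Sketch: per-storage details go to DEBUG/FINE, one INFO summary per scan.
    static int scanAndCompact(int numStorages) {
        int compacted = 0;
        for (int i = 0; i < numStorages; i++) {
            // Per-storage line: demoted so it no longer floods namenode logs.
            LOG.log(Level.FINE, "Compacted storage {0}", i);
            compacted++;
        }
        // One summary line per scan at INFO.
        LOG.log(Level.INFO, "Compacted {0} storages in this scan", compacted);
        return compacted;
    }

    public static void main(String[] args) {
        System.out.println(scanAndCompact(5));
    }
}
```

With thousands of datanodes this reduces the INFO volume from one line per storage to one line per 600-second scan.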
[jira] [Commented] (HDFS-9913) DispCp doesn't use Trash with -delete option
[ https://issues.apache.org/jira/browse/HDFS-9913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16873758#comment-16873758 ] Shen Yinjie commented on HDFS-9913: --- [~ste...@apache.org][~jojochuang] I have added a unit test and created a pr [pull/974|https://github.com/apache/hadoop/pull/974], please help review it. > DispCp doesn't use Trash with -delete option > > > Key: HDFS-9913 > URL: https://issues.apache.org/jira/browse/HDFS-9913 > Project: Hadoop HDFS > Issue Type: Bug > Components: distcp >Affects Versions: 2.6.0 >Reporter: Konstantin Shaposhnikov >Assignee: Shen Yinjie >Priority: Major > Attachments: HDFS-9913_1.patch > > > Documentation for DistCp -delete option says > ([http://hadoop.apache.org/docs/current/hadoop-distcp/DistCp.html]): > | The deletion is done by FS Shell. So the trash will be used, if it is > enable. > However it seems to be no longer the case. The latest source code > (https://svn.apache.org/repos/asf/hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/CopyCommitter.java) > uses `FileSystem.delete` and trash options seems to be not applied.
[jira] [Updated] (HDFS-9913) DispCp doesn't use Trash with -delete option
[ https://issues.apache.org/jira/browse/HDFS-9913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shen Yinjie updated HDFS-9913: -- Attachment: HDFS-9913_1.patch > DispCp doesn't use Trash with -delete option > > > Key: HDFS-9913 > URL: https://issues.apache.org/jira/browse/HDFS-9913 > Project: Hadoop HDFS > Issue Type: Bug > Components: distcp >Affects Versions: 2.6.0 >Reporter: Konstantin Shaposhnikov >Assignee: Shen Yinjie >Priority: Major > Attachments: HDFS-9913_1.patch > > > Documentation for DistCp -delete option says > ([http://hadoop.apache.org/docs/current/hadoop-distcp/DistCp.html]): > | The deletion is done by FS Shell. So the trash will be used, if it is > enable. > However it seems to be no longer the case. The latest source code > (https://svn.apache.org/repos/asf/hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/CopyCommitter.java) > uses `FileSystem.delete` and trash options seems to be not applied.
[jira] [Updated] (HDFS-9913) DispCp doesn't use Trash with -delete option
[ https://issues.apache.org/jira/browse/HDFS-9913?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shen Yinjie updated HDFS-9913: -- Status: Patch Available (was: Open) > DispCp doesn't use Trash with -delete option > > > Key: HDFS-9913 > URL: https://issues.apache.org/jira/browse/HDFS-9913 > Project: Hadoop HDFS > Issue Type: Bug > Components: distcp >Affects Versions: 2.6.0 >Reporter: Konstantin Shaposhnikov >Assignee: Shen Yinjie >Priority: Major > Attachments: HDFS-9913_1.patch > > > Documentation for DistCp -delete option says > ([http://hadoop.apache.org/docs/current/hadoop-distcp/DistCp.html]): > | The deletion is done by FS Shell. So the trash will be used, if it is > enable. > However it seems to be no longer the case. The latest source code > (https://svn.apache.org/repos/asf/hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/CopyCommitter.java) > uses `FileSystem.delete` and trash options seems to be not applied.
[jira] [Commented] (HDFS-9913) DispCp doesn't use Trash with -delete option
[ https://issues.apache.org/jira/browse/HDFS-9913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16862816#comment-16862816 ] Shen Yinjie commented on HDFS-9913: --- Thank you [~jojochuang], I will upload a patch that adds a 'useTrash' option to move deleted files to trash. Could you please help review it. > DispCp doesn't use Trash with -delete option > > > Key: HDFS-9913 > URL: https://issues.apache.org/jira/browse/HDFS-9913 > Project: Hadoop HDFS > Issue Type: Bug > Components: distcp >Affects Versions: 2.6.0 >Reporter: Konstantin Shaposhnikov >Assignee: Shen Yinjie >Priority: Major > > Documentation for DistCp -delete option says > ([http://hadoop.apache.org/docs/current/hadoop-distcp/DistCp.html]): > | The deletion is done by FS Shell. So the trash will be used, if it is > enable. > However it seems to be no longer the case. The latest source code > (https://svn.apache.org/repos/asf/hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/CopyCommitter.java) > uses `FileSystem.delete` and trash options seems to be not applied.
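The 'useTrash' idea above can be sketched in a standalone form. This is a minimal illustration under assumptions, not Hadoop's actual implementation (a real patch would go through org.apache.hadoop.fs.Trash inside CopyCommitter, and TrashDeleteSketch/moveToTrash below are hypothetical names): instead of deleting a target-only file outright with FileSystem.delete, the file is renamed under a trash directory so it can still be recovered.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class TrashDeleteSketch {
    // Hypothetical sketch of -useTrash: rename the file under a trash
    // directory instead of deleting it permanently.
    static Path moveToTrash(Path file, Path trashRoot) throws IOException {
        Files.createDirectories(trashRoot);
        Path dest = trashRoot.resolve(file.getFileName());
        // Move (rename) rather than delete, so the data survives.
        return Files.move(file, dest, StandardCopyOption.REPLACE_EXISTING);
    }

    public static void main(String[] args) throws IOException {
        Path dir = Files.createTempDirectory("distcp-demo");
        Path stale = Files.createFile(dir.resolve("stale.txt"));
        Path moved = moveToTrash(stale, dir.resolve(".Trash"));
        // The original path is gone, but the content is recoverable.
        System.out.println(Files.exists(stale) + " " + Files.exists(moved));
    }
}
```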
[jira] [Commented] (HDFS-14240) blockReport test in NNThroughputBenchmark throws ArrayIndexOutOfBoundsException
[ https://issues.apache.org/jira/browse/HDFS-14240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16861710#comment-16861710 ] Shen Yinjie commented on HDFS-14240: Sorry for my late reply. [~RANith] I used a real namenode to run “NNThroughputBenchmark -fs hdfs://hc1:8020 -op blockReport -datanodes 10 -reports 30 -blocksPerReport 100 -blocksPerFile 10”. > blockReport test in NNThroughputBenchmark throws > ArrayIndexOutOfBoundsException > --- > > Key: HDFS-14240 > URL: https://issues.apache.org/jira/browse/HDFS-14240 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Shen Yinjie >Assignee: Ranith Sardar >Priority: Major > Attachments: screenshot-1.png > > > When I run a blockReport test with NNThroughputBenchmark, > BlockReportStats.addBlocks() throws ArrayIndexOutOfBoundsException. > Digging into the code: > {code:java} > for(DatanodeInfo dnInfo : loc.getLocations()) > { int dnIdx = dnInfo.getXferPort() - 1; > datanodes[dnIdx].addBlock(loc.getBlock().getLocalBlock());{code} > > The problem is here: the datanodes array's length is determined by the > "-datanodes" or "-threads" arguments, but dnInfo.getXferPort() is a random > port, so dnIdx can exceed the array bounds.
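One way to avoid the out-of-bounds index described above, assuming (as the report notes) that transfer ports are effectively random: map each observed port to a dense array index instead of computing dnIdx = xferPort - 1. A standalone sketch — indexFor is a hypothetical helper for illustration, not part of NNThroughputBenchmark:

```java
import java.util.HashMap;
import java.util.Map;

public class PortIndexSketch {
    // Assign dense indices 0, 1, 2, ... to transfer ports in order of
    // first appearance, so the index always fits the datanodes array.
    static int indexFor(Map<Integer, Integer> portToIdx, int xferPort) {
        Integer idx = portToIdx.get(xferPort);
        if (idx == null) {
            idx = portToIdx.size();      // next free slot
            portToIdx.put(xferPort, idx);
        }
        return idx;
    }

    public static void main(String[] args) {
        Map<Integer, Integer> portToIdx = new HashMap<>();
        int a = indexFor(portToIdx, 50010); // first DN
        int b = indexFor(portToIdx, 37021); // random ephemeral port
        int c = indexFor(portToIdx, 50010); // same DN seen again
        System.out.println(a + " " + b + " " + c); // prints "0 1 0"
    }
}
```

The dense index is bounded by the number of distinct datanodes actually seen, so it can be used safely against an array sized by "-datanodes" or "-threads".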
[jira] [Commented] (HDFS-14545) RBF: Router should support GetUserMappingsProtocol
[ https://issues.apache.org/jira/browse/HDFS-14545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16859840#comment-16859840 ] Shen Yinjie commented on HDFS-14545: Thanks [~ayushtkn] for implementing this! Patch V10 LGTM. > RBF: Router should support GetUserMappingsProtocol > -- > > Key: HDFS-14545 > URL: https://issues.apache.org/jira/browse/HDFS-14545 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Íñigo Goiri >Assignee: Ayush Saxena >Priority: Major > Attachments: HDFS-14545-HDFS-13891-01.patch, > HDFS-14545-HDFS-13891-02.patch, HDFS-14545-HDFS-13891-03.patch, > HDFS-14545-HDFS-13891-04.patch, HDFS-14545-HDFS-13891-05.patch, > HDFS-14545-HDFS-13891-06.patch, HDFS-14545-HDFS-13891-07.patch, > HDFS-14545-HDFS-13891-08.patch, HDFS-14545-HDFS-13891-09.patch, > HDFS-14545-HDFS-13891-10.patch, HDFS-14545-HDFS-13891.000.patch > > > We should be able to check the groups for a user from a Router.
[jira] [Commented] (HDFS-14512) ONE_SSD policy will be violated while write data with DistributedFileSystem.create(....favoredNodes)
[ https://issues.apache.org/jira/browse/HDFS-14512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16848561#comment-16848561 ] Shen Yinjie commented on HDFS-14512: [~ayushtkn] and [~jojochuang] Thanks for your comments! Yes, I had two disks, one for DISK and one for SSD. It is used by HBase to choose the best regionservers. > ONE_SSD policy will be violated while write data with > DistributedFileSystem.create(favoredNodes) > > > Key: HDFS-14512 > URL: https://issues.apache.org/jira/browse/HDFS-14512 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Shen Yinjie >Priority: Major > > Reproduce steps: > 1. setStoragePolicy ONE_SSD for a path A; > 2. a client writes data to path A by > DistributedFileSystem.create(...favoredNodes), passing the parameter > favoredNodes. > Then, the three replicas of a file in this path will be located on 2 SSD and > 1 DISK, which violates the ONE_SSD policy. > I am not sure if this is clear.
[jira] [Created] (HDFS-14512) ONE_SSD policy will be violated while write data with DistributedFileSystem.create(....favoredNodes)
Shen Yinjie created HDFS-14512: -- Summary: ONE_SSD policy will be violated while write data with DistributedFileSystem.create(favoredNodes) Key: HDFS-14512 URL: https://issues.apache.org/jira/browse/HDFS-14512 Project: Hadoop HDFS Issue Type: Bug Reporter: Shen Yinjie Reproduce steps: 1. setStoragePolicy ONE_SSD for a path A; 2. a client writes data to path A by DistributedFileSystem.create(...favoredNodes), passing the parameter favoredNodes. Then, the three replicas of a file in this path will be located on 2 SSD and 1 DISK, which violates the ONE_SSD policy. I am not sure if this is clear.
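The ONE_SSD invariant described in the report — exactly one replica on SSD, the rest on DISK — can be written as a small standalone check. This is an illustrative sketch under that assumption; OneSsdSketch and satisfiesOneSsd are hypothetical names, not HDFS code:

```java
import java.util.Arrays;
import java.util.List;

public class OneSsdSketch {
    enum StorageType { SSD, DISK }

    // ONE_SSD invariant: exactly one replica on SSD, regardless of how
    // favoredNodes influenced placement.
    static boolean satisfiesOneSsd(List<StorageType> replicas) {
        long ssd = replicas.stream()
                .filter(t -> t == StorageType.SSD)
                .count();
        return ssd == 1;
    }

    public static void main(String[] args) {
        // Correct placement: 1 SSD + 2 DISK.
        System.out.println(satisfiesOneSsd(Arrays.asList(
                StorageType.SSD, StorageType.DISK, StorageType.DISK)));
        // The placement reported in this issue: 2 SSD + 1 DISK.
        System.out.println(satisfiesOneSsd(Arrays.asList(
                StorageType.SSD, StorageType.SSD, StorageType.DISK)));
    }
}
```

The 2 SSD + 1 DISK layout the reporter observed fails this check, which is exactly the policy violation being described.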
[jira] [Comment Edited] (HDFS-9913) DispCp doesn't use Trash with -delete option
[ https://issues.apache.org/jira/browse/HDFS-9913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16840282#comment-16840282 ] Shen Yinjie edited comment on HDFS-9913 at 5/16/19 2:06 AM: Agree with you, [~ste...@apache.org], currently the docs and code are not consistent, and the docs may confuse users. We should update the docs or change the code to support "useTrash" as [~szetszwo] suggested in MAPREDUCE-6597. [~jzhuge] Are you still working on this issue? I'd like to take over if you don't mind. was (Author: shenyinjie): [~ste...@apache.org] Agree with you, currently the docs and code are not consistent. We should update the docs or change the code as [~szetszwo] suggested in MAPREDUCE-6597. [~jzhuge] Are you still working on this issue? I'd like to take over if you don't mind. > DispCp doesn't use Trash with -delete option > > > Key: HDFS-9913 > URL: https://issues.apache.org/jira/browse/HDFS-9913 > Project: Hadoop HDFS > Issue Type: Bug > Components: distcp >Affects Versions: 2.6.0 >Reporter: Konstantin Shaposhnikov >Assignee: John Zhuge >Priority: Major > > Documentation for DistCp -delete option says > ([http://hadoop.apache.org/docs/current/hadoop-distcp/DistCp.html]): > | The deletion is done by FS Shell. So the trash will be used, if it is > enable. > However it seems to be no longer the case. The latest source code > (https://svn.apache.org/repos/asf/hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/CopyCommitter.java) > uses `FileSystem.delete` and trash options seems to be not applied.
[jira] [Updated] (HDFS-14447) RBF: Router should support RefreshUserMappingsProtocol
[ https://issues.apache.org/jira/browse/HDFS-14447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shen Yinjie updated HDFS-14447: --- Attachment: HDFS-14447-HDFS-13891.09.patch > RBF: Router should support RefreshUserMappingsProtocol > -- > > Key: HDFS-14447 > URL: https://issues.apache.org/jira/browse/HDFS-14447 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: rbf >Affects Versions: 3.1.0 >Reporter: Shen Yinjie >Assignee: Shen Yinjie >Priority: Major > Fix For: HDFS-13891 > > Attachments: HDFS-14447-HDFS-13891.01.patch, > HDFS-14447-HDFS-13891.02.patch, HDFS-14447-HDFS-13891.03.patch, > HDFS-14447-HDFS-13891.04.patch, HDFS-14447-HDFS-13891.05.patch, > HDFS-14447-HDFS-13891.06.patch, HDFS-14447-HDFS-13891.07.patch, > HDFS-14447-HDFS-13891.08.patch, HDFS-14447-HDFS-13891.09.patch, error.png > > > HDFS with RBF > We configure hadoop.proxyuser.xx.yy, then execute hdfs dfsadmin > -Dfs.defaultFS=hdfs://router-fed -refreshSuperUserGroupsConfiguration, > it throws "Unknown protocol: ...RefreshUserMappingProtocol". > RouterAdminServer should support RefreshUserMappingsProtocol, or a proxyuser > client would be refused to impersonate. As shown in the screenshot
[jira] [Updated] (HDFS-14447) RBF: Router should support RefreshUserMappingsProtocol
[ https://issues.apache.org/jira/browse/HDFS-14447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shen Yinjie updated HDFS-14447: --- Attachment: HDFS-14447-HDFS-13891.08.patch > RBF: Router should support RefreshUserMappingsProtocol > -- > > Key: HDFS-14447 > URL: https://issues.apache.org/jira/browse/HDFS-14447 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: rbf >Affects Versions: 3.1.0 >Reporter: Shen Yinjie >Assignee: Shen Yinjie >Priority: Major > Fix For: HDFS-13891 > > Attachments: HDFS-14447-HDFS-13891.01.patch, > HDFS-14447-HDFS-13891.02.patch, HDFS-14447-HDFS-13891.03.patch, > HDFS-14447-HDFS-13891.04.patch, HDFS-14447-HDFS-13891.05.patch, > HDFS-14447-HDFS-13891.06.patch, HDFS-14447-HDFS-13891.07.patch, > HDFS-14447-HDFS-13891.08.patch, error.png > > > HDFS with RBF > We configure hadoop.proxyuser.xx.yy, then execute hdfs dfsadmin > -Dfs.defaultFS=hdfs://router-fed -refreshSuperUserGroupsConfiguration, > it throws "Unknown protocol: ...RefreshUserMappingProtocol". > RouterAdminServer should support RefreshUserMappingsProtocol, or a proxyuser > client would be refused to impersonate. As shown in the screenshot
[jira] [Commented] (HDFS-9913) DispCp doesn't use Trash with -delete option
[ https://issues.apache.org/jira/browse/HDFS-9913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16840282#comment-16840282 ] Shen Yinjie commented on HDFS-9913: --- [~ste...@apache.org] Agree with you, currently the docs and code are not consistent. We should update the docs or change the code as [~szetszwo] suggested in MAPREDUCE-6597. [~jzhuge] Are you still working on this issue? I'd like to take over if you don't mind. > DispCp doesn't use Trash with -delete option > > > Key: HDFS-9913 > URL: https://issues.apache.org/jira/browse/HDFS-9913 > Project: Hadoop HDFS > Issue Type: Bug > Components: distcp >Affects Versions: 2.6.0 >Reporter: Konstantin Shaposhnikov >Assignee: John Zhuge >Priority: Major > > Documentation for DistCp -delete option says > ([http://hadoop.apache.org/docs/current/hadoop-distcp/DistCp.html]): > | The deletion is done by FS Shell. So the trash will be used, if it is > enable. > However it seems to be no longer the case. The latest source code > (https://svn.apache.org/repos/asf/hadoop/common/trunk/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/CopyCommitter.java) > uses `FileSystem.delete` and trash options seems to be not applied.
[jira] [Updated] (HDFS-14447) RBF: Router should support RefreshUserMappingsProtocol
[ https://issues.apache.org/jira/browse/HDFS-14447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shen Yinjie updated HDFS-14447: --- Attachment: HDFS-14447-HDFS-13891.07.patch > RBF: Router should support RefreshUserMappingsProtocol > -- > > Key: HDFS-14447 > URL: https://issues.apache.org/jira/browse/HDFS-14447 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: rbf >Affects Versions: 3.1.0 >Reporter: Shen Yinjie >Assignee: Shen Yinjie >Priority: Major > Fix For: HDFS-13891 > > Attachments: HDFS-14447-HDFS-13891.01.patch, > HDFS-14447-HDFS-13891.02.patch, HDFS-14447-HDFS-13891.03.patch, > HDFS-14447-HDFS-13891.04.patch, HDFS-14447-HDFS-13891.05.patch, > HDFS-14447-HDFS-13891.06.patch, HDFS-14447-HDFS-13891.07.patch, error.png > > > HDFS with RBF > We configure hadoop.proxyuser.xx.yy, then execute hdfs dfsadmin > -Dfs.defaultFS=hdfs://router-fed -refreshSuperUserGroupsConfiguration, > it throws "Unknown protocol: ...RefreshUserMappingProtocol". > RouterAdminServer should support RefreshUserMappingsProtocol, or a proxyuser > client would be refused to impersonate. As shown in the screenshot
[jira] [Commented] (HDFS-14447) RBF: Router should support RefreshUserMappingsProtocol
[ https://issues.apache.org/jira/browse/HDFS-14447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16839933#comment-16839933 ] Shen Yinjie commented on HDFS-14447: Thanks very much for your review! [~elgoiri] [~lukmajercak], I have updated the v7 patch per your comments. > RBF: Router should support RefreshUserMappingsProtocol > -- > > Key: HDFS-14447 > URL: https://issues.apache.org/jira/browse/HDFS-14447 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: rbf >Affects Versions: 3.1.0 >Reporter: Shen Yinjie >Assignee: Shen Yinjie >Priority: Major > Fix For: HDFS-13891 > > Attachments: HDFS-14447-HDFS-13891.01.patch, > HDFS-14447-HDFS-13891.02.patch, HDFS-14447-HDFS-13891.03.patch, > HDFS-14447-HDFS-13891.04.patch, HDFS-14447-HDFS-13891.05.patch, > HDFS-14447-HDFS-13891.06.patch, HDFS-14447-HDFS-13891.07.patch, error.png > > > HDFS with RBF > We configure hadoop.proxyuser.xx.yy, then execute hdfs dfsadmin > -Dfs.defaultFS=hdfs://router-fed -refreshSuperUserGroupsConfiguration, > it throws "Unknown protocol: ...RefreshUserMappingProtocol". > RouterAdminServer should support RefreshUserMappingsProtocol, or a proxyuser > client would be refused to impersonate. As shown in the screenshot
[jira] [Commented] (HDFS-14447) RBF: Router should support RefreshUserMappingsProtocol
[ https://issues.apache.org/jira/browse/HDFS-14447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16834315#comment-16834315 ] Shen Yinjie commented on HDFS-14447: [~elgoiri] [~crh] Thank you for the comments, could you please review and help push this forward. > RBF: Router should support RefreshUserMappingsProtocol > -- > > Key: HDFS-14447 > URL: https://issues.apache.org/jira/browse/HDFS-14447 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: rbf >Affects Versions: 3.1.0 >Reporter: Shen Yinjie >Assignee: Shen Yinjie >Priority: Major > Fix For: HDFS-13891 > > Attachments: HDFS-14447-HDFS-13891.01.patch, > HDFS-14447-HDFS-13891.02.patch, HDFS-14447-HDFS-13891.03.patch, > HDFS-14447-HDFS-13891.04.patch, HDFS-14447-HDFS-13891.05.patch, > HDFS-14447-HDFS-13891.06.patch, error.png > > > HDFS with RBF > We configure hadoop.proxyuser.xx.yy, then execute hdfs dfsadmin > -Dfs.defaultFS=hdfs://router-fed -refreshSuperUserGroupsConfiguration, > it throws "Unknown protocol: ...RefreshUserMappingProtocol". > RouterAdminServer should support RefreshUserMappingsProtocol, or a proxyuser > client would be refused to impersonate. As shown in the screenshot
[jira] [Commented] (HDFS-14447) RBF: Router should support RefreshUserMappingsProtocol
[ https://issues.apache.org/jira/browse/HDFS-14447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16827858#comment-16827858 ] Shen Yinjie commented on HDFS-14447: The failed unit test seems unrelated. > RBF: Router should support RefreshUserMappingsProtocol > -- > > Key: HDFS-14447 > URL: https://issues.apache.org/jira/browse/HDFS-14447 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: rbf >Affects Versions: 3.1.0 >Reporter: Shen Yinjie >Assignee: Shen Yinjie >Priority: Major > Fix For: HDFS-13891 > > Attachments: HDFS-14447-HDFS-13891.01.patch, > HDFS-14447-HDFS-13891.02.patch, HDFS-14447-HDFS-13891.03.patch, > HDFS-14447-HDFS-13891.04.patch, HDFS-14447-HDFS-13891.05.patch, > HDFS-14447-HDFS-13891.06.patch, error.png > > > HDFS with RBF > We configure hadoop.proxyuser.xx.yy, then execute hdfs dfsadmin > -Dfs.defaultFS=hdfs://router-fed -refreshSuperUserGroupsConfiguration, > it throws "Unknown protocol: ...RefreshUserMappingProtocol". > RouterAdminServer should support RefreshUserMappingsProtocol, or a proxyuser > client would be refused to impersonate. As shown in the screenshot
[jira] [Updated] (HDFS-14447) RBF: Router should support RefreshUserMappingsProtocol
[ https://issues.apache.org/jira/browse/HDFS-14447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shen Yinjie updated HDFS-14447: --- Attachment: HDFS-14447-HDFS-13891.06.patch > RBF: Router should support RefreshUserMappingsProtocol > -- > > Key: HDFS-14447 > URL: https://issues.apache.org/jira/browse/HDFS-14447 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: rbf >Affects Versions: 3.1.0 >Reporter: Shen Yinjie >Assignee: Shen Yinjie >Priority: Major > Fix For: HDFS-13891 > > Attachments: HDFS-14447-HDFS-13891.01.patch, > HDFS-14447-HDFS-13891.02.patch, HDFS-14447-HDFS-13891.03.patch, > HDFS-14447-HDFS-13891.04.patch, HDFS-14447-HDFS-13891.05.patch, > HDFS-14447-HDFS-13891.06.patch, error.png > > > HDFS with RBF > We configure hadoop.proxyuser.xx.yy, then execute hdfs dfsadmin > -Dfs.defaultFS=hdfs://router-fed -refreshSuperUserGroupsConfiguration, > it throws "Unknown protocol: ...RefreshUserMappingProtocol". > RouterAdminServer should support RefreshUserMappingsProtocol, or a proxyuser > client would be refused to impersonate. As shown in the screenshot
[jira] [Commented] (HDFS-14447) RBF: Router should support RefreshUserMappingsProtocol
[ https://issues.apache.org/jira/browse/HDFS-14447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16827818#comment-16827818 ] Shen Yinjie commented on HDFS-14447: Fixed checkstyle. > RBF: Router should support RefreshUserMappingsProtocol > -- > > Key: HDFS-14447 > URL: https://issues.apache.org/jira/browse/HDFS-14447 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: rbf >Affects Versions: 3.1.0 >Reporter: Shen Yinjie >Assignee: Shen Yinjie >Priority: Major > Fix For: HDFS-13891 > > Attachments: HDFS-14447-HDFS-13891.01.patch, > HDFS-14447-HDFS-13891.02.patch, HDFS-14447-HDFS-13891.03.patch, > HDFS-14447-HDFS-13891.04.patch, HDFS-14447-HDFS-13891.05.patch, > HDFS-14447-HDFS-13891.06.patch, error.png > > > HDFS with RBF > We configure hadoop.proxyuser.xx.yy, then execute hdfs dfsadmin > -Dfs.defaultFS=hdfs://router-fed -refreshSuperUserGroupsConfiguration, > it throws "Unknown protocol: ...RefreshUserMappingProtocol". > RouterAdminServer should support RefreshUserMappingsProtocol, or a proxyuser > client would be refused to impersonate. As shown in the screenshot
[jira] [Commented] (HDFS-14447) RBF: Router should support RefreshUserMappingsProtocol
[ https://issues.apache.org/jira/browse/HDFS-14447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16827783#comment-16827783 ] Shen Yinjie commented on HDFS-14447: Sorry for patch V4; uploaded patch V5. > RBF: Router should support RefreshUserMappingsProtocol > -- > > Key: HDFS-14447 > URL: https://issues.apache.org/jira/browse/HDFS-14447 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: rbf >Affects Versions: 3.1.0 >Reporter: Shen Yinjie >Assignee: Shen Yinjie >Priority: Major > Fix For: HDFS-13891 > > Attachments: HDFS-14447-HDFS-13891.01.patch, > HDFS-14447-HDFS-13891.02.patch, HDFS-14447-HDFS-13891.03.patch, > HDFS-14447-HDFS-13891.04.patch, HDFS-14447-HDFS-13891.05.patch, error.png > > > HDFS with RBF > We configure hadoop.proxyuser.xx.yy, then execute hdfs dfsadmin > -Dfs.defaultFS=hdfs://router-fed -refreshSuperUserGroupsConfiguration, > it throws "Unknown protocol: ...RefreshUserMappingProtocol". > RouterAdminServer should support RefreshUserMappingsProtocol, or a proxyuser > client would be refused to impersonate. As shown in the screenshot
[jira] [Updated] (HDFS-14447) RBF: Router should support RefreshUserMappingsProtocol
[ https://issues.apache.org/jira/browse/HDFS-14447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shen Yinjie updated HDFS-14447: --- Attachment: HDFS-14447-HDFS-13891.05.patch > RBF: Router should support RefreshUserMappingsProtocol > -- > > Key: HDFS-14447 > URL: https://issues.apache.org/jira/browse/HDFS-14447 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: rbf >Affects Versions: 3.1.0 >Reporter: Shen Yinjie >Assignee: Shen Yinjie >Priority: Major > Fix For: HDFS-13891 > > Attachments: HDFS-14447-HDFS-13891.01.patch, > HDFS-14447-HDFS-13891.02.patch, HDFS-14447-HDFS-13891.03.patch, > HDFS-14447-HDFS-13891.04.patch, HDFS-14447-HDFS-13891.05.patch, error.png > > > HDFS with RBF > We configure hadoop.proxyuser.xx.yy, then execute hdfs dfsadmin > -Dfs.defaultFS=hdfs://router-fed -refreshSuperUserGroupsConfiguration, > it throws "Unknown protocol: ...RefreshUserMappingProtocol". > RouterAdminServer should support RefreshUserMappingsProtocol, or a proxyuser > client would be refused to impersonate. As shown in the screenshot
[jira] [Updated] (HDFS-14447) RBF: Router should support RefreshUserMappingsProtocol
[ https://issues.apache.org/jira/browse/HDFS-14447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shen Yinjie updated HDFS-14447: --- Summary: RBF: Router should support RefreshUserMappingsProtocol (was: RBF: RouterAdminServer should support RefreshUserMappingsProtocol) > RBF: Router should support RefreshUserMappingsProtocol > -- > > Key: HDFS-14447 > URL: https://issues.apache.org/jira/browse/HDFS-14447 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: rbf >Affects Versions: 3.1.0 >Reporter: Shen Yinjie >Assignee: Shen Yinjie >Priority: Major > Fix For: HDFS-13891 > > Attachments: HDFS-14447-HDFS-13891.01.patch, > HDFS-14447-HDFS-13891.02.patch, HDFS-14447-HDFS-13891.03.patch, > HDFS-14447-HDFS-13891.04.patch, error.png > > > HDFS with RBF > We configure hadoop.proxyuser.xx.yy, then execute hdfs dfsadmin > -Dfs.defaultFS=hdfs://router-fed -refreshSuperUserGroupsConfiguration, > it throws "Unknown protocol: ...RefreshUserMappingProtocol". > RouterAdminServer should support RefreshUserMappingsProtocol, or a proxyuser > client would be refused to impersonate. As shown in the screenshot
[jira] [Updated] (HDFS-14447) RBF: RouterAdminServer should support RefreshUserMappingsProtocol
[ https://issues.apache.org/jira/browse/HDFS-14447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shen Yinjie updated HDFS-14447: --- Attachment: HDFS-14447-HDFS-13891.04.patch > RBF: RouterAdminServer should support RefreshUserMappingsProtocol > - > > Key: HDFS-14447 > URL: https://issues.apache.org/jira/browse/HDFS-14447 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: rbf >Affects Versions: 3.1.0 >Reporter: Shen Yinjie >Assignee: Shen Yinjie >Priority: Major > Fix For: HDFS-13891 > > Attachments: HDFS-14447-HDFS-13891.01.patch, > HDFS-14447-HDFS-13891.02.patch, HDFS-14447-HDFS-13891.03.patch, > HDFS-14447-HDFS-13891.04.patch, error.png > > > HDFS with RBF > We configure hadoop.proxyuser.xx.yy ,then execute hdfs dfsadmin > -Dfs.defaultFS=hdfs://router-fed -refreshSuperUserGroupsConfiguration, > it throws "Unknown protocol: ...RefreshUserMappingProtocol". > RouterAdminServer should support RefreshUserMappingsProtocol , or a proxyuser > client would be refused to impersonate.As shown in the screenshot -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-14447) RBF: RouterAdminServer should support RefreshUserMappingsProtocol
[ https://issues.apache.org/jira/browse/HDFS-14447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16827501#comment-16827501 ] Shen Yinjie commented on HDFS-14447: Appreciate your patience! [~elgoiri] I will update the patch as you mentioned soon. > RBF: RouterAdminServer should support RefreshUserMappingsProtocol > - > > Key: HDFS-14447 > URL: https://issues.apache.org/jira/browse/HDFS-14447 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: rbf >Affects Versions: 3.1.0 >Reporter: Shen Yinjie >Assignee: Shen Yinjie >Priority: Major > Fix For: HDFS-13891 > > Attachments: HDFS-14447-HDFS-13891.01.patch, > HDFS-14447-HDFS-13891.02.patch, HDFS-14447-HDFS-13891.03.patch, error.png > > > HDFS with RBF > We configure hadoop.proxyuser.xx.yy, then execute hdfs dfsadmin > -Dfs.defaultFS=hdfs://router-fed -refreshSuperUserGroupsConfiguration, > it throws "Unknown protocol: ...RefreshUserMappingProtocol". > RouterAdminServer should support RefreshUserMappingsProtocol, or a proxyuser > client would be refused when impersonating. As shown in the screenshot
[jira] [Updated] (HDFS-14447) RBF: RouterAdminServer should support RefreshUserMappingsProtocol
[ https://issues.apache.org/jira/browse/HDFS-14447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shen Yinjie updated HDFS-14447: --- Attachment: HDFS-14447-HDFS-13891.03.patch > RBF: RouterAdminServer should support RefreshUserMappingsProtocol > - > > Key: HDFS-14447 > URL: https://issues.apache.org/jira/browse/HDFS-14447 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: rbf >Affects Versions: 3.1.0 >Reporter: Shen Yinjie >Assignee: Shen Yinjie >Priority: Major > Fix For: HDFS-13891 > > Attachments: HDFS-14447-HDFS-13891.01.patch, > HDFS-14447-HDFS-13891.02.patch, HDFS-14447-HDFS-13891.03.patch, error.png > > > HDFS with RBF > We configure hadoop.proxyuser.xx.yy ,then execute hdfs dfsadmin > -Dfs.defaultFS=hdfs://router-fed -refreshSuperUserGroupsConfiguration, > it throws "Unknown protocol: ...RefreshUserMappingProtocol". > RouterAdminServer should support RefreshUserMappingsProtocol , or a proxyuser > client would be refused to impersonate.As shown in the screenshot -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-14447) RBF: RouterAdminServer should support RefreshUserMappingsProtocol
[ https://issues.apache.org/jira/browse/HDFS-14447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16825872#comment-16825872 ] Shen Yinjie commented on HDFS-14447: Unit tests passed in my environment... > RBF: RouterAdminServer should support RefreshUserMappingsProtocol > - > > Key: HDFS-14447 > URL: https://issues.apache.org/jira/browse/HDFS-14447 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: rbf >Affects Versions: 3.1.0 >Reporter: Shen Yinjie >Assignee: Shen Yinjie >Priority: Major > Fix For: HDFS-13891 > > Attachments: HDFS-14447-HDFS-13891.01.patch, > HDFS-14447-HDFS-13891.02.patch, error.png > > > HDFS with RBF > We configure hadoop.proxyuser.xx.yy, then execute hdfs dfsadmin > -Dfs.defaultFS=hdfs://router-fed -refreshSuperUserGroupsConfiguration, > it throws "Unknown protocol: ...RefreshUserMappingProtocol". > RouterAdminServer should support RefreshUserMappingsProtocol, or a proxyuser > client would be refused when impersonating. As shown in the screenshot
[jira] [Commented] (HDFS-14447) RBF: RouterAdminServer should support RefreshUserMappingsProtocol
[ https://issues.apache.org/jira/browse/HDFS-14447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16825665#comment-16825665 ] Shen Yinjie commented on HDFS-14447: Thanks [~hfyang20071] [~elgoiri] for the review. I uploaded a v2 patch that fixes the checkstyle issue and adds a log message and a unit test. > RBF: RouterAdminServer should support RefreshUserMappingsProtocol > - > > Key: HDFS-14447 > URL: https://issues.apache.org/jira/browse/HDFS-14447 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: rbf >Affects Versions: 3.1.0 >Reporter: Shen Yinjie >Assignee: Shen Yinjie >Priority: Major > Fix For: HDFS-13891 > > Attachments: HDFS-14447-HDFS-13891.01.patch, > HDFS-14447-HDFS-13891.02.patch, error.png > > > HDFS with RBF > We configure hadoop.proxyuser.xx.yy, then execute hdfs dfsadmin > -Dfs.defaultFS=hdfs://router-fed -refreshSuperUserGroupsConfiguration, > it throws "Unknown protocol: ...RefreshUserMappingProtocol". > RouterAdminServer should support RefreshUserMappingsProtocol, or a proxyuser > client would be refused when impersonating. As shown in the screenshot
[jira] [Updated] (HDFS-14447) RBF: RouterAdminServer should support RefreshUserMappingsProtocol
[ https://issues.apache.org/jira/browse/HDFS-14447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shen Yinjie updated HDFS-14447: --- Attachment: HDFS-14447-HDFS-13891.02.patch > RBF: RouterAdminServer should support RefreshUserMappingsProtocol > - > > Key: HDFS-14447 > URL: https://issues.apache.org/jira/browse/HDFS-14447 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: rbf >Affects Versions: 3.1.0 >Reporter: Shen Yinjie >Assignee: Shen Yinjie >Priority: Major > Fix For: HDFS-13891 > > Attachments: HDFS-14447-HDFS-13891.01.patch, > HDFS-14447-HDFS-13891.02.patch, error.png > > > HDFS with RBF > We configure hadoop.proxyuser.xx.yy ,then execute hdfs dfsadmin > -Dfs.defaultFS=hdfs://router-fed -refreshSuperUserGroupsConfiguration, > it throws "Unknown protocol: ...RefreshUserMappingProtocol". > RouterAdminServer should support RefreshUserMappingsProtocol , or a proxyuser > client would be refused to impersonate.As shown in the screenshot -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-14447) RBF: RouterAdminServer should support RefreshUserMappingsProtocol
[ https://issues.apache.org/jira/browse/HDFS-14447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shen Yinjie updated HDFS-14447: --- Fix Version/s: HDFS-13891 Status: Patch Available (was: Open) > RBF: RouterAdminServer should support RefreshUserMappingsProtocol > - > > Key: HDFS-14447 > URL: https://issues.apache.org/jira/browse/HDFS-14447 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: rbf >Affects Versions: 3.1.0 >Reporter: Shen Yinjie >Assignee: Shen Yinjie >Priority: Major > Fix For: HDFS-13891 > > Attachments: HDFS-14447-HDFS-13891.01.patch, error.png > > > HDFS with RBF > We configure hadoop.proxyuser.xx.yy ,then execute hdfs dfsadmin > -Dfs.defaultFS=hdfs://router-fed -refreshSuperUserGroupsConfiguration, > it throws "Unknown protocol: ...RefreshUserMappingProtocol". > RouterAdminServer should support RefreshUserMappingsProtocol , or a proxyuser > client would be refused to impersonate.As shown in the screenshot -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Commented] (HDFS-14447) RBF: RouterAdminServer should support RefreshUserMappingsProtocol
[ https://issues.apache.org/jira/browse/HDFS-14447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16823966#comment-16823966 ] Shen Yinjie commented on HDFS-14447: [~elgoiri] Thank you for the quick reply. I'll upload a formatted patch ASAP. > RBF: RouterAdminServer should support RefreshUserMappingsProtocol > - > > Key: HDFS-14447 > URL: https://issues.apache.org/jira/browse/HDFS-14447 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: rbf >Affects Versions: 3.1.0 >Reporter: Shen Yinjie >Assignee: Shen Yinjie >Priority: Major > Attachments: HDFS-14447-HDFS-13891.01.patch, error.png > > > HDFS with RBF > We configure hadoop.proxyuser.xx.yy, then execute hdfs dfsadmin > -Dfs.defaultFS=hdfs://router-fed -refreshSuperUserGroupsConfiguration, > it throws "Unknown protocol: ...RefreshUserMappingProtocol". > RouterAdminServer should support RefreshUserMappingsProtocol, or a proxyuser > client would be refused when impersonating. As shown in the screenshot
[jira] [Comment Edited] (HDFS-14447) RBF: RouterAdminServer should support RefreshUserMappingsProtocol
[ https://issues.apache.org/jira/browse/HDFS-14447?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16822896#comment-16822896 ] Shen Yinjie edited comment on HDFS-14447 at 4/23/19 10:57 AM: -- I uploaded a simple patch which makes a minor change in RouterRPCServer, so that we can use "hdfs dfsadmin -Dfs.defaultFs=hdfs://{router-union} -refreshSuperUserGroupsConfiguration" to refresh proxyuser info for RBF Routers. Alternatively, I also considered adding a command to the dfsrouteradmin shell to run refreshSuperUserGroupsConfiguration independently for routers. Would you please have a look at this JIRA? [~linyiqun] [~eddyxu] was (Author: shenyinjie): I upload a simple patch which made minor change in RouterAdminServer, so that we can use "hdfs dfsadmin -Dfs.defaultFs=hdfs://{router-union} -refreshSuperUserGroupsConfiguration" to refresh proxyuser info for RBF:Routers. Or, I also considered to add a command in dfsrouteradmin shell to refreshSuperUserGroupsConfiguration independently for routers. Would you please have a look at this jira?? [~linyiqun] [~eddyxu] > RBF: RouterAdminServer should support RefreshUserMappingsProtocol > - > > Key: HDFS-14447 > URL: https://issues.apache.org/jira/browse/HDFS-14447 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: rbf >Affects Versions: 3.1.0 >Reporter: Shen Yinjie >Assignee: Shen Yinjie >Priority: Major > Attachments: HDFS-14447-HDFS-13891.01.patch, error.png > > > HDFS with RBF > We configure hadoop.proxyuser.xx.yy, then execute hdfs dfsadmin > -Dfs.defaultFS=hdfs://router-fed -refreshSuperUserGroupsConfiguration, > it throws "Unknown protocol: ...RefreshUserMappingProtocol". > RouterAdminServer should support RefreshUserMappingsProtocol, or a proxyuser > client would be refused when impersonating. As shown in the screenshot
[jira] [Updated] (HDFS-14447) RBF: RouterAdminServer should support RefreshUserMappingsProtocol
[ https://issues.apache.org/jira/browse/HDFS-14447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shen Yinjie updated HDFS-14447: --- Attachment: (was: HDFS-14447_1.patch) > RBF: RouterAdminServer should support RefreshUserMappingsProtocol > - > > Key: HDFS-14447 > URL: https://issues.apache.org/jira/browse/HDFS-14447 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: rbf >Affects Versions: 3.1.0 >Reporter: Shen Yinjie >Assignee: Shen Yinjie >Priority: Major > Attachments: HDFS-14447-HDFS-13891.01.patch, error.png > > > HDFS with RBF > We configure hadoop.proxyuser.xx.yy ,then execute hdfs dfsadmin > -Dfs.defaultFS=hdfs://router-fed -refreshSuperUserGroupsConfiguration, > it throws "Unknown protocol: ...RefreshUserMappingProtocol". > RouterAdminServer should support RefreshUserMappingsProtocol , or a proxyuser > client would be refused to impersonate.As shown in the screenshot -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-14447) RBF: RouterAdminServer should support RefreshUserMappingsProtocol
[ https://issues.apache.org/jira/browse/HDFS-14447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shen Yinjie updated HDFS-14447: --- Attachment: HDFS-14447-HDFS-13891.01.patch > RBF: RouterAdminServer should support RefreshUserMappingsProtocol > - > > Key: HDFS-14447 > URL: https://issues.apache.org/jira/browse/HDFS-14447 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: rbf >Affects Versions: 3.1.0 >Reporter: Shen Yinjie >Assignee: Shen Yinjie >Priority: Major > Attachments: HDFS-14447-HDFS-13891.01.patch, error.png > > > HDFS with RBF > We configure hadoop.proxyuser.xx.yy ,then execute hdfs dfsadmin > -Dfs.defaultFS=hdfs://router-fed -refreshSuperUserGroupsConfiguration, > it throws "Unknown protocol: ...RefreshUserMappingProtocol". > RouterAdminServer should support RefreshUserMappingsProtocol , or a proxyuser > client would be refused to impersonate.As shown in the screenshot -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-14447) RBF:RouterAdminServer should support RefreshUserMappingsProtocol
[ https://issues.apache.org/jira/browse/HDFS-14447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shen Yinjie updated HDFS-14447: --- Status: Open (was: Patch Available) > RBF:RouterAdminServer should support RefreshUserMappingsProtocol > > > Key: HDFS-14447 > URL: https://issues.apache.org/jira/browse/HDFS-14447 > Project: Hadoop HDFS > Issue Type: Improvement > Components: rbf >Affects Versions: 3.1.0 >Reporter: Shen Yinjie >Assignee: Shen Yinjie >Priority: Major > Attachments: HDFS-14447_1.patch, error.png > > > HDFS with RBF > We configure hadoop.proxyuser.xx.yy ,then execute hdfs dfsadmin > -Dfs.defaultFS=hdfs://router-fed -refreshSuperUserGroupsConfiguration, > it throws "Unknown protocol: ...RefreshUserMappingProtocol". > RouterAdminServer should support RefreshUserMappingsProtocol , or a proxyuser > client would be refused to impersonate.As shown in the screenshot -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-14447) RBF:RouterAdminServer should support RefreshUserMappingsProtocol
[ https://issues.apache.org/jira/browse/HDFS-14447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shen Yinjie updated HDFS-14447: --- Status: Patch Available (was: Open) I uploaded a simple patch which makes a minor change in RouterAdminServer, so that we can use "hdfs dfsadmin -Dfs.defaultFs=hdfs://{router-union} -refreshSuperUserGroupsConfiguration" to refresh proxyuser info for RBF Routers. Alternatively, I also considered adding a command to the dfsrouteradmin shell to run refreshSuperUserGroupsConfiguration independently for routers. Would you please have a look at this JIRA? [~linyiqun] [~eddyxu] > RBF:RouterAdminServer should support RefreshUserMappingsProtocol > > > Key: HDFS-14447 > URL: https://issues.apache.org/jira/browse/HDFS-14447 > Project: Hadoop HDFS > Issue Type: Improvement > Components: rbf >Affects Versions: 3.1.0 >Reporter: Shen Yinjie >Assignee: Shen Yinjie >Priority: Major > Attachments: HDFS-14447_1.patch, error.png > > > HDFS with RBF > We configure hadoop.proxyuser.xx.yy, then execute hdfs dfsadmin > -Dfs.defaultFS=hdfs://router-fed -refreshSuperUserGroupsConfiguration, > it throws "Unknown protocol: ...RefreshUserMappingProtocol". > RouterAdminServer should support RefreshUserMappingsProtocol, or a proxyuser > client would be refused when impersonating. As shown in the screenshot
[jira] [Assigned] (HDFS-14447) RBF:RouterAdminServer should support RefreshUserMappingsProtocol
[ https://issues.apache.org/jira/browse/HDFS-14447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shen Yinjie reassigned HDFS-14447: -- Assignee: Shen Yinjie > RBF:RouterAdminServer should support RefreshUserMappingsProtocol > > > Key: HDFS-14447 > URL: https://issues.apache.org/jira/browse/HDFS-14447 > Project: Hadoop HDFS > Issue Type: Improvement > Components: rbf >Affects Versions: 3.1.0 >Reporter: Shen Yinjie >Assignee: Shen Yinjie >Priority: Major > Attachments: HDFS-14447_1.patch, error.png > > > HDFS with RBF > We configure hadoop.proxyuser.xx.yy ,then execute hdfs dfsadmin > -Dfs.defaultFS=hdfs://router-fed -refreshSuperUserGroupsConfiguration, > it throws "Unknown protocol: ...RefreshUserMappingProtocol". > RouterAdminServer should support RefreshUserMappingsProtocol , or a proxyuser > client would be refused to impersonate.As shown in the screenshot -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-14447) RBF:RouterAdminServer should support RefreshUserMappingsProtocol
[ https://issues.apache.org/jira/browse/HDFS-14447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shen Yinjie updated HDFS-14447: --- Attachment: HDFS-14447_1.patch > RBF:RouterAdminServer should support RefreshUserMappingsProtocol > > > Key: HDFS-14447 > URL: https://issues.apache.org/jira/browse/HDFS-14447 > Project: Hadoop HDFS > Issue Type: Improvement > Components: rbf >Affects Versions: 3.1.0 >Reporter: Shen Yinjie >Assignee: Shen Yinjie >Priority: Major > Attachments: HDFS-14447_1.patch, error.png > > > HDFS with RBF > We configure hadoop.proxyuser.xx.yy ,then execute hdfs dfsadmin > -Dfs.defaultFS=hdfs://router-fed -refreshSuperUserGroupsConfiguration, > it throws "Unknown protocol: ...RefreshUserMappingProtocol". > RouterAdminServer should support RefreshUserMappingsProtocol , or a proxyuser > client would be refused to impersonate.As shown in the screenshot -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-14447) RBF:RouterAdminServer should support RefreshUserMappingsProtocol
[ https://issues.apache.org/jira/browse/HDFS-14447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shen Yinjie updated HDFS-14447: --- Attachment: error.png > RBF:RouterAdminServer should support RefreshUserMappingsProtocol > > > Key: HDFS-14447 > URL: https://issues.apache.org/jira/browse/HDFS-14447 > Project: Hadoop HDFS > Issue Type: Improvement > Components: rbf >Affects Versions: 3.1.0 >Reporter: Shen Yinjie >Priority: Major > Attachments: error.png > > > HDFS with RBF > We configure hadoop.proxyuser.xx.yy ,then execute hdfs dfsadmin > -Dfs.defaultFS=hdfs://router-fed -refreshSuperUserGroupsConfiguration, > it throws "Unknown protocol: ...RefreshUserMappingProtocol". > RouterAdminServer should support RefreshUserMappingsProtocol , or a proxyuser > client would be refused to impersonate.As shown in the screenshot -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-14447) RBF:RouterAdminServer should support RefreshUserMappingsProtocol
[ https://issues.apache.org/jira/browse/HDFS-14447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shen Yinjie updated HDFS-14447: --- Description: HDFS with RBF We configure hadoop.proxyuser.xx.yy ,then execute hdfs dfsadmin -Dfs.defaultFS=hdfs://router-fed -refreshSuperUserGroupsConfiguration, it throws "Unknown protocol: ...RefreshUserMappingProtocol". RouterAdminServer should support RefreshUserMappingsProtocol , or a proxyuser client would be refused to impersonate.As shown in the screenshot was: HDFS with RBF We configure hadoop.proxyuser.xx.yy ,then execute hdfs dfsadmin -Dfs.defaultFS=hdfs://router-fed -refreshSuperUserGroupsConfiguration, it throws "Unknown protocol: ...RefreshUserMappingProtocol". RouterAdminServer should support RefreshUserMappingsProtocol , or a proxyuser client would be refused to impersonate. > RBF:RouterAdminServer should support RefreshUserMappingsProtocol > > > Key: HDFS-14447 > URL: https://issues.apache.org/jira/browse/HDFS-14447 > Project: Hadoop HDFS > Issue Type: Improvement > Components: rbf >Affects Versions: 3.1.0 >Reporter: Shen Yinjie >Priority: Major > > HDFS with RBF > We configure hadoop.proxyuser.xx.yy ,then execute hdfs dfsadmin > -Dfs.defaultFS=hdfs://router-fed -refreshSuperUserGroupsConfiguration, > it throws "Unknown protocol: ...RefreshUserMappingProtocol". > RouterAdminServer should support RefreshUserMappingsProtocol , or a proxyuser > client would be refused to impersonate.As shown in the screenshot -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Created] (HDFS-14447) RBF:RouterAdminServer should support RefreshUserMappingsProtocol
Shen Yinjie created HDFS-14447: -- Summary: RBF:RouterAdminServer should support RefreshUserMappingsProtocol Key: HDFS-14447 URL: https://issues.apache.org/jira/browse/HDFS-14447 Project: Hadoop HDFS Issue Type: Improvement Components: rbf Affects Versions: 3.1.0 Reporter: Shen Yinjie HDFS with RBF We configure hadoop.proxyuser.xx.yy, then execute hdfs dfsadmin -Dfs.defaultFS=hdfs://router-fed -refreshSuperUserGroupsConfiguration, and it throws "Unknown protocol: ...RefreshUserMappingProtocol". RouterAdminServer should support RefreshUserMappingsProtocol, or a proxyuser client would be refused when impersonating.
[jira] [Updated] (HDFS-14245) Class cast error in GetGroups with ObserverReadProxyProvider
[ https://issues.apache.org/jira/browse/HDFS-14245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shen Yinjie updated HDFS-14245: --- Affects Version/s: HDFS-12943 > Class cast error in GetGroups with ObserverReadProxyProvider > > > Key: HDFS-14245 > URL: https://issues.apache.org/jira/browse/HDFS-14245 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: HDFS-12943 >Reporter: Shen Yinjie >Priority: Major > > Run "hdfs groups" with ObserverReadProxyProvider, and an exception is thrown: > {code:java} > Exception in thread "main" java.io.IOException: Couldn't create proxy > provider class > org.apache.hadoop.hdfs.server.namenode.ha.ObserverReadProxyProvider > at > org.apache.hadoop.hdfs.NameNodeProxiesClient.createFailoverProxyProvider(NameNodeProxiesClient.java:261) > at > org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:119) > at > org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:95) > at org.apache.hadoop.hdfs.tools.GetGroups.getUgmProtocol(GetGroups.java:87) > at org.apache.hadoop.tools.GetGroupsBase.run(GetGroupsBase.java:71) > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76) > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90) > at org.apache.hadoop.hdfs.tools.GetGroups.main(GetGroups.java:96) > Caused by: java.lang.reflect.InvocationTargetException > at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) > at > sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) > at > sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) > at java.lang.reflect.Constructor.newInstance(Constructor.java:423) > at > org.apache.hadoop.hdfs.NameNodeProxiesClient.createFailoverProxyProvider(NameNodeProxiesClient.java:245) > ... 7 more > Caused by: java.lang.ClassCastException: > org.apache.hadoop.hdfs.server.namenode.ha.NameNodeHAProxyFactory cannot be > cast to org.apache.hadoop.hdfs.server.namenode.ha.ClientHAProxyFactory > at > org.apache.hadoop.hdfs.server.namenode.ha.ObserverReadProxyProvider.<init>(ObserverReadProxyProvider.java:123) > at > org.apache.hadoop.hdfs.server.namenode.ha.ObserverReadProxyProvider.<init>(ObserverReadProxyProvider.java:112) > ... 12 more > {code} > Similar to HDFS-14116; we did a simple fix.
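The ClassCastException in the trace above occurs because the ObserverReadProxyProvider constructor narrows the factory it receives to ClientHAProxyFactory, while GetGroups builds its proxy with a NameNodeHAProxyFactory. A self-contained sketch (toy classes that only mirror the real HDFS names; illustrative, not the actual patch) of the failure and of the kind of fix applied in HDFS-14116, accepting the general factory type instead of down-casting:

```java
// Toy model of the factory hierarchy; the class names mirror the real
// HDFS ones but the logic here is illustrative only.
class HAProxyFactory {}
class ClientHAProxyFactory extends HAProxyFactory {}
class NameNodeHAProxyFactory extends HAProxyFactory {}

// Before the fix: the provider blindly down-casts, so passing a
// NameNodeHAProxyFactory (a sibling type) throws ClassCastException.
class BrokenObserverReadProxyProvider {
    final ClientHAProxyFactory factory;
    BrokenObserverReadProxyProvider(HAProxyFactory f) {
        this.factory = (ClientHAProxyFactory) f;  // ClassCastException here
    }
}

// After the fix: keep the general type; no cast, any factory works.
class FixedObserverReadProxyProvider {
    final HAProxyFactory factory;
    FixedObserverReadProxyProvider(HAProxyFactory f) {
        this.factory = f;
    }
}

public class ProxyFactoryCastSketch {
    public static void main(String[] args) {
        try {
            new BrokenObserverReadProxyProvider(new NameNodeHAProxyFactory());
        } catch (ClassCastException e) {
            // Same failure mode as the reported "hdfs groups" trace.
            System.out.println("broken provider: " + e.getClass().getSimpleName());
        }
        new FixedObserverReadProxyProvider(new NameNodeHAProxyFactory());
        System.out.println("fixed provider constructed");
    }
}
```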
[jira] [Assigned] (HDFS-14245) Class cast error in GetGroups with ObserverReadProxyProvider
[ https://issues.apache.org/jira/browse/HDFS-14245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shen Yinjie reassigned HDFS-14245: -- Assignee: Shen Yinjie > Class cast error in GetGroups with ObserverReadProxyProvider > > > Key: HDFS-14245 > URL: https://issues.apache.org/jira/browse/HDFS-14245 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: HDFS-12943 >Reporter: Shen Yinjie >Assignee: Shen Yinjie >Priority: Major > > Run "hdfs groups" with ObserverReadProxyProvider, and an exception is thrown: > {code:java} > Exception in thread "main" java.io.IOException: Couldn't create proxy > provider class > org.apache.hadoop.hdfs.server.namenode.ha.ObserverReadProxyProvider > at > org.apache.hadoop.hdfs.NameNodeProxiesClient.createFailoverProxyProvider(NameNodeProxiesClient.java:261) > at > org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:119) > at > org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:95) > at org.apache.hadoop.hdfs.tools.GetGroups.getUgmProtocol(GetGroups.java:87) > at org.apache.hadoop.tools.GetGroupsBase.run(GetGroupsBase.java:71) > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76) > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90) > at org.apache.hadoop.hdfs.tools.GetGroups.main(GetGroups.java:96) > Caused by: java.lang.reflect.InvocationTargetException > at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) > at > sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) > at > sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) > at java.lang.reflect.Constructor.newInstance(Constructor.java:423) > at > org.apache.hadoop.hdfs.NameNodeProxiesClient.createFailoverProxyProvider(NameNodeProxiesClient.java:245) > ... 7 more > Caused by: java.lang.ClassCastException: > org.apache.hadoop.hdfs.server.namenode.ha.NameNodeHAProxyFactory cannot be > cast to org.apache.hadoop.hdfs.server.namenode.ha.ClientHAProxyFactory > at > org.apache.hadoop.hdfs.server.namenode.ha.ObserverReadProxyProvider.<init>(ObserverReadProxyProvider.java:123) > at > org.apache.hadoop.hdfs.server.namenode.ha.ObserverReadProxyProvider.<init>(ObserverReadProxyProvider.java:112) > ... 12 more > {code} > Similar to HDFS-14116; we did a simple fix.
[jira] [Updated] (HDFS-14245) Class cast error in GetGroups with ObserverReadProxyProvider
[ https://issues.apache.org/jira/browse/HDFS-14245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shen Yinjie updated HDFS-14245: --- Status: Patch Available (was: Open) > Class cast error in GetGroups with ObserverReadProxyProvider > > > Key: HDFS-14245 > URL: https://issues.apache.org/jira/browse/HDFS-14245 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: HDFS-12943 >Reporter: Shen Yinjie >Assignee: Shen Yinjie >Priority: Major > Attachments: HDFS-14245.patch > > > Run "hdfs groups" with ObserverReadProxyProvider, and an exception is thrown: > {code:java} > Exception in thread "main" java.io.IOException: Couldn't create proxy > provider class > org.apache.hadoop.hdfs.server.namenode.ha.ObserverReadProxyProvider > at > org.apache.hadoop.hdfs.NameNodeProxiesClient.createFailoverProxyProvider(NameNodeProxiesClient.java:261) > at > org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:119) > at > org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:95) > at org.apache.hadoop.hdfs.tools.GetGroups.getUgmProtocol(GetGroups.java:87) > at org.apache.hadoop.tools.GetGroupsBase.run(GetGroupsBase.java:71) > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76) > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90) > at org.apache.hadoop.hdfs.tools.GetGroups.main(GetGroups.java:96) > Caused by: java.lang.reflect.InvocationTargetException > at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) > at > sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) > at > sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) > at java.lang.reflect.Constructor.newInstance(Constructor.java:423) > at > org.apache.hadoop.hdfs.NameNodeProxiesClient.createFailoverProxyProvider(NameNodeProxiesClient.java:245) > ... 7 more > Caused by: java.lang.ClassCastException: > org.apache.hadoop.hdfs.server.namenode.ha.NameNodeHAProxyFactory cannot be > cast to org.apache.hadoop.hdfs.server.namenode.ha.ClientHAProxyFactory > at > org.apache.hadoop.hdfs.server.namenode.ha.ObserverReadProxyProvider.<init>(ObserverReadProxyProvider.java:123) > at > org.apache.hadoop.hdfs.server.namenode.ha.ObserverReadProxyProvider.<init>(ObserverReadProxyProvider.java:112) > ... 12 more > {code} > Similar to HDFS-14116; we did a simple fix.
[jira] [Updated] (HDFS-14245) Class cast error in GetGroups with ObserverReadProxyProvider
[ https://issues.apache.org/jira/browse/HDFS-14245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shen Yinjie updated HDFS-14245: --- Attachment: HDFS-14245.patch > Class cast error in GetGroups with ObserverReadProxyProvider > > > Key: HDFS-14245 > URL: https://issues.apache.org/jira/browse/HDFS-14245 > Project: Hadoop HDFS > Issue Type: Bug >Affects Versions: HDFS-12943 >Reporter: Shen Yinjie >Assignee: Shen Yinjie >Priority: Major > Attachments: HDFS-14245.patch > > > Run "hdfs groups" with ObserverReadProxyProvider, and an exception is thrown: > {code:java} > Exception in thread "main" java.io.IOException: Couldn't create proxy > provider class > org.apache.hadoop.hdfs.server.namenode.ha.ObserverReadProxyProvider > at > org.apache.hadoop.hdfs.NameNodeProxiesClient.createFailoverProxyProvider(NameNodeProxiesClient.java:261) > at > org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:119) > at > org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:95) > at org.apache.hadoop.hdfs.tools.GetGroups.getUgmProtocol(GetGroups.java:87) > at org.apache.hadoop.tools.GetGroupsBase.run(GetGroupsBase.java:71) > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76) > at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90) > at org.apache.hadoop.hdfs.tools.GetGroups.main(GetGroups.java:96) > Caused by: java.lang.reflect.InvocationTargetException > at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) > at > sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) > at > sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) > at java.lang.reflect.Constructor.newInstance(Constructor.java:423) > at > org.apache.hadoop.hdfs.NameNodeProxiesClient.createFailoverProxyProvider(NameNodeProxiesClient.java:245) > ... 
7 more > Caused by: java.lang.ClassCastException: > org.apache.hadoop.hdfs.server.namenode.ha.NameNodeHAProxyFactory cannot be > cast to org.apache.hadoop.hdfs.server.namenode.ha.ClientHAProxyFactory > at > org.apache.hadoop.hdfs.server.namenode.ha.ObserverReadProxyProvider.<init>(ObserverReadProxyProvider.java:123) > at > org.apache.hadoop.hdfs.server.namenode.ha.ObserverReadProxyProvider.<init>(ObserverReadProxyProvider.java:112) > ... 12 more > {code} > Similar to HDFS-14116, we applied a simple fix.
[jira] [Created] (HDFS-14245) Class cast error in GetGroups with ObserverReadProxyProvider
Shen Yinjie created HDFS-14245: -- Summary: Class cast error in GetGroups with ObserverReadProxyProvider Key: HDFS-14245 URL: https://issues.apache.org/jira/browse/HDFS-14245 Project: Hadoop HDFS Issue Type: Bug Reporter: Shen Yinjie Run "hdfs groups" with ObserverReadProxyProvider, and an exception is thrown: {code:java} Exception in thread "main" java.io.IOException: Couldn't create proxy provider class org.apache.hadoop.hdfs.server.namenode.ha.ObserverReadProxyProvider at org.apache.hadoop.hdfs.NameNodeProxiesClient.createFailoverProxyProvider(NameNodeProxiesClient.java:261) at org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:119) at org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:95) at org.apache.hadoop.hdfs.tools.GetGroups.getUgmProtocol(GetGroups.java:87) at org.apache.hadoop.tools.GetGroupsBase.run(GetGroupsBase.java:71) at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76) at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:90) at org.apache.hadoop.hdfs.tools.GetGroups.main(GetGroups.java:96) Caused by: java.lang.reflect.InvocationTargetException at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:423) at org.apache.hadoop.hdfs.NameNodeProxiesClient.createFailoverProxyProvider(NameNodeProxiesClient.java:245) ... 7 more Caused by: java.lang.ClassCastException: org.apache.hadoop.hdfs.server.namenode.ha.NameNodeHAProxyFactory cannot be cast to org.apache.hadoop.hdfs.server.namenode.ha.ClientHAProxyFactory at org.apache.hadoop.hdfs.server.namenode.ha.ObserverReadProxyProvider.<init>(ObserverReadProxyProvider.java:123) at org.apache.hadoop.hdfs.server.namenode.ha.ObserverReadProxyProvider.<init>(ObserverReadProxyProvider.java:112) ... 
12 more {code} Similar to HDFS-14116, we applied a simple fix.
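The cast failure above happens because ObserverReadProxyProvider is handed a NameNodeHAProxyFactory where its constructor insists on the ClientHAProxyFactory subclass. A minimal, self-contained analogue (these are stand-in classes, not the actual Hadoop ones, and the method shapes are illustrative) shows why the downcast fails and how accepting the common supertype, the kind of simple fix the report says was borrowed from HDFS-14116, avoids it:

```java
// Stand-in classes, not the actual Hadoop ones: a minimal analogue of the
// factory hierarchy that appears in the stack trace above.
class HAProxyFactory {}
class ClientHAProxyFactory extends HAProxyFactory {}
class NameNodeHAProxyFactory extends HAProxyFactory {}

public class ProxyFactoryCast {
    // Pre-fix shape: the provider downcasts to the client subclass, so a
    // NameNodeHAProxyFactory passed in (e.g. reflectively) triggers
    // ClassCastException, which surfaces wrapped in the IOException above.
    static boolean constructStrict(HAProxyFactory factory) {
        try {
            ClientHAProxyFactory client = (ClientHAProxyFactory) factory;
            return client != null;
        } catch (ClassCastException e) {
            return false; // sibling types: the cast cannot succeed
        }
    }

    // Post-fix shape: accept the common supertype and drop the cast, so
    // either factory subclass can be used to construct the provider.
    static boolean constructWidened(HAProxyFactory factory) {
        return factory != null;
    }

    public static void main(String[] args) {
        System.out.println(constructStrict(new NameNodeHAProxyFactory()));  // false
        System.out.println(constructWidened(new NameNodeHAProxyFactory())); // true
    }
}
```

The widened signature is safe here because nothing in the sketch needs the subclass-only API; the real patch would only be this simple if the same holds for the provider.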
[jira] [Updated] (HDFS-14240) blockReport test in NNThroughputBenchmark throws ArrayIndexOutOfBoundsException
[ https://issues.apache.org/jira/browse/HDFS-14240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shen Yinjie updated HDFS-14240: --- Attachment: screenshot-1.png > blockReport test in NNThroughputBenchmark throws > ArrayIndexOutOfBoundsException > --- > > Key: HDFS-14240 > URL: https://issues.apache.org/jira/browse/HDFS-14240 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Shen Yinjie >Priority: Major > Attachments: screenshot-1.png > > > When I run a blockReport test with NNThroughputBenchmark, > BlockReportStats.addBlocks() throws ArrayIndexOutOfBoundsException. > Digging into the code: > {code:java} > for (DatanodeInfo dnInfo : loc.getLocations()) { > int dnIdx = dnInfo.getXferPort() - 1; > datanodes[dnIdx].addBlock(loc.getBlock().getLocalBlock()); > {code} > > The problem: the datanodes array's length is determined by the "-datanodes" or "-threads" arguments, > but dnIdx is derived from dnInfo.getXferPort(), which is a random port.
[jira] [Updated] (HDFS-14238) A log in NNThroughputBenchmark should change log level to "INFO" instead of "ERROR"
[ https://issues.apache.org/jira/browse/HDFS-14238?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shen Yinjie updated HDFS-14238: --- Attachment: HDFS-14238.patch > A log in NNThroughputBenchmark should change log level to "INFO" instead of > "ERROR" > > > Key: HDFS-14238 > URL: https://issues.apache.org/jira/browse/HDFS-14238 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Shen Yinjie >Assignee: Shen Yinjie >Priority: Major > Attachments: HDFS-14238.patch > > > In NNThroughputBenchmark#150, LOG.error("Log level = " + logLevel.toString()); > should be changed to LOG.info(), since no error occurs here; > it just tells us that the namenode log level has changed.
[jira] [Updated] (HDFS-14240) blockReport test in NNThroughputBenchmark throws ArrayIndexOutOfBoundsException
[ https://issues.apache.org/jira/browse/HDFS-14240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shen Yinjie updated HDFS-14240: --- Description: When I run a blockReport test with NNThroughputBenchmark, BlockReportStats.addBlocks() throws ArrayIndexOutOfBoundsException. Digging into the code: {code:java} for (DatanodeInfo dnInfo : loc.getLocations()) { int dnIdx = dnInfo.getXferPort() - 1; datanodes[dnIdx].addBlock(loc.getBlock().getLocalBlock()); {code} The problem: the datanodes array's length is determined by the "-datanodes" or "-threads" arguments, but dnIdx is derived from dnInfo.getXferPort(), which is a random port. was: When I run a blockReport test with NNThroughputBenchmark, BlockReportStats.addBlocks() throws ArrayIndexOutOfBoundsException. digging the code: {code}for(DatanodeInfo dnInfo : loc.getLocations()) { int dnIdx = dnInfo.getXferPort() - 1; datanodes[dnIdx].addBlock(loc.getBlock().getLocalBlock());\{code} problem is here:array datanodes's length is determined by args as "-datanodes" or "-threads" ,but dnIdx = dnInfo.getXferPort() is a random port. > blockReport test in NNThroughputBenchmark throws > ArrayIndexOutOfBoundsException > --- > > Key: HDFS-14240 > URL: https://issues.apache.org/jira/browse/HDFS-14240 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Shen Yinjie >Priority: Major > > When I run a blockReport test with NNThroughputBenchmark, > BlockReportStats.addBlocks() throws ArrayIndexOutOfBoundsException. > Digging into the code: > {code:java} > for (DatanodeInfo dnInfo : loc.getLocations()) { > int dnIdx = dnInfo.getXferPort() - 1; > datanodes[dnIdx].addBlock(loc.getBlock().getLocalBlock()); > {code} > > The problem: the datanodes array's length is determined by the "-datanodes" or "-threads" arguments, > but dnIdx is derived from dnInfo.getXferPort(), which is a random port.
[jira] [Updated] (HDFS-14238) A log in NNThroughputBenchmark should change log level to "INFO" instead of "ERROR"
[ https://issues.apache.org/jira/browse/HDFS-14238?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shen Yinjie updated HDFS-14238: --- Attachment: (was: HDFS-14235.patch) > A log in NNThroughputBenchmark should change log level to "INFO" instead of > "ERROR" > > > Key: HDFS-14238 > URL: https://issues.apache.org/jira/browse/HDFS-14238 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Shen Yinjie >Assignee: Shen Yinjie >Priority: Major > > In NNThroughputBenchmark#150, LOG.error("Log level = " + logLevel.toString()); > should be changed to LOG.info(), since no error occurs here; > it just tells us that the namenode log level has changed.
[jira] [Created] (HDFS-14240) blockReport test in NNThroughputBenchmark throws ArrayIndexOutOfBoundsException
Shen Yinjie created HDFS-14240: -- Summary: blockReport test in NNThroughputBenchmark throws ArrayIndexOutOfBoundsException Key: HDFS-14240 URL: https://issues.apache.org/jira/browse/HDFS-14240 Project: Hadoop HDFS Issue Type: Bug Reporter: Shen Yinjie When I run a blockReport test with NNThroughputBenchmark, BlockReportStats.addBlocks() throws ArrayIndexOutOfBoundsException. Digging into the code: {code:java} for (DatanodeInfo dnInfo : loc.getLocations()) { int dnIdx = dnInfo.getXferPort() - 1; datanodes[dnIdx].addBlock(loc.getBlock().getLocalBlock()); {code} The problem: the datanodes array's length is determined by the "-datanodes" or "-threads" arguments, but dnIdx is derived from dnInfo.getXferPort(), which is a random port.
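The indexing bug described above can be sketched in isolation (names are borrowed from the snippet in the report; this is not the NNThroughputBenchmark code): an explicit range check against the array length, which the "-datanodes"/"-threads" arguments determine, turns the raw ArrayIndexOutOfBoundsException into a clear error and shows why a random transfer port cannot safely index the datanodes array.

```java
public class DnIndexGuard {
    // Mirrors "int dnIdx = dnInfo.getXferPort() - 1" from the snippet above,
    // but validates the port-derived index before it is used to index
    // the simulated-datanode array.
    static int checkedIndex(int xferPort, int numDatanodes) {
        int dnIdx = xferPort - 1;
        if (dnIdx < 0 || dnIdx >= numDatanodes) {
            throw new IllegalArgumentException("xferPort " + xferPort
                + " maps to index " + dnIdx + ", but only " + numDatanodes
                + " simulated datanodes exist");
        }
        return dnIdx;
    }

    public static void main(String[] args) {
        // Ports 1..N map cleanly onto indices 0..N-1.
        System.out.println(checkedIndex(3, 10)); // 2
        // A random ephemeral port (the failure mode in the report) does not.
        try {
            checkedIndex(50010, 10);
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

The guard only diagnoses the mismatch; an actual fix would have to map each reported location back to its datanode some other way than by port number.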
[jira] [Updated] (HDFS-14238) A log in NNThroughputBenchmark should change log level to "INFO" instead of "ERROR"
[ https://issues.apache.org/jira/browse/HDFS-14238?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shen Yinjie updated HDFS-14238: --- Assignee: Shen Yinjie Status: Patch Available (was: Open) > A log in NNThroughputBenchmark should change log level to "INFO" instead of > "ERROR" > > > Key: HDFS-14238 > URL: https://issues.apache.org/jira/browse/HDFS-14238 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Shen Yinjie >Assignee: Shen Yinjie >Priority: Major > Attachments: HDFS-14235.patch > > > In NNThroughputBenchmark#150, LOG.error("Log level = " + logLevel.toString()); > should be changed to LOG.info(), since no error occurs here; > it just tells us that the namenode log level has changed.
[jira] [Updated] (HDFS-14238) A log in NNThroughputBenchmark should change log level to "INFO" instead of "ERROR"
[ https://issues.apache.org/jira/browse/HDFS-14238?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shen Yinjie updated HDFS-14238: --- Attachment: HDFS-14235.patch > A log in NNThroughputBenchmark should change log level to "INFO" instead of > "ERROR" > > > Key: HDFS-14238 > URL: https://issues.apache.org/jira/browse/HDFS-14238 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Shen Yinjie >Priority: Major > Attachments: HDFS-14235.patch > > > In NNThroughputBenchmark#150, LOG.error("Log level = " + logLevel.toString()); > should be changed to LOG.info(), since no error occurs here; > it just tells us that the namenode log level has changed.
[jira] [Updated] (HDFS-14238) A log in NNThroughputBenchmark should change log level to "INFO" instead of "ERROR"
[ https://issues.apache.org/jira/browse/HDFS-14238?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shen Yinjie updated HDFS-14238: --- Description: In NNThroughputBenchmark#150, LOG.error("Log level = " + logLevel.toString()); should be changed to LOG.info(), since no error occurs here; it just tells us that the namenode log level has changed. was: In NNThroughputBenchmark#150, LOG.error("Log level = " + logLevel.toString()); this loglevel should be changed to “LOG.info()” ,since no error occurs here, just tell us test log level has changed . > A log in NNThroughputBenchmark should change log level to "INFO" instead of > "ERROR" > > > Key: HDFS-14238 > URL: https://issues.apache.org/jira/browse/HDFS-14238 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Shen Yinjie >Priority: Major > > In NNThroughputBenchmark#150, LOG.error("Log level = " + logLevel.toString()); > should be changed to LOG.info(), since no error occurs here; > it just tells us that the namenode log level has changed.
[jira] [Created] (HDFS-14238) A log in NNThroughputBenchmark should change log level to "INFO" instead of "ERROR"
Shen Yinjie created HDFS-14238: -- Summary: A log in NNThroughputBenchmark should change log level to "INFO" instead of "ERROR" Key: HDFS-14238 URL: https://issues.apache.org/jira/browse/HDFS-14238 Project: Hadoop HDFS Issue Type: Improvement Reporter: Shen Yinjie In NNThroughputBenchmark#150, LOG.error("Log level = " + logLevel.toString()); should be changed to LOG.info(), since no error occurs here; it just tells us that the test log level has changed.
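The one-line change proposed above can be sketched with java.util.logging standing in for the benchmark's logger (the class name, logger name, and method here are illustrative, not the Hadoop code):

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class LogLevelMessage {
    private static final Logger LOG = Logger.getLogger("NNThroughputBenchmark");

    // Report the newly applied level at INFO severity: nothing failed, so
    // logging it at ERROR is misleading. This mirrors the proposed
    // LOG.error -> LOG.info change.
    static String report(Level logLevel) {
        String msg = "Log level = " + logLevel.toString();
        LOG.info(msg); // was: LOG.error(msg)
        return msg;
    }

    public static void main(String[] args) {
        System.out.println(report(Level.ALL)); // Log level = ALL
    }
}
```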
[jira] [Issue Comment Deleted] (HDFS-13220) Change lastCheckpointTime to use fsimage mostRecentCheckpointTime
[ https://issues.apache.org/jira/browse/HDFS-13220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shen Yinjie updated HDFS-13220: --- Comment: was deleted (was: We met the same problem too. But I found HDFS-6353 can solve this.) > Change lastCheckpointTime to use fsimage mostRecentCheckpointTime > - > > Key: HDFS-13220 > URL: https://issues.apache.org/jira/browse/HDFS-13220 > Project: Hadoop HDFS > Issue Type: Bug > Components: namenode >Reporter: Nie Gus >Priority: Minor > > We found that our standby NN did not perform checkpoints, and the checkpoint > alert kept firing; we use the JMX last checkpoint time and > dfs.namenode.checkpoint.period for the monitoring check. > > Checking the code and logs, we found the standby NN uses monotonicNow, not the > fsimage checkpoint time, so when the standby NN restarts or switches to active, > lastCheckpointTime in doWork is reset. So there is a risk that a standby NN > restart or active switchover will delay the checkpoint. > StandbyCheckpointer.java > {code:java} > private void doWork() { > final long checkPeriod = 1000 * checkpointConf.getCheckPeriod(); > // Reset checkpoint time so that we don't always checkpoint > // on startup. 
> lastCheckpointTime = monotonicNow(); > while (shouldRun) { > boolean needRollbackCheckpoint = namesystem.isNeedRollbackFsImage(); > if (!needRollbackCheckpoint) { > try { > Thread.sleep(checkPeriod); > } catch (InterruptedException ie) { > } > if (!shouldRun) { > break; > } > } > try { > // We may have lost our ticket since last checkpoint, log in again, just in > case > if (UserGroupInformation.isSecurityEnabled()) { > UserGroupInformation.getCurrentUser().checkTGTAndReloginFromKeytab(); > } > final long now = monotonicNow(); > final long uncheckpointed = countUncheckpointedTxns(); > final long secsSinceLast = (now - lastCheckpointTime) / 1000; > boolean needCheckpoint = needRollbackCheckpoint; > if (needCheckpoint) { > LOG.info("Triggering a rollback fsimage for rolling upgrade."); > } else if (uncheckpointed >= checkpointConf.getTxnCount()) { > LOG.info("Triggering checkpoint because there have been " + > uncheckpointed + " txns since the last checkpoint, which " + > "exceeds the configured threshold " + > checkpointConf.getTxnCount()); > needCheckpoint = true; > } else if (secsSinceLast >= checkpointConf.getPeriod()) { > LOG.info("Triggering checkpoint because it has been " + > secsSinceLast + " seconds since the last checkpoint, which " + > "exceeds the configured interval " + checkpointConf.getPeriod()); > needCheckpoint = true; > } > synchronized (cancelLock) { > if (now < preventCheckpointsUntil) { > LOG.info("But skipping this checkpoint since we are about to failover!"); > canceledCount++; > continue; > } > assert canceler == null; > canceler = new Canceler(); > } > if (needCheckpoint) { > doCheckpoint(); > // reset needRollbackCheckpoint to false only when we finish a ckpt > // for rollback image > if (needRollbackCheckpoint > && namesystem.getFSImage().hasRollbackFSImage()) { > namesystem.setCreatedRollbackImages(true); > namesystem.setNeedRollbackFsImage(false); > } > lastCheckpointTime = now; > } > } catch (SaveNamespaceCancelledException ce) { > 
LOG.info("Checkpoint was cancelled: " + ce.getMessage()); > canceledCount++; > } catch (InterruptedException ie) { > LOG.info("Interrupted during checkpointing", ie); > // Probably requested shutdown. > continue; > } catch (Throwable t) { > LOG.error("Exception in doCheckpoint", t); > } finally { > synchronized (cancelLock) { > canceler = null; > } > } > } > } > } > {code} > > Can we use the fsimage's mostRecentCheckpointTime to do the check? > > Thanks, > Gus
[jira] [Comment Edited] (HDFS-13220) Change lastCheckpointTime to use fsimage mostRecentCheckpointTime
[ https://issues.apache.org/jira/browse/HDFS-13220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16401743#comment-16401743 ] Shen Yinjie edited comment on HDFS-13220 at 3/16/18 11:05 AM: -- We met the same problem too. But I found HDFS-6353 can solve this. was (Author: shenyinjie): We met the same problem too..But, I found HDFS-6353 could solve this. > Change lastCheckpointTime to use fsimage mostRecentCheckpointTime > - > > Key: HDFS-13220 > URL: https://issues.apache.org/jira/browse/HDFS-13220 > Project: Hadoop HDFS > Issue Type: Bug > Components: namenode >Reporter: Nie Gus >Priority: Minor > > We found that our standby NN did not perform checkpoints, and the checkpoint > alert kept firing; we use the JMX last checkpoint time and > dfs.namenode.checkpoint.period for the monitoring check. > > Checking the code and logs, we found the standby NN uses monotonicNow, not the > fsimage checkpoint time, so when the standby NN restarts or switches to active, > lastCheckpointTime in doWork is reset. So there is a risk that a standby NN > restart or active switchover will delay the checkpoint. > StandbyCheckpointer.java > {code:java} > private void doWork() { > final long checkPeriod = 1000 * checkpointConf.getCheckPeriod(); > // Reset checkpoint time so that we don't always checkpoint > // on startup. 
> lastCheckpointTime = monotonicNow(); > while (shouldRun) { > boolean needRollbackCheckpoint = namesystem.isNeedRollbackFsImage(); > if (!needRollbackCheckpoint) { > try { > Thread.sleep(checkPeriod); > } catch (InterruptedException ie) { > } > if (!shouldRun) { > break; > } > } > try { > // We may have lost our ticket since last checkpoint, log in again, just in > case > if (UserGroupInformation.isSecurityEnabled()) { > UserGroupInformation.getCurrentUser().checkTGTAndReloginFromKeytab(); > } > final long now = monotonicNow(); > final long uncheckpointed = countUncheckpointedTxns(); > final long secsSinceLast = (now - lastCheckpointTime) / 1000; > boolean needCheckpoint = needRollbackCheckpoint; > if (needCheckpoint) { > LOG.info("Triggering a rollback fsimage for rolling upgrade."); > } else if (uncheckpointed >= checkpointConf.getTxnCount()) { > LOG.info("Triggering checkpoint because there have been " + > uncheckpointed + " txns since the last checkpoint, which " + > "exceeds the configured threshold " + > checkpointConf.getTxnCount()); > needCheckpoint = true; > } else if (secsSinceLast >= checkpointConf.getPeriod()) { > LOG.info("Triggering checkpoint because it has been " + > secsSinceLast + " seconds since the last checkpoint, which " + > "exceeds the configured interval " + checkpointConf.getPeriod()); > needCheckpoint = true; > } > synchronized (cancelLock) { > if (now < preventCheckpointsUntil) { > LOG.info("But skipping this checkpoint since we are about to failover!"); > canceledCount++; > continue; > } > assert canceler == null; > canceler = new Canceler(); > } > if (needCheckpoint) { > doCheckpoint(); > // reset needRollbackCheckpoint to false only when we finish a ckpt > // for rollback image > if (needRollbackCheckpoint > && namesystem.getFSImage().hasRollbackFSImage()) { > namesystem.setCreatedRollbackImages(true); > namesystem.setNeedRollbackFsImage(false); > } > lastCheckpointTime = now; > } > } catch (SaveNamespaceCancelledException ce) { > 
LOG.info("Checkpoint was cancelled: " + ce.getMessage()); > canceledCount++; > } catch (InterruptedException ie) { > LOG.info("Interrupted during checkpointing", ie); > // Probably requested shutdown. > continue; > } catch (Throwable t) { > LOG.error("Exception in doCheckpoint", t); > } finally { > synchronized (cancelLock) { > canceler = null; > } > } > } > } > } > {code} > > Can we use the fsimage's mostRecentCheckpointTime to do the check? > > Thanks, > Gus
[jira] [Commented] (HDFS-13220) Change lastCheckpointTime to use fsimage mostRecentCheckpointTime
[ https://issues.apache.org/jira/browse/HDFS-13220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16401743#comment-16401743 ] Shen Yinjie commented on HDFS-13220: We met the same problem too. But I found HDFS-6353 could solve this. > Change lastCheckpointTime to use fsimage mostRecentCheckpointTime > - > > Key: HDFS-13220 > URL: https://issues.apache.org/jira/browse/HDFS-13220 > Project: Hadoop HDFS > Issue Type: Bug > Components: namenode >Reporter: Nie Gus >Priority: Minor > > We found that our standby NN did not perform checkpoints, and the checkpoint > alert kept firing; we use the JMX last checkpoint time and > dfs.namenode.checkpoint.period for the monitoring check. > > Checking the code and logs, we found the standby NN uses monotonicNow, not the > fsimage checkpoint time, so when the standby NN restarts or switches to active, > lastCheckpointTime in doWork is reset. So there is a risk that a standby NN > restart or active switchover will delay the checkpoint. > StandbyCheckpointer.java > {code:java} > private void doWork() { > final long checkPeriod = 1000 * checkpointConf.getCheckPeriod(); > // Reset checkpoint time so that we don't always checkpoint > // on startup. 
> lastCheckpointTime = monotonicNow(); > while (shouldRun) { > boolean needRollbackCheckpoint = namesystem.isNeedRollbackFsImage(); > if (!needRollbackCheckpoint) { > try { > Thread.sleep(checkPeriod); > } catch (InterruptedException ie) { > } > if (!shouldRun) { > break; > } > } > try { > // We may have lost our ticket since last checkpoint, log in again, just in > case > if (UserGroupInformation.isSecurityEnabled()) { > UserGroupInformation.getCurrentUser().checkTGTAndReloginFromKeytab(); > } > final long now = monotonicNow(); > final long uncheckpointed = countUncheckpointedTxns(); > final long secsSinceLast = (now - lastCheckpointTime) / 1000; > boolean needCheckpoint = needRollbackCheckpoint; > if (needCheckpoint) { > LOG.info("Triggering a rollback fsimage for rolling upgrade."); > } else if (uncheckpointed >= checkpointConf.getTxnCount()) { > LOG.info("Triggering checkpoint because there have been " + > uncheckpointed + " txns since the last checkpoint, which " + > "exceeds the configured threshold " + > checkpointConf.getTxnCount()); > needCheckpoint = true; > } else if (secsSinceLast >= checkpointConf.getPeriod()) { > LOG.info("Triggering checkpoint because it has been " + > secsSinceLast + " seconds since the last checkpoint, which " + > "exceeds the configured interval " + checkpointConf.getPeriod()); > needCheckpoint = true; > } > synchronized (cancelLock) { > if (now < preventCheckpointsUntil) { > LOG.info("But skipping this checkpoint since we are about to failover!"); > canceledCount++; > continue; > } > assert canceler == null; > canceler = new Canceler(); > } > if (needCheckpoint) { > doCheckpoint(); > // reset needRollbackCheckpoint to false only when we finish a ckpt > // for rollback image > if (needRollbackCheckpoint > && namesystem.getFSImage().hasRollbackFSImage()) { > namesystem.setCreatedRollbackImages(true); > namesystem.setNeedRollbackFsImage(false); > } > lastCheckpointTime = now; > } > } catch (SaveNamespaceCancelledException ce) { > 
LOG.info("Checkpoint was cancelled: " + ce.getMessage()); > canceledCount++; > } catch (InterruptedException ie) { > LOG.info("Interrupted during checkpointing", ie); > // Probably requested shutdown. > continue; > } catch (Throwable t) { > LOG.error("Exception in doCheckpoint", t); > } finally { > synchronized (cancelLock) { > canceler = null; > } > } > } > } > } > {code} > > Can we use the fsimage's mostRecentCheckpointTime to do the check? > > Thanks, > Gus
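The proposal in the report, seeding the checkpoint timer from the image's persisted checkpoint time instead of from monotonicNow() at process start, can be sketched as a pure decision function (the class and parameter names are hypothetical, not the StandbyCheckpointer API):

```java
public class CheckpointDueCheck {
    // Decide whether a periodic checkpoint is due. Using a persisted
    // mostRecentCheckpointTime (rather than a timer reset at startup) means
    // a standby restart or failover does not push the next checkpoint a
    // full period into the future.
    static boolean isCheckpointDue(long nowMillis,
                                   long mostRecentCheckpointMillis,
                                   long periodSecs) {
        long secsSinceLast = (nowMillis - mostRecentCheckpointMillis) / 1000;
        return secsSinceLast >= periodSecs;
    }

    public static void main(String[] args) {
        long now = 7_200_000L;       // 2h after the persisted checkpoint
        long lastPersisted = 0L;     // checkpoint time read from the fsimage
        long periodSecs = 3_600L;    // 1h configured checkpoint period
        // Even if this standby just restarted, the checkpoint is overdue.
        System.out.println(isCheckpointDue(now, lastPersisted, periodSecs)); // true
    }
}
```

The elapsed-seconds arithmetic deliberately matches the `secsSinceLast` computation in the quoted doWork() loop; only the origin of the "last checkpoint" timestamp changes.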
[jira] [Commented] (HDFS-10786) Erasure Coding: Add removeErasureCodingPolicy API
[ https://issues.apache.org/jira/browse/HDFS-10786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15574519#comment-15574519 ] Shen Yinjie commented on HDFS-10786: How can I remove an erasure coding policy? > Erasure Coding: Add removeErasureCodingPolicy API > - > > Key: HDFS-10786 > URL: https://issues.apache.org/jira/browse/HDFS-10786 > Project: Hadoop HDFS > Issue Type: Sub-task >Reporter: Xinwei Qin > Labels: hdfs-ec-3.0-must-do > > HDFS-7859 has developed addErasureCodingPolicy API to add some user-added > Erasure Coding policies, and as discussed in HDFS-7859, we should also add > removeErasureCodingPolicy API to support removing some user-added Erasure > Coding Policies. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
[jira] [Updated] (HDFS-7957) Truncate should verify quota before making changes
[ https://issues.apache.org/jira/browse/HDFS-7957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shen Yinjie updated HDFS-7957: -- Assignee: Jing Zhao (was: Shen Yinjie) > Truncate should verify quota before making changes > -- > > Key: HDFS-7957 > URL: https://issues.apache.org/jira/browse/HDFS-7957 > Project: Hadoop HDFS > Issue Type: Bug > Components: namenode >Affects Versions: 2.7.0 >Reporter: Jing Zhao >Assignee: Jing Zhao >Priority: Critical > Fix For: 2.7.0 > > Attachments: HDFS-7957.000.patch, HDFS-7957.001.patch, > HDFS-7957.002.patch > > > This is a similar issue to HDFS-7587: for truncate we should also verify > quota at the beginning and update quota at the end.
[jira] [Assigned] (HDFS-7957) Truncate should verify quota before making changes
[ https://issues.apache.org/jira/browse/HDFS-7957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Shen Yinjie reassigned HDFS-7957: - Assignee: Shen Yinjie (was: Jing Zhao) > Truncate should verify quota before making changes > -- > > Key: HDFS-7957 > URL: https://issues.apache.org/jira/browse/HDFS-7957 > Project: Hadoop HDFS > Issue Type: Bug > Components: namenode >Affects Versions: 2.7.0 >Reporter: Jing Zhao >Assignee: Shen Yinjie >Priority: Critical > Fix For: 2.7.0 > > Attachments: HDFS-7957.000.patch, HDFS-7957.001.patch, > HDFS-7957.002.patch > > > This is a similar issue to HDFS-7587: for truncate we should also verify > quota at the beginning and update quota at the end.