[jira] [Updated] (HDFS-14783) Expired SampleStat should ignore when generating SlowPeersReport
[ https://issues.apache.org/jira/browse/HDFS-14783?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Haibin Huang updated HDFS-14783: Description: SlowPeersReport is generated from the SampleStat between two DataNodes, so it can show up in the NameNode's JMX like this: {code:java} "SlowPeersReport" :[{"SlowNode":"dn2","ReportingNodes":["dn1"]}] {code} In each period, MutableRollingAverages calls rollOverAvgs(), which generates a SumAndCount object based on the SampleStat and stores it in a LinkedBlockingDeque; the deque is then used to generate the SlowPeersReport. Old members of the deque are not removed until the queue is full. However, if dn1 doesn't send any packet to dn2 in the last 36*300_000 ms, the deque stays filled with stale members, because the counters of the last SampleStat never change. I think these old SampleStats should be treated as expired and ignored when generating a new SlowPeersReport.
was: SlowPeersReport is generated from the SampleStat between two DataNodes, so it can show up in the NameNode's JMX like this: {code:java} "SlowPeersReport" :[{"SlowNode":"dn2","ReportingNodes":["dn1"]}] {code} In each period, MutableRollingAverages calls rollOverAvgs(), which generates a SumAndCount object based on the SampleStat and stores it in a LinkedBlockingDeque; the deque is then used to generate the SlowPeersReport. Old members of the deque are not removed until the queue is full. However, if dn1 doesn't send any packet to dn2 in the last 36*300_000 ms, the deque stays filled with stale members, because the counters of the last SampleStat never change. I think this old SampleStat should be considered expired and ignored: the SampleStat is stored in a LinkedBlockingDeque and won't be removed until the queue is full and a newer one is generated. Therefore, if dn1 doesn't send any packet to dn2 for a long time, the old SampleStat keeps staying in the queue and is used to calculate slow peers.
> Expired SampleStat should ignore when generating SlowPeersReport
> Key: HDFS-14783
> URL: https://issues.apache.org/jira/browse/HDFS-14783
> Project: Hadoop HDFS
> Issue Type: Bug
> Reporter: Haibin Huang
> Assignee: Haibin Huang
> Priority: Major
> Attachments: HDFS-14783, HDFS-14783-001.patch, HDFS-14783-002.patch, HDFS-14783-003.patch, HDFS-14783-004.patch, HDFS-14783-005.patch
-- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
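The mechanism described above can be sketched as follows. This is a minimal, hypothetical illustration: the class and method names are invented and this is not the actual HDFS-14783 patch; it only shows the idea of keeping rolled-over samples in a bounded deque and skipping entries older than a validity window when computing the average used for the report.

```java
import java.util.ArrayDeque;
import java.util.Deque;

/**
 * Hypothetical sketch: a rolling average whose report-time computation
 * ignores samples older than a validity window, so a peer that stopped
 * sending packets does not stay "slow" forever on stale data.
 */
public class ExpiringRollingAverage {

    /** One rolled-over sample: total latency, packet count, roll time. */
    static final class SumAndCount {
        final double sum;
        final long count;
        final long timestampMs;
        SumAndCount(double sum, long count, long timestampMs) {
            this.sum = sum;
            this.count = count;
            this.timestampMs = timestampMs;
        }
    }

    private final Deque<SumAndCount> samples = new ArrayDeque<>();
    private final int maxSamples;   // e.g. 36 windows in the deque
    private final long validityMs;  // e.g. one 300_000 ms roll period

    ExpiringRollingAverage(int maxSamples, long validityMs) {
        this.maxSamples = maxSamples;
        this.validityMs = validityMs;
    }

    /** Called once per roll period, analogous to rollOverAvgs(). */
    void rollOver(double sum, long count, long nowMs) {
        if (samples.size() == maxSamples) {
            samples.removeFirst(); // evict the oldest only when full
        }
        samples.addLast(new SumAndCount(sum, count, nowMs));
    }

    /** Average over non-expired samples only; NaN if everything expired. */
    double average(long nowMs) {
        double sum = 0;
        long count = 0;
        for (SumAndCount s : samples) {
            if (nowMs - s.timestampMs <= validityMs) { // skip expired samples
                sum += s.sum;
                count += s.count;
            }
        }
        return count == 0 ? Double.NaN : sum / count;
    }

    public static void main(String[] args) {
        ExpiringRollingAverage avg = new ExpiringRollingAverage(36, 300_000L);
        avg.rollOver(1000.0, 10, 0L);       // old sample at t = 0
        avg.rollOver(4000.0, 10, 200_000L); // fresh sample at t = 200s
        // At t = 250s both samples are valid: (1000+4000)/(10+10) = 250.0
        System.out.println(avg.average(250_000L));
        // At t = 400s the t = 0 sample has expired: 4000/10 = 400.0
        System.out.println(avg.average(400_000L));
    }
}
```

The key difference from the behavior described in the report is in average(): the deque itself is unchanged, but expired entries simply stop contributing to the computed value.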
[jira] [Commented] (HDFS-14783) Expired SampleStat should ignore when generating SlowPeersReport
[ https://issues.apache.org/jira/browse/HDFS-14783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17070685#comment-17070685 ] Haibin Huang commented on HDFS-14783: - Thanks [~elgoiri], I have updated the title and description; if necessary I will move this Jira to a new one based on Hadoop Common.
[jira] [Updated] (HDFS-14783) Expired SampleStat should ignore when generating SlowPeersReport
[ https://issues.apache.org/jira/browse/HDFS-14783?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Haibin Huang updated HDFS-14783: Description: SlowPeersReport is generated from the SampleStat between two DataNodes, so it can show up in the NameNode's JMX like this: {code:java} "SlowPeersReport" :[{"SlowNode":"dn2","ReportingNodes":["dn1"]}] {code} In each period, MutableRollingAverages calls rollOverAvgs(), which generates a SumAndCount object based on the SampleStat and stores it in a LinkedBlockingDeque; the deque is then used to generate the SlowPeersReport. Old members of the deque are not removed until the queue is full. However, if dn1 doesn't send any packet to dn2 in the last 36*300_000 ms, the deque stays filled with stale members: the SampleStat stored in the LinkedBlockingDeque won't be removed until the queue is full and a newer one is generated. Therefore, if dn1 doesn't send any packet to dn2 for a long time, the old SampleStat keeps staying in the queue and is used to calculate slow peers. I think these old SampleStats should be treated as expired and ignored when generating a new SlowPeersReport.
[jira] [Updated] (HDFS-14783) Expired SampleStat should ignore when generating SlowPeersReport
[ https://issues.apache.org/jira/browse/HDFS-14783?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Haibin Huang updated HDFS-14783: Description: SlowPeersReport is generated from the SampleStat between two DataNodes, so it can show up in the NameNode's JMX like this: {code:java} "SlowPeersReport" :[{"SlowNode":"dn2","ReportingNodes":["dn1"]}] {code} In each period, MutableRollingAverages calls rollOverAvgs(), which generates a SumAndCount object based on the SampleStat and stores it in a LinkedBlockingDeque; the deque is then used to generate the SlowPeersReport. Old members of the deque are not removed until the queue is full. However, if dn1 doesn't send any packet to dn2 in the last 36*300_000 ms, the deque stays filled with stale members, because the SampleStat stored in the LinkedBlockingDeque won't be removed until the queue is full and a newer one is generated. Therefore, if dn1 doesn't send any packet to dn2 for a long time, the old SampleStat keeps staying in the queue and is used to calculate slow peers. I think these old SampleStats should be treated as expired and ignored when generating a new SlowPeersReport.
[jira] [Updated] (HDFS-14783) Expired SampleStat should ignore when generating SlowPeersReport
[ https://issues.apache.org/jira/browse/HDFS-14783?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Haibin Huang updated HDFS-14783: Description: SlowPeersReport is generated from the SampleStat between two DataNodes, so it can show up in the NameNode's JMX like this: {code:java} "SlowPeersReport" :[{"SlowNode":"dn2","ReportingNodes":["dn1"]}] {code} In each period, MutableRollingAverages calls rollOverAvgs(), which generates a SumAndCount object based on the SampleStat and stores it in a LinkedBlockingDeque; the deque is then used to generate the SlowPeersReport. Members of the deque are not removed until the queue is full: the SampleStat stored in the LinkedBlockingDeque won't be removed until the queue is full and a newer one is generated. Therefore, if dn1 doesn't send any packet to dn2 for a long time, the old SampleStat keeps staying in the queue and is used to calculate slow peers. I think these old SampleStats should be treated as expired and ignored when generating a new SlowPeersReport.
[jira] [Updated] (HDFS-14783) Expired SampleStat should ignore when generating SlowPeersReport
[ https://issues.apache.org/jira/browse/HDFS-14783?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Haibin Huang updated HDFS-14783: Description: SlowPeersReport is calculated from the SampleStat between two DataNodes, so it can show up in the NameNode's JMX like this: {code:java} "SlowPeersReport" :[{"SlowNode":"dn2","ReportingNodes":["dn1"]}] {code} In each period, MutableRollingAverages calls rollOverAvgs(), which generates a SumAndCount object based on the SampleStat and stores it in a LinkedBlockingDeque. The SampleStat stored in the LinkedBlockingDeque won't be removed until the queue is full and a newer one is generated. Therefore, if dn1 doesn't send any packet to dn2 for a long time, the old SampleStat keeps staying in the queue and is used to calculate slow peers. I think these old SampleStats should be treated as expired and ignored when generating a new SlowPeersReport.
[jira] [Updated] (HDFS-14783) Expired SampleStat should ignore when generating SlowPeersReport
[ https://issues.apache.org/jira/browse/HDFS-14783?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Haibin Huang updated HDFS-14783: Summary: Expired SampleStat should ignore when generating SlowPeersReport (was: Expired SampleStat needs to be removed from SlowPeersReport)
[jira] [Comment Edited] (HDFS-15248) Make the maximum number of ACLs entries configurable
[ https://issues.apache.org/jira/browse/HDFS-15248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17070251#comment-17070251 ] Wei-Chiu Chuang edited comment on HDFS-15248 at 3/30/20, 1:56 AM: -- Thanks for offering the patch! I've had customers ask for extending the ACL entry limit before. I'm not sure why 32, but here are a few reasons why it's probably not a good idea to extend it further:
(1) Manageability: once you have more than a dozen ACLs per file, they become hard to manage and error-prone.
(2) NameNode heap size: especially in a large cluster with hundreds of millions of files, each inode occupies more bytes of heap, so the memory pressure becomes even worse.
(3) Serialization cost: we currently serialize the files under a directory to a protobuf message, which is limited to 64 MB (default), and as a result we limit the max number of files per directory to 1 million. Allowing more ACL entries per file means more serialized bytes per file, and you may run into the protobuf message limit for a large directory well before 1 million files.
For these reasons I usually recommend users use external authorization providers like Sentry or Ranger to delegate the authorization work to a separate entity.
> Make the maximum number of ACLs entries configurable
> Key: HDFS-15248
> URL: https://issues.apache.org/jira/browse/HDFS-15248
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: namenode
> Reporter: Yang Yun
> Assignee: Yang Yun
> Priority: Minor
> Attachments: HDFS-15248.001.patch, HDFS-15248.patch
>
> For a big cluster, the hardcoded maximum of 32 ACL entries is not enough; make it configurable.
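The serialization-cost point above can be made concrete with a rough back-of-the-envelope calculation, assuming the 64 MB default message limit and the 1 million files-per-directory limit mentioned in the comment (the class and method below are invented for illustration only):

```java
/**
 * Hypothetical back-of-the-envelope: if a directory's children must fit in
 * one 64 MB protobuf message and a directory may hold up to 1M files, each
 * file has only a small serialized-bytes budget before the limit is hit.
 */
public class AclBudget {
    // Default protobuf message size limit and HDFS per-directory file limit,
    // as stated in the comment above.
    static final long MESSAGE_LIMIT_BYTES = 64L * 1024 * 1024;
    static final long MAX_FILES_PER_DIR = 1_000_000L;

    /** Rough serialized-bytes budget per file before hitting the limit. */
    static long bytesPerFileBudget() {
        return MESSAGE_LIMIT_BYTES / MAX_FILES_PER_DIR;
    }

    public static void main(String[] args) {
        // ~67 bytes per file; every extra ACL entry eats into this budget,
        // so large per-file ACL lists can exhaust the 64 MB message well
        // before the directory reaches 1M files.
        System.out.println(bytesPerFileBudget());
    }
}
```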
[jira] [Commented] (HDFS-15196) RBF: RouterRpcServer getListing cannot list large dirs correctly
[ https://issues.apache.org/jira/browse/HDFS-15196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17070609#comment-17070609 ] Íñigo Goiri commented on HDFS-15196: +1 on [^HDFS-15196.013.patch]. [~ayushtkn], do you mind taking another look?
> RBF: RouterRpcServer getListing cannot list large dirs correctly
> Key: HDFS-15196
> URL: https://issues.apache.org/jira/browse/HDFS-15196
> Project: Hadoop HDFS
> Issue Type: Bug
> Reporter: Fengnan Li
> Assignee: Fengnan Li
> Priority: Critical
> Attachments: HDFS-15196.001.patch, HDFS-15196.002.patch, HDFS-15196.003.patch, HDFS-15196.003.patch, HDFS-15196.004.patch, HDFS-15196.005.patch, HDFS-15196.006.patch, HDFS-15196.007.patch, HDFS-15196.008.patch, HDFS-15196.009.patch, HDFS-15196.010.patch, HDFS-15196.011.patch, HDFS-15196.012.patch, HDFS-15196.013.patch
>
> In RouterRpcServer, the getListing function is handled in two parts:
> # Union all partial listings from destination ns + paths
> # Append mount points for the dir being listed
> For a large dir bigger than DFSConfigKeys.DFS_LIST_LIMIT (default value 1k), batch listing is used and startAfter defines the boundary of each batch listing. However, step 2 appends existing mount points, which messes up the boundary of the batch and makes the next batch's startAfter wrong.
> The initial fix was to append the mount points only when no more batch queries are necessary, but this breaks the order of returned entries. Therefore more complex logic was added to make sure the order is kept. At the same time, the remainingEntries variable inside DirectoryListing is also updated to include the remaining mount points.
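The boundary problem described in the issue can be sketched as follows. This is a hypothetical helper, not the actual RouterRpcServer code: it shows the idea that a mount point may only be merged into the current batch if it sorts at or before the batch's last entry, so the client's next startAfter stays correct; later mount points must wait for a later batch.

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Hypothetical sketch of merging one batch of subcluster listing entries
 * with mount points without breaking the batch boundary.
 */
public class BatchListingMerge {

    static List<String> mergeBatch(List<String> batch, List<String> mounts,
                                   boolean lastBatch) {
        // The last entry of this batch defines the boundary the client will
        // use as startAfter for the next batch.
        String boundary = batch.isEmpty() ? null : batch.get(batch.size() - 1);
        List<String> out = new ArrayList<>(batch);
        for (String m : mounts) {
            // Only merge mount points that fall inside this batch's range;
            // the rest are left for later batches (and would be counted in
            // something like remainingEntries).
            if (lastBatch || (boundary != null && m.compareTo(boundary) <= 0)) {
                out.add(m);
            }
        }
        out.sort(String::compareTo); // keep the returned entries ordered
        return out;
    }

    public static void main(String[] args) {
        List<String> batch = List.of("a", "b", "d");
        List<String> mounts = List.of("c", "z");
        // "c" falls inside this batch; "z" must wait for a later batch.
        System.out.println(mergeBatch(batch, mounts, false)); // [a, b, c, d]
        System.out.println(mergeBatch(batch, mounts, true));  // [a, b, c, d, z]
    }
}
```

Appending every mount point unconditionally (the behavior the issue describes) would put "z" after "d" in a non-final batch, making the next startAfter "z" and silently skipping everything between "d" and "z".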
[jira] [Commented] (HDFS-15245) Improve JournalNode web UI
[ https://issues.apache.org/jira/browse/HDFS-15245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17070543#comment-17070543 ] Hudson commented on HDFS-15245: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18103 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/18103/]) HDFS-15245. Improve JournalNode web UI. Contributed by Jianfei Jiang. (ayushsaxena: rev 960c9ebaea31586c7a6d5a029a1555c4ed6cfadb) * (edit) hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/qjournal/server/TestJournalNodeMXBean.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/journal/index.html * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/JournalNode.java * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/qjournal/server/JournalNodeMXBean.java * (add) hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/journal/journalnode.html * (add) hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/journal/jn.js
> Improve JournalNode web UI
> Key: HDFS-15245
> URL: https://issues.apache.org/jira/browse/HDFS-15245
> Project: Hadoop HDFS
> Issue Type: Improvement
> Components: journal-node, ui
> Affects Versions: 3.2.1
> Reporter: Jianfei Jiang
> Assignee: Jianfei Jiang
> Priority: Major
> Fix For: 3.4.0
> Attachments: HDFS-15245.002.patch, HDFS-15245.003.patch, HDFS-15245.004.patch, HDFS-15245.005.patch, HDFS-15245.006.patch, jn web 1.PNG, jn web 2.PNG
>
> At present, the journalnode web UI is an almost blank page with almost no useful information.
> # Add some information about the journalnode to the main page.
> # Add a dropdown Utilities menu like the other daemons' web UIs have, containing Logs, Log Level, Metrics, Configuration & Process Thread Dump
[jira] [Commented] (HDFS-15245) Improve JournalNode web UI
[ https://issues.apache.org/jira/browse/HDFS-15245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17070539#comment-17070539 ] Ayush Saxena commented on HDFS-15245: - Committed to trunk. Thanx Everyone for the work here!!!
[jira] [Updated] (HDFS-15245) Improve JournalNode web UI
[ https://issues.apache.org/jira/browse/HDFS-15245?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ayush Saxena updated HDFS-15245: Fix Version/s: 3.4.0 Hadoop Flags: Reviewed Resolution: Fixed Status: Resolved (was: Patch Available)
[jira] [Commented] (HDFS-15245) Improve JournalNode web UI
[ https://issues.apache.org/jira/browse/HDFS-15245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17070533#comment-17070533 ] Ayush Saxena commented on HDFS-15245: - v006 LGTM +1
[jira] [Commented] (HDFS-15196) RBF: RouterRpcServer getListing cannot list large dirs correctly
[ https://issues.apache.org/jira/browse/HDFS-15196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17070505#comment-17070505 ] Hadoop QA commented on HDFS-15196: -- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 45s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 38s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 29s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 19s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 32s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 15m 3s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 1s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 29s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 28s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 14s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 14s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 8s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 31s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 64m 32s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=19.03.8 Server=19.03.8 Image:yetus/hadoop:4454c6d14b7 | | JIRA Issue | HDFS-15196 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12998148/HDFS-15196.013.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 421ea6f7e307 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 3eeb246 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_242 | | findbugs | v3.1.0-RC1 | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/29057/testReport/ | | Max. process+thread count | 3414 (vs. ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: hadoop-hdfs-project/hadoop-hdfs-rbf | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/29057/console | | Powered by | Apache Yetus 0.8.0 http://yetus.apache.org | This message was automatically generated. > RBF: RouterRpcServer getListing cannot list large dirs correctly > > > Key: HDFS-15196 >
[jira] [Commented] (HDFS-15196) RBF: RouterRpcServer getListing cannot list large dirs correctly
[ https://issues.apache.org/jira/browse/HDFS-15196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17070458#comment-17070458 ] Fengnan Li commented on HDFS-15196: --- Thanks [~elgoiri] for the detailed explanation. I have made changes accordingly and addressed test failures. > RBF: RouterRpcServer getListing cannot list large dirs correctly > > > Key: HDFS-15196 > URL: https://issues.apache.org/jira/browse/HDFS-15196 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Fengnan Li >Assignee: Fengnan Li >Priority: Critical > Attachments: HDFS-15196.001.patch, HDFS-15196.002.patch, > HDFS-15196.003.patch, HDFS-15196.003.patch, HDFS-15196.004.patch, > HDFS-15196.005.patch, HDFS-15196.006.patch, HDFS-15196.007.patch, > HDFS-15196.008.patch, HDFS-15196.009.patch, HDFS-15196.010.patch, > HDFS-15196.011.patch, HDFS-15196.012.patch, HDFS-15196.013.patch > > > In RouterRpcServer, the getListing function is handled in two parts: > # Union all partial listings from destination ns + paths > # Append mount points for the dir to be listed > In the case of a large dir bigger than DFSConfigKeys.DFS_LIST_LIMIT > (with default value 1k), batch listing is used and startAfter > defines the boundary of each batch listing. However, step 2 > here adds existing mount points, which messes up the boundary of > the batch, thus making the next batch's startAfter wrong. > The initial fix was just to append the mount points when no more > batch queries are necessary, but this breaks the order of the returned entries. > Therefore more complex logic was added to make sure the order is kept. At the > same time the remainingEntries variable inside DirectoryListing is also > updated to include the remaining mount points.
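The batching rule described above can be sketched with a toy model (hypothetical class and method names, not the actual RouterRpcServer code): a mount point may only be merged into the current batch if it sorts between startAfter and the batch's cutoff name, or after everything once the subclusters have no remaining entries — otherwise the next batch's startAfter would skip real files.

```java
import java.util.TreeMap;

// Hypothetical sketch of merging one batch of subcluster entries with mount
// points. Mount points past the batch's cutoff (lastName) are held back while
// more subcluster entries remain, so the next batch's startAfter stays correct.
public class ListingMergeSketch {
    static TreeMap<String, String> mergeBatch(
            String[] batchEntries,   // sorted partial listing from subclusters
            String[] mountPoints,    // mount points under the listed dir
            String startAfter,
            int remainingEntries) {  // subcluster entries left after this batch
        TreeMap<String, String> listing = new TreeMap<>();
        for (String e : batchEntries) {
            listing.put(e, "file");
        }
        String lastName = batchEntries[batchEntries.length - 1];
        for (String m : mountPoints) {
            boolean inBatchRange =
                m.compareTo(startAfter) > 0 && m.compareTo(lastName) <= 0;
            boolean afterAllFiles =
                remainingEntries == 0 && m.compareTo(lastName) > 0;
            if (inBatchRange || afterAllFiles) {
                listing.put(m, "mount");
            }
        }
        return listing;
    }

    public static void main(String[] args) {
        // Batch of 3 with 5 entries still remaining; mount point "z" sorts
        // after the cutoff "c", so it must wait for a later batch.
        TreeMap<String, String> l = mergeBatch(
            new String[]{"a", "b", "c"}, new String[]{"b1", "z"}, "", 5);
        System.out.println(l.keySet());  // [a, b, b1, c]
    }
}
```

With remainingEntries set to 0, the same call would also append "z", matching the second case in the quoted comment.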
[jira] [Updated] (HDFS-15196) RBF: RouterRpcServer getListing cannot list large dirs correctly
[ https://issues.apache.org/jira/browse/HDFS-15196?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Fengnan Li updated HDFS-15196: -- Attachment: HDFS-15196.013.patch > RBF: RouterRpcServer getListing cannot list large dirs correctly > > > Key: HDFS-15196 > URL: https://issues.apache.org/jira/browse/HDFS-15196 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Fengnan Li >Assignee: Fengnan Li >Priority: Critical > Attachments: HDFS-15196.001.patch, HDFS-15196.002.patch, > HDFS-15196.003.patch, HDFS-15196.003.patch, HDFS-15196.004.patch, > HDFS-15196.005.patch, HDFS-15196.006.patch, HDFS-15196.007.patch, > HDFS-15196.008.patch, HDFS-15196.009.patch, HDFS-15196.010.patch, > HDFS-15196.011.patch, HDFS-15196.012.patch, HDFS-15196.013.patch > > > In RouterRpcServer, the getListing function is handled in two parts: > # Union all partial listings from destination ns + paths > # Append mount points for the dir to be listed > In the case of a large dir bigger than DFSConfigKeys.DFS_LIST_LIMIT > (with default value 1k), batch listing is used and startAfter > defines the boundary of each batch listing. However, step 2 > here adds existing mount points, which messes up the boundary of > the batch, thus making the next batch's startAfter wrong. > The initial fix was just to append the mount points when no more > batch queries are necessary, but this breaks the order of the returned entries. > Therefore more complex logic was added to make sure the order is kept. At the > same time the remainingEntries variable inside DirectoryListing is also > updated to include the remaining mount points.
[jira] [Commented] (HDFS-15239) Add button to go to the parent directory in the explorer
[ https://issues.apache.org/jira/browse/HDFS-15239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17070444#comment-17070444 ] Íñigo Goiri commented on HDFS-15239: Thanks [~hemanthboyina] for the patch. Committed to trunk. > Add button to go to the parent directory in the explorer > > > Key: HDFS-15239 > URL: https://issues.apache.org/jira/browse/HDFS-15239 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Íñigo Goiri >Assignee: hemanthboyina >Priority: Major > Fix For: 3.3.0 > > Attachments: 15239.after.JPG, 15239.before.JPG, HDFS-15239.001.patch, > HDFS-15239.002.patch, HDFS-15239.003.patch, HDFS-15239.004.patch, > screenshot-1.png > > > Currently, when using the HDFS explorer page, it is easy to go into a folder. > However, to go back one has to use the browser back button (if one is coming > from that folder) or to edit the path by hand. > It would be nice to have the typical button to go to the parent.
[jira] [Commented] (HDFS-15239) Add button to go to the parent directory in the explorer
[ https://issues.apache.org/jira/browse/HDFS-15239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17070399#comment-17070399 ] Hudson commented on HDFS-15239: --- SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #18101 (See [https://builds.apache.org/job/Hadoop-trunk-Commit/18101/]) HDFS-15239. Add button to go to the parent directory in the explorer. (inigoiri: rev f7a94ec0a443711cf48c8ab146c5b365e358f0b3) * (edit) hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/webapps/router/explorer.html * (edit) hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/webapps/router/explorer.js * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/explorer.html * (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs/explorer.js > Add button to go to the parent directory in the explorer > > > Key: HDFS-15239 > URL: https://issues.apache.org/jira/browse/HDFS-15239 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Íñigo Goiri >Assignee: hemanthboyina >Priority: Major > Fix For: 3.3.0 > > Attachments: 15239.after.JPG, 15239.before.JPG, HDFS-15239.001.patch, > HDFS-15239.002.patch, HDFS-15239.003.patch, HDFS-15239.004.patch, > screenshot-1.png > > > Currently, when using the HDFS explorer page, it is easy to go into a folder. > However, to go back one has to use the browser back button (if one is coming > from that folder) or to edit the path by hand. > It would be nice to have the typical button to go to the parent.
[jira] [Commented] (HDFS-15196) RBF: RouterRpcServer getListing cannot list large dirs correctly
[ https://issues.apache.org/jira/browse/HDFS-15196?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17070392#comment-17070392 ] Íñigo Goiri commented on HDFS-15196: I was referring to: {code} // Append router mount point only under either of the two cases: // 1) current mount point is between startAfter and cutoff lastName. // 2) there are no remaining entries from subclusters and this mount //point is bigger than all files from subclusters // This is to make sure that the following batch of // getListing call will use the correct startAfter, which is lastName if ((child.compareTo(DFSUtil.bytes2String(startAfter)) > 0 && child.compareTo(lastName) <= 0) || (remainingEntries == 0 && child.compareTo(lastName) > 0)) { // This may overwrite existing listing entries with the mount point // TODO don't add if already there? nnListing.put(child, dirStatus); } {code} We could do something like: {code} /** * Check if we should append the mount point at the end. This should be done * under either of the two cases: * 1) current mount point is between startAfter and cutoff lastName. * 2) there are no remaining entries from subclusters and this mount *point is bigger than all files from subclusters * This is to make sure that the following batch of * getListing call will use the correct startAfter, which is lastName * @param child * @param lastName * @param startAfter * @param remainingEntries * @return True if the mount point should be appended. */ private static boolean shouldAppendMountPoint( String child, String lastName, byte[] startAfter, int remainingEntries) { if (child.compareTo(DFSUtil.bytes2String(startAfter)) > 0 && child.compareTo(lastName) <= 0) { return true; } if (remainingEntries == 0 && child.compareTo(lastName) > 0) { return true; } return false; } ... if (shouldAppendMountPoint(child, lastName, startAfter, remainingEntries)) { // This may overwrite existing listing entries with the mount point // TODO don't add if already there?
nnListing.put(child, dirStatus); } {code} > RBF: RouterRpcServer getListing cannot list large dirs correctly > > > Key: HDFS-15196 > URL: https://issues.apache.org/jira/browse/HDFS-15196 > Project: Hadoop HDFS > Issue Type: Bug >Reporter: Fengnan Li >Assignee: Fengnan Li >Priority: Critical > Attachments: HDFS-15196.001.patch, HDFS-15196.002.patch, > HDFS-15196.003.patch, HDFS-15196.003.patch, HDFS-15196.004.patch, > HDFS-15196.005.patch, HDFS-15196.006.patch, HDFS-15196.007.patch, > HDFS-15196.008.patch, HDFS-15196.009.patch, HDFS-15196.010.patch, > HDFS-15196.011.patch, HDFS-15196.012.patch > > > In RouterRpcServer, the getListing function is handled in two parts: > # Union all partial listings from destination ns + paths > # Append mount points for the dir to be listed > In the case of a large dir bigger than DFSConfigKeys.DFS_LIST_LIMIT > (with default value 1k), batch listing is used and startAfter > defines the boundary of each batch listing. However, step 2 > here adds existing mount points, which messes up the boundary of > the batch, thus making the next batch's startAfter wrong. > The initial fix was just to append the mount points when no more > batch queries are necessary, but this breaks the order of the returned entries. > Therefore more complex logic was added to make sure the order is kept. At the > same time the remainingEntries variable inside DirectoryListing is also > updated to include the remaining mount points.
[jira] [Commented] (HDFS-14385) RBF: Optimize MiniRouterDFSCluster with optional light weight MiniDFSCluster
[ https://issues.apache.org/jira/browse/HDFS-14385?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17070385#comment-17070385 ] Hadoop QA commented on HDFS-14385: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 39s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 56s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 32s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 23s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 36s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 20s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 1s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 32s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 30s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 28s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 0s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 6s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 29s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 32s{color} | {color:red} hadoop-hdfs-rbf in the patch failed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 61m 26s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.federation.router.TestRouterFaultTolerant | | | hadoop.hdfs.server.federation.router.TestRouterMultiRack | \\ \\ || Subsystem || Report/Notes || | Docker | Client=19.03.8 Server=19.03.8 Image:yetus/hadoop:4454c6d14b7 | | JIRA Issue | HDFS-14385 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12998145/HDFS-14385.004.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 79df1a1f67cd 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 696a663 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_242 | | findbugs | v3.1.0-RC1 | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/29056/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/29056/testReport/ | | Max. process+thread count | 3197 (vs. ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: hadoop-hdfs-project/hadoop-hdfs-rbf | | Console output |
[jira] [Updated] (HDFS-15239) Add button to go to the parent directory in the explorer
[ https://issues.apache.org/jira/browse/HDFS-15239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Íñigo Goiri updated HDFS-15239: --- Fix Version/s: 3.3.0 Hadoop Flags: Reviewed Resolution: Fixed Status: Resolved (was: Patch Available) > Add button to go to the parent directory in the explorer > > > Key: HDFS-15239 > URL: https://issues.apache.org/jira/browse/HDFS-15239 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Íñigo Goiri >Assignee: hemanthboyina >Priority: Major > Fix For: 3.3.0 > > Attachments: 15239.after.JPG, 15239.before.JPG, HDFS-15239.001.patch, > HDFS-15239.002.patch, HDFS-15239.003.patch, HDFS-15239.004.patch, > screenshot-1.png > > > Currently, when using the HDFS explorer page, it is easy to go into a folder. > However, to go back one has to use the browser back button (if one is coming > from that folder) or to edit the path by hand. > It would be nice to have the typical button to go to the parent.
[jira] [Commented] (HDFS-15245) Improve JournalNode web UI
[ https://issues.apache.org/jira/browse/HDFS-15245?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17070384#comment-17070384 ] Íñigo Goiri commented on HDFS-15245: +1 on [^HDFS-15245.006.patch]. > Improve JournalNode web UI > -- > > Key: HDFS-15245 > URL: https://issues.apache.org/jira/browse/HDFS-15245 > Project: Hadoop HDFS > Issue Type: Improvement > Components: journal-node, ui >Affects Versions: 3.2.1 >Reporter: Jianfei Jiang >Assignee: Jianfei Jiang >Priority: Major > Attachments: HDFS-15245.002.patch, HDFS-15245.003.patch, > HDFS-15245.004.patch, HDFS-15245.005.patch, HDFS-15245.006.patch, jn web > 1.PNG, jn web 2.PNG > > > At present, the JournalNode web UI is an almost blank page with almost no > useful information. > # Add some information about the JournalNode on the main page. > # Add a Utilities dropdown menu, like the DN and NN have, containing Logs, Log Level, > Metrics, Configuration & Process Thread Dump >
[jira] [Commented] (HDFS-15248) Make the maximum number of ACLs entries configurable
[ https://issues.apache.org/jira/browse/HDFS-15248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17070380#comment-17070380 ] Íñigo Goiri commented on HDFS-15248: Thanks [~weichiu] for chiming in. That was the reason why I was checking for the use case. I'll let you decide if this should be done or not. Regarding the patch itself, the unit test should do something like: {code} LambdaTestUtils.intercept( AclException.class, () -> filterDefaultAclEntries(existing)); LambdaTestUtils.intercept( AclException.class, "which exceeds maximum of", () -> filterDefaultAclEntries(existing)); {code} > Make the maximum number of ACLs entries configurable > > > Key: HDFS-15248 > URL: https://issues.apache.org/jira/browse/HDFS-15248 > Project: Hadoop HDFS > Issue Type: Improvement > Components: namenode >Reporter: Yang Yun >Assignee: Yang Yun >Priority: Minor > Attachments: HDFS-15248.001.patch, HDFS-15248.patch > > > For a big cluster, the hardcoded maximum of 32 ACL entries is not enough; make > it configurable.
[jira] [Comment Edited] (HDFS-15248) Make the maximum number of ACLs entries configurable
[ https://issues.apache.org/jira/browse/HDFS-15248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17070380#comment-17070380 ] Íñigo Goiri edited comment on HDFS-15248 at 3/29/20, 3:14 PM: -- Thanks [~weichiu] for chiming in. That was the reason why I was checking for the use case. I'll let you decide if this should be done or not. Regarding the patch itself, the unit test should do something like: {code} LambdaTestUtils.intercept( AclException.class, () -> filterDefaultAclEntries(existing)); LambdaTestUtils.intercept( AclException.class, "which exceeds maximum of", () -> filterDefaultAclEntries(existing)); {code} was (Author: elgoiri): Thanks [~weichiu] for chiming it. That was the reason why I was checking for the use case. I'll let you decide if this should be done or not. Regarding the patch itself, the unit test should do something like: {code} LambdaTestUtils.intercept( AclException.class, () -> filterDefaultAclEntries(existing)); LambdaTestUtils.intercept( AclException.class, "which exceeds maximum of", () -> filterDefaultAclEntries(existing)); {code{ > Make the maximum number of ACLs entries configurable > > > Key: HDFS-15248 > URL: https://issues.apache.org/jira/browse/HDFS-15248 > Project: Hadoop HDFS > Issue Type: Improvement > Components: namenode >Reporter: Yang Yun >Assignee: Yang Yun >Priority: Minor > Attachments: HDFS-15248.001.patch, HDFS-15248.patch > > > For a big cluster, the hardcoded maximum of 32 ACL entries is not enough; make > it configurable.
[jira] [Commented] (HDFS-15051) RBF: Propose to revoke WRITE MountTableEntry privilege to super user only
[ https://issues.apache.org/jira/browse/HDFS-15051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17070376#comment-17070376 ] Íñigo Goiri commented on HDFS-15051: My concern with [^HDFS-15051.009.patch] at this point is that one checkMountTablePermission() returns a boolean that is not used for anything. Ideally, I would also like to have two different names for the two checkMountTablePermission(); however, I cannot think of better names. Anyway, very minor, just give it a thought. > RBF: Propose to revoke WRITE MountTableEntry privilege to super user only > - > > Key: HDFS-15051 > URL: https://issues.apache.org/jira/browse/HDFS-15051 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: rbf >Reporter: Xiaoqiao He >Assignee: Xiaoqiao He >Priority: Major > Attachments: HDFS-15051.001.patch, HDFS-15051.002.patch, > HDFS-15051.003.patch, HDFS-15051.004.patch, HDFS-15051.005.patch, > HDFS-15051.006.patch, HDFS-15051.007.patch, HDFS-15051.008.patch, > HDFS-15051.009.patch > > > The current permission checker of #MountTableStoreImpl is not very strict. > In some cases, any user could add/update/remove a MountTableEntry without the > expected permission checking. > The following code segment tries to check permission when operating on a > MountTableEntry; however, the mountTable object is from Client/RouterAdmin > {{MountTable mountTable = request.getEntry();}}, and a user could pass any mode, > which could bypass the permission checker. 
> {code:java} > public void checkPermission(MountTable mountTable, FsAction access) > throws AccessControlException { > if (isSuperUser()) { > return; > } > FsPermission mode = mountTable.getMode(); > if (getUser().equals(mountTable.getOwnerName()) > && mode.getUserAction().implies(access)) { > return; > } > if (isMemberOfGroup(mountTable.getGroupName()) > && mode.getGroupAction().implies(access)) { > return; > } > if (!getUser().equals(mountTable.getOwnerName()) > && !isMemberOfGroup(mountTable.getGroupName()) > && mode.getOtherAction().implies(access)) { > return; > } > throw new AccessControlException( > "Permission denied while accessing mount table " > + mountTable.getSourcePath() > + ": user " + getUser() + " does not have " + access.toString() > + " permissions."); > } > {code} > I just propose to revoke the WRITE MountTableEntry privilege to the super user only.
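The bypass described above can be illustrated with a toy model (hypothetical method names, not the actual RouterPermissionChecker code): because the client supplies the MountTable object, including its owner and mode fields, any mode-based check can be satisfied by a crafted entry, while a superuser-only check cannot.

```java
// Hypothetical sketch contrasting the current mode-based check with the
// proposed superuser-only check for mount table modifications.
public class MountTablePermissionSketch {
    // Simplified mode-based check: it trusts the owner/mode fields that the
    // client itself supplied in the MountTable object.
    static boolean modeBasedCheck(String user, String claimedOwner,
            boolean claimedOwnerWritable) {
        return user.equals(claimedOwner) && claimedOwnerWritable;
    }

    // Proposed check: only the superuser may modify mount table entries,
    // independent of any client-supplied fields.
    static boolean superuserOnlyCheck(String user, String superUser) {
        return user.equals(superUser);
    }

    public static void main(String[] args) {
        // A non-admin client names itself as owner with a writable mode and
        // passes the mode-based check, but not the superuser-only one.
        System.out.println(modeBasedCheck("alice", "alice", true));   // true
        System.out.println(superuserOnlyCheck("alice", "hdfs"));      // false
    }
}
```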
[jira] [Commented] (HDFS-15051) RBF: Propose to revoke WRITE MountTableEntry privilege to super user only
[ https://issues.apache.org/jira/browse/HDFS-15051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17070375#comment-17070375 ] Ayush Saxena commented on HDFS-15051: - Thanx [~hexiaoqiao] for the patch, Changes LGTM. Is there any scope to add this somewhere in the documentation too, so that users can be aware of this and use this functionality? > RBF: Propose to revoke WRITE MountTableEntry privilege to super user only > - > > Key: HDFS-15051 > URL: https://issues.apache.org/jira/browse/HDFS-15051 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: rbf >Reporter: Xiaoqiao He >Assignee: Xiaoqiao He >Priority: Major > Attachments: HDFS-15051.001.patch, HDFS-15051.002.patch, > HDFS-15051.003.patch, HDFS-15051.004.patch, HDFS-15051.005.patch, > HDFS-15051.006.patch, HDFS-15051.007.patch, HDFS-15051.008.patch, > HDFS-15051.009.patch > > > The current permission checker of #MountTableStoreImpl is not very strict. > In some cases, any user could add/update/remove a MountTableEntry without the > expected permission checking. > The following code segment tries to check permission when operating on a > MountTableEntry; however, the mountTable object is from Client/RouterAdmin > {{MountTable mountTable = request.getEntry();}}, and a user could pass any mode, > which could bypass the permission checker. 
> {code:java} > public void checkPermission(MountTable mountTable, FsAction access) > throws AccessControlException { > if (isSuperUser()) { > return; > } > FsPermission mode = mountTable.getMode(); > if (getUser().equals(mountTable.getOwnerName()) > && mode.getUserAction().implies(access)) { > return; > } > if (isMemberOfGroup(mountTable.getGroupName()) > && mode.getGroupAction().implies(access)) { > return; > } > if (!getUser().equals(mountTable.getOwnerName()) > && !isMemberOfGroup(mountTable.getGroupName()) > && mode.getOtherAction().implies(access)) { > return; > } > throw new AccessControlException( > "Permission denied while accessing mount table " > + mountTable.getSourcePath() > + ": user " + getUser() + " does not have " + access.toString() > + " permissions."); > } > {code} > I just propose to revoke the WRITE MountTableEntry privilege to the super user only.
[jira] [Commented] (HDFS-15169) RBF: Router FSCK should consider the mount table
[ https://issues.apache.org/jira/browse/HDFS-15169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17070372#comment-17070372 ] Ayush Saxena commented on HDFS-15169: - Thanx [~hexiaoqiao] for confirming, I tried tweaking the test myself to debug and it wasn't working as expected, so I just wanted to confirm. :) A couple of comments/doubts on the latest patch: * Presently the path name in the output comes out as the destination path: {code:java} The filesystem under path '/testdirdst' is HEALTHY {code} It should be replaced with the respective mount path, the same as we do when throwing exceptions containing paths. * I didn't catch this: {code:java} * Redirect the request to certain active downstream NameNode if resolve * target namespace otherwise redirect the requests to all active * downstream NameNodes. {code} Can you explain what you are trying to do here? Why do we need to send the request to all NameNodes if the path cannot be resolved; isn't that like an FNF situation? Otherwise, this seems to outsmart the RouterAdmin: a client should be able to access only paths in a particular NS which the Router Admin has authorized through mount points. * A doubt here too: what would happen in case of a failover? {code:java} if (ms.getState() == FederationNamenodeServiceState.ACTIVE {code} Will this logic be able to handle that, or is that being handled somewhere else? * You added a new param in {{getURLArguments}}; please update the same param in the javadoc of the method too. * Well, I couldn't check this with multi-destination mount points, but from the code I don't think it is handling them well. Any pointers? If not, the multi-destination stuff we can handle as a follow-up. 
> RBF: Router FSCK should consider the mount table > > > Key: HDFS-15169 > URL: https://issues.apache.org/jira/browse/HDFS-15169 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: rbf >Reporter: Akira Ajisaka >Assignee: Xiaoqiao He >Priority: Major > Attachments: HDFS-15169.001.patch, HDFS-15169.002.patch, > HDFS-15169.003.patch, HDFS-15169.004.patch, HDFS-15169.005.patch > > > HDFS-13989 implemented FSCK to DFSRouter, however, it just redirects the > requests to all the active downstream NameNodes for now. The DFSRouter should > consider the mount table when redirecting the requests.
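The mount-path substitution requested in the first review point above could be sketched roughly as follows. This is a minimal illustration only; the class and method names (`FsckPathRewriter`, `rewriteReportPath`) are hypothetical and not part of the actual HDFS-15169 patch or the Hadoop API:

```java
// Hypothetical sketch: rewrite a downstream fsck report so that the
// resolved destination path is displayed as the user-facing mount path.
// FsckPathRewriter / rewriteReportPath are illustrative names, not Hadoop APIs.
public class FsckPathRewriter {

    /** Replace the quoted destination path prefix with the original mount source. */
    public static String rewriteReportPath(String report, String mountSrc, String dest) {
        return report.replace("'" + dest, "'" + mountSrc);
    }

    public static void main(String[] args) {
        // The destination-path output quoted in the review comment above.
        String raw = "The filesystem under path '/testdirdst' is HEALTHY";
        // With an assumed mount entry /testdir -> /testdirdst, the user
        // should see the mount path, not the destination path.
        System.out.println(rewriteReportPath(raw, "/testdir", "/testdirdst"));
    }
}
```

In the real Router, such a rewrite would happen after collecting the downstream NameNode's fsck output, using the mount entry that was resolved for the request.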
[jira] [Commented] (HDFS-15169) RBF: Router FSCK should consider the mount table
[ https://issues.apache.org/jira/browse/HDFS-15169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17070365#comment-17070365 ] Hadoop QA commented on HDFS-15169: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 45s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 32s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 30s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 19s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 34s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 15m 10s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 1s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 28s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 24s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 24s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 14s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 51s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 16s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 57s{color} | {color:red} hadoop-hdfs-rbf in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 27s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 65m 50s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.hdfs.server.federation.router.TestRouterFaultTolerant | \\ \\ || Subsystem || Report/Notes || | Docker | Client=19.03.8 Server=19.03.8 Image:yetus/hadoop:4454c6d14b7 | | JIRA Issue | HDFS-15169 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12998144/HDFS-15169.005.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux e67e2dee6616 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 696a663 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_242 | | findbugs | v3.1.0-RC1 | | unit | https://builds.apache.org/job/PreCommit-HDFS-Build/29055/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-rbf.txt | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/29055/testReport/ | | Max. process+thread count | 3245 (vs. ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: hadoop-hdfs-project/hadoop-hdfs-rbf | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/29055/console | | Powered by | Apache Yetus 0.8.0
[jira] [Commented] (HDFS-15239) Add button to go to the parent directory in the explorer
[ https://issues.apache.org/jira/browse/HDFS-15239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17070362#comment-17070362 ] Íñigo Goiri commented on HDFS-15239: +1 on [^HDFS-15239.004.patch]. > Add button to go to the parent directory in the explorer > > > Key: HDFS-15239 > URL: https://issues.apache.org/jira/browse/HDFS-15239 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Íñigo Goiri >Assignee: hemanthboyina >Priority: Major > Attachments: 15239.after.JPG, 15239.before.JPG, HDFS-15239.001.patch, > HDFS-15239.002.patch, HDFS-15239.003.patch, HDFS-15239.004.patch, > screenshot-1.png > > > Currently, when using the HDFS explorer page, it is easy to go into a folder. > However, to go back one has to use the browser back button (if one is coming > from that folder) or to edit the path by hand. > It would be nice to have the typical button to go to the parent.
[jira] [Updated] (HDFS-14385) RBF: Optimize MiniRouterDFSCluster with optional light weight MiniDFSCluster
[ https://issues.apache.org/jira/browse/HDFS-14385?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaoqiao He updated HDFS-14385: --- Attachment: HDFS-14385.004.patch > RBF: Optimize MiniRouterDFSCluster with optional light weight MiniDFSCluster > > > Key: HDFS-14385 > URL: https://issues.apache.org/jira/browse/HDFS-14385 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: rbf >Reporter: Xiaoqiao He >Assignee: Xiaoqiao He >Priority: Major > Attachments: HDFS-14385-HDFS-13891.001.patch, HDFS-14385.002.patch, > HDFS-14385.003.patch, HDFS-14385.004.patch > > > MiniRouterDFSCluster mimics a federated HDFS cluster with routers to support RBF > tests. It starts MiniDFSCluster with the complete set of HDFS roles, which has > significant time cost. As discussed in HDFS-14351, it is better to provide a mock > MiniDFSCluster/Namenodes as an option to support some test cases and reduce the > time cost.
[jira] [Commented] (HDFS-14385) RBF: Optimize MiniRouterDFSCluster with optional light weight MiniDFSCluster
[ https://issues.apache.org/jira/browse/HDFS-14385?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17070358#comment-17070358 ] Xiaoqiao He commented on HDFS-14385: v004 tries to fix the checkstyle issues and the failed unit tests. > RBF: Optimize MiniRouterDFSCluster with optional light weight MiniDFSCluster > > > Key: HDFS-14385 > URL: https://issues.apache.org/jira/browse/HDFS-14385 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: rbf >Reporter: Xiaoqiao He >Assignee: Xiaoqiao He >Priority: Major > Attachments: HDFS-14385-HDFS-13891.001.patch, HDFS-14385.002.patch, > HDFS-14385.003.patch, HDFS-14385.004.patch > > > MiniRouterDFSCluster mimics a federated HDFS cluster with routers to support RBF > tests. It starts MiniDFSCluster with the complete set of HDFS roles, which has > significant time cost. As discussed in HDFS-14351, it is better to provide a mock > MiniDFSCluster/Namenodes as an option to support some test cases and reduce the > time cost.
[jira] [Commented] (HDFS-15169) RBF: Router FSCK should consider the mount table
[ https://issues.apache.org/jira/browse/HDFS-15169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17070339#comment-17070339 ] Xiaoqiao He commented on HDFS-15169: v005 tries to fix the findbugs and checkstyle warnings, and updates {{TestRouterFsck}} so that every mount point name differs from its destination name, to cover the above case. > RBF: Router FSCK should consider the mount table > > > Key: HDFS-15169 > URL: https://issues.apache.org/jira/browse/HDFS-15169 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: rbf >Reporter: Akira Ajisaka >Assignee: Xiaoqiao He >Priority: Major > Attachments: HDFS-15169.001.patch, HDFS-15169.002.patch, > HDFS-15169.003.patch, HDFS-15169.004.patch, HDFS-15169.005.patch > > > HDFS-13989 implemented FSCK to DFSRouter, however, it just redirects the > requests to all the active downstream NameNodes for now. The DFSRouter should > consider the mount table when redirecting the requests.
[jira] [Updated] (HDFS-15169) RBF: Router FSCK should consider the mount table
[ https://issues.apache.org/jira/browse/HDFS-15169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaoqiao He updated HDFS-15169: --- Attachment: HDFS-15169.005.patch > RBF: Router FSCK should consider the mount table > > > Key: HDFS-15169 > URL: https://issues.apache.org/jira/browse/HDFS-15169 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: rbf >Reporter: Akira Ajisaka >Assignee: Xiaoqiao He >Priority: Major > Attachments: HDFS-15169.001.patch, HDFS-15169.002.patch, > HDFS-15169.003.patch, HDFS-15169.004.patch, HDFS-15169.005.patch > > > HDFS-13989 implemented FSCK to DFSRouter, however, it just redirects the > requests to all the active downstream NameNodes for now. The DFSRouter should > consider the mount table when redirecting the requests.
[jira] [Commented] (HDFS-15051) RBF: Propose to revoke WRITE MountTableEntry privilege to super user only
[ https://issues.apache.org/jira/browse/HDFS-15051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17070329#comment-17070329 ] Hadoop QA commented on HDFS-15051: -- | (/) *{color:green}+1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 51s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 40s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 31s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 19s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 33s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 15m 30s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 7s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 28s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 29s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 14s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 28s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 15m 15s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 23s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 25s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 29s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. {color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 26s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} | | {color:black}{color} | {color:black} {color} | {color:black} 67m 32s{color} | {color:black} {color} | \\ \\ || Subsystem || Report/Notes || | Docker | Client=19.03.8 Server=19.03.8 Image:yetus/hadoop:4454c6d14b7 | | JIRA Issue | HDFS-15051 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12998139/HDFS-15051.009.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux b9683b6be540 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 696a663 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_242 | | findbugs | v3.1.0-RC1 | | Test Results | https://builds.apache.org/job/PreCommit-HDFS-Build/29054/testReport/ | | Max. process+thread count | 3161 (vs. ulimit of 5500) | | modules | C: hadoop-hdfs-project/hadoop-hdfs-rbf U: hadoop-hdfs-project/hadoop-hdfs-rbf | | Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/29054/console | | Powered by | Apache Yetus 0.8.0 http://yetus.apache.org | This message was automatically generated. > RBF: Propose to revoke WRITE MountTableEntry privilege to super user only > - > >
[jira] [Commented] (HDFS-14385) RBF: Optimize MiniRouterDFSCluster with optional light weight MiniDFSCluster
[ https://issues.apache.org/jira/browse/HDFS-14385?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17070325#comment-17070325 ] Hadoop QA commented on HDFS-14385: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 50s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 2 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 20s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 30s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 19s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 33s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 15m 8s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 9s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 28s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 28s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 25s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 15s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs-rbf: The patch generated 4 new + 0 unchanged - 0 fixed = 4 total (was 0) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 27s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 39s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 11s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 25s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:red}-1{color} | {color:red} unit {color} | {color:red} 8m 45s{color} | {color:red} hadoop-hdfs-rbf in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 28s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 68m 24s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | Failed junit tests | hadoop.fs.contract.router.TestRouterHDFSContractRootDirectorySecure | | | hadoop.fs.contract.router.TestRouterHDFSContractAppendSecure | | | hadoop.fs.contract.router.TestRouterHDFSContractDelegationToken | | | hadoop.fs.contract.router.TestRouterHDFSContractSeekSecure | | | hadoop.fs.contract.router.TestRouterHDFSContractDeleteSecure | | | hadoop.hdfs.server.federation.router.TestRouterWithSecureStartup | | | hadoop.fs.contract.router.TestRouterHDFSContractGetFileStatusSecure | | | hadoop.fs.contract.router.TestRouterHDFSContractRenameSecure | | | hadoop.fs.contract.router.TestRouterHDFSContractOpenSecure | | | hadoop.fs.contract.router.TestRouterHDFSContractConcatSecure | | | hadoop.fs.contract.router.TestRouterHDFSContractMkdirSecure | | | hadoop.fs.contract.router.TestRouterHDFSContractCreateSecure | | | hadoop.fs.contract.router.TestRouterHDFSContractSetTimesSecure | | | hadoop.hdfs.server.federation.router.TestRouterMultiRack | \\ \\ || Subsystem || Report/Notes || | Docker | Client=19.03.8 Server=19.03.8 Image:yetus/hadoop:4454c6d14b7 | | JIRA Issue | HDFS-14385 | | JIRA Patch URL |
[jira] [Commented] (HDFS-15169) RBF: Router FSCK should consider the mount table
[ https://issues.apache.org/jira/browse/HDFS-15169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17070321#comment-17070321 ] Hadoop QA commented on HDFS-15169: -- | (x) *{color:red}-1 overall{color}* | \\ \\ || Vote || Subsystem || Runtime || Comment || | {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 41s{color} | {color:blue} Docker mode activated. {color} | || || || || {color:brown} Prechecks {color} || | {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} | | {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 0s{color} | {color:green} The patch appears to include 1 new or modified test files. {color} | || || || || {color:brown} trunk Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 51s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 33s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 24s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 34s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 52s{color} | {color:green} branch has no errors when building and testing our client artifacts. 
{color} | | {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 1s{color} | {color:green} trunk passed {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 33s{color} | {color:green} trunk passed {color} | || || || || {color:brown} Patch Compile Tests {color} || | {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 28s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 25s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 25s{color} | {color:green} the patch passed {color} | | {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange} 0m 16s{color} | {color:orange} hadoop-hdfs-project/hadoop-hdfs-rbf: The patch generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} | | {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 28s{color} | {color:green} the patch passed {color} | | {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} | | {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 13m 7s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} | | {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 7s{color} | {color:red} hadoop-hdfs-project/hadoop-hdfs-rbf generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} | | {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 28s{color} | {color:green} the patch passed {color} | || || || || {color:brown} Other Tests {color} || | {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 34s{color} | {color:green} hadoop-hdfs-rbf in the patch passed. 
{color} | | {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 31s{color} | {color:green} The patch does not generate ASF License warnings. {color} | | {color:black}{color} | {color:black} {color} | {color:black} 62m 4s{color} | {color:black} {color} | \\ \\ || Reason || Tests || | FindBugs | module:hadoop-hdfs-project/hadoop-hdfs-rbf | | | org.apache.hadoop.hdfs.server.federation.router.RouterFsck.fsck() makes inefficient use of keySet iterator instead of entrySet iterator At RouterFsck.java:keySet iterator instead of entrySet iterator At RouterFsck.java:[line 145] | \\ \\ || Subsystem || Report/Notes || | Docker | Client=19.03.8 Server=19.03.8 Image:yetus/hadoop:4454c6d14b7 | | JIRA Issue | HDFS-15169 | | JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12998138/HDFS-15169.004.patch | | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient findbugs checkstyle | | uname | Linux 9bed04590b32 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux | | Build tool | maven | | Personality | /testptch/patchprocess/precommit/personality/provided.sh | | git revision | trunk / 696a663 | | maven | version: Apache Maven 3.3.9 | | Default Java | 1.8.0_242 | | findbugs | v3.1.0-RC1 | | checkstyle |
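The findbugs warning reported above (`RouterFsck.fsck()` "makes inefficient use of keySet iterator instead of entrySet iterator") refers to a standard Java map-iteration pattern. A minimal self-contained illustration of the flagged pattern and the suggested fix, not the `RouterFsck` code itself:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Minimal illustration of the findbugs warning flagged above: iterating
// keySet() and calling get(k) performs a redundant map lookup per key,
// while iterating entrySet() reads each value directly.
public class EntrySetDemo {

    // Flagged pattern: one extra map lookup on every iteration.
    static int sumViaKeySet(Map<String, Integer> m) {
        int sum = 0;
        for (String k : m.keySet()) {
            sum += m.get(k);
        }
        return sum;
    }

    // What findbugs suggests: iterate entries, no extra lookups.
    static int sumViaEntrySet(Map<String, Integer> m) {
        int sum = 0;
        for (Map.Entry<String, Integer> e : m.entrySet()) {
            sum += e.getValue();
        }
        return sum;
    }

    public static void main(String[] args) {
        Map<String, Integer> m = new LinkedHashMap<>();
        m.put("a", 1);
        m.put("b", 2);
        // Both produce the same result; entrySet() just does it with
        // half the hash lookups.
        System.out.println(sumViaKeySet(m) + " " + sumViaEntrySet(m));
    }
}
```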
[jira] [Commented] (HDFS-13183) Standby NameNode process getBlocks request to reduce Active load
[ https://issues.apache.org/jira/browse/HDFS-13183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17070299#comment-17070299 ] Xiaoqiao He commented on HDFS-13183: [~ayushtkn] I totally agree that SBN read/Observer is the more common and interesting feature, and it is also effective in reducing the ANN load from #getBlocks. IMO, redirecting #getBlocks requests to the Standby goes a step further: if the SBN read feature is enabled, we can also reduce the load on the Observer, which is the core role on the whole read/write access path. On the other hand, it is also an option for end users who have not enabled the SBN read feature. Thanks. > Standby NameNode process getBlocks request to reduce Active load > > > Key: HDFS-13183 > URL: https://issues.apache.org/jira/browse/HDFS-13183 > Project: Hadoop HDFS > Issue Type: Improvement > Components: balancer mover, namenode >Reporter: Xiaoqiao He >Assignee: Xiaoqiao He >Priority: Major > Attachments: HDFS-13183-trunk.001.patch, HDFS-13183-trunk.002.patch, > HDFS-13183-trunk.003.patch, HDFS-13183.004.patch, HDFS-13183.005.patch > > > The performance of the Active NameNode can be impacted when {{Balancer}} requests > #getBlocks, since querying the blocks of overly full DNs is extremely > inefficient currently. The main reason is that {{NameNodeRpcServer#getBlocks}} > holds the read lock for a long time. In the extreme case, all handlers of the Active > NameNode RPC server are occupied by one {{NameNodeRpcServer#getBlocks}} reader > and other write operation calls, so the Active NameNode enters a state of false > death for seconds or even minutes. > Similar performance concerns about the Balancer have been reported in HDFS-9412, > HDFS-7967, etc. > If the Standby NameNode can shoulder the heavy #getBlocks burden, it can speed up > the balancing progress and reduce the performance impact on the Active NameNode.
[jira] [Commented] (HDFS-15051) RBF: Propose to revoke WRITE MountTableEntry privilege to super user only
[ https://issues.apache.org/jira/browse/HDFS-15051?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17070292#comment-17070292 ] Xiaoqiao He commented on HDFS-15051: Thanks [~elgoiri],[~ayushtkn] for your reviews. v009 tries to improve readability and adds more unit tests to cover the logic changes. PTAL. > RBF: Propose to revoke WRITE MountTableEntry privilege to super user only > - > > Key: HDFS-15051 > URL: https://issues.apache.org/jira/browse/HDFS-15051 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: rbf >Reporter: Xiaoqiao He >Assignee: Xiaoqiao He >Priority: Major > Attachments: HDFS-15051.001.patch, HDFS-15051.002.patch, > HDFS-15051.003.patch, HDFS-15051.004.patch, HDFS-15051.005.patch, > HDFS-15051.006.patch, HDFS-15051.007.patch, HDFS-15051.008.patch, > HDFS-15051.009.patch > > > The current permission checker of #MountTableStoreImpl is not very strict. > In some cases, any user can add/update/remove a MountTableEntry without the > expected permission check. > The following code segment tries to check permission when operating on a > MountTableEntry; however, the mountTable object comes from the Client/RouterAdmin > ({{MountTable mountTable = request.getEntry();}}), so a user can pass any mode, > which can bypass the permission checker.
> {code:java} > public void checkPermission(MountTable mountTable, FsAction access) > throws AccessControlException { > if (isSuperUser()) { > return; > } > FsPermission mode = mountTable.getMode(); > if (getUser().equals(mountTable.getOwnerName()) > && mode.getUserAction().implies(access)) { > return; > } > if (isMemberOfGroup(mountTable.getGroupName()) > && mode.getGroupAction().implies(access)) { > return; > } > if (!getUser().equals(mountTable.getOwnerName()) > && !isMemberOfGroup(mountTable.getGroupName()) > && mode.getOtherAction().implies(access)) { > return; > } > throw new AccessControlException( > "Permission denied while accessing mount table " > + mountTable.getSourcePath() > + ": user " + getUser() + " does not have " + access.toString() > + " permissions."); > } > {code} > I just propose to restrict the WRITE MountTableEntry privilege to the super user only.
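Since the client controls the mode in the submitted MountTable entry, the per-entry check quoted above can always be satisfied (e.g. by setting mode 777 so that `getOtherAction().implies(access)` holds). The proposal therefore reduces to a superuser-only gate for write operations. A rough sketch under stated assumptions: the superuser name "hdfs" and the class/method names are illustrative only; the real check would use the Router's configured superuser and UserGroupInformation:

```java
// Sketch of the proposed restriction: write operations on the mount table
// (add/update/remove) are allowed for the superuser only, regardless of the
// mode field supplied by the client. "hdfs" is an assumed superuser name.
public class MountTableWriteGate {

    static boolean isSuperUser(String user) {
        return "hdfs".equals(user);
    }

    /** Returns true iff the user may write (add/update/remove) a mount entry. */
    public static boolean canWriteMountTable(String user) {
        // The client-supplied mode is deliberately ignored here: it cannot
        // be trusted, because the client can set it to anything (e.g. 777).
        return isSuperUser(user);
    }

    public static void main(String[] args) {
        System.out.println(canWriteMountTable("hdfs"));   // superuser
        System.out.println(canWriteMountTable("alice"));  // ordinary user
    }
}
```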
[jira] [Updated] (HDFS-15051) RBF: Propose to revoke WRITE MountTableEntry privilege to super user only
[ https://issues.apache.org/jira/browse/HDFS-15051?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaoqiao He updated HDFS-15051: --- Attachment: HDFS-15051.009.patch > RBF: Propose to revoke WRITE MountTableEntry privilege to super user only > - > > Key: HDFS-15051 > URL: https://issues.apache.org/jira/browse/HDFS-15051 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: rbf >Reporter: Xiaoqiao He >Assignee: Xiaoqiao He >Priority: Major > Attachments: HDFS-15051.001.patch, HDFS-15051.002.patch, > HDFS-15051.003.patch, HDFS-15051.004.patch, HDFS-15051.005.patch, > HDFS-15051.006.patch, HDFS-15051.007.patch, HDFS-15051.008.patch, > HDFS-15051.009.patch > > > The current permission checker of #MountTableStoreImpl is not very strict. > In some cases, any user can add/update/remove a MountTableEntry without the > expected permission check. > The following code segment tries to check permission when operating on a > MountTableEntry; however, the mountTable object comes from the Client/RouterAdmin > ({{MountTable mountTable = request.getEntry();}}), so a user can pass any mode, > which can bypass the permission checker.
> {code:java}
> public void checkPermission(MountTable mountTable, FsAction access)
>     throws AccessControlException {
>   if (isSuperUser()) {
>     return;
>   }
>   FsPermission mode = mountTable.getMode();
>   if (getUser().equals(mountTable.getOwnerName())
>       && mode.getUserAction().implies(access)) {
>     return;
>   }
>   if (isMemberOfGroup(mountTable.getGroupName())
>       && mode.getGroupAction().implies(access)) {
>     return;
>   }
>   if (!getUser().equals(mountTable.getOwnerName())
>       && !isMemberOfGroup(mountTable.getGroupName())
>       && mode.getOtherAction().implies(access)) {
>     return;
>   }
>   throw new AccessControlException(
>       "Permission denied while accessing mount table "
>           + mountTable.getSourcePath()
>           + ": user " + getUser() + " does not have " + access.toString()
>           + " permissions.");
> }
> {code}
> I propose restricting the WRITE MountTableEntry privilege to the super user only. -- This message was sent by Atlassian Jira (v8.3.4#803005) - To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
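The bypass described above can be sketched in a few lines. This is a toy illustration, not the RBF API: the mode that checkPermission consults comes from the client-supplied MountTable, so a caller who is neither owner nor group member can simply send mode 777 and satisfy the "other" branch. All names below are stand-ins.

```java
public class ModeBypassDemo {
    // Simplified stand-in for FsPermission#getOtherAction().implies(WRITE).
    static boolean otherImpliesWrite(int mode) {
        return (mode & 02) != 0; // write bit of the "other" permission triplet
    }

    // Mirrors the last branch of checkPermission for a non-owner, non-group caller.
    static boolean canWrite(boolean isOwner, boolean inGroup, int clientSuppliedMode) {
        return !isOwner && !inGroup && otherImpliesWrite(clientSuppliedMode);
    }

    public static void main(String[] args) {
        // An arbitrary user attaches mode 777 to the request and gains write access.
        System.out.println(canWrite(false, false, 0777)); // true: checker bypassed
        System.out.println(canWrite(false, false, 0755)); // false: other has no write bit
    }
}
```

Because the checked mode is attacker-controlled, no mode-based branch can be trusted here, which is why the proposal falls back to the one check the client cannot influence: isSuperUser().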
[jira] [Commented] (HDFS-15169) RBF: Router FSCK should consider the mount table
[ https://issues.apache.org/jira/browse/HDFS-15169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17070291#comment-17070291 ] Xiaoqiao He commented on HDFS-15169: Thanks [~ayushtkn], great catch. It is indeed a bug that we do not resolve the fsck source path parameter to its destination, which causes wrong results. v004 also tries to fix it. Please take another look. Thanks. > RBF: Router FSCK should consider the mount table > > > Key: HDFS-15169 > URL: https://issues.apache.org/jira/browse/HDFS-15169 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: rbf >Reporter: Akira Ajisaka >Assignee: Xiaoqiao He >Priority: Major > Attachments: HDFS-15169.001.patch, HDFS-15169.002.patch, > HDFS-15169.003.patch, HDFS-15169.004.patch > > > HDFS-13989 implemented FSCK in the DFSRouter; however, it currently just redirects > the requests to all the active downstream NameNodes. The DFSRouter should > consider the mount table when redirecting the requests.
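The bug discussed above is about rewriting a router-side source path into the downstream namespace path before contacting a NameNode. A toy longest-prefix mount-table lookup conveys the idea; the class and method names here are illustrative assumptions, not the actual RBF resolver API.

```java
import java.util.TreeMap;

public class MountResolver {
    // Longest-prefix match over mount points, which is conceptually what
    // RBF's mount table resolution does.
    static String resolve(TreeMap<String, String> mounts, String src) {
        // Descending key order visits "/data/archive" before "/data",
        // so the most specific mount entry wins.
        for (String mount : mounts.descendingKeySet()) {
            if (src.equals(mount) || src.startsWith(mount + "/")) {
                return mounts.get(mount) + src.substring(mount.length());
            }
        }
        return src; // no mount entry: pass the path through unchanged
    }

    public static void main(String[] args) {
        TreeMap<String, String> mounts = new TreeMap<>();
        mounts.put("/data", "/ns0/data");
        mounts.put("/data/archive", "/ns1/cold");
        System.out.println(resolve(mounts, "/data/archive/2020")); // /ns1/cold/2020
        System.out.println(resolve(mounts, "/data/hot/a"));        // /ns0/data/hot/a
    }
}
```

Without this rewrite, fsck would ask a downstream NameNode about a path that only exists in the router's namespace, producing the wrong result the comment describes.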
[jira] [Updated] (HDFS-15169) RBF: Router FSCK should consider the mount table
[ https://issues.apache.org/jira/browse/HDFS-15169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaoqiao He updated HDFS-15169: --- Attachment: HDFS-15169.004.patch > RBF: Router FSCK should consider the mount table > > > Key: HDFS-15169 > URL: https://issues.apache.org/jira/browse/HDFS-15169 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: rbf >Reporter: Akira Ajisaka >Assignee: Xiaoqiao He >Priority: Major > Attachments: HDFS-15169.001.patch, HDFS-15169.002.patch, > HDFS-15169.003.patch, HDFS-15169.004.patch > > > HDFS-13989 implemented FSCK in the DFSRouter; however, it currently just redirects > the requests to all the active downstream NameNodes. The DFSRouter should > consider the mount table when redirecting the requests.
[jira] [Commented] (HDFS-14385) RBF: Optimize MiniRouterDFSCluster with optional light weight MiniDFSCluster
[ https://issues.apache.org/jira/browse/HDFS-14385?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17070288#comment-17070288 ] Xiaoqiao He commented on HDFS-14385: Thanks [~elgoiri], v003 tries to fix the checkstyle issues and bugs. Pending Jenkins. > RBF: Optimize MiniRouterDFSCluster with optional light weight MiniDFSCluster > > > Key: HDFS-14385 > URL: https://issues.apache.org/jira/browse/HDFS-14385 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: rbf >Reporter: Xiaoqiao He >Assignee: Xiaoqiao He >Priority: Major > Attachments: HDFS-14385-HDFS-13891.001.patch, HDFS-14385.002.patch, > HDFS-14385.003.patch > > > MiniRouterDFSCluster mimics a federated HDFS cluster with routers to support RBF > tests. It starts a MiniDFSCluster with the complete set of HDFS roles, which has a > significant time cost. As discussed in HDFS-14351, it would be better to provide a > mock MiniDFSCluster/Namenodes as an option, to support some test cases while > reducing the time cost.
[jira] [Updated] (HDFS-14385) RBF: Optimize MiniRouterDFSCluster with optional light weight MiniDFSCluster
[ https://issues.apache.org/jira/browse/HDFS-14385?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Xiaoqiao He updated HDFS-14385: --- Attachment: HDFS-14385.003.patch > RBF: Optimize MiniRouterDFSCluster with optional light weight MiniDFSCluster > > > Key: HDFS-14385 > URL: https://issues.apache.org/jira/browse/HDFS-14385 > Project: Hadoop HDFS > Issue Type: Sub-task > Components: rbf >Reporter: Xiaoqiao He >Assignee: Xiaoqiao He >Priority: Major > Attachments: HDFS-14385-HDFS-13891.001.patch, HDFS-14385.002.patch, > HDFS-14385.003.patch > > > MiniRouterDFSCluster mimics a federated HDFS cluster with routers to support RBF > tests. It starts a MiniDFSCluster with the complete set of HDFS roles, which has a > significant time cost. As discussed in HDFS-14351, it would be better to provide a > mock MiniDFSCluster/Namenodes as an option, to support some test cases while > reducing the time cost.
[jira] [Commented] (HDFS-15248) Make the maximum number of ACLs entries configurable
[ https://issues.apache.org/jira/browse/HDFS-15248?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17070251#comment-17070251 ] Wei-Chiu Chuang commented on HDFS-15248: Thanks for offering the patch! I've had customers ask to extend the ACL entry limit before. I'm not sure why 32 specifically, but here are a few reasons why it's probably not a good idea to extend it further:
(1) Manageability. Once you have more than a dozen ACLs per file, they become hard to manage and error-prone.
(2) NameNode heap size. Each inode occupies more bytes of heap, and in a large cluster with hundreds of millions of files the memory pressure becomes even worse.
(3) Serialization cost. We currently serialize the files under a directory into a protobuf message, which is limited to 64 MB by default, and as a result we limit the maximum number of files per directory to 1 million. Allowing more ACL entries per file means more serialized bytes per file, so you may hit the protobuf message limit for a large directory well before 1 million files.
For these reasons I usually recommend that users adopt external authorization providers like Sentry or Ranger to delegate the authorization work to a separate entity. > Make the maximum number of ACLs entries configurable > > > Key: HDFS-15248 > URL: https://issues.apache.org/jira/browse/HDFS-15248 > Project: Hadoop HDFS > Issue Type: Improvement > Components: namenode >Reporter: Yang Yun >Assignee: Yang Yun >Priority: Minor > Attachments: HDFS-15248.001.patch, HDFS-15248.patch > > > For a big cluster, the hardcoded maximum of 32 ACL entries is not enough; make > it configurable.
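Reason (3) above can be made concrete with back-of-the-envelope arithmetic. This sketch only illustrates the trade-off; the per-inode and per-ACL-entry byte costs are assumed figures for illustration, not measured fsimage numbers.

```java
public class AclSizeEstimate {
    static final long PROTOBUF_LIMIT = 64L * 1024 * 1024; // 64 MiB default limit

    // How many files fit under the protobuf limit if each inode costs
    // baseBytes plus aclEntries * bytesPerEntry when serialized.
    static long filesUnderLimit(long baseBytes, int aclEntries, long bytesPerEntry) {
        return PROTOBUF_LIMIT / (baseBytes + aclEntries * bytesPerEntry);
    }

    public static void main(String[] args) {
        // Assumed costs: ~100 bytes per inode, ~8 bytes per serialized ACL entry.
        // With 32 entries the directory cap stays comfortably high; with 1024
        // entries per file it collapses far below the 1-million-file limit.
        System.out.println(filesUnderLimit(100, 32, 8));
        System.out.println(filesUnderLimit(100, 1024, 8));
    }
}
```

The point is that the per-file ACL limit and the per-directory file limit share one serialization budget, so raising one quietly lowers the safe ceiling of the other.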
[jira] [Commented] (HDFS-15247) RBF: Provide Non DFS Used per DataNode in DataNode UI
[ https://issues.apache.org/jira/browse/HDFS-15247?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17070232#comment-17070232 ] Lisheng Sun commented on HDFS-15247: I added a PR for this issue. > RBF: Provide Non DFS Used per DataNode in DataNode UI > - > > Key: HDFS-15247 > URL: https://issues.apache.org/jira/browse/HDFS-15247 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Ayush Saxena >Assignee: Lisheng Sun >Priority: Major >
[jira] [Commented] (HDFS-15239) Add button to go to the parent directory in the explorer
[ https://issues.apache.org/jira/browse/HDFS-15239?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17070227#comment-17070227 ] Hadoop QA commented on HDFS-15239:
| (/) *{color:green}+1 overall{color}* |
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 43s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 34m 27s{color} | {color:green} branch has no errors when building and testing our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 14m 13s{color} | {color:green} patch has no errors when building and testing our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 27s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 52m 9s{color} | {color:black} {color} |
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.8 Server=19.03.8 Image:yetus/hadoop:4454c6d14b7 |
| JIRA Issue | HDFS-15239 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/12998134/HDFS-15239.004.patch |
| Optional Tests | dupname asflicense shadedclient |
| uname | Linux ddd35de1315b 4.15.0-74-generic #84-Ubuntu SMP Thu Dec 19 08:06:28 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 696a663 |
| maven | version: Apache Maven 3.3.9 |
| Max. process+thread count | 311 (vs. ulimit of 5500) |
| modules | C: hadoop-hdfs-project/hadoop-hdfs hadoop-hdfs-project/hadoop-hdfs-rbf U: hadoop-hdfs-project |
| Console output | https://builds.apache.org/job/PreCommit-HDFS-Build/29051/console |
| Powered by | Apache Yetus 0.8.0 http://yetus.apache.org |
This message was automatically generated.
> Add button to go to the parent directory in the explorer
> Key: HDFS-15239
> URL: https://issues.apache.org/jira/browse/HDFS-15239
> Project: Hadoop HDFS
> Issue Type: Improvement
> Reporter: Íñigo Goiri
> Assignee: hemanthboyina
> Priority: Major
> Attachments: 15239.after.JPG, 15239.before.JPG, HDFS-15239.001.patch, HDFS-15239.002.patch, HDFS-15239.003.patch, HDFS-15239.004.patch, screenshot-1.png
>
> Currently, when using the HDFS explorer page, it is easy to go into a folder.
> However, to go back one has to use the browser back button (if one is coming from that folder) or to edit the path by hand.
> It would be nice to have the typical button to go to the parent.
[jira] [Updated] (HDFS-15239) Add button to go to the parent directory in the explorer
[ https://issues.apache.org/jira/browse/HDFS-15239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] hemanthboyina updated HDFS-15239: - Attachment: HDFS-15239.004.patch > Add button to go to the parent directory in the explorer > > > Key: HDFS-15239 > URL: https://issues.apache.org/jira/browse/HDFS-15239 > Project: Hadoop HDFS > Issue Type: Improvement >Reporter: Íñigo Goiri >Assignee: hemanthboyina >Priority: Major > Attachments: 15239.after.JPG, 15239.before.JPG, HDFS-15239.001.patch, > HDFS-15239.002.patch, HDFS-15239.003.patch, HDFS-15239.004.patch, > screenshot-1.png > > > Currently, when using the HDFS explorer page, it is easy to go into a folder. > However, to go back one has to use the browser back button (if one is coming > from that folder) or to edit the path by hand. > It would be nice to have the typical button to go to the parent.
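The parent-directory button described above only needs the parent of the currently browsed path. The real explorer computes this in JavaScript; the Java version below is purely illustrative of the edge cases involved (trailing slash, first-level directory, root).

```java
public class ParentDir {
    // Returns the parent of an absolute HDFS-style path.
    static String parent(String path) {
        if (path.equals("/")) {
            return "/"; // root is its own parent
        }
        // Ignore a trailing slash so "/a/b/" and "/a/b" behave the same.
        int end = path.endsWith("/") ? path.length() - 1 : path.length();
        int slash = path.lastIndexOf('/', end - 1);
        return slash <= 0 ? "/" : path.substring(0, slash);
    }

    public static void main(String[] args) {
        System.out.println(parent("/user/hdfs/dir")); // /user/hdfs
        System.out.println(parent("/user"));          // /
        System.out.println(parent("/"));              // /
    }
}
```

Handling root as its own parent matters for the UI: the button can stay enabled everywhere without ever producing an invalid path.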