[jira] [Work logged] (HDFS-16085) Move the getPermissionChecker out of the read lock

2021-06-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16085?focusedWorklogId=614854&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-614854
 ]

ASF GitHub Bot logged work on HDFS-16085:
-

Author: ASF GitHub Bot
Created on: 25/Jun/21 05:48
Start Date: 25/Jun/21 05:48
Worklog Time Spent: 10m 
  Work Description: jojochuang commented on pull request #3134:
URL: https://github.com/apache/hadoop/pull/3134#issuecomment-868222185


   Sorry I was late, but this looks good. Thanks a lot!


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 614854)
Time Spent: 1h 40m  (was: 1.5h)

> Move the getPermissionChecker out of the read lock
> --
>
> Key: HDFS-16085
> URL: https://issues.apache.org/jira/browse/HDFS-16085
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: tomscut
>Assignee: tomscut
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.2
>
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> Move the getPermissionChecker out of the read lock in 
> NamenodeFsck#getBlockLocations() since the operation does not need to be 
> locked.
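
As an illustration of the change under review, here is a minimal before/after sketch; the fetchBlockLocations() helper and the surrounding NamenodeFsck code are simplifying assumptions, not the actual patch.

{code:java}
// Before: the permission checker is built while holding the FSNamesystem read lock.
fsn.readLock();
try {
  FSPermissionChecker pc = fsn.getPermissionChecker();
  blocks = fetchBlockLocations(pc, path);  // hypothetical helper for brevity
} finally {
  fsn.readUnlock();
}

// After: build the checker first; only the namespace lookup runs under the lock.
FSPermissionChecker pc = fsn.getPermissionChecker();
fsn.readLock();
try {
  blocks = fetchBlockLocations(pc, path);  // hypothetical helper for brevity
} finally {
  fsn.readUnlock();
}
{code}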



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16039) RBF: Some indicators of RBFMetrics count inaccurately

2021-06-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16039?focusedWorklogId=614836&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-614836
 ]

ASF GitHub Bot logged work on HDFS-16039:
-

Author: ASF GitHub Bot
Created on: 25/Jun/21 05:08
Start Date: 25/Jun/21 05:08
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3086:
URL: https://github.com/apache/hadoop/pull/3086#issuecomment-868207854


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 38s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 4 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  11m 22s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  21m 13s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  22m 51s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |  19m 14s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   3m 51s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |  25m 12s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   7m 44s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   8m  0s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |  35m 10s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  47m 49s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 27s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |  20m 33s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  21m 21s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |  21m 21s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  19m 15s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |  19m 15s |  |  the patch passed  |
   | -1 :x: |  blanks  |   0m  0s | 
[/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3086/6/artifact/out/blanks-eol.txt)
 |  The patch has 8 line(s) that end in blanks. Use git apply --whitespace=fix 
<>. Refer https://git-scm.com/docs/git-apply  |
   | +1 :green_heart: |  checkstyle  |   3m 49s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |  21m 41s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   7m 52s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   8m  2s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |  34m 39s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  46m 59s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 780m 30s | 
[/patch-unit-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3086/6/artifact/out/patch-unit-root.txt)
 |  root in the patch passed.  |
   | -1 :x: |  asflicense  |   1m 41s | 
[/results-asflicense.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3086/6/artifact/out/results-asflicense.txt)
 |  The patch generated 1 ASF License warnings.  |
   |  |   | 1108m  1s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.tools.dynamometer.TestDynamometerInfra |
   |   | hadoop.hdfs.server.federation.router.TestRouterRpcMultiDestination |
   |   | hadoop.hdfs.server.federation.router.TestRouterRpc |
   |   | hadoop.hdfs.server.federation.router.TestRouterClientRejectOverload |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3086/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3086 |
   | Optional Tests | dupname asflicense codespell compile javac javadoc 
mvninstall mvnsite unit shadedclient spotbugs checkstyle |
   | uname | Linux cc2a45cce6c8 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 

[jira] [Commented] (HDFS-13123) RBF: Add a balancer tool to move data across subcluster

2021-06-24 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-13123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17369236#comment-17369236
 ] 

Hadoop QA commented on HDFS-13123:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime ||  Logfile || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
58s{color} | {color:blue}{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} || ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green}{color} | {color:green} No case conflicting files 
found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green}{color} | {color:green} The patch does not contain any 
@author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red}{color} | {color:red} The patch doesn't appear to 
include any new or modified tests. Please justify why no new tests are needed 
for this patch. Also please list what manual steps were performed to verify 
this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} || ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
43s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
42s{color} | {color:green}{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
39s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
42s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 16s{color} | {color:green}{color} | {color:green} branch has no errors when 
building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
41s{color} | {color:green}{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue} 17m  
8s{color} | {color:blue}{color} | {color:blue} Both FindBugs and SpotBugs are 
enabled, using SpotBugs. {color} |
| {color:green}+1{color} | {color:green} spotbugs {color} | {color:green}  1m 
17s{color} | {color:green}{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} || ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
24s{color} | 
{color:red}https://ci-hadoop.apache.org/job/PreCommit-HDFS-Build/646/artifact/out/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs-rbf.txt{color}
 | {color:red} hadoop-hdfs-rbf in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
28s{color} | 
{color:red}https://ci-hadoop.apache.org/job/PreCommit-HDFS-Build/646/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-rbf-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt{color}
 | {color:red} hadoop-hdfs-rbf in the patch failed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 28s{color} 
| 
{color:red}https://ci-hadoop.apache.org/job/PreCommit-HDFS-Build/646/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-rbf-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt{color}
 | {color:red} hadoop-hdfs-rbf in the patch failed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
25s{color} | 
{color:red}https://ci-hadoop.apache.org/job/PreCommit-HDFS-Build/646/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-rbf-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt{color}
 | {color:red} hadoop-hdfs-rbf in the patch failed with JDK Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 25s{color} 
| 
{color:red}https://ci-hadoop.apache.org/job/PreCommit-HDFS-Build/646/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-rbf-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt{color}
 | {color:red} hadoop-hdfs-rbf in the patch failed with JDK 

[jira] [Updated] (HDFS-16086) Add volume information to datanode log for tracing

2021-06-24 Thread tomscut (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

tomscut updated HDFS-16086:
---
Attachment: Received.jpg

> Add volume information to datanode log for tracing
> --
>
> Key: HDFS-16086
> URL: https://issues.apache.org/jira/browse/HDFS-16086
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: tomscut
>Assignee: tomscut
>Priority: Minor
>  Labels: pull-request-available
> Attachments: CreatingRbw.jpg, Received.jpg
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> To keep track of which volume a block is on, we can add the volume 
> information to the datanode log.
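
As a hedged illustration of the idea, the sketch below appends the volume to an existing style of datanode log line; the variable names and message format are assumptions, not the actual patch.

{code:java}
// Sketch only: a ReplicaInfo keeps a reference to the volume it lives on, so the
// receive/create log lines can include it and a block can be traced to a disk.
FsVolumeSpi volume = replicaInfo.getVolume();
LOG.info("Receiving block {} src: {} dest: {} volume: {}",
    block, remoteAddress, localAddress, volume);
{code}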



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16086) Add volume information to datanode log for tracing

2021-06-24 Thread tomscut (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

tomscut updated HDFS-16086:
---
Attachment: CreatingRbw.jpg

> Add volume information to datanode log for tracing
> --
>
> Key: HDFS-16086
> URL: https://issues.apache.org/jira/browse/HDFS-16086
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: tomscut
>Assignee: tomscut
>Priority: Minor
>  Labels: pull-request-available
> Attachments: CreatingRbw.jpg
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> To keep track of which volume a block is on, we can add the volume 
> information to the datanode log.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-16067) Support Append API in NNThroughputBenchmark

2021-06-24 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17369226#comment-17369226
 ] 

Hadoop QA commented on HDFS-16067:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime ||  Logfile || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
29s{color} | {color:blue}{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} || ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green}{color} | {color:green} No case conflicting files 
found. {color} |
| {color:blue}0{color} | {color:blue} markdownlint {color} | {color:blue}  0m  
0s{color} | {color:blue}{color} | {color:blue} markdownlint was not available. 
{color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green}{color} | {color:green} The patch does not contain any 
@author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m  
0s{color} | {color:green}{color} | {color:green} The patch appears to include 1 
new or modified test files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} || ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
43s{color} | {color:blue}{color} | {color:blue} Maven dependency ordering for 
branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 22m 
 9s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 22m 
32s{color} | {color:green}{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 19m  
0s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
57s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m  
1s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
22m 51s{color} | {color:green}{color} | {color:green} branch has no errors when 
building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m  
5s{color} | {color:green}{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m  
9s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue} 33m 
51s{color} | {color:blue}{color} | {color:blue} Both FindBugs and SpotBugs are 
enabled, using SpotBugs. {color} |
| {color:green}+1{color} | {color:green} spotbugs {color} | {color:green}  5m 
47s{color} | {color:green}{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} || ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
23s{color} | {color:blue}{color} | {color:blue} Maven dependency ordering for 
patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
11s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 21m 
47s{color} | {color:green}{color} | {color:green} the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 21m 
47s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 19m  
2s{color} | {color:green}{color} | {color:green} the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 19m  
2s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  3m 
59s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m  
0s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green}{color} | {color:green} The patch has no whitespace 
issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient 

[jira] [Commented] (HDFS-15294) Federation balance tool

2021-06-24 Thread panlijie (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17369221#comment-17369221
 ] 

panlijie commented on HDFS-15294:
-

Added HDFS-16087 (RBF balance process is stuck at DisableWrite stage); Eric Yin 
will commit it.

> Federation balance tool
> ---
>
> Key: HDFS-15294
> URL: https://issues.apache.org/jira/browse/HDFS-15294
> Project: Hadoop HDFS
>  Issue Type: New Feature
>Reporter: Jinglun
>Assignee: Jinglun
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: BalanceProcedureScheduler.png, HDFS-15294.001.patch, 
> HDFS-15294.002.patch, HDFS-15294.003.patch, HDFS-15294.003.reupload.patch, 
> HDFS-15294.004.patch, HDFS-15294.005.patch, HDFS-15294.006.patch, 
> HDFS-15294.007.patch, distcp-balance.pdf, distcp-balance.v2.pdf
>
>
> This jira introduces a new HDFS federation balance tool to balance data 
> across different federation namespaces. It uses Distcp to copy data from the 
> source path to the target path.
> The process is:
>  1. Use distcp and snapshot diff to sync data between src and dst until they 
> are the same.
>  2. Update the mount table in the Router if RBF mode is specified.
>  3. Deal with the src data: move it to trash, delete it, or skip it.
> The design of fedbalance tool comes from the discussion in HDFS-15087.
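
For orientation, a rough outline of the three-step flow described above; the method names are hypothetical placeholders, not the real fedbalance API.

{code:java}
import org.apache.hadoop.fs.Path;

public class FedBalanceOutline {
  public void balance(Path src, Path dst, boolean rbfMode) {
    // 1. Sync src -> dst with distcp + snapshot diff until they are the same.
    syncWithDistCpAndSnapshotDiff(src, dst);
    // 2. If RBF mode is specified, update the mount table in the Router.
    if (rbfMode) {
      updateMountTable(src, dst);
    }
    // 3. Deal with the source data: move it to trash, delete it, or skip it.
    handleSourceData(src);
  }

  private void syncWithDistCpAndSnapshotDiff(Path src, Path dst) { /* placeholder */ }
  private void updateMountTable(Path src, Path dst) { /* placeholder */ }
  private void handleSourceData(Path src) { /* placeholder */ }
}
{code}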



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Issue Comment Deleted] (HDFS-13123) RBF: Add a balancer tool to move data across subcluster

2021-06-24 Thread panlijie (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-13123?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

panlijie updated HDFS-13123:

Comment: was deleted

(was: Added 
[HDFS-16087|https://issues.apache.org/jira/projects/HDFS/issues/HDFS-16087] 
(RBF balance process is stuck at DisableWrite stage); Eric Yin will commit it.)

> RBF: Add a balancer tool to move data across subcluster 
> 
>
> Key: HDFS-13123
> URL: https://issues.apache.org/jira/browse/HDFS-13123
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Wei Yan
>Assignee: Hemanth Boyina
>Priority: Major
> Attachments: HDFS Router-Based Federation Rebalancer.pdf, 
> HDFS-13123.patch
>
>
> Follow the discussion in HDFS-12615. This Jira is to track effort for 
> building a rebalancer tool, used by router-based federation to move data 
> among subclusters.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-13123) RBF: Add a balancer tool to move data across subcluster

2021-06-24 Thread panlijie (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-13123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17369220#comment-17369220
 ] 

panlijie commented on HDFS-13123:
-

Added [HDFS-16087|https://issues.apache.org/jira/projects/HDFS/issues/HDFS-16087] 
(RBF balance process is stuck at DisableWrite stage); Eric Yin will commit it.

> RBF: Add a balancer tool to move data across subcluster 
> 
>
> Key: HDFS-13123
> URL: https://issues.apache.org/jira/browse/HDFS-13123
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: Wei Yan
>Assignee: Hemanth Boyina
>Priority: Major
> Attachments: HDFS Router-Based Federation Rebalancer.pdf, 
> HDFS-13123.patch
>
>
> Follow the discussion in HDFS-12615. This Jira is to track effort for 
> building a rebalancer tool, used by router-based federation to move data 
> among subclusters.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16086) Add volume information to datanode log for tracing

2021-06-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16086?focusedWorklogId=614821&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-614821
 ]

ASF GitHub Bot logged work on HDFS-16086:
-

Author: ASF GitHub Bot
Created on: 25/Jun/21 03:13
Start Date: 25/Jun/21 03:13
Worklog Time Spent: 10m 
  Work Description: tomscut commented on pull request #3136:
URL: https://github.com/apache/hadoop/pull/3136#issuecomment-868171074


   These UTs work fine locally and are unrelated to this change.
   
   Hi @tasanuma @aajisaka @jojochuang @Hexiaoqiao, could you please review the 
code when you have time? Thanks.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 614821)
Time Spent: 40m  (was: 0.5h)

> Add volume information to datanode log for tracing
> --
>
> Key: HDFS-16086
> URL: https://issues.apache.org/jira/browse/HDFS-16086
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: tomscut
>Assignee: tomscut
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> To keep track of which volume a block is on, we can add the volume 
> information to the datanode log.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16088) Standby NameNode process getLiveDatanodeStorageReport request to reduce Active load

2021-06-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDFS-16088:
--
Labels: pull-request-available  (was: )

> Standby NameNode process getLiveDatanodeStorageReport request to reduce 
> Active load
> ---
>
> Key: HDFS-16088
> URL: https://issues.apache.org/jira/browse/HDFS-16088
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: tomscut
>Assignee: tomscut
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> As with [HDFS-13183|https://issues.apache.org/jira/browse/HDFS-13183], 
> NameNodeConnector#getLiveDatanodeStorageReport() can also send its request to 
> the SNN to reduce the load on the ANN.
> There are two points that need to be mentioned:
> 1. NameNodeConnector#getLiveDatanodeStorageReport() is 
> OperationCategory.UNCHECKED in FSNamesystem, so we can access the SNN directly.
> 2. We can share the same UT (testBalancerRequestSBNWithHA) with 
> NameNodeConnector#getBlocks().
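
A conceptual sketch of why the standby can serve this call; the method body is simplified and partly assumed, not the actual patch.

{code:java}
// Because the operation is OperationCategory.UNCHECKED, the standby NameNode does
// not reject it, so the balancer's NameNodeConnector can send the request to the
// SNN instead of the ANN.
DatanodeStorageReport[] getLiveDatanodeStorageReport() throws IOException {
  checkOperation(OperationCategory.UNCHECKED);  // accepted on the standby
  return getDatanodeStorageReport(DatanodeReportType.LIVE);
}
{code}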



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16088) Standby NameNode process getLiveDatanodeStorageReport request to reduce Active load

2021-06-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16088?focusedWorklogId=614817&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-614817
 ]

ASF GitHub Bot logged work on HDFS-16088:
-

Author: ASF GitHub Bot
Created on: 25/Jun/21 03:03
Start Date: 25/Jun/21 03:03
Worklog Time Spent: 10m 
  Work Description: tomscut opened a new pull request #3140:
URL: https://github.com/apache/hadoop/pull/3140


   JIRA: [HDFS-16088](https://issues.apache.org/jira/browse/HDFS-16088)
   
   As with [HDFS-13183](https://issues.apache.org/jira/browse/HDFS-13183), 
NameNodeConnector#getLiveDatanodeStorageReport() can also send its request to the 
SNN to reduce the load on the ANN.
   
   There are two points that need to be mentioned:
   1. NameNodeConnector#getLiveDatanodeStorageReport() is 
OperationCategory.UNCHECKED in FSNamesystem, so we can access the SNN directly.
   2. We can share the same UT (testBalancerRequestSBNWithHA) with 
NameNodeConnector#getBlocks().
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 614817)
Remaining Estimate: 0h
Time Spent: 10m

> Standby NameNode process getLiveDatanodeStorageReport request to reduce 
> Active load
> ---
>
> Key: HDFS-16088
> URL: https://issues.apache.org/jira/browse/HDFS-16088
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: tomscut
>Assignee: tomscut
>Priority: Minor
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> As with [HDFS-13183|https://issues.apache.org/jira/browse/HDFS-13183], 
> NameNodeConnector#getLiveDatanodeStorageReport() can also send its request to 
> the SNN to reduce the load on the ANN.
> There are two points that need to be mentioned:
> 1. NameNodeConnector#getLiveDatanodeStorageReport() is 
> OperationCategory.UNCHECKED in FSNamesystem, so we can access the SNN directly.
> 2. We can share the same UT (testBalancerRequestSBNWithHA) with 
> NameNodeConnector#getBlocks().



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-16088) Standby NameNode process getLiveDatanodeStorageReport request to reduce Active load

2021-06-24 Thread tomscut (Jira)
tomscut created HDFS-16088:
--

 Summary: Standby NameNode process getLiveDatanodeStorageReport 
request to reduce Active load
 Key: HDFS-16088
 URL: https://issues.apache.org/jira/browse/HDFS-16088
 Project: Hadoop HDFS
  Issue Type: Improvement
Reporter: tomscut
Assignee: tomscut


As with [HDFS-13183|https://issues.apache.org/jira/browse/HDFS-13183], 
NameNodeConnector#getLiveDatanodeStorageReport() can also send its request to the 
SNN to reduce the load on the ANN.

There are two points that need to be mentioned:
1. NameNodeConnector#getLiveDatanodeStorageReport() is 
OperationCategory.UNCHECKED in FSNamesystem, so we can access the SNN directly.
2. We can share the same UT (testBalancerRequestSBNWithHA) with 
NameNodeConnector#getBlocks().



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-16087) RBF balance process is stuck at DisableWrite stage

2021-06-24 Thread Eric Yin (Jira)
Eric Yin created HDFS-16087:
---

 Summary: RBF balance process is stuck at DisableWrite stage
 Key: HDFS-16087
 URL: https://issues.apache.org/jira/browse/HDFS-16087
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: rbf
Affects Versions: 3.4.0
Reporter: Eric Yin


The balance process gets stuck at the DisableWrite stage when running the 
rbfbalance command.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16043) HDFS : Delete performance optimization

2021-06-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16043?focusedWorklogId=614813&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-614813
 ]

ASF GitHub Bot logged work on HDFS-16043:
-

Author: ASF GitHub Bot
Created on: 25/Jun/21 01:46
Start Date: 25/Jun/21 01:46
Worklog Time Spent: 10m 
  Work Description: jojochuang commented on a change in pull request #3063:
URL: https://github.com/apache/hadoop/pull/3063#discussion_r658412280



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
##
@@ -4909,6 +4930,73 @@ public long getLastRedundancyMonitorTS() {
 return lastRedundancyCycleTS.get();
   }
 
+  /**
+   * Periodically deletes the marked block.
+   */
+  private class MarkedDeleteBlockScrubber implements Runnable {
+private Iterator toDeleteIterator = null;
+private boolean isSleep;
+
+private void toRemove(long time) {

Review comment:
   The method name does not convey what it does; a toX() method usually implies 
that it converts the object to another type.
   
   Let's rename it to "remove"?




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 614813)
Time Spent: 2.5h  (was: 2h 20m)

> HDFS : Delete performance optimization
> --
>
> Key: HDFS-16043
> URL: https://issues.apache.org/jira/browse/HDFS-16043
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, namenode
>Affects Versions: 3.4.0
>Reporter: Xiangyi Zhu
>Assignee: Xiangyi Zhu
>Priority: Major
>  Labels: pull-request-available
> Attachments: 20210527-after.svg, 20210527-before.svg
>
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> Deleting a large directory caused the NN to hold the lock for too long, which 
> caused our NameNode to be killed by ZKFC.
>  From the flame graph, the main time is spent in removeBlocks(toRemovedBlocks) 
> and in the QuotaCount calculation when deleting inodes, with 
> removeBlocks(toRemovedBlocks) taking the larger share of the time.
> h3. Solution:
> 1. Process removeBlocks asynchronously: a thread is started in the 
> BlockManager to process the deleted blocks and bound the lock hold time.
>  2. Optimize the QuotaCount calculation, similar to the optimization in 
> HDFS-16000.
> h3. Comparison before and after optimization:
> Test: delete 10 million inodes and 10 million blocks.
>  *before:*
> remove inode elapsed time: 7691 ms
>  remove block elapsed time: 11107 ms
>  *after:*
>  remove inode elapsed time: 4149 ms
>  remove block elapsed time: 0 ms
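
A condensed sketch of the asynchronous approach; the real MarkedDeleteBlockScrubber in the PR is more involved, and the names and re-queueing below are illustrative.

{code:java}
// Callers queue the collected blocks instead of removing them under their own lock.
private final ConcurrentLinkedQueue<List<BlockInfo>> markedDeleteQueue =
    new ConcurrentLinkedQueue<>();

// A background thread drains the queue and bounds how long the write lock is held.
void scrubOnce(long maxLockHoldMs) {
  List<BlockInfo> batch = markedDeleteQueue.poll();
  if (batch == null) {
    return;
  }
  Iterator<BlockInfo> it = batch.iterator();
  long start = Time.monotonicNow();
  namesystem.writeLock();
  try {
    while (it.hasNext() && Time.monotonicNow() - start <= maxLockHoldMs) {
      removeBlock(it.next());
    }
  } finally {
    namesystem.writeUnlock();
  }
  if (it.hasNext()) {
    // Put the unprocessed tail back so it is retried on the next pass.
    List<BlockInfo> rest = new ArrayList<>();
    it.forEachRemaining(rest::add);
    markedDeleteQueue.add(rest);
  }
}
{code}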



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16043) HDFS : Delete performance optimization

2021-06-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16043?focusedWorklogId=614812&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-614812
 ]

ASF GitHub Bot logged work on HDFS-16043:
-

Author: ASF GitHub Bot
Created on: 25/Jun/21 01:28
Start Date: 25/Jun/21 01:28
Worklog Time Spent: 10m 
  Work Description: jojochuang commented on a change in pull request #3063:
URL: https://github.com/apache/hadoop/pull/3063#discussion_r658381800



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
##
@@ -4909,6 +4930,73 @@ public long getLastRedundancyMonitorTS() {
 return lastRedundancyCycleTS.get();
   }
 
+  /**
+   * Periodically deletes the marked block.
+   */
+  private class MarkedDeleteBlockScrubber implements Runnable {
+private Iterator toDeleteIterator = null;
+private boolean isSleep;
+
+private void toRemove(long time) {
+  // Reentrant write lock, Release the lock when the remove is
+  // complete
+  if (checkToDeleteIterator()) {
+namesystem.writeLock();
+try {
+  while (toDeleteIterator.hasNext()) {
+removeBlock(toDeleteIterator.next());
+if (Time.now() - time > deleteBlockLockTimeMs) {

Review comment:
   Use Time.monotonicNow()
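
   For illustration, the loop above with the suggested monotonic clock (field names as in the diff; surrounding code elided):

{code:java}
long start = Time.monotonicNow();  // monotonic: unaffected by wall-clock changes
while (toDeleteIterator.hasNext()) {
  removeBlock(toDeleteIterator.next());
  if (Time.monotonicNow() - start > deleteBlockLockTimeMs) {
    isSleep = true;
    break;
  }
}
{code}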

##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
##
@@ -4909,6 +4930,73 @@ public long getLastRedundancyMonitorTS() {
 return lastRedundancyCycleTS.get();
   }
 
+  /**
+   * Periodically deletes the marked block.
+   */
+  private class MarkedDeleteBlockScrubber implements Runnable {
+private Iterator toDeleteIterator = null;
+private boolean isSleep;
+
+private void toRemove(long time) {
+  // Reentrant write lock, Release the lock when the remove is
+  // complete
+  if (checkToDeleteIterator()) {
+namesystem.writeLock();
+try {
+  while (toDeleteIterator.hasNext()) {
+removeBlock(toDeleteIterator.next());
+if (Time.now() - time > deleteBlockLockTimeMs) {
+  isSleep = true;
+  break;
+}
+  }
+} finally {
+  namesystem.writeUnlock();
+}
+  }
+}
+
+private boolean checkToDeleteIterator() {
+  return toDeleteIterator != null && toDeleteIterator.hasNext();
+}
+
+@Override
+public void run() {
+  LOG.info("Start MarkedDeleteBlockScrubber thread");
+  while (namesystem.isRunning()) {
+if (!markedDeleteQueue.isEmpty() || checkToDeleteIterator()) {
+  namesystem.writeLock();
+  try {
+NameNodeMetrics metrics = NameNode.getNameNodeMetrics();
+metrics.setDeleteBlocksQueued(markedDeleteQueue.size());
+isSleep = false;
+long startTime = Time.now();

Review comment:
   Replace Time.now() with Time.monotonicNow()

##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSNamesystem.java
##
@@ -3344,7 +3344,8 @@ boolean delete(String src, boolean recursive, boolean 
logRetryCache)
 getEditLog().logSync();
 logAuditEvent(ret, operationName, src);
 if (toRemovedBlocks != null) {
-  removeBlocks(toRemovedBlocks); // Incremental deletion of blocks
+  blockManager.getMarkedDeleteQueue().add(
+  toRemovedBlocks.getToDeleteList());

Review comment:
   We remove blocks for several other operations as well, for example truncate, 
rename, and deleteSnapshot. Do we plan to make block deletion asynchronous for 
them as well?




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 614812)
Time Spent: 2h 20m  (was: 2h 10m)

> HDFS : Delete performance optimization
> --
>
> Key: HDFS-16043
> URL: https://issues.apache.org/jira/browse/HDFS-16043
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, namenode
>Affects Versions: 3.4.0
>Reporter: Xiangyi Zhu
>Assignee: Xiangyi Zhu
>Priority: Major
>  Labels: pull-request-available
> Attachments: 20210527-after.svg, 20210527-before.svg
>
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> The deletion of the large directory caused NN to hold the lock for too long, 
> which caused our NameNode to be killed by ZKFC.
>  Through the flame graph, it is found that its main time-consuming 
> calculation is 

[jira] [Work logged] (HDFS-16043) HDFS : Delete performance optimization

2021-06-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16043?focusedWorklogId=614747&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-614747
 ]

ASF GitHub Bot logged work on HDFS-16043:
-

Author: ASF GitHub Bot
Created on: 24/Jun/21 22:20
Start Date: 24/Jun/21 22:20
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3063:
URL: https://github.com/apache/hadoop/pull/3063#issuecomment-867991046


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 33s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 13 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  31m  2s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 22s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   1m 15s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   1m 10s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 21s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 57s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 29s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   3m 11s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m  8s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 11s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 15s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |   1m 15s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  6s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |   1m  6s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   1m  0s | 
[/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3063/7/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs-project/hadoop-hdfs: The patch generated 2 new + 957 unchanged 
- 1 fixed = 959 total (was 958)  |
   | +1 :green_heart: |  mvnsite  |   1m 15s |  |  the patch passed  |
   | +1 :green_heart: |  xml  |   0m  2s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  javadoc  |   0m 48s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 20s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   3m  5s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  16m  6s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 236m 21s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3063/7/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 46s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 320m 50s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdfs.server.balancer.TestBalancerWithHANameNodes |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3063/7/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3063 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell xml |
   | uname | Linux 3b3c0fa7e498 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 
05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / f09d251c76d3c2ce51650311cb96e2abc1dd3e22 |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK 

[jira] [Commented] (HDFS-16067) Support Append API in NNThroughputBenchmark

2021-06-24 Thread Renukaprasad C (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17369056#comment-17369056
 ] 

Renukaprasad C commented on HDFS-16067:
---

Thanks [~hexiaoqiao] for the UT clarification and patch review. Yes, printUsage 
was missed; I corrected it in HDFS-16067.004.patch. Please review.

> Support Append API in NNThroughputBenchmark
> ---
>
> Key: HDFS-16067
> URL: https://issues.apache.org/jira/browse/HDFS-16067
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Renukaprasad C
>Assignee: Renukaprasad C
>Priority: Minor
> Attachments: HDFS-16067.001.patch, HDFS-16067.002.patch, 
> HDFS-16067.003.patch, HDFS-16067.004.patch
>
>
> Append API needs to be added into NNThroughputBenchmark tool.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16067) Support Append API in NNThroughputBenchmark

2021-06-24 Thread Renukaprasad C (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16067?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Renukaprasad C updated HDFS-16067:
--
Attachment: HDFS-16067.004.patch

> Support Append API in NNThroughputBenchmark
> ---
>
> Key: HDFS-16067
> URL: https://issues.apache.org/jira/browse/HDFS-16067
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Renukaprasad C
>Assignee: Renukaprasad C
>Priority: Minor
> Attachments: HDFS-16067.001.patch, HDFS-16067.002.patch, 
> HDFS-16067.003.patch, HDFS-16067.004.patch
>
>
> Append API needs to be added into NNThroughputBenchmark tool.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16086) Add volume information to datanode log for tracing

2021-06-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16086?focusedWorklogId=614664&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-614664
 ]

ASF GitHub Bot logged work on HDFS-16086:
-

Author: ASF GitHub Bot
Created on: 24/Jun/21 18:42
Start Date: 24/Jun/21 18:42
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3136:
URL: https://github.com/apache/hadoop/pull/3136#issuecomment-867869816


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 48s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 3 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  31m 41s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 24s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   1m 14s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   1m  1s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 23s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 55s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 24s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   3m 13s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  18m 53s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 17s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 19s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | -1 :x: |  javac  |   1m 19s | 
[/results-compile-javac-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3136/2/artifact/out/results-compile-javac-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt)
 |  hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 generated 36 new + 467 
unchanged - 36 fixed = 503 total (was 503)  |
   | +1 :green_heart: |  compile  |   1m 11s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |   1m 11s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 55s |  |  
hadoop-hdfs-project/hadoop-hdfs: The patch generated 0 new + 398 unchanged - 8 
fixed = 398 total (was 406)  |
   | +1 :green_heart: |  mvnsite  |   1m 15s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 49s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 18s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   3m 19s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  18m 47s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 341m 37s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3136/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 36s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 431m 48s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdfs.server.namenode.TestDecommissioningStatusWithBackoffMonitor |
   |   | hadoop.hdfs.server.namenode.ha.TestEditLogTailer |
   |   | hadoop.hdfs.server.namenode.ha.TestBootstrapStandby |
   |   | hadoop.hdfs.TestDFSShell |
   |   | hadoop.hdfs.server.datanode.fsdataset.impl.TestFsVolumeList |
   |   | hadoop.hdfs.server.namenode.TestDecommissioningStatus |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3136/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3136 |
   | Optional Tests | 

[jira] [Updated] (HDFS-16044) Fix getListing call getLocatedBlocks even source is a directory

2021-06-24 Thread Xiaoqiao He (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16044?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoqiao He updated HDFS-16044:
---
Fix Version/s: 3.4.0
 Hadoop Flags: Reviewed
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Committed to trunk. Thanks [~pilchard] for your report and contribution!
Will backport to other active branches shortly.

> Fix getListing call getLocatedBlocks even source is a directory
> ---
>
> Key: HDFS-16044
> URL: https://issues.apache.org/jira/browse/HDFS-16044
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.1.1
>Reporter: ludun
>Assignee: ludun
>Priority: Major
> Fix For: 3.4.0
>
> Attachments: HDFS-16044.00.patch, HDFS-16044.01.patch, 
> HDFS-16044.02.patch, HDFS-16044.03.patch
>
>
> In our production cluster, getListing is called very frequently and the RPC 
> processing time is very high, so we tried to optimize the performance of the 
> getListing request.
> After some checking, we found that even when the source and its children are 
> directories, the getListing request still calls getLocatedBlocks.
> The request has needLocation set to false:
> {code:java}
> 2021-05-27 15:16:07,093 TRACE ipc.ProtobufRpcEngine: 1: Call -> 
> 8-5-231-4/8.5.231.4:25000: getListing {src: 
> "/data/connector/test/topics/102test" startAfter: "" needLocation: false}
> {code}
> but getListing still calls getLocatedBlocks 1000 times, which is not needed:
> {code:java}
> `---ts=2021-05-27 14:19:15;thread_name=IPC Server handler 86 on 
> 25000;id=e6;is_daemon=true;priority=5;TCCL=sun.misc.Launcher$AppClassLoader@5fcfe4b2
> `---[35.068532ms] 
> org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp:getListing()
> +---[0.003542ms] 
> org.apache.hadoop.hdfs.server.namenode.INodesInPath:getPathComponents() #214
> +---[0.003053ms] 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory:isExactReservedName() #95
> +---[0.002938ms] 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory:readLock() #218
> +---[0.00252ms] 
> org.apache.hadoop.hdfs.server.namenode.INodesInPath:isDotSnapshotDir() #220
> +---[0.002788ms] 
> org.apache.hadoop.hdfs.server.namenode.INodesInPath:getPathSnapshotId() #223
> +---[0.002905ms] 
> org.apache.hadoop.hdfs.server.namenode.INodesInPath:getLastINode() #224
> +---[0.002785ms] 
> org.apache.hadoop.hdfs.server.namenode.INode:getStoragePolicyID() #230
> +---[0.002236ms] 
> org.apache.hadoop.hdfs.server.namenode.INode:isDirectory() #233
> +---[0.002919ms] 
> org.apache.hadoop.hdfs.server.namenode.INode:asDirectory() #242
> +---[0.003408ms] 
> org.apache.hadoop.hdfs.server.namenode.INodeDirectory:getChildrenList() #243
> +---[0.005942ms] 
> org.apache.hadoop.hdfs.server.namenode.INodeDirectory:nextChild() #244
> +---[0.002467ms] org.apache.hadoop.hdfs.util.ReadOnlyList:size() #245
> +---[0.005481ms] 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory:getLsLimit() #247
> +---[0.002176ms] 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory:getLsLimit() #248
> +---[min=0.00211ms,max=0.005157ms,total=2.247572ms,count=1000] 
> org.apache.hadoop.hdfs.util.ReadOnlyList:get() #252
> +---[min=0.001946ms,max=0.005411ms,total=2.041715ms,count=1000] 
> org.apache.hadoop.hdfs.server.namenode.INode:isSymlink() #253
> +---[min=0.002176ms,max=0.005426ms,total=2.264472ms,count=1000] 
> org.apache.hadoop.hdfs.server.namenode.INode:getLocalStoragePolicyID() #254
> +---[min=0.002251ms,max=0.006849ms,total=2.351935ms,count=1000] 
> org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp:getStoragePolicyID()
>  #95
> +---[min=0.006091ms,max=0.012333ms,total=6.439434ms,count=1000] 
> org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp:createFileStatus()
>  #257
> +---[min=0.00269ms,max=0.004995ms,total=2.788194ms,count=1000] 
> org.apache.hadoop.hdfs.protocol.HdfsLocatedFileStatus:getLocatedBlocks() #265
> +---[0.003234ms] 
> org.apache.hadoop.hdfs.protocol.DirectoryListing:() #274
> `---[0.002457ms] 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory:readUnlock() #277
> {code}
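
A hedged sketch of the kind of fix implied above: only look up block locations when the caller asked for them and the inode is a file; the helper name is hypothetical and the real patch may differ.

{code:java}
// Sketch (not the actual patch): in the listing path, skip the block-location
// lookup for directories and for callers with needLocation == false.
LocatedBlocks loc = null;
if (needLocation && node.isFile()) {
  loc = getLocatedBlocksForFile(node.asFile());  // hypothetical helper
}
return createFileStatus(node, loc /* null for directories */);  // simplified call
{code}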



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16044) Fix getListing call getLocatedBlocks even source is a directory

2021-06-24 Thread Xiaoqiao He (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16044?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoqiao He updated HDFS-16044:
---
Summary: Fix getListing call getLocatedBlocks even source is a directory  
(was: getListing call getLocatedBlocks even source is a directory)

> Fix getListing call getLocatedBlocks even source is a directory
> ---
>
> Key: HDFS-16044
> URL: https://issues.apache.org/jira/browse/HDFS-16044
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: namenode
>Affects Versions: 3.1.1
>Reporter: ludun
>Assignee: ludun
>Priority: Major
> Attachments: HDFS-16044.00.patch, HDFS-16044.01.patch, 
> HDFS-16044.02.patch, HDFS-16044.03.patch
>
>
> In our production cluster, getListing is called very frequently and the RPC 
> processing time is very high, so we tried to optimize the performance of the 
> getListing request.
> After some checking, we found that even when the source and its children are 
> directories, the getListing request still calls getLocatedBlocks.
> The request has needLocation set to false:
> {code:java}
> 2021-05-27 15:16:07,093 TRACE ipc.ProtobufRpcEngine: 1: Call -> 
> 8-5-231-4/8.5.231.4:25000: getListing {src: 
> "/data/connector/test/topics/102test" startAfter: "" needLocation: false}
> {code}
> but getListing still calls getLocatedBlocks 1000 times, which is not needed:
> {code:java}
> `---ts=2021-05-27 14:19:15;thread_name=IPC Server handler 86 on 
> 25000;id=e6;is_daemon=true;priority=5;TCCL=sun.misc.Launcher$AppClassLoader@5fcfe4b2
> `---[35.068532ms] 
> org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp:getListing()
> +---[0.003542ms] 
> org.apache.hadoop.hdfs.server.namenode.INodesInPath:getPathComponents() #214
> +---[0.003053ms] 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory:isExactReservedName() #95
> +---[0.002938ms] 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory:readLock() #218
> +---[0.00252ms] 
> org.apache.hadoop.hdfs.server.namenode.INodesInPath:isDotSnapshotDir() #220
> +---[0.002788ms] 
> org.apache.hadoop.hdfs.server.namenode.INodesInPath:getPathSnapshotId() #223
> +---[0.002905ms] 
> org.apache.hadoop.hdfs.server.namenode.INodesInPath:getLastINode() #224
> +---[0.002785ms] 
> org.apache.hadoop.hdfs.server.namenode.INode:getStoragePolicyID() #230
> +---[0.002236ms] 
> org.apache.hadoop.hdfs.server.namenode.INode:isDirectory() #233
> +---[0.002919ms] 
> org.apache.hadoop.hdfs.server.namenode.INode:asDirectory() #242
> +---[0.003408ms] 
> org.apache.hadoop.hdfs.server.namenode.INodeDirectory:getChildrenList() #243
> +---[0.005942ms] 
> org.apache.hadoop.hdfs.server.namenode.INodeDirectory:nextChild() #244
> +---[0.002467ms] org.apache.hadoop.hdfs.util.ReadOnlyList:size() #245
> +---[0.005481ms] 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory:getLsLimit() #247
> +---[0.002176ms] 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory:getLsLimit() #248
> +---[min=0.00211ms,max=0.005157ms,total=2.247572ms,count=1000] 
> org.apache.hadoop.hdfs.util.ReadOnlyList:get() #252
> +---[min=0.001946ms,max=0.005411ms,total=2.041715ms,count=1000] 
> org.apache.hadoop.hdfs.server.namenode.INode:isSymlink() #253
> +---[min=0.002176ms,max=0.005426ms,total=2.264472ms,count=1000] 
> org.apache.hadoop.hdfs.server.namenode.INode:getLocalStoragePolicyID() #254
> +---[min=0.002251ms,max=0.006849ms,total=2.351935ms,count=1000] 
> org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp:getStoragePolicyID()
>  #95
> +---[min=0.006091ms,max=0.012333ms,total=6.439434ms,count=1000] 
> org.apache.hadoop.hdfs.server.namenode.FSDirStatAndListingOp:createFileStatus()
>  #257
> +---[min=0.00269ms,max=0.004995ms,total=2.788194ms,count=1000] 
> org.apache.hadoop.hdfs.protocol.HdfsLocatedFileStatus:getLocatedBlocks() #265
> +---[0.003234ms] 
> org.apache.hadoop.hdfs.protocol.DirectoryListing:<init>() #274
> `---[0.002457ms] 
> org.apache.hadoop.hdfs.server.namenode.FSDirectory:readUnlock() #277
> {code}
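
A minimal sketch of the guard implied above, using hypothetical names (Entry, 
locateBlocks, fileStatus) rather than the actual FSDirStatAndListingOp code: 
block locations are resolved only when the caller asked for them and the entry 
is a regular file, so directory children never pay the getLocatedBlocks cost 
under the FSDirectory read lock.

{code:java}
// Illustrative only -- not the real Hadoop listing code.
final class ListingSketch {

  interface Entry {
    boolean isDirectory();
    String name();
  }

  /** Hypothetical stand-in for the expensive getLocatedBlocks() path. */
  private static Object locateBlocks(Entry e) {
    return "blocks-of-" + e.name();
  }

  /** Build a file status, fetching locations only when really needed. */
  static Object[] fileStatus(Entry e, boolean needLocation) {
    Object blocks = null;
    if (needLocation && !e.isDirectory()) { // the guard the fix adds
      blocks = locateBlocks(e);             // only files pay this cost
    }
    return new Object[] { e.name(), blocks };
  }
}
{code}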



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16043) HDFS : Delete performance optimization

2021-06-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16043?focusedWorklogId=614608=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-614608
 ]

ASF GitHub Bot logged work on HDFS-16043:
-

Author: ASF GitHub Bot
Created on: 24/Jun/21 16:37
Start Date: 24/Jun/21 16:37
Worklog Time Spent: 10m 
  Work Description: Hexiaoqiao commented on pull request #3063:
URL: https://github.com/apache/hadoop/pull/3063#issuecomment-867788993


   Please fix the checkstyle issues reported by Yetus first; everything else 
looks good to me. I will give my +1 once that is fixed. Before committing to 
trunk we should wait for additional reviews from others. Thanks @zhuxiangyi.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 614608)
Time Spent: 2h  (was: 1h 50m)

> HDFS : Delete performance optimization
> --
>
> Key: HDFS-16043
> URL: https://issues.apache.org/jira/browse/HDFS-16043
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, namanode
>Affects Versions: 3.4.0
>Reporter: Xiangyi Zhu
>Assignee: Xiangyi Zhu
>Priority: Major
>  Labels: pull-request-available
> Attachments: 20210527-after.svg, 20210527-before.svg
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> Deleting a large directory caused the NN to hold the lock for too long, which 
> caused our NameNode to be killed by ZKFC.
>  The flame graph shows that the main time-consuming work is the QuotaCount 
> calculation when removing blocks (toRemovedBlocks) and deleting inodes, and 
> removeBlocks(toRemovedBlocks) takes the larger share of the time.
> h3. Solution:
> 1. removeBlocks is processed asynchronously: a thread is started in the 
> BlockManager to process the deleted blocks and keep the lock hold time under 
> control (see the sketch below the numbers).
>  2. The QuotaCount calculation is optimized, similar to the optimization in 
> HDFS-16000.
> h3. Comparison before and after optimization:
> Test: delete 10 million (1000w) inodes and 10 million blocks.
>  *before:*
> remove inode elapsed time: 7691 ms
>  remove block elapsed time: 11107 ms
>  *after:*
>  remove inode elapsed time: 4149 ms
>  remove block elapsed time: 0 ms
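
A rough sketch of the asynchronous removeBlocks idea above, under the 
assumption of hypothetical names (AsyncBlockRemover, removeChunk): the delete 
path only enqueues the collected blocks while briefly holding the lock, and a 
background worker drains them in bounded chunks so no single batch holds the 
lock for long.

{code:java}
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Illustrative only; the real BlockManager changes differ.
final class AsyncBlockRemover implements Runnable {

  /** Batches of blocks collected by delete; enqueueing is cheap. */
  private final BlockingQueue<List<String>> pending = new LinkedBlockingQueue<>();

  /** Blocks removed per lock acquisition, to bound lock hold time. */
  private static final int CHUNK = 1000;

  /** Called from the delete path while it briefly holds the namespace lock. */
  void enqueue(List<String> toRemovedBlocks) {
    pending.add(toRemovedBlocks);
  }

  @Override
  public void run() {
    try {
      while (!Thread.currentThread().isInterrupted()) {
        List<String> blocks = pending.take();
        for (int i = 0; i < blocks.size(); i += CHUNK) {
          removeChunk(blocks.subList(i, Math.min(i + CHUNK, blocks.size())));
        }
      }
    } catch (InterruptedException e) {
      Thread.currentThread().interrupt();
    }
  }

  /** Placeholder for the real per-chunk removal done under the lock. */
  private void removeChunk(List<String> chunk) {
    // blockManager-style removal goes here, lock taken and released per chunk
  }
}
{code}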



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-16067) Support Append API in NNThroughputBenchmark

2021-06-24 Thread Xiaoqiao He (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17368932#comment-17368932
 ] 

Xiaoqiao He commented on HDFS-16067:


I tried the failed unit tests locally; both of them work fine for me, so I 
think the failures are not related.
BTW, we should update `NNThroughputBenchmark#printUsage` too. +1 from me after 
that.

> Support Append API in NNThroughputBenchmark
> ---
>
> Key: HDFS-16067
> URL: https://issues.apache.org/jira/browse/HDFS-16067
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Renukaprasad C
>Assignee: Renukaprasad C
>Priority: Minor
> Attachments: HDFS-16067.001.patch, HDFS-16067.002.patch, 
> HDFS-16067.003.patch
>
>
> Append API needs to be added into NNThroughputBenchmark tool.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Resolved] (HDFS-16085) Move the getPermissionChecker out of the read lock

2021-06-24 Thread Ayush Saxena (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ayush Saxena resolved HDFS-16085.
-
Fix Version/s: 3.3.2
   3.4.0
 Hadoop Flags: Reviewed
   Resolution: Fixed

> Move the getPermissionChecker out of the read lock
> --
>
> Key: HDFS-16085
> URL: https://issues.apache.org/jira/browse/HDFS-16085
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: tomscut
>Assignee: tomscut
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.2
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> Move the getPermissionChecker out of the read lock in 
> NamenodeFsck#getBlockLocations() since the operation does not need to be 
> locked.
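
A minimal sketch of the locking pattern described above, using hypothetical 
stand-in types rather than the actual NamenodeFsck/FSNamesystem code: 
constructing the permission checker touches no namespace state, so it can be 
built before readLock() instead of inside it, shortening the time the read 
lock is held.

{code:java}
// Illustrative only; names are stand-ins for the FSNamesystem-style API.
final class ReadLockSketch {

  interface Namesystem {
    Object getPermissionChecker();  // cheap, needs no namespace lock
    void readLock();
    void readUnlock();
  }

  void getBlockLocations(Namesystem fsn) {
    Object pc = fsn.getPermissionChecker(); // moved out of the read lock
    fsn.readLock();
    try {
      // only the namespace reads themselves stay under the lock,
      // e.g. resolving the path and collecting block locations with pc
    } finally {
      fsn.readUnlock();
    }
  }
}
{code}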



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-16085) Move the getPermissionChecker out of the read lock

2021-06-24 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17368751#comment-17368751
 ] 

Ayush Saxena commented on HDFS-16085:
-

Committed to trunk and branch-3.3.


> Move the getPermissionChecker out of the read lock
> --
>
> Key: HDFS-16085
> URL: https://issues.apache.org/jira/browse/HDFS-16085
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: tomscut
>Assignee: tomscut
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> Move the getPermissionChecker out of the read lock in 
> NamenodeFsck#getBlockLocations() since the operation does not need to be 
> locked.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16085) Move the getPermissionChecker out of the read lock

2021-06-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16085?focusedWorklogId=614441=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-614441
 ]

ASF GitHub Bot logged work on HDFS-16085:
-

Author: ASF GitHub Bot
Created on: 24/Jun/21 10:35
Start Date: 24/Jun/21 10:35
Worklog Time Spent: 10m 
  Work Description: ayushtkn edited a comment on pull request #3134:
URL: https://github.com/apache/hadoop/pull/3134#issuecomment-867529276


   Thanx @tomscut for the contribution and @tasanuma for the review!!!


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 614441)
Time Spent: 1.5h  (was: 1h 20m)

> Move the getPermissionChecker out of the read lock
> --
>
> Key: HDFS-16085
> URL: https://issues.apache.org/jira/browse/HDFS-16085
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: tomscut
>Assignee: tomscut
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> Move the getPermissionChecker out of the read lock in 
> NamenodeFsck#getBlockLocations() since the operation does not need to be 
> locked.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16085) Move the getPermissionChecker out of the read lock

2021-06-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16085?focusedWorklogId=614440=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-614440
 ]

ASF GitHub Bot logged work on HDFS-16085:
-

Author: ASF GitHub Bot
Created on: 24/Jun/21 10:35
Start Date: 24/Jun/21 10:35
Worklog Time Spent: 10m 
  Work Description: ayushtkn commented on pull request #3134:
URL: https://github.com/apache/hadoop/pull/3134#issuecomment-867529276


   Thanx @tomscut for the contribution and @tasanuma 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 614440)
Time Spent: 1h 20m  (was: 1h 10m)

> Move the getPermissionChecker out of the read lock
> --
>
> Key: HDFS-16085
> URL: https://issues.apache.org/jira/browse/HDFS-16085
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: tomscut
>Assignee: tomscut
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Move the getPermissionChecker out of the read lock in 
> NamenodeFsck#getBlockLocations() since the operation does not need to be 
> locked.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-16067) Support Append API in NNThroughputBenchmark

2021-06-24 Thread Renukaprasad C (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16067?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17368748#comment-17368748
 ] 

Renukaprasad C commented on HDFS-16067:
---

[~hexiaoqiao] There are random UT failures, but the changes are not related to 
the failed tests, which I verified locally. Can you have a look at the failed 
tests? Thank you.

> Support Append API in NNThroughputBenchmark
> ---
>
> Key: HDFS-16067
> URL: https://issues.apache.org/jira/browse/HDFS-16067
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>Reporter: Renukaprasad C
>Assignee: Renukaprasad C
>Priority: Minor
> Attachments: HDFS-16067.001.patch, HDFS-16067.002.patch, 
> HDFS-16067.003.patch
>
>
> Append API needs to be added into NNThroughputBenchmark tool.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16085) Move the getPermissionChecker out of the read lock

2021-06-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16085?focusedWorklogId=614438=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-614438
 ]

ASF GitHub Bot logged work on HDFS-16085:
-

Author: ASF GitHub Bot
Created on: 24/Jun/21 10:29
Start Date: 24/Jun/21 10:29
Worklog Time Spent: 10m 
  Work Description: ayushtkn merged pull request #3134:
URL: https://github.com/apache/hadoop/pull/3134


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 614438)
Time Spent: 1h 10m  (was: 1h)

> Move the getPermissionChecker out of the read lock
> --
>
> Key: HDFS-16085
> URL: https://issues.apache.org/jira/browse/HDFS-16085
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: tomscut
>Assignee: tomscut
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Move the getPermissionChecker out of the read lock in 
> NamenodeFsck#getBlockLocations() since the operation does not need to be 
> locked.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16083) Forbid Observer NameNode trigger active namenode log roll

2021-06-24 Thread lei w (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lei w updated HDFS-16083:
-
Description: When the Observer NameNode is turned on in the cluster, the 
Active NameNode will receive rollEditLog RPC requests from the Standby NameNode 
and Observer NameNode in a short time. Observer NameNode's rollEditLog request 
is a repetitive operation, so should we forbid Observer NameNode trigger  
active namenode log roll ? We  'dfs.ha.log-roll.period' configured is 300( 5 
minutes) and active NameNode receives rollEditLog RPC as shown in 
activeRollEdits.png  (was: When the Observer NameNode is turned on in the 
cluster, the Active NameNode will receive rollEditLog RPC requests from the 
Standby NameNode and Observer NameNode in a short time. Observer NameNode's 
rollEditLog request is a repetitive operation, so should we forbid Observer 
NameNode trigger  active namenode log roll ? We  'dfs.ha.log-roll.period' 
configured is 300( 5 minutes))

> Forbid Observer NameNode trigger  active namenode log roll
> --
>
> Key: HDFS-16083
> URL: https://issues.apache.org/jira/browse/HDFS-16083
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namanode
>Reporter: lei w
>Assignee: lei w
>Priority: Minor
> Attachments: HDFS-16083.001.patch, HDFS-16083.002.patch, 
> activeRollEdits.png
>
>
> When the Observer NameNode is enabled in the cluster, the Active NameNode 
> receives rollEditLog RPC requests from both the Standby NameNode and the 
> Observer NameNode within a short time. The Observer NameNode's rollEditLog 
> request is redundant, so should we forbid the Observer NameNode from 
> triggering the active namenode log roll? Our 'dfs.ha.log-roll.period' is 
> configured to 300 (5 minutes), and the Active NameNode receives rollEditLog 
> RPCs as shown in activeRollEdits.png.
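
A hedged sketch of the proposed guard, with hypothetical names (LogRollSketch, 
shouldTriggerActiveLogRoll): the edit-log tailer on an Observer would simply 
skip triggering the rollEditLog RPC, since the Standby already drives the roll 
on the same dfs.ha.log-roll.period schedule.

{code:java}
// Illustrative only; the real EditLogTailer logic differs.
final class LogRollSketch {

  enum HAServiceState { ACTIVE, STANDBY, OBSERVER }

  private final HAServiceState localState;
  private final long logRollPeriodMs; // from dfs.ha.log-roll.period
  private long lastRollTriggerMs;

  LogRollSketch(HAServiceState localState, long logRollPeriodMs) {
    this.localState = localState;
    this.logRollPeriodMs = logRollPeriodMs;
  }

  /** Decide whether this node should ask the Active to roll its edit log. */
  boolean shouldTriggerActiveLogRoll(long nowMs) {
    if (localState == HAServiceState.OBSERVER) {
      return false; // the guard this issue proposes: observers never trigger
    }
    if (nowMs - lastRollTriggerMs < logRollPeriodMs) {
      return false; // not yet due per dfs.ha.log-roll.period
    }
    lastRollTriggerMs = nowMs;
    return true;
  }
}
{code}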



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16083) Forbid Observer NameNode trigger active namenode log roll

2021-06-24 Thread lei w (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lei w updated HDFS-16083:
-
Description: When the Observer NameNode is turned on in the cluster, the 
Active NameNode will receive rollEditLog RPC requests from the Standby NameNode 
and Observer NameNode in a short time. Observer NameNode's rollEditLog request 
is a repetitive operation, so should we forbid Observer NameNode trigger  
active namenode log roll ? We  'dfs.ha.log-roll.period' configured is 300( 5 
minutes)  (was: When the Observer NameNode is turned on in the cluster, the 
Active NameNode will receive rollEditLog RPC requests from the Standby NameNode 
and Observer NameNode in a short time. Observer NameNode's rollEditLog request 
is a repetitive operation, so should we forbid Observer NameNode trigger  
active namenode log roll ? We  'dfs.ha.log-roll.period' configuration
)

> Forbid Observer NameNode trigger  active namenode log roll
> --
>
> Key: HDFS-16083
> URL: https://issues.apache.org/jira/browse/HDFS-16083
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namanode
>Reporter: lei w
>Assignee: lei w
>Priority: Minor
> Attachments: HDFS-16083.001.patch, HDFS-16083.002.patch, 
> activeRollEdits.png
>
>
> When the Observer NameNode is turned on in the cluster, the Active NameNode 
> will receive rollEditLog RPC requests from the Standby NameNode and Observer 
> NameNode in a short time. Observer NameNode's rollEditLog request is a 
> repetitive operation, so should we forbid Observer NameNode trigger  active 
> namenode log roll ? We  'dfs.ha.log-roll.period' configured is 300( 5 minutes)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16083) Forbid Observer NameNode trigger active namenode log roll

2021-06-24 Thread lei w (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lei w updated HDFS-16083:
-
Description: 
When the Observer NameNode is turned on in the cluster, the Active NameNode 
will receive rollEditLog RPC requests from the Standby NameNode and Observer 
NameNode in a short time. Observer NameNode's rollEditLog request is a 
repetitive operation, so should we forbid Observer NameNode trigger  active 
namenode log roll ? We  'dfs.ha.log-roll.period' configuration


  was:
When the Observer NameNode is turned on in the cluster, the Active NameNode 
will receive rollEditLog RPC requests from the Standby NameNode and Observer 
NameNode in a short time. Observer NameNode's rollEditLog request is a 
repetitive operation, so should we forbid Observer NameNode trigger  active 
namenode log roll ? We Forbid Observer NameNode trigger  active namenode log 
roll



> Forbid Observer NameNode trigger  active namenode log roll
> --
>
> Key: HDFS-16083
> URL: https://issues.apache.org/jira/browse/HDFS-16083
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namanode
>Reporter: lei w
>Assignee: lei w
>Priority: Minor
> Attachments: HDFS-16083.001.patch, HDFS-16083.002.patch, 
> activeRollEdits.png
>
>
> When the Observer NameNode is turned on in the cluster, the Active NameNode 
> will receive rollEditLog RPC requests from the Standby NameNode and Observer 
> NameNode in a short time. Observer NameNode's rollEditLog request is a 
> repetitive operation, so should we forbid Observer NameNode trigger  active 
> namenode log roll ? We  'dfs.ha.log-roll.period' configuration



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16083) Forbid Observer NameNode trigger active namenode log roll

2021-06-24 Thread lei w (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lei w updated HDFS-16083:
-
Description: 
When the Observer NameNode is turned on in the cluster, the Active NameNode 
will receive rollEditLog RPC requests from the Standby NameNode and Observer 
NameNode in a short time. Observer NameNode's rollEditLog request is a 
repetitive operation, so should we forbid Observer NameNode trigger  active 
namenode log roll ? We 


  was:
When the Observer NameNode is turned on in the cluster, the Active NameNode 
will receive rollEditLog RPC requests from the Standby NameNode and Observer 
NameNode in a short time. Observer NameNode's rollEditLog request is a 
repetitive operation, so should we proh



> Forbid Observer NameNode trigger  active namenode log roll
> --
>
> Key: HDFS-16083
> URL: https://issues.apache.org/jira/browse/HDFS-16083
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namanode
>Reporter: lei w
>Assignee: lei w
>Priority: Minor
> Attachments: HDFS-16083.001.patch, HDFS-16083.002.patch, 
> activeRollEdits.png
>
>
> When the Observer NameNode is turned on in the cluster, the Active NameNode 
> will receive rollEditLog RPC requests from the Standby NameNode and Observer 
> NameNode in a short time. Observer NameNode's rollEditLog request is a 
> repetitive operation, so should we forbid Observer NameNode trigger  active 
> namenode log roll ? We 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16083) Forbid Observer NameNode trigger active namenode log roll

2021-06-24 Thread lei w (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lei w updated HDFS-16083:
-
Description: 
When the Observer NameNode is turned on in the cluster, the Active NameNode 
will receive rollEditLog RPC requests from the Standby NameNode and Observer 
NameNode in a short time. Observer NameNode's rollEditLog request is a 
repetitive operation, so should we forbid Observer NameNode trigger  active 
namenode log roll ? We Forbid Observer NameNode trigger  active namenode log 
roll


  was:
When the Observer NameNode is turned on in the cluster, the Active NameNode 
will receive rollEditLog RPC requests from the Standby NameNode and Observer 
NameNode in a short time. Observer NameNode's rollEditLog request is a 
repetitive operation, so should we forbid Observer NameNode trigger  active 
namenode log roll ? We 



> Forbid Observer NameNode trigger  active namenode log roll
> --
>
> Key: HDFS-16083
> URL: https://issues.apache.org/jira/browse/HDFS-16083
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namanode
>Reporter: lei w
>Assignee: lei w
>Priority: Minor
> Attachments: HDFS-16083.001.patch, HDFS-16083.002.patch, 
> activeRollEdits.png
>
>
> When the Observer NameNode is turned on in the cluster, the Active NameNode 
> will receive rollEditLog RPC requests from the Standby NameNode and Observer 
> NameNode in a short time. Observer NameNode's rollEditLog request is a 
> repetitive operation, so should we forbid Observer NameNode trigger  active 
> namenode log roll ? We Forbid Observer NameNode trigger  active namenode log 
> roll



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16083) Forbid Observer NameNode trigger active namenode log roll

2021-06-24 Thread lei w (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lei w updated HDFS-16083:
-
Attachment: activeRollEdits.png

> Forbid Observer NameNode trigger  active namenode log roll
> --
>
> Key: HDFS-16083
> URL: https://issues.apache.org/jira/browse/HDFS-16083
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namanode
>Reporter: lei w
>Assignee: lei w
>Priority: Minor
> Attachments: HDFS-16083.001.patch, HDFS-16083.002.patch, 
> activeRollEdits.png
>
>
> When the Observer NameNode is turned on in the cluster, the Active NameNode 
> will receive rollEditLog RPC requests from the Standby NameNode and Observer 
> NameNode in a short time. Observer NameNode's rollEditLog request is a 
> repetitive operation, so should we proh



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16083) Forbid Observer NameNode trigger active namenode log roll

2021-06-24 Thread lei w (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lei w updated HDFS-16083:
-
Description: 
When the Observer NameNode is turned on in the cluster, the Active NameNode 
will receive rollEditLog RPC requests from the Standby NameNode and Observer 
NameNode in a short time. Observer NameNode's rollEditLog request is a 
repetitive operation, so should we proh


  was:When the Observer NameNode is turned on in the cluster, the Active 
NameNode will receive rollEditLog RPC requests from the Standby NameNode and 
Observer NameNode in a short time. Observer NameNode's rollEditLog request is a 
repetitive operation, so should we prohibit Observer NameNode from triggering 
rollEditLog?


> Forbid Observer NameNode trigger  active namenode log roll
> --
>
> Key: HDFS-16083
> URL: https://issues.apache.org/jira/browse/HDFS-16083
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: namanode
>Reporter: lei w
>Assignee: lei w
>Priority: Minor
> Attachments: HDFS-16083.001.patch, HDFS-16083.002.patch
>
>
> When the Observer NameNode is turned on in the cluster, the Active NameNode 
> will receive rollEditLog RPC requests from the Standby NameNode and Observer 
> NameNode in a short time. Observer NameNode's rollEditLog request is a 
> repetitive operation, so should we proh



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16039) RBF: Some indicators of RBFMetrics count inaccurately

2021-06-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16039?focusedWorklogId=614392=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-614392
 ]

ASF GitHub Bot logged work on HDFS-16039:
-

Author: ASF GitHub Bot
Created on: 24/Jun/21 09:07
Start Date: 24/Jun/21 09:07
Worklog Time Spent: 10m 
  Work Description: zhuxiangyi commented on a change in pull request #3086:
URL: https://github.com/apache/hadoop/pull/3086#discussion_r657769249



##
File path: 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/router/TestRouterRpc.java
##
@@ -1757,7 +1757,7 @@ public void testRBFMetricsMethodsRelayOnStateStore() {
 // These methods relays on
 // {@link RBFMetrics#getActiveNamenodeRegistration()}
 assertEquals("{}", metrics.getNameservices());
-assertEquals(0, metrics.getNumLiveNodes());
+assertEquals(NUM_DNS * 2, metrics.getNumLiveNodes());

Review comment:
   getNumLiveNodes used to obtain DataNode information through the StateStore, 
and now it obtains it through RouterRpcServer#getCachedDatanodeReport. This 
test no longer applies to the StateStore, so I deleted it.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 614392)
Time Spent: 2.5h  (was: 2h 20m)

> RBF:  Some indicators of RBFMetrics count inaccurately
> --
>
> Key: HDFS-16039
> URL: https://issues.apache.org/jira/browse/HDFS-16039
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rbf
>Affects Versions: 3.4.0
>Reporter: Xiangyi Zhu
>Assignee: Xiangyi Zhu
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> RBFMetrics#getNumLiveNodes, getNumNamenodes, getTotalCapacity
> The current statistical algorithm accumulates the indicators of all NameNodes, 
> which leads to inaccurate counts. I think that for the same ClusterID we only 
> need to take one value (the max) and then accumulate across clusters.
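
A rough sketch of that counting idea, with hypothetical types (NamenodeReport, 
totalLiveNodes): NameNode memberships that share a ClusterID report the same 
DataNodes, so the aggregate takes one value (the max) per ClusterID and then 
sums across clusters instead of summing every NameNode's report.

{code:java}
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative only; the real RBFMetrics aggregation code differs.
final class RbfMetricsSketch {

  static final class NamenodeReport {
    final String clusterId;
    final long liveNodes;

    NamenodeReport(String clusterId, long liveNodes) {
      this.clusterId = clusterId;
      this.liveNodes = liveNodes;
    }
  }

  /** Take the max per ClusterID, then sum across clusters. */
  static long totalLiveNodes(List<NamenodeReport> reports) {
    Map<String, Long> maxPerCluster = new HashMap<>();
    for (NamenodeReport r : reports) {
      maxPerCluster.merge(r.clusterId, r.liveNodes, Math::max);
    }
    long total = 0;
    for (long perCluster : maxPerCluster.values()) {
      total += perCluster;
    }
    return total;
  }
}
{code}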



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16039) RBF: Some indicators of RBFMetrics count inaccurately

2021-06-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16039?focusedWorklogId=614384=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-614384
 ]

ASF GitHub Bot logged work on HDFS-16039:
-

Author: ASF GitHub Bot
Created on: 24/Jun/21 09:02
Start Date: 24/Jun/21 09:02
Worklog Time Spent: 10m 
  Work Description: zhuxiangyi commented on a change in pull request #3086:
URL: https://github.com/apache/hadoop/pull/3086#discussion_r657765228



##
File path: 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/metrics/TestRBFMetrics.java
##
@@ -382,4 +366,56 @@ private void testCapacity(FederationMBean bean) throws 
IOException {
 assertNotEquals(availableCapacity,
 BigInteger.valueOf(bean.getRemainingCapacity()));
   }
+
+  @Test
+  public void testDatanodeNumMetrics()
+  throws Exception {
+Configuration routerConf = new RouterConfigBuilder()
+.metrics()
+.http()
+.stateStore()
+.rpc()
+.build();
+MiniRouterDFSCluster cluster = new MiniRouterDFSCluster(false, 1);
+cluster.setNumDatanodesPerNameservice(0);
+cluster.addNamenodeOverrides(routerConf);
+cluster.startCluster();
+routerConf.setTimeDuration(
+RBFConfigKeys.DN_REPORT_CACHE_EXPIRE, 1, TimeUnit.SECONDS);
+cluster.addRouterOverrides(routerConf);
+cluster.startRouters();
+Router router = cluster.getRandomRouter().getRouter();
+// Register and verify all NNs with all routers
+cluster.registerNamenodes();
+cluster.waitNamenodeRegistration();
+RouterRpcServer rpcServer = router.getRpcServer();
+RBFMetrics rbfMetrics = router.getMetrics();
+// Create mock dn
+DatanodeInfo[] dNInfo = new DatanodeInfo[4];
+DatanodeInfo datanodeInfo = new DatanodeInfo.DatanodeInfoBuilder().build();
+datanodeInfo.setDecommissioned();
+dNInfo[0] = datanodeInfo;
+datanodeInfo = new DatanodeInfo.DatanodeInfoBuilder().build();
+datanodeInfo.setInMaintenance();
+dNInfo[1] = datanodeInfo;
+datanodeInfo = new DatanodeInfo.DatanodeInfoBuilder().build();
+datanodeInfo.startMaintenance();
+dNInfo[2] = datanodeInfo;
+datanodeInfo = new DatanodeInfo.DatanodeInfoBuilder().build();
+datanodeInfo.startDecommission();
+dNInfo[3] = datanodeInfo;
+
+rpcServer.getDnCache().put(HdfsConstants.DatanodeReportType.LIVE, dNInfo);

Review comment:
   Thanks for your reminder.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 614384)
Time Spent: 2h 20m  (was: 2h 10m)

> RBF:  Some indicators of RBFMetrics count inaccurately
> --
>
> Key: HDFS-16039
> URL: https://issues.apache.org/jira/browse/HDFS-16039
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: rbf
>Affects Versions: 3.4.0
>Reporter: Xiangyi Zhu
>Assignee: Xiangyi Zhu
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> RBFMetrics#getNumLiveNodes, getNumNamenodes, getTotalCapacity
> The current statistical algorithm accumulates the indicators of all NameNodes, 
> which leads to inaccurate counts. I think that for the same ClusterID we only 
> need to take one value (the max) and then accumulate across clusters.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16086) Add volume information to datanode log for tracing

2021-06-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16086?focusedWorklogId=614379=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-614379
 ]

ASF GitHub Bot logged work on HDFS-16086:
-

Author: ASF GitHub Bot
Created on: 24/Jun/21 08:43
Start Date: 24/Jun/21 08:43
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3136:
URL: https://github.com/apache/hadoop/pull/3136#issuecomment-867455179


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 47s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 3 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  32m  2s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 24s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   1m 15s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   1m  2s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 22s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 55s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 25s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   3m 16s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  18m 30s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 16s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 21s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | -1 :x: |  javac  |   1m 21s | 
[/results-compile-javac-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3136/1/artifact/out/results-compile-javac-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt)
 |  hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 generated 36 new + 467 
unchanged - 36 fixed = 503 total (was 503)  |
   | +1 :green_heart: |  compile  |   1m 12s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |   1m 12s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 55s | 
[/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3136/1/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs-project/hadoop-hdfs: The patch generated 2 new + 406 unchanged 
- 0 fixed = 408 total (was 406)  |
   | +1 :green_heart: |  mvnsite  |   1m 16s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 48s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 19s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   3m 23s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  18m 52s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 343m 48s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3136/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 48s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 434m 20s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdfs.server.namenode.TestDecommissioningStatusWithBackoffMonitor |
   |   | hadoop.hdfs.server.namenode.ha.TestEditLogTailer |
   |   | hadoop.hdfs.server.namenode.ha.TestBootstrapStandby |
   |   | hadoop.hdfs.server.namenode.TestDecommissioningStatus |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3136/1/artifact/out/Dockerfile
 |
  

[jira] [Work logged] (HDFS-16085) Move the getPermissionChecker out of the read lock

2021-06-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16085?focusedWorklogId=614377=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-614377
 ]

ASF GitHub Bot logged work on HDFS-16085:
-

Author: ASF GitHub Bot
Created on: 24/Jun/21 08:14
Start Date: 24/Jun/21 08:14
Worklog Time Spent: 10m 
  Work Description: tomscut commented on pull request #3134:
URL: https://github.com/apache/hadoop/pull/3134#issuecomment-867436585


   Thanks @tasanuma for your review.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 614377)
Time Spent: 1h  (was: 50m)

> Move the getPermissionChecker out of the read lock
> --
>
> Key: HDFS-16085
> URL: https://issues.apache.org/jira/browse/HDFS-16085
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: tomscut
>Assignee: tomscut
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Move the getPermissionChecker out of the read lock in 
> NamenodeFsck#getBlockLocations() since the operation does not need to be 
> locked.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16085) Move the getPermissionChecker out of the read lock

2021-06-24 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16085?focusedWorklogId=614350=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-614350
 ]

ASF GitHub Bot logged work on HDFS-16085:
-

Author: ASF GitHub Bot
Created on: 24/Jun/21 07:20
Start Date: 24/Jun/21 07:20
Worklog Time Spent: 10m 
  Work Description: tomscut commented on pull request #3134:
URL: https://github.com/apache/hadoop/pull/3134#issuecomment-867403055


   Hi @aajisaka , could you please review the code? Thank you.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 614350)
Time Spent: 50m  (was: 40m)

> Move the getPermissionChecker out of the read lock
> --
>
> Key: HDFS-16085
> URL: https://issues.apache.org/jira/browse/HDFS-16085
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: tomscut
>Assignee: tomscut
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Move the getPermissionChecker out of the read lock in 
> NamenodeFsck#getBlockLocations() since the operation does not need to be 
> locked.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org