[jira] [Work logged] (HDFS-16137) Improve the comments related to FairCallQueue#queues

2021-07-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16137?focusedWorklogId=628165&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-628165
 ]

ASF GitHub Bot logged work on HDFS-16137:
-

Author: ASF GitHub Bot
Created on: 27/Jul/21 05:49
Start Date: 27/Jul/21 05:49
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3226:
URL: https://github.com/apache/hadoop/pull/3226#issuecomment-887228968


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |  12m 16s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  1s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  31m 11s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  21m 11s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |  18m 25s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   1m  7s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 35s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  9s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 39s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   2m 24s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  15m 54s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 54s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m 25s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |  20m 25s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  18m 31s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |  18m 31s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   1m  8s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 34s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m  5s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 43s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   2m 34s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  15m 54s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  17m  3s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 58s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 189m  1s |  |  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3226/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3226 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 52cb01a56227 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 4eace25561b4aa04bc9350ed80c8b3a55e026654 |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3226/2/testReport/ |
   | Max. process+thread count | 1300 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3226/2/console |
   | versions | 

[jira] [Commented] (HDFS-16139) Update BPServiceActor Scheduler's nextBlockReportTime atomically

2021-07-26 Thread Viraj Jasani (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17387788#comment-17387788
 ] 

Viraj Jasani commented on HDFS-16139:
-

[~hexiaoqiao] Thanks for the review. Are you fine merging the PR?

Thanks

> Update BPServiceActor Scheduler's nextBlockReportTime atomically
> 
>
> Key: HDFS-16139
> URL: https://issues.apache.org/jira/browse/HDFS-16139
> Project: Hadoop HDFS
>  Issue Type: Task
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> BPServiceActor#Scheduler's nextBlockReportTime can be assigned/read by testing 
> threads (through BPServiceActor#triggerXXX) as well as by actor threads, which 
> is why it is declared volatile. However, it is still assigned non-atomically, 
> e.g.:
> {code:java}
> if (factor != 0) {
>   nextBlockReportTime += factor * blockReportIntervalMs;
> } else {
>   // If the difference between the present time and the scheduled
>   // time is very less, the factor can be 0, so in that case, we can
>   // ignore that negligible time, spent while sending the BRss and
>   // schedule the next BR after the blockReportInterval.
>   nextBlockReportTime += blockReportIntervalMs;
> }
> {code}
> We should convert it to an AtomicLong so that concurrent assignments are 
> performed atomically.
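The conversion described above can be sketched as follows. This is a minimal, illustrative stand-in, not the actual BPServiceActor code: the class and method names are hypothetical, and only the increment logic from the snippet is reproduced.

```java
import java.util.concurrent.atomic.AtomicLong;

// Minimal sketch: with AtomicLong, the read-modify-write on
// nextBlockReportTime becomes a single atomic operation, whereas
// "volatile long x; x += delta;" is a separate read followed by a write.
class SchedulerSketch {
  private final AtomicLong nextBlockReportTime = new AtomicLong(0);
  private final long blockReportIntervalMs;

  SchedulerSketch(long blockReportIntervalMs) {
    this.blockReportIntervalMs = blockReportIntervalMs;
  }

  void scheduleNextBlockReport(long factor) {
    if (factor != 0) {
      nextBlockReportTime.getAndAdd(factor * blockReportIntervalMs);
    } else {
      // Negligible drift: just schedule one interval out.
      nextBlockReportTime.getAndAdd(blockReportIntervalMs);
    }
  }

  long getNextBlockReportTime() {
    return nextBlockReportTime.get();
  }
}
```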



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16137) Improve the comments related to FairCallQueue#queues

2021-07-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16137?focusedWorklogId=628161&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-628161
 ]

ASF GitHub Bot logged work on HDFS-16137:
-

Author: ASF GitHub Bot
Created on: 27/Jul/21 05:42
Start Date: 27/Jul/21 05:42
Worklog Time Spent: 10m 
  Work Description: jianghuazhu commented on pull request #3226:
URL: https://github.com/apache/hadoop/pull/3226#issuecomment-887226260


   @jojochuang, thanks for your comment. I will try to modify and submit some 
new code later.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 628161)
Time Spent: 1h  (was: 50m)

> Improve the comments related to FairCallQueue#queues
> 
>
> Key: HDFS-16137
> URL: https://issues.apache.org/jira/browse/HDFS-16137
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: ipc
>Reporter: JiangHua Zhu
>Assignee: JiangHua Zhu
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> The comment on FairCallQueue#queues is too terse:
>/* The queues */
>private final ArrayList<BlockingQueue<E>> queues;
> It does not convey the meaning of FairCallQueue#queues at a glance.
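A more descriptive comment could look like the sketch below; the wording is illustrative only and may differ from whatever was actually proposed in PR #3226:

```java
/**
 * The sub-queues, one per priority level: queues.get(0) holds the
 * highest-priority calls. The scheduler assigns each incoming call a
 * priority level, and the multiplexer decides which sub-queue to poll next.
 */
private final ArrayList<BlockingQueue<E>> queues;
```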



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16137) Improve the comments related to FairCallQueue#queues

2021-07-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16137?focusedWorklogId=628113&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-628113
 ]

ASF GitHub Bot logged work on HDFS-16137:
-

Author: ASF GitHub Bot
Created on: 27/Jul/21 02:58
Start Date: 27/Jul/21 02:58
Worklog Time Spent: 10m 
  Work Description: jojochuang commented on a change in pull request #3226:
URL: https://github.com/apache/hadoop/pull/3226#discussion_r677080813



##
File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/FairCallQueue.java
##
@@ -20,11 +20,7 @@
 
 import java.lang.ref.WeakReference;
 
-import java.util.ArrayList;
-import java.util.Collection;
-import java.util.Iterator;
-import java.util.AbstractQueue;
-import java.util.HashMap;
+import java.util.*;

Review comment:
   just a minor code style issue. We typically don't use wildcard imports in 
the Hadoop codebase.
   You can configure your IDE not to automatically collapse multiple imports 
into one wildcard import.
   
   Other than that I am +1




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 628113)
Time Spent: 50m  (was: 40m)

> Improve the comments related to FairCallQueue#queues
> 
>
> Key: HDFS-16137
> URL: https://issues.apache.org/jira/browse/HDFS-16137
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: ipc
>Reporter: JiangHua Zhu
>Assignee: JiangHua Zhu
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> The comment on FairCallQueue#queues is too terse:
>/* The queues */
>private final ArrayList<BlockingQueue<E>> queues;
> It does not convey the meaning of FairCallQueue#queues at a glance.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16145) CopyListing fails with FNF exception with snapshot diff

2021-07-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16145?focusedWorklogId=628110&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-628110
 ]

ASF GitHub Bot logged work on HDFS-16145:
-

Author: ASF GitHub Bot
Created on: 27/Jul/21 02:43
Start Date: 27/Jul/21 02:43
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3234:
URL: https://github.com/apache/hadoop/pull/3234#issuecomment-887163918


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |  12m 18s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  30m 39s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 34s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   0m 32s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   0m 27s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 36s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 29s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 29s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   0m 50s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  13m 44s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 26s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 25s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |   0m 25s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 23s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |   0m 23s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 16s | 
[/results-checkstyle-hadoop-tools_hadoop-distcp.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3234/5/artifact/out/results-checkstyle-hadoop-tools_hadoop-distcp.txt)
 |  hadoop-tools/hadoop-distcp: The patch generated 50 new + 26 unchanged - 0 
fixed = 76 total (was 26)  |
   | +1 :green_heart: |  mvnsite  |   0m 25s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 20s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 18s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   0m 48s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  13m 32s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  20m 53s |  |  hadoop-distcp in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 38s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 100m 27s |  |  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3234/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3234 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 63f9627d12e3 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 
05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 59c82aed70bf452da5f150b18064a887bc53dc53 |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3234/5/testReport/ |
   | Max. process+thread count | 720 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-distcp U: 

[jira] [Commented] (HDFS-16118) Improve the number of handlers that initialize NameNodeRpcServer#clientRpcServer

2021-07-26 Thread JiangHua Zhu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17387703#comment-17387703
 ] 

JiangHua Zhu commented on HDFS-16118:
-

[~hexiaoqiao], thanks for your comment.

> Improve the number of handlers that initialize 
> NameNodeRpcServer#clientRpcServer
> 
>
> Key: HDFS-16118
> URL: https://issues.apache.org/jira/browse/HDFS-16118
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: JiangHua Zhu
>Assignee: JiangHua Zhu
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> When initializing NameNodeRpcServer, if dfs.namenode.lifeline.handler.count is 
> set to a value less than or equal to 0 (such as -1; admittedly rare), the 
> number of lifeline RPC handlers is determined as 
> dfs.namenode.handler.count * lifelineHandlerRatio.
> The relevant code:
> {code:java}
> int lifelineHandlerCount = conf.getInt(
>     DFS_NAMENODE_LIFELINE_HANDLER_COUNT_KEY, 0);
> if (lifelineHandlerCount <= 0) {
>   float lifelineHandlerRatio = conf.getFloat(
>       DFS_NAMENODE_LIFELINE_HANDLER_RATIO_KEY,
>       DFS_NAMENODE_LIFELINE_HANDLER_RATIO_DEFAULT);
>   lifelineHandlerCount = Math.max(
>       (int) (handlerCount * lifelineHandlerRatio), 1);
> }
> {code}
> When this happens, lifelineHandlerCount should be subtracted from 
> handlerCount, but in fact it is not.
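The fallback above, together with the adjustment the issue proposes, can be sketched as follows. This is illustrative only: the free-standing helper is hypothetical, the default ratio of 0.10f is an assumption, and the subtraction is the proposed behavior, not the current code.

```java
// Hypothetical helper sketching the lifeline handler-count fallback and the
// proposed subtraction (ratio default 0.10f is assumed; not the merged fix).
class LifelineHandlerSketch {
  static final float LIFELINE_HANDLER_RATIO_DEFAULT = 0.10f;

  // Returns {lifelineHandlerCount, adjustedHandlerCount}.
  static int[] computeHandlerCounts(int handlerCount, int configuredLifelineCount) {
    int lifelineHandlerCount = configuredLifelineCount;
    if (lifelineHandlerCount <= 0) {
      // Fall back to a ratio of the main handler count, with a floor of 1.
      lifelineHandlerCount =
          Math.max((int) (handlerCount * LIFELINE_HANDLER_RATIO_DEFAULT), 1);
      // The issue's proposal: carve lifeline handlers out of the main pool
      // instead of leaving handlerCount unchanged.
      handlerCount = Math.max(handlerCount - lifelineHandlerCount, 1);
    }
    return new int[] {lifelineHandlerCount, handlerCount};
  }
}
```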



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16137) Improve the comments related to FairCallQueue#queues

2021-07-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16137?focusedWorklogId=628106&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-628106
 ]

ASF GitHub Bot logged work on HDFS-16137:
-

Author: ASF GitHub Bot
Created on: 27/Jul/21 02:21
Start Date: 27/Jul/21 02:21
Worklog Time Spent: 10m 
  Work Description: jianghuazhu commented on pull request #3226:
URL: https://github.com/apache/hadoop/pull/3226#issuecomment-887157195


   @virajjasani, thanks for your comment. I will try to modify and submit some 
new code later.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 628106)
Time Spent: 40m  (was: 0.5h)

> Improve the comments related to FairCallQueue#queues
> 
>
> Key: HDFS-16137
> URL: https://issues.apache.org/jira/browse/HDFS-16137
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: ipc
>Reporter: JiangHua Zhu
>Assignee: JiangHua Zhu
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> The comment on FairCallQueue#queues is too terse:
>/* The queues */
>private final ArrayList<BlockingQueue<E>> queues;
> It does not convey the meaning of FairCallQueue#queues at a glance.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-16134) Optimize RPC throughput

2021-07-26 Thread JiangHua Zhu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17387686#comment-17387686
 ] 

JiangHua Zhu commented on HDFS-16134:
-

[~hexiaoqiao], thanks for your comment. I will continue to pay attention.


> Optimize RPC throughput
> ---
>
> Key: HDFS-16134
> URL: https://issues.apache.org/jira/browse/HDFS-16134
> Project: Hadoop HDFS
>  Issue Type: Improvement
>Reporter: JiangHua Zhu
>Assignee: JiangHua Zhu
>Priority: Major
>
> The RPC server currently behaves as follows:
> 1. Each RPC server has a single Responder for returning results, although many 
> client responses are sent back directly once the handler finishes;
> 2. Call#Connection#responseQueue is accessed under synchronization every time 
> it is used.
> These behaviors limit RPC performance; we can try the following changes:
> 1. Use multiple Responders per RPC server to return results;
> 2. After a Handler finishes processing, add the result to a queue and let a 
> Responder process the queued data;
> 3. Add an identifier similar to a sequenceId to preserve the order of responses.
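Steps 2 and 3 of the proposal can be sketched together as below. All names are illustrative; this is not Hadoop's Server/Responder code, just a toy queue that releases responses strictly in sequence order.

```java
import java.util.concurrent.PriorityBlockingQueue;
import java.util.concurrent.atomic.AtomicLong;

// Toy sketch: handlers enqueue finished responses tagged with a sequence id;
// a responder drains them, releasing each response only when its turn comes.
class OrderedResponseQueue {
  private final AtomicLong nextSeqId = new AtomicLong(0);
  private long nextToSend = 0;
  // Entries are {seqId, payload}, ordered by seqId.
  private final PriorityBlockingQueue<long[]> pending =
      new PriorityBlockingQueue<>(16, (a, b) -> Long.compare(a[0], b[0]));

  // Called when a call is accepted; fixes the order responses must go out in.
  long assignSequenceId() {
    return nextSeqId.getAndIncrement();
  }

  // Called by a handler once its result is ready.
  void offerResponse(long seqId, long payload) {
    pending.offer(new long[] {seqId, payload});
  }

  // Called by a responder thread: returns the next payload in seqId order,
  // or null if the next-in-order response is not ready yet.
  synchronized Long pollInOrder() {
    long[] head = pending.peek();
    if (head != null && head[0] == nextToSend) {
      pending.poll();
      nextToSend++;
      return head[1];
    }
    return null;
  }
}
```

Because each response carries a sequence id, several responder threads could drain the queue without reordering replies on a connection.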



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-16137) Improve the comments related to FairCallQueue#queues

2021-07-26 Thread JiangHua Zhu (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17387687#comment-17387687
 ] 

JiangHua Zhu commented on HDFS-16137:
-

[~hexiaoqiao], thanks for your comment.

> Improve the comments related to FairCallQueue#queues
> 
>
> Key: HDFS-16137
> URL: https://issues.apache.org/jira/browse/HDFS-16137
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: ipc
>Reporter: JiangHua Zhu
>Assignee: JiangHua Zhu
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The comment on FairCallQueue#queues is too terse:
>/* The queues */
>private final ArrayList<BlockingQueue<E>> queues;
> It does not convey the meaning of FairCallQueue#queues at a glance.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16141) [FGL] Address permission related issues with File / Directory

2021-07-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16141?focusedWorklogId=628098&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-628098
 ]

ASF GitHub Bot logged work on HDFS-16141:
-

Author: ASF GitHub Bot
Created on: 27/Jul/21 01:39
Start Date: 27/Jul/21 01:39
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3232:
URL: https://github.com/apache/hadoop/pull/3232#issuecomment-887142585


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 50s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ fgl Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  33m 33s |  |  fgl passed  |
   | +1 :green_heart: |  compile  |   1m 24s |  |  fgl passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   1m 14s |  |  fgl passed with JDK Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   0m 57s |  |  fgl passed  |
   | +1 :green_heart: |  mvnsite  |   1m 22s |  |  fgl passed  |
   | +1 :green_heart: |  javadoc  |   0m 55s |  |  fgl passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 25s |  |  fgl passed with JDK Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | -1 :x: |  spotbugs  |   3m 19s | 
[/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3232/4/artifact/out/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html)
 |  hadoop-hdfs-project/hadoop-hdfs in fgl has 1 extant spotbugs warnings.  |
   | +1 :green_heart: |  shadedclient  |  18m 56s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 14s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 18s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |   1m 18s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  9s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |   1m  9s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 51s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 13s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 48s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 19s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   3m 22s |  |  
hadoop-hdfs-project/hadoop-hdfs generated 0 new + 0 unchanged - 1 fixed = 0 
total (was 1)  |
   | +1 :green_heart: |  shadedclient  |  18m 53s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 389m 34s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3232/4/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 38s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 481m 16s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.namenode.TestDeleteRace |
   |   | hadoop.hdfs.server.namenode.ha.TestObserverNode |
   |   | hadoop.hdfs.server.namenode.TestAuditLogs |
   |   | hadoop.hdfs.server.namenode.snapshot.TestSnapshotBlocksMap |
   |   | hadoop.hdfs.server.namenode.TestNameNodeRecovery |
   |   | 
hadoop.hdfs.server.namenode.TestDecommissioningStatusWithBackoffMonitor |
   |   | hadoop.hdfs.TestDFSUpgradeFromImage |
   |   | hadoop.hdfs.TestAclsEndToEnd |
   |   | hadoop.hdfs.server.datanode.TestBatchIbr |
   |   | hadoop.hdfs.server.namenode.snapshot.TestSnapshot |
   |   | hadoop.hdfs.server.namenode.snapshot.TestOrderedSnapshotDeletion |
   |   | hadoop.hdfs.server.namenode.snapshot.TestRenameWithSnapshots |
   |   | hadoop.hdfs.TestFileCreation |
   |   | hadoop.hdfs.server.namenode.ha.TestEditLogTailer |
   |   | hadoop.hdfs.TestErasureCodingPolicyWithSnapshot |
   |   | 

[jira] [Work logged] (HDFS-16145) CopyListing fails with FNF exception with snapshot diff

2021-07-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16145?focusedWorklogId=628095&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-628095
 ]

ASF GitHub Bot logged work on HDFS-16145:
-

Author: ASF GitHub Bot
Created on: 27/Jul/21 01:01
Start Date: 27/Jul/21 01:01
Worklog Time Spent: 10m 
  Work Description: bshashikant commented on a change in pull request #3234:
URL: https://github.com/apache/hadoop/pull/3234#discussion_r676858532



##
File path: 
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpSync.java
##
@@ -563,10 +589,27 @@ private Path translateRenamedPath(Path sourcePath,
 } else {
   List renameDiffsList =
   diffMap.get(SnapshotDiffReport.DiffType.RENAME);
+  List deletedDirDiffsList =
+  diffMap.get(SnapshotDiffReport.DiffType.DELETE);

Review comment:
   The list will hold all the entries that are marked deleted. For a single 
directory with 1000 files that gets deleted, the snapshot diff report itself 
will have just one deleted entry, for the directory.
   
   It already scans the whole rename list. With this change it needs to scan the 
deleted list as well, so it can cause performance problems depending on how 
many entries are marked deleted.
   
   I have now modified the logic to build the deleted list only from entries that 
were marked deleted because of a rename to a path excluded by the filter. This 
should limit the scans.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 628095)
Time Spent: 2.5h  (was: 2h 20m)

> CopyListing fails with FNF exception with snapshot diff
> ---
>
> Key: HDFS-16145
> URL: https://issues.apache.org/jira/browse/HDFS-16145
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: distcp
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> Distcp with snapshot diff and filters marks a Rename as a delete 
> operation on the target if the rename target is a directory which is 
> excluded by the filter. However, files/subdirs created/modified 
> after the old snapshot but prior to the Rename will still be present as 
> modified/created entries in the final copy list. Since the parent directory 
> is marked for deletion, these subsequent create/modify entries should be 
> ignored while building the final copy list. 
> In such cases, when the final copy list is built, distcp tries to do a 
> lookup for each created/modified file in the newer snapshot, which fails 
> because the parent dir has already been moved to a new location in the later snapshot.
>  
> {code:java}
> sudo -u kms hadoop key create testkey
> hadoop fs -mkdir -p /data/gcgdlknnasg/
> hdfs crypto -createZone -keyName testkey -path /data/gcgdlknnasg/
> hadoop fs -mkdir -p /dest/gcgdlknnasg
> hdfs crypto -createZone -keyName testkey -path /dest/gcgdlknnasg
> hdfs dfs -mkdir /data/gcgdlknnasg/dir1
> hdfs dfsadmin -allowSnapshot /data/gcgdlknnasg/ 
> hdfs dfsadmin -allowSnapshot /dest/gcgdlknnasg/ 
> [root@nightly62x-1 logs]# hdfs dfs -ls -R /data/gcgdlknnasg/
> drwxrwxrwt   - hdfs supergroup  0 2021-07-16 14:05 
> /data/gcgdlknnasg/.Trash
> drwxr-xr-x   - hdfs supergroup  0 2021-07-16 13:07 
> /data/gcgdlknnasg/dir1
> [root@nightly62x-1 logs]# hdfs dfs -ls -R /dest/gcgdlknnasg/
> [root@nightly62x-1 logs]#
> hdfs dfs -put /etc/hosts /data/gcgdlknnasg/dir1/
> hdfs dfs -rm -r /data/gcgdlknnasg/dir1/
> hdfs dfs -mkdir /data/gcgdlknnasg/dir1/
> ===> Run BDR with “Abort on Snapshot Diff Failures” CHECKED now in the 
> replication schedule. You get into below error and failure of the BDR job.
> 21/07/16 15:02:30 INFO distcp.DistCp: Failed to use snapshot diff - 
> java.io.FileNotFoundException: File does not exist: 
> /data/gcgdlknnasg/.snapshot/distcp-5-46485360-new/dir1/hosts
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1494)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1487)
> ……..
> {code}
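The fix the description calls for (dropping created/modified entries whose parent is marked deleted) can be sketched as follows. The types are simplified to plain path strings; DistCpSync's real DiffInfo-based logic is more involved.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch: a created/modified entry is dropped from the copy
// list when it, or any ancestor directory, appears in the deleted set.
class CopyListFilterSketch {
  static boolean isSelfOrAncestorDeleted(String path, List<String> deletedDirs) {
    for (String d : deletedDirs) {
      if (path.equals(d) || path.startsWith(d + "/")) {
        return true;
      }
    }
    return false;
  }

  static List<String> buildFinalCopyList(List<String> createdOrModified,
                                         List<String> deletedDirs) {
    List<String> result = new ArrayList<>();
    for (String p : createdOrModified) {
      if (!isSelfOrAncestorDeleted(p, deletedDirs)) {
        result.add(p);
      }
    }
    return result;
  }
}
```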



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16145) CopyListing fails with FNF exception with snapshot diff

2021-07-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16145?focusedWorklogId=628093&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-628093
 ]

ASF GitHub Bot logged work on HDFS-16145:
-

Author: ASF GitHub Bot
Created on: 27/Jul/21 00:57
Start Date: 27/Jul/21 00:57
Worklog Time Spent: 10m 
  Work Description: bshashikant commented on a change in pull request #3234:
URL: https://github.com/apache/hadoop/pull/3234#discussion_r676860099



##
File path: 
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpSync.java
##
@@ -515,6 +521,26 @@ private DiffInfo getRenameItem(DiffInfo diff, DiffInfo[] 
renameDiffArray) {
 return null;
   }
 
+  /**
+   * Checks if a parent dir is marked deleted as part of a dir rename
+   * to a path which is excluded by the filter.
+   * @return true if it's marked deleted
+   */
+  private boolean isParentOrSelfMarkedDeleted(DiffInfo diff,
+  DiffInfo[] deletedDirDiffArray) {
+for (DiffInfo item : deletedDirDiffArray) {
+  if (item.getSource().equals(diff.getSource())) {
+if (diff.getType() == SnapshotDiffReport.DiffType.MODIFY) {

Review comment:
   Added the comments.
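The diff quoted above is truncated at the MODIFY check. Purely as an illustration of the check being discussed (the stub `DiffType`/`DiffInfo` types and the method body below are assumptions for a self-contained sketch, not the actual HDFS-16145 patch):

```java
// Minimal stand-ins for the real DistCp types; these stubs and the method
// body are an illustrative assumption, not the actual HDFS-16145 patch.
enum DiffType { CREATE, MODIFY, DELETE }

class DiffInfo {
    private final String source;
    private final DiffType type;

    DiffInfo(String source, DiffType type) {
        this.source = source;
        this.type = type;
    }

    String getSource() { return source; }
    DiffType getType() { return type; }
}

public class ParentDeletedSketch {

    // Returns true if the diff entry itself (a MODIFY of a dir marked
    // deleted) or any of its ancestor dirs appears in the deleted-dir array.
    static boolean isParentOrSelfMarkedDeleted(DiffInfo diff,
            DiffInfo[] deletedDirDiffArray) {
        for (DiffInfo item : deletedDirDiffArray) {
            if (item.getSource().equals(diff.getSource())) {
                // Self match: only meaningful for a MODIFY entry, since a
                // DELETE of the same path is the deletion marker itself.
                if (diff.getType() == DiffType.MODIFY) {
                    return true;
                }
            } else if (diff.getSource().startsWith(item.getSource() + "/")) {
                // A parent dir of this entry is marked deleted.
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        DiffInfo[] deleted = { new DiffInfo("/data/dir1", DiffType.DELETE) };
        DiffInfo createdUnder = new DiffInfo("/data/dir1/hosts", DiffType.CREATE);
        DiffInfo unrelated = new DiffInfo("/data/other", DiffType.CREATE);
        System.out.println(isParentOrSelfMarkedDeleted(createdUnder, deleted)); // true
        System.out.println(isParentOrSelfMarkedDeleted(unrelated, deleted));    // false
    }
}
```

The `startsWith(item.getSource() + "/")` form is used so that a sibling path such as `/data/dir10` does not match `/data/dir1`.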




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 628093)
Time Spent: 2h 20m  (was: 2h 10m)

> CopyListing fails with FNF exception with snapshot diff
> ---
>
> Key: HDFS-16145
> URL: https://issues.apache.org/jira/browse/HDFS-16145
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: distcp
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 20m
>  Remaining Estimate: 0h
>
> Distcp with snapshot diff and filters marks a rename as a delete 
> operation on the target if the rename target is a directory excluded by 
> the filter. However, files/subdirs created or modified after the old 
> snapshot but prior to the rename will still appear as modified/created 
> entries in the final copy list. Since the parent directory is marked 
> for deletion, these create/modify entries should be ignored while 
> building the final copy list. 
> Without that, when the final copy list is built, distcp does a lookup 
> for each created/modified file in the newer snapshot, which fails 
> because the parent dir has already moved to a new location in the later 
> snapshot.
>  
> {code:java}
> sudo -u kms hadoop key create testkey
> hadoop fs -mkdir -p /data/gcgdlknnasg/
> hdfs crypto -createZone -keyName testkey -path /data/gcgdlknnasg/
> hadoop fs -mkdir -p /dest/gcgdlknnasg
> hdfs crypto -createZone -keyName testkey -path /dest/gcgdlknnasg
> hdfs dfs -mkdir /data/gcgdlknnasg/dir1
> hdfs dfsadmin -allowSnapshot /data/gcgdlknnasg/ 
> hdfs dfsadmin -allowSnapshot /dest/gcgdlknnasg/ 
> [root@nightly62x-1 logs]# hdfs dfs -ls -R /data/gcgdlknnasg/
> drwxrwxrwt   - hdfs supergroup  0 2021-07-16 14:05 
> /data/gcgdlknnasg/.Trash
> drwxr-xr-x   - hdfs supergroup  0 2021-07-16 13:07 
> /data/gcgdlknnasg/dir1
> [root@nightly62x-1 logs]# hdfs dfs -ls -R /dest/gcgdlknnasg/
> [root@nightly62x-1 logs]#
> hdfs dfs -put /etc/hosts /data/gcgdlknnasg/dir1/
> hdfs dfs -rm -r /data/gcgdlknnasg/dir1/
> hdfs dfs -mkdir /data/gcgdlknnasg/dir1/
> ===> Now run BDR with “Abort on Snapshot Diff Failures” CHECKED in the 
> replication schedule. The BDR job fails with the error below.
> 21/07/16 15:02:30 INFO distcp.DistCp: Failed to use snapshot diff - 
> java.io.FileNotFoundException: File does not exist: 
> /data/gcgdlknnasg/.snapshot/distcp-5-46485360-new/dir1/hosts
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1494)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1487)
> ……..
> {code}
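The fix described above amounts to dropping every created/modified entry whose parent dir is already marked deleted before the copy-list lookup runs. A minimal standalone sketch of that filtering step (illustrative only; plain path strings instead of the real DistCp `DiffInfo` machinery, and the helper names are assumptions):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Illustrative sketch, not the actual DistCpSync code: when a rename target
// is excluded by the copy filter, the rename is recorded as a DELETE of the
// source dir; CREATE/MODIFY entries under that dir must then be dropped from
// the final copy list, otherwise the later lookup in the new snapshot hits
// FileNotFoundException (as in the repro above).
public class CopyListSketch {

    // True if the path is one of the deleted dirs or lies underneath one.
    static boolean isUnderDeletedDir(String path, List<String> deletedDirs) {
        for (String dir : deletedDirs) {
            if (path.equals(dir) || path.startsWith(dir + "/")) {
                return true;
            }
        }
        return false;
    }

    // Keep only diff entries that are not shadowed by a deleted parent dir.
    static List<String> buildCopyList(List<String> createdOrModified,
                                      List<String> deletedDirs) {
        List<String> copyList = new ArrayList<>();
        for (String path : createdOrModified) {
            if (!isUnderDeletedDir(path, deletedDirs)) {
                copyList.add(path);
            }
        }
        return copyList;
    }

    public static void main(String[] args) {
        // dir1 was renamed to a filtered-out path, so it is marked deleted;
        // the file created under it is skipped rather than looked up.
        List<String> deleted = Arrays.asList("/data/gcgdlknnasg/dir1");
        List<String> diffs = Arrays.asList(
            "/data/gcgdlknnasg/dir1/hosts",   // created under deleted dir -> skip
            "/data/gcgdlknnasg/other.txt");   // unrelated -> keep
        System.out.println(buildCopyList(diffs, deleted));
        // [/data/gcgdlknnasg/other.txt]
    }
}
```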



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-16144) Revert HDFS-15372 (Files in snapshots no longer see attribute provider permissions)

2021-07-26 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17387628#comment-17387628
 ] 

Hadoop QA commented on HDFS-16144:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime ||  Logfile || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 13m 
28s{color} | {color:blue}{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} || ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green}{color} | {color:green} No case conflicting files 
found. {color} |
| {color:green}+1{color} | {color:green} {color} | {color:green}  0m  0s{color} 
| {color:green}test4tests{color} | {color:green} The patch appears to include 1 
new or modified test files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} || ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 19m 
39s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
23s{color} | {color:green}{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
18s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
59s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
21s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
15m 33s{color} | {color:green}{color} | {color:green} branch has no errors when 
building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green}{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
30s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue} 21m  
4s{color} | {color:blue}{color} | {color:blue} Both FindBugs and SpotBugs are 
enabled, using SpotBugs. {color} |
| {color:green}+1{color} | {color:green} spotbugs {color} | {color:green}  3m  
5s{color} | {color:green}{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} || ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
14s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
15s{color} | {color:green}{color} | {color:green} the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
15s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
9s{color} | {color:green}{color} | {color:green} the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
9s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
51s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
16s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green}{color} | {color:green} The patch has no whitespace 
issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m 40s{color} | {color:green}{color} | {color:green} patch has no errors when 
building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green}{color} | {color:green} the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
23s{color} | {color:green}{color} | {color:green} the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 {color} |
| {color:green}+1{color} | {color:green} spotbugs {color} | {color:green}  3m 
14s{color} | {color:green}{color} | 

[jira] [Work logged] (HDFS-16145) CopyListing fails with FNF exception with snapshot diff

2021-07-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16145?focusedWorklogId=628008=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-628008
 ]

ASF GitHub Bot logged work on HDFS-16145:
-

Author: ASF GitHub Bot
Created on: 26/Jul/21 21:23
Start Date: 26/Jul/21 21:23
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3234:
URL: https://github.com/apache/hadoop/pull/3234#issuecomment-887037357


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  14m  7s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  31m 48s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 36s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   0m 34s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   0m 28s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 38s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 32s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 29s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   0m 53s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  14m  8s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 27s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 27s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |   0m 27s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 24s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |   0m 24s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 18s | 
[/results-checkstyle-hadoop-tools_hadoop-distcp.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3234/4/artifact/out/results-checkstyle-hadoop-tools_hadoop-distcp.txt)
 |  hadoop-tools/hadoop-distcp: The patch generated 50 new + 26 unchanged - 0 
fixed = 76 total (was 26)  |
   | +1 :green_heart: |  mvnsite  |   0m 26s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 20s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 20s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   0m 55s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  13m 58s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  50m 44s |  |  hadoop-distcp in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 36s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 134m 43s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3234/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3234 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux ff4256e149a4 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / a3c764c355ef45e22ce004140c4a55c39c5db449 |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3234/4/testReport/ |
   | Max. process+thread count | 574 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-distcp U: 

[jira] [Work logged] (HDFS-16145) CopyListing fails with FNF exception with snapshot diff

2021-07-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16145?focusedWorklogId=628006=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-628006
 ]

ASF GitHub Bot logged work on HDFS-16145:
-

Author: ASF GitHub Bot
Created on: 26/Jul/21 21:22
Start Date: 26/Jul/21 21:22
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3234:
URL: https://github.com/apache/hadoop/pull/3234#issuecomment-887036892


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  16m 23s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  31m 50s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 35s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   0m 32s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   0m 27s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 38s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 31s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 30s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   0m 54s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  14m  1s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 27s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |   0m 27s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 23s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |   0m 23s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 18s | 
[/results-checkstyle-hadoop-tools_hadoop-distcp.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3234/3/artifact/out/results-checkstyle-hadoop-tools_hadoop-distcp.txt)
 |  hadoop-tools/hadoop-distcp: The patch generated 50 new + 26 unchanged - 0 
fixed = 76 total (was 26)  |
   | +1 :green_heart: |  mvnsite  |   0m 27s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 21s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 19s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   0m 55s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  13m 39s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  50m 17s |  |  hadoop-distcp in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 37s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 136m  9s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3234/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3234 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 0db5247f3721 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / a3c764c355ef45e22ce004140c4a55c39c5db449 |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3234/3/testReport/ |
   | Max. process+thread count | 547 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-distcp U: 

[jira] [Work logged] (HDFS-16145) CopyListing fails with FNF exception with snapshot diff

2021-07-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16145?focusedWorklogId=627988=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-627988
 ]

ASF GitHub Bot logged work on HDFS-16145:
-

Author: ASF GitHub Bot
Created on: 26/Jul/21 20:22
Start Date: 26/Jul/21 20:22
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3234:
URL: https://github.com/apache/hadoop/pull/3234#issuecomment-887000902


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 33s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  31m 20s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 34s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   0m 30s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   0m 26s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 36s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 30s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 28s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   0m 49s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  14m  5s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 26s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 25s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |   0m 24s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 22s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |   0m 22s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 16s | 
[/results-checkstyle-hadoop-tools_hadoop-distcp.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3234/2/artifact/out/results-checkstyle-hadoop-tools_hadoop-distcp.txt)
 |  hadoop-tools/hadoop-distcp: The patch generated 51 new + 26 unchanged - 0 
fixed = 77 total (was 26)  |
   | +1 :green_heart: |  mvnsite  |   0m 25s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 20s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 18s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   0m 49s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  13m 40s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  |  20m 57s | 
[/patch-unit-hadoop-tools_hadoop-distcp.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3234/2/artifact/out/patch-unit-hadoop-tools_hadoop-distcp.txt)
 |  hadoop-distcp in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 37s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  90m  5s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.tools.TestDistCpSync |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3234/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3234 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux d5f369204b6b 4.15.0-136-generic #140-Ubuntu SMP Thu Jan 28 
05:20:47 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / d575e8c25f5acc7040c607434b95f0c26c4de29b |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 

[jira] [Work logged] (HDFS-16143) TestEditLogTailer#testStandbyTriggersLogRollsWhenTailInProgressEdits is flaky

2021-07-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16143?focusedWorklogId=627964=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-627964
 ]

ASF GitHub Bot logged work on HDFS-16143:
-

Author: ASF GitHub Bot
Created on: 26/Jul/21 19:39
Start Date: 26/Jul/21 19:39
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3235:
URL: https://github.com/apache/hadoop/pull/3235#issuecomment-886973725


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 52s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  30m 40s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 22s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   1m 17s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   0m 57s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 22s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 57s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 26s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   3m  8s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m 13s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 12s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 15s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |   1m 15s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  6s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |   1m  6s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 48s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 10s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 47s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 21s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   3m 10s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  16m  7s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 433m 49s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3235/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 41s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 517m 42s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.namenode.TestDecommissioningStatus 
|
   |   | hadoop.hdfs.TestViewDistributedFileSystemContract |
   |   | hadoop.hdfs.server.namenode.ha.TestSeveralNameNodes |
   |   | hadoop.fs.viewfs.TestViewFSOverloadSchemeWithMountTableConfigInHDFS |
   |   | 
hadoop.hdfs.server.namenode.TestDecommissioningStatusWithBackoffMonitor |
   |   | hadoop.hdfs.server.namenode.ha.TestEditLogTailer |
   |   | hadoop.hdfs.server.diskbalancer.command.TestDiskBalancerCommand |
   |   | hadoop.hdfs.TestHDFSFileSystemContract |
   |   | hadoop.hdfs.web.TestWebHdfsFileSystemContract |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3235/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3235 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 75332e3410c6 4.15.0-65-generic #74-Ubuntu SMP Tue Sep 17 
17:06:04 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git 

[jira] [Work logged] (HDFS-16145) CopyListing fails with FNF exception with snapshot diff

2021-07-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16145?focusedWorklogId=627931=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-627931
 ]

ASF GitHub Bot logged work on HDFS-16145:
-

Author: ASF GitHub Bot
Created on: 26/Jul/21 18:52
Start Date: 26/Jul/21 18:52
Worklog Time Spent: 10m 
  Work Description: bshashikant commented on a change in pull request #3234:
URL: https://github.com/apache/hadoop/pull/3234#discussion_r676860099



##
File path: 
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpSync.java
##
@@ -515,6 +521,26 @@ private DiffInfo getRenameItem(DiffInfo diff, DiffInfo[] 
renameDiffArray) {
 return null;
   }
 
+  /**
+   * Checks if a parent dir is marked deleted as part of a dir rename
+   * to a path which is excluded by the filter.
+   * @return true if it's marked deleted
+   */
+  private boolean isParentOrSelfMarkedDeleted(DiffInfo diff,
+  DiffInfo[] deletedDirDiffArray) {
+for (DiffInfo item : deletedDirDiffArray) {
+  if (item.getSource().equals(diff.getSource())) {
+if (diff.getType() == SnapshotDiffReport.DiffType.MODIFY) {

Review comment:
   This is not the same as above.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 627931)
Time Spent: 1h 40m  (was: 1.5h)

> CopyListing fails with FNF exception with snapshot diff
> ---
>
> Key: HDFS-16145
> URL: https://issues.apache.org/jira/browse/HDFS-16145
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: distcp
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 40m
>  Remaining Estimate: 0h
>
> Distcp with snapshot diff and filters marks a rename as a delete 
> operation on the target if the rename target is a directory excluded by 
> the filter. However, files/subdirs created or modified after the old 
> snapshot but prior to the rename will still appear as modified/created 
> entries in the final copy list. Since the parent directory is marked 
> for deletion, these create/modify entries should be ignored while 
> building the final copy list. 
> Without that, when the final copy list is built, distcp does a lookup 
> for each created/modified file in the newer snapshot, which fails 
> because the parent dir has already moved to a new location in the later 
> snapshot.
>  
> {code:java}
> sudo -u kms hadoop key create testkey
> hadoop fs -mkdir -p /data/gcgdlknnasg/
> hdfs crypto -createZone -keyName testkey -path /data/gcgdlknnasg/
> hadoop fs -mkdir -p /dest/gcgdlknnasg
> hdfs crypto -createZone -keyName testkey -path /dest/gcgdlknnasg
> hdfs dfs -mkdir /data/gcgdlknnasg/dir1
> hdfs dfsadmin -allowSnapshot /data/gcgdlknnasg/ 
> hdfs dfsadmin -allowSnapshot /dest/gcgdlknnasg/ 
> [root@nightly62x-1 logs]# hdfs dfs -ls -R /data/gcgdlknnasg/
> drwxrwxrwt   - hdfs supergroup  0 2021-07-16 14:05 
> /data/gcgdlknnasg/.Trash
> drwxr-xr-x   - hdfs supergroup  0 2021-07-16 13:07 
> /data/gcgdlknnasg/dir1
> [root@nightly62x-1 logs]# hdfs dfs -ls -R /dest/gcgdlknnasg/
> [root@nightly62x-1 logs]#
> hdfs dfs -put /etc/hosts /data/gcgdlknnasg/dir1/
> hdfs dfs -rm -r /data/gcgdlknnasg/dir1/
> hdfs dfs -mkdir /data/gcgdlknnasg/dir1/
> ===> Run BDR with “Abort on Snapshot Diff Failures” CHECKED now in the 
> replication schedule. You get into below error and failure of the BDR job.
> 21/07/16 15:02:30 INFO distcp.DistCp: Failed to use snapshot diff - 
> java.io.FileNotFoundException: File does not exist: 
> /data/gcgdlknnasg/.snapshot/distcp-5-46485360-new/dir1/hosts
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1494)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1487)
> ……..
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16145) CopyListing fails with FNF exception with snapshot diff

2021-07-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16145?focusedWorklogId=627930=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-627930
 ]

ASF GitHub Bot logged work on HDFS-16145:
-

Author: ASF GitHub Bot
Created on: 26/Jul/21 18:49
Start Date: 26/Jul/21 18:49
Worklog Time Spent: 10m 
  Work Description: bshashikant commented on a change in pull request #3234:
URL: https://github.com/apache/hadoop/pull/3234#discussion_r676858532



##
File path: 
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpSync.java
##
@@ -563,10 +589,27 @@ private Path translateRenamedPath(Path sourcePath,
 } else {
   List renameDiffsList =
   diffMap.get(SnapshotDiffReport.DiffType.RENAME);
+  List deletedDirDiffsList =
+  diffMap.get(SnapshotDiffReport.DiffType.DELETE);

Review comment:
   The list will hold all the entries that are marked deleted. For a single 
directory with 1000 files that gets deleted, the snapshot diff report itself 
will have just 1 deleted entry, for the directory.
   
   It already scans the whole rename list. With this change it needs to scan 
the deleted list as well, which can become a performance problem depending on 
how many entries are marked deleted.
   
   I have now modified the logic to build a deleted list only for entries 
which have been marked deleted because of a rename to an excluded path 
(excluded by the filter). This should limit the scans.
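The scan being discussed can be illustrated with a rough standalone sketch. This uses plain path strings instead of the actual `DistCpSync`/`DiffInfo` types, and the class and method names here are illustrative only, not the real patch:

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch of the scan discussed above: given a list of source
// paths marked deleted (e.g. because a rename target was excluded by a
// filter), decide whether a diff entry's path is itself, or lies under, a
// deleted directory. Not the actual DistCpSync implementation.
public class DeletedScanSketch {

  // True when "parent" is a proper path prefix of "child".
  public static boolean isParentOf(String parent, String child) {
    String p = parent.endsWith("/") ? parent : parent + "/";
    return child.startsWith(p) && child.length() > p.length();
  }

  // Linear scan over the deleted list; keeping this list small (only
  // rename-to-excluded-path entries) is what limits the cost.
  public static boolean isSelfOrUnderDeleted(String source, List<String> deleted) {
    for (String d : deleted) {
      if (d.equals(source) || isParentOf(d, source)) {
        return true;
      }
    }
    return false;
  }

  public static void main(String[] args) {
    List<String> deleted = Arrays.asList("/foo/dir");
    // An entry under the deleted dir should be skipped from the copy list.
    System.out.println(isSelfOrUnderDeleted("/foo/dir/f1", deleted));  // true
    // An unrelated sibling is kept.
    System.out.println(isSelfOrUnderDeleted("/foo/dir2/f1", deleted)); // false
  }
}
```

Since the scan is O(entries in the diff × entries in the deleted list), restricting the deleted list as described above keeps the overall cost close to the existing rename-list scan.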






Issue Time Tracking
---

Worklog Id: (was: 627930)
Time Spent: 1.5h  (was: 1h 20m)

> CopyListing fails with FNF exception with snapshot diff
> ---
>
> Key: HDFS-16145
> URL: https://issues.apache.org/jira/browse/HDFS-16145
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: distcp
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>






[jira] [Work logged] (HDFS-16145) CopyListing fails with FNF exception with snapshot diff

2021-07-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16145?focusedWorklogId=627926=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-627926
 ]

ASF GitHub Bot logged work on HDFS-16145:
-

Author: ASF GitHub Bot
Created on: 26/Jul/21 18:40
Start Date: 26/Jul/21 18:40
Worklog Time Spent: 10m 
  Work Description: bshashikant commented on a change in pull request #3234:
URL: https://github.com/apache/hadoop/pull/3234#discussion_r676850480



##
File path: 
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpSync.java
##
@@ -515,6 +521,26 @@ private DiffInfo getRenameItem(DiffInfo diff, DiffInfo[] 
renameDiffArray) {
 return null;
   }
 
+  /**
+   * Checks if a parent dir is marked deleted as part of a dir rename happening
+   * to a path which is excluded by the filter.
+   * @return true if it's marked deleted
+   */
+  private boolean isParentOrSelfMarkedDeleted(DiffInfo diff,
+  DiffInfo[] deletedDirDiffArray) {
+for (DiffInfo item : deletedDirDiffArray) {
+  if (item.getSource().equals(diff.getSource())) {
+if (diff.getType() == SnapshotDiffReport.DiffType.MODIFY) {
+  return true;
+}
+  }
+  if (isParentOf(item.getSource(), diff.getSource())) {

Review comment:
   In the latter case, "create /foo/dir/f1" won't be reported in the 
snapshot diff itself, as the parent itself is created after the old snapshot. 
The modify on the recreated "/foo/dir" also won't be listed in the snapshot 
diff report.






Issue Time Tracking
---

Worklog Id: (was: 627926)
Time Spent: 1h 20m  (was: 1h 10m)

> CopyListing fails with FNF exception with snapshot diff
> ---
>
> Key: HDFS-16145
> URL: https://issues.apache.org/jira/browse/HDFS-16145
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: distcp
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>




[jira] [Commented] (HDFS-14703) NameNode Fine-Grained Locking via Metadata Partitioning

2021-07-26 Thread Daryn Sharp (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-14703?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17387540#comment-17387540
 ] 

Daryn Sharp commented on HDFS-14703:


I applaud this effort but I have concerns about real world scalability.  This 
may positively affect a synthetic benchmark, small clusters with a small 
namespace, or a presumed deterministic data creation rate and locality, but…. 
How will this scale to namespaces up to almost 1 billion total objects with 
250-500 million inodes performing an avg 20-40k ops/sec with occasional spikes 
exceeding 100k ops/sec, with up to 600 active applications?
{quote}entries of the same directory are partitioned into the same range most 
of the time. “Most of the time” means that very large directories containing 
too many children or some of boundary directories can still span across 
multiple partitions.
{quote}
How many is “too many children”? It’s not uncommon for directories to contain 
thousands or even tens/hundreds of thousands of files.

Jobs with large DAGs that run for minutes, hours, or days seem likely to 
violate “most of the time” and create high fragmentation across partitions. 
Task time shift caused by queue resource availability, speculation, preemption, 
etc will violate the premise of inodes neatly clustering into partitions based 
on creation time.

How will this handle things like IBR processing which includes many blocks 
spread across multiple partitions? Especially during a replication flurry 
caused by rack loss or multi-rack decommissioning (over a hundred hosts)?

How will live lock conditions be resolved that result from multiple ops needing 
to lock multiple overlapping partitions? Managing that dependency graph might 
wipe out real world improvements at scale.
{quote}An analysis shows that the two locks are held simultaneously most of the 
time, making one of them redundant. We suggest removing the FSDirectoryLock.
{quote}
Please do not remove the fsdir lock. While the fsn and fsd lock are generally 
redundant, we have internal changes for operations like the horribly expensive 
content summary to not hold the fsn lock after resolving the path. I’ve been 
investigating whether some other operations can safely release the fsn lock or 
downgrade from a fsn write to read lock after acquiring the fsd lock.
{quote}Particularly, this means that each BlocksMap partition contains all 
blocks of files in the corresponding INodeMap partition.
{quote}
If the namespace is “unexpectedly” fragmented across multiple partitions per 
above, what further effect will this have on data skew (blocks per files) in 
the partition? Users generate an assortment of relatively small files plus 
multi-GB or TB files. A directory tree may contain combinations of dirs 
containing a mixture of anything from minutes/hourly/daily/weekly/monthly 
rollup data. This sort of partitioning seems likely to result in further 
lopsided partitioning within the blocks map?
{quote}We ran NNThroughputBenchmark for mkdir() operation creating 10 million 
directories with 200 concurrent threads. The results show
 1. 30-40% improvement in throughput compared to current INodeMap implementation
{quote}
Can you please provide more context?
 # Was the entry point for the calls via the rpc server, fsn, fsdir, etc? 
Relevant since end-to-end benchmarking rarely matches microbenchmarks.
 # What is “30-40%” improvement? How many ops/sec before and after?
 # Did the threads create dirs in a partition-friendly order? As in sequential 
creation under the same dir trees?
 # What impact did it have on gc/min and gc time? These are often hidden 
killers of performance when not taken into consideration.
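For readers unfamiliar with the proposal under discussion, the core idea of fine-grained locking via partitioning can be sketched in a few lines. This is a toy illustration under my own assumptions (fixed partition count, modulo partitioning of a numeric id), not the NameNode design or any code from the attached POC:

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

// Toy illustration of fine-grained locking via partitioning: split a key
// space into ranges, each guarded by its own read/write lock, so writers
// touching different partitions do not serialize on one global lock.
// Partition count and the modulo scheme are arbitrary choices for this sketch.
public class PartitionedLockSketch {
  private static final int PARTITIONS = 4;
  private final ReentrantReadWriteLock[] locks = new ReentrantReadWriteLock[PARTITIONS];
  private final long[] counters = new long[PARTITIONS];

  public PartitionedLockSketch() {
    for (int i = 0; i < PARTITIONS; i++) {
      locks[i] = new ReentrantReadWriteLock();
    }
  }

  private int partition(long inodeId) {
    return (int) (inodeId % PARTITIONS);
  }

  // A write touching one partition takes only that partition's lock. An
  // operation spanning partitions (the cross-partition case raised above)
  // would need to acquire several locks in a fixed global order to avoid
  // deadlock, which is where the dependency-graph concern comes in.
  public void increment(long inodeId) {
    int p = partition(inodeId);
    locks[p].writeLock().lock();
    try {
      counters[p]++;
    } finally {
      locks[p].writeLock().unlock();
    }
  }

  public long read(long inodeId) {
    int p = partition(inodeId);
    locks[p].readLock().lock();
    try {
      return counters[p];
    } finally {
      locks[p].readLock().unlock();
    }
  }

  public static void main(String[] args) {
    PartitionedLockSketch s = new PartitionedLockSketch();
    s.increment(5);
    s.increment(9); // 5 % 4 == 9 % 4, so this lands in the same partition
    System.out.println(s.read(5)); // 2
  }
}
```

The scalability questions above amount to asking how often real workloads stay within one partition per operation; the sketch makes it clear that every cross-partition operation pays for multiple lock acquisitions.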

> NameNode Fine-Grained Locking via Metadata Partitioning
> ---
>
> Key: HDFS-14703
> URL: https://issues.apache.org/jira/browse/HDFS-14703
> Project: Hadoop HDFS
>  Issue Type: Improvement
>  Components: hdfs, namenode
>Reporter: Konstantin Shvachko
>Priority: Major
> Attachments: 001-partitioned-inodeMap-POC.tar.gz, 
> 002-partitioned-inodeMap-POC.tar.gz, 003-partitioned-inodeMap-POC.tar.gz, 
> NameNode Fine-Grained Locking.pdf, NameNode Fine-Grained Locking.pdf
>
>
> We target to enable fine-grained locking by splitting the in-memory namespace 
> into multiple partitions each having a separate lock. Intended to improve 
> performance of NameNode write operations.






[jira] [Work logged] (HDFS-16145) CopyListing fails with FNF exception with snapshot diff

2021-07-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16145?focusedWorklogId=627925=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-627925
 ]

ASF GitHub Bot logged work on HDFS-16145:
-

Author: ASF GitHub Bot
Created on: 26/Jul/21 18:37
Start Date: 26/Jul/21 18:37
Worklog Time Spent: 10m 
  Work Description: bshashikant commented on a change in pull request #3234:
URL: https://github.com/apache/hadoop/pull/3234#discussion_r676850480



##
File path: 
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpSync.java
##
@@ -515,6 +521,26 @@ private DiffInfo getRenameItem(DiffInfo diff, DiffInfo[] 
renameDiffArray) {
 return null;
   }
 
+  /**
+   * Checks if a parent dir is marked deleted as part of a dir rename happening
+   * to a path which is excluded by the filter.
+   * @return true if it's marked deleted
+   */
+  private boolean isParentOrSelfMarkedDeleted(DiffInfo diff,
+  DiffInfo[] deletedDirDiffArray) {
+for (DiffInfo item : deletedDirDiffArray) {
+  if (item.getSource().equals(diff.getSource())) {
+if (diff.getType() == SnapshotDiffReport.DiffType.MODIFY) {
+  return true;
+}
+  }
+  if (isParentOf(item.getSource(), diff.getSource())) {

Review comment:
   In the latter case, "create /foo/dir/f1" won't be reported in the 
snapshot diff itself, as the parent itself is created after the old snapshot.






Issue Time Tracking
---

Worklog Id: (was: 627925)
Time Spent: 1h 10m  (was: 1h)

> CopyListing fails with FNF exception with snapshot diff
> ---
>
> Key: HDFS-16145
> URL: https://issues.apache.org/jira/browse/HDFS-16145
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: distcp
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>






[jira] [Work logged] (HDFS-16141) [FGL] Address permission related issues with File / Directory

2021-07-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16141?focusedWorklogId=627922=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-627922
 ]

ASF GitHub Bot logged work on HDFS-16141:
-

Author: ASF GitHub Bot
Created on: 26/Jul/21 18:28
Start Date: 26/Jul/21 18:28
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3232:
URL: https://github.com/apache/hadoop/pull/3232#issuecomment-886930077


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m  5s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ fgl Compile Tests _ |
   | -1 :x: |  mvninstall  |   0m 30s | 
[/branch-mvninstall-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3232/3/artifact/out/branch-mvninstall-root.txt)
 |  root in fgl failed.  |
   | -1 :x: |  compile  |   0m 29s | 
[/branch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3232/3/artifact/out/branch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt)
 |  hadoop-hdfs in fgl failed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.  
|
   | -1 :x: |  compile  |   0m 12s | 
[/branch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3232/3/artifact/out/branch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt)
 |  hadoop-hdfs in fgl failed with JDK Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.  |
   | +1 :green_heart: |  checkstyle  |   3m 44s |  |  fgl passed  |
   | -1 :x: |  mvnsite  |   0m 25s | 
[/branch-mvnsite-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3232/3/artifact/out/branch-mvnsite-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in fgl failed.  |
   | -1 :x: |  javadoc  |   0m 48s | 
[/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3232/3/artifact/out/branch-javadoc-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt)
 |  hadoop-hdfs in fgl failed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.  
|
   | +1 :green_heart: |  javadoc  |   1m 46s |  |  fgl passed with JDK Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | -1 :x: |  spotbugs  |   0m 18s | 
[/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3232/3/artifact/out/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in fgl failed.  |
   | +1 :green_heart: |  shadedclient  |  27m 32s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | -1 :x: |  mvninstall  |   0m 10s | 
[/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3232/3/artifact/out/patch-mvninstall-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch failed.  |
   | -1 :x: |  compile  |   0m 11s | 
[/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3232/3/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt)
 |  hadoop-hdfs in the patch failed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.  |
   | -1 :x: |  javac  |   0m 11s | 
[/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3232/3/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt)
 |  hadoop-hdfs in the patch failed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.  |
   | -1 :x: |  compile  |   0m 11s | 
[/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3232/3/artifact/out/patch-compile-hadoop-hdfs-project_hadoop-hdfs-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt)
 |  hadoop-hdfs in the patch failed with JDK Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.  |
   

[jira] [Updated] (HDFS-16144) Revert HDFS-15372 (Files in snapshots no longer see attribute provider permissions)

2021-07-26 Thread Stephen O'Donnell (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16144?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen O'Donnell updated HDFS-16144:
-
Attachment: HDFS-16144.002.patch

> Revert HDFS-15372 (Files in snapshots no longer see attribute provider 
> permissions)
> ---
>
> Key: HDFS-16144
> URL: https://issues.apache.org/jira/browse/HDFS-16144
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
> Attachments: HDFS-16144.001.patch, HDFS-16144.002.patch
>
>
> In HDFS-15372, I noted a change in behaviour between Hadoop 2 and Hadoop 3. 
> When a user accesses a file in a snapshot, if an attribute provider is 
> configured it would see the original file path (ie no .snapshot folder) in 
> Hadoop 2, but it would see the snapshot path in Hadoop 3.
> HDFS-15372 changed this back, but I noted at the time it may make sense for 
> the provider to see the actual snapshot path instead.
> Recently we discovered HDFS-16132 where the HDFS-15372 does not work 
> correctly. At this stage I believe it is better to revert HDFS-15372 as the 
> fix to this issue is probably not trivial and allow providers to see the 
> actual path the user accessed.






[jira] [Commented] (HDFS-16144) Revert HDFS-15372 (Files in snapshots no longer see attribute provider permissions)

2021-07-26 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-16144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17387444#comment-17387444
 ] 

Hadoop QA commented on HDFS-16144:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime ||  Logfile || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
56s{color} | {color:blue}{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} || ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green}{color} | {color:green} No case conflicting files 
found. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m  
0s{color} | {color:green}{color} | {color:green} The patch appears to include 1 
new or modified test files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} || ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 20m 
 0s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
26s{color} | {color:green}{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
18s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 2s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
24s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
16m  1s{color} | {color:green}{color} | {color:green} branch has no errors when 
building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green}{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
28s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue} 21m 
41s{color} | {color:blue}{color} | {color:blue} Both FindBugs and SpotBugs are 
enabled, using SpotBugs. {color} |
| {color:green}+1{color} | {color:green} spotbugs {color} | {color:green}  3m 
17s{color} | {color:green}{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} || ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
15s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
16s{color} | {color:green}{color} | {color:green} the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
16s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
6s{color} | {color:green}{color} | {color:green} the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
6s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 49s{color} | 
{color:orange}https://ci-hadoop.apache.org/job/PreCommit-HDFS-Build/686/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt{color}
 | {color:orange} hadoop-hdfs-project/hadoop-hdfs: The patch generated 1 new + 
64 unchanged - 0 fixed = 65 total (was 64) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
16s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green}{color} | {color:green} The patch has no whitespace 
issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m  9s{color} | {color:green}{color} | {color:green} patch has no errors when 
building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green}{color} | {color:green} the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
29s{color} | {color:green}{color} | 

[jira] [Work logged] (HDFS-16145) CopyListing fails with FNF exception with snapshot diff

2021-07-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16145?focusedWorklogId=627818=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-627818
 ]

ASF GitHub Bot logged work on HDFS-16145:
-

Author: ASF GitHub Bot
Created on: 26/Jul/21 15:29
Start Date: 26/Jul/21 15:29
Worklog Time Spent: 10m 
  Work Description: sodonnel commented on a change in pull request #3234:
URL: https://github.com/apache/hadoop/pull/3234#discussion_r676713165



##
File path: 
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpSync.java
##
@@ -563,10 +589,27 @@ private Path translateRenamedPath(Path sourcePath,
 } else {
      List<DiffInfo> renameDiffsList =
      diffMap.get(SnapshotDiffReport.DiffType.RENAME);
+  List<DiffInfo> deletedDirDiffsList =
+  diffMap.get(SnapshotDiffReport.DiffType.DELETE);

Review comment:
   Will this list hold all files and directories that have been deleted? 
E.g., if I delete a directory with 1000 entries, will this end up with 1001 
entries?
   
   Then in `isParentOrSelfMarkedDeleted()` we need to scan this list. Could 
this list be very large and cause a performance problem when scanning it over 
and over for each entry in the diffList?
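If the deleted-dir list can grow that large, one way to avoid rescanning it for every diff entry is to index the deleted source paths once and walk only the ancestors of each entry, dropping the per-entry cost from O(deleted entries) to O(path depth). This is a minimal sketch of that idea only; `DeletedIndex` and its helpers are hypothetical, not DistCpSync code:

```java
import java.util.HashSet;
import java.util.Set;

// Sketch: index deleted source paths in a HashSet once, then for each diff
// entry walk its ancestor chain instead of scanning the whole deleted list.
public class DeletedIndex {
  private final Set<String> deleted = new HashSet<>();

  public DeletedIndex(String[] deletedSources) {
    for (String s : deletedSources) {
      deleted.add(s);
    }
  }

  /** True if the path itself or any ancestor directory was deleted. */
  public boolean isSelfOrAncestorDeleted(String path) {
    // Walk /a/b/c -> /a/b -> /a, checking each against the set.
    for (String p = path; p != null && !p.isEmpty(); p = parentOf(p)) {
      if (deleted.contains(p)) {
        return true;
      }
    }
    return false;
  }

  private static String parentOf(String path) {
    int idx = path.lastIndexOf('/');
    return idx <= 0 ? null : path.substring(0, idx);
  }

  public static void main(String[] args) {
    DeletedIndex idx = new DeletedIndex(new String[] {"/foo/dir"});
    System.out.println(idx.isSelfOrAncestorDeleted("/foo/dir/a/b")); // true
    System.out.println(idx.isSelfOrAncestorDeleted("/foo/other"));   // false
  }
}
```

Building the set is O(deleted entries) once; each lookup afterwards touches only as many strings as the path has components.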




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 627818)
Time Spent: 1h  (was: 50m)

> CopyListing fails with FNF exception with snapshot diff
> ---
>
> Key: HDFS-16145
> URL: https://issues.apache.org/jira/browse/HDFS-16145
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: distcp
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Distcp with snapshot diff and filters marks a Rename as a delete 
> operation on the target if the rename target is a directory which is 
> excluded by the filter. But files/subdirs created or modified after the 
> old snapshot and prior to the Rename will still be present as 
> modified/created entries in the final copy list. Since the parent directory 
> is marked for deletion, these subsequent create/modify entries should be 
> ignored while building the final copy list. 
> In such cases, when the final copy list is built, distcp tries to do a 
> lookup for each created/modified file in the newer snapshot, which will fail 
> because the parent dir has already been moved to a new location in the later snapshot.
>  
> {code:java}
> sudo -u kms hadoop key create testkey
> hadoop fs -mkdir -p /data/gcgdlknnasg/
> hdfs crypto -createZone -keyName testkey -path /data/gcgdlknnasg/
> hadoop fs -mkdir -p /dest/gcgdlknnasg
> hdfs crypto -createZone -keyName testkey -path /dest/gcgdlknnasg
> hdfs dfs -mkdir /data/gcgdlknnasg/dir1
> hdfs dfsadmin -allowSnapshot /data/gcgdlknnasg/ 
> hdfs dfsadmin -allowSnapshot /dest/gcgdlknnasg/ 
> [root@nightly62x-1 logs]# hdfs dfs -ls -R /data/gcgdlknnasg/
> drwxrwxrwt   - hdfs supergroup  0 2021-07-16 14:05 
> /data/gcgdlknnasg/.Trash
> drwxr-xr-x   - hdfs supergroup  0 2021-07-16 13:07 
> /data/gcgdlknnasg/dir1
> [root@nightly62x-1 logs]# hdfs dfs -ls -R /dest/gcgdlknnasg/
> [root@nightly62x-1 logs]#
> hdfs dfs -put /etc/hosts /data/gcgdlknnasg/dir1/
> hdfs dfs -rm -r /data/gcgdlknnasg/dir1/
> hdfs dfs -mkdir /data/gcgdlknnasg/dir1/
> ===> Run BDR with “Abort on Snapshot Diff Failures” CHECKED now in the 
> replication schedule. You get into below error and failure of the BDR job.
> 21/07/16 15:02:30 INFO distcp.DistCp: Failed to use snapshot diff - 
> java.io.FileNotFoundException: File does not exist: 
> /data/gcgdlknnasg/.snapshot/distcp-5-46485360-new/dir1/hosts
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1494)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1487)
> ……..
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16145) CopyListing fails with FNF exception with snapshot diff

2021-07-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16145?focusedWorklogId=627809&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-627809
 ]

ASF GitHub Bot logged work on HDFS-16145:
-

Author: ASF GitHub Bot
Created on: 26/Jul/21 15:02
Start Date: 26/Jul/21 15:02
Worklog Time Spent: 10m 
  Work Description: sodonnel commented on a change in pull request #3234:
URL: https://github.com/apache/hadoop/pull/3234#discussion_r676688811



##
File path: 
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpSync.java
##
@@ -515,6 +521,26 @@ private DiffInfo getRenameItem(DiffInfo diff, DiffInfo[] 
renameDiffArray) {
 return null;
   }
 
+  /**
+   * Checks if a parent dir is marked deleted as part of a dir rename happening
+   * to a path which is excluded by the filter.
+   * @return true if it's marked deleted
+   */
+  private boolean isParentOrSelfMarkedDeleted(DiffInfo diff,
+  DiffInfo[] deletedDirDiffArray) {
+    for (DiffInfo item : deletedDirDiffArray) {
+      if (item.getSource().equals(diff.getSource())) {
+        if (diff.getType() == SnapshotDiffReport.DiffType.MODIFY) {
+          return true;
+        }
+      }
+      if (isParentOf(item.getSource(), diff.getSource())) {

Review comment:
   How does this code distinguish between this scenario:
   
   ```
   create /foo/dir/f1
   delete /foo/dir
   ```
   
   And:
   
   ```
   create /foo/dir/f1
   delete /foo/dir
   create /foo/dir  ** This will be a create in the diff, retained by the 
MODIFY filter above if item.getSource == diff.getSource.
   create /foo/dir/f1 ** This second case should be kept, but it will match the 
parent of the deleted /foo/dir, so what stops it getting excluded?
   ```
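The ambiguity raised here can be sketched with plain path matching: the re-created `/foo/dir/f1` from the second sequence is byte-identical to the doomed one from the first, so a prefix test against the deleted directory alone cannot keep it. `matchesDeletedPrefix` below is an illustrative stand-in, not the actual DistCp logic:

```java
// Sketch: a prefix check against the deleted directory matches the
// re-created /foo/dir/f1 (case 2) just as well as the one that went away
// with its parent (case 1), so path matching alone cannot distinguish the
// two sequences without extra state such as entry ordering or inode ids.
public class DiffAmbiguity {

  /** True if path is the deleted dir itself or lies underneath it. */
  static boolean matchesDeletedPrefix(String path, String deletedDir) {
    return path.equals(deletedDir) || path.startsWith(deletedDir + "/");
  }

  public static void main(String[] args) {
    // Case 1: CREATE /foo/dir/f1, DELETE /foo/dir      -> f1 must be dropped.
    // Case 2: ... then CREATE /foo/dir, CREATE /foo/dir/f1 -> f1 must be kept.
    // The prefix test gives the same answer for both:
    System.out.println(matchesDeletedPrefix("/foo/dir/f1", "/foo/dir")); // true
  }
}
```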
   




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 627809)
Time Spent: 40m  (was: 0.5h)

> CopyListing fails with FNF exception with snapshot diff
> ---
>
> Key: HDFS-16145
> URL: https://issues.apache.org/jira/browse/HDFS-16145
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: distcp
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Distcp with snapshot diff and filters marks a Rename as a delete 
> operation on the target if the rename target is a directory which is 
> excluded by the filter. But files/subdirs created or modified after the 
> old snapshot and prior to the Rename will still be present as 
> modified/created entries in the final copy list. Since the parent directory 
> is marked for deletion, these subsequent create/modify entries should be 
> ignored while building the final copy list. 
> In such cases, when the final copy list is built, distcp tries to do a 
> lookup for each created/modified file in the newer snapshot, which will fail 
> because the parent dir has already been moved to a new location in the later snapshot.
>  
> {code:java}
> sudo -u kms hadoop key create testkey
> hadoop fs -mkdir -p /data/gcgdlknnasg/
> hdfs crypto -createZone -keyName testkey -path /data/gcgdlknnasg/
> hadoop fs -mkdir -p /dest/gcgdlknnasg
> hdfs crypto -createZone -keyName testkey -path /dest/gcgdlknnasg
> hdfs dfs -mkdir /data/gcgdlknnasg/dir1
> hdfs dfsadmin -allowSnapshot /data/gcgdlknnasg/ 
> hdfs dfsadmin -allowSnapshot /dest/gcgdlknnasg/ 
> [root@nightly62x-1 logs]# hdfs dfs -ls -R /data/gcgdlknnasg/
> drwxrwxrwt   - hdfs supergroup  0 2021-07-16 14:05 
> /data/gcgdlknnasg/.Trash
> drwxr-xr-x   - hdfs supergroup  0 2021-07-16 13:07 
> /data/gcgdlknnasg/dir1
> [root@nightly62x-1 logs]# hdfs dfs -ls -R /dest/gcgdlknnasg/
> [root@nightly62x-1 logs]#
> hdfs dfs -put /etc/hosts /data/gcgdlknnasg/dir1/
> hdfs dfs -rm -r /data/gcgdlknnasg/dir1/
> hdfs dfs -mkdir /data/gcgdlknnasg/dir1/
> ===> Run BDR with “Abort on Snapshot Diff Failures” CHECKED now in the 
> replication schedule. You get into below error and failure of the BDR job.
> 21/07/16 15:02:30 INFO distcp.DistCp: Failed to use snapshot diff - 
> java.io.FileNotFoundException: File does not exist: 
> /data/gcgdlknnasg/.snapshot/distcp-5-46485360-new/dir1/hosts
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1494)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1487)
> ……..
> {code}



--
This message was sent by 

[jira] [Work logged] (HDFS-16145) CopyListing fails with FNF exception with snapshot diff

2021-07-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16145?focusedWorklogId=627810&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-627810
 ]

ASF GitHub Bot logged work on HDFS-16145:
-

Author: ASF GitHub Bot
Created on: 26/Jul/21 15:02
Start Date: 26/Jul/21 15:02
Worklog Time Spent: 10m 
  Work Description: sodonnel commented on a change in pull request #3234:
URL: https://github.com/apache/hadoop/pull/3234#discussion_r676688811



##
File path: 
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpSync.java
##
@@ -515,6 +521,26 @@ private DiffInfo getRenameItem(DiffInfo diff, DiffInfo[] 
renameDiffArray) {
 return null;
   }
 
+  /**
+   * Checks if a parent dir is marked deleted as part of a dir rename happening
+   * to a path which is excluded by the filter.
+   * @return true if it's marked deleted
+   */
+  private boolean isParentOrSelfMarkedDeleted(DiffInfo diff,
+  DiffInfo[] deletedDirDiffArray) {
+    for (DiffInfo item : deletedDirDiffArray) {
+      if (item.getSource().equals(diff.getSource())) {
+        if (diff.getType() == SnapshotDiffReport.DiffType.MODIFY) {
+          return true;
+        }
+      }
+      if (isParentOf(item.getSource(), diff.getSource())) {

Review comment:
   How does this code distinguish between this scenario:
   
   ```
   create /foo/dir/f1
   delete /foo/dir
   ```
   
   And:
   
   ```
   create /foo/dir/f1
   delete /foo/dir
   create /foo/dir  ** This will be a create in the diff, retained by the 
MODIFY filter above if item.getSource == diff.getSource.
   create /foo/dir/f1 ** This second case should be kept, but it will match the 
parent of the deleted /foo/dir, so what stops it getting excluded?
   ```




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 627810)
Time Spent: 50m  (was: 40m)

> CopyListing fails with FNF exception with snapshot diff
> ---
>
> Key: HDFS-16145
> URL: https://issues.apache.org/jira/browse/HDFS-16145
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: distcp
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Distcp with snapshot diff and filters marks a Rename as a delete 
> operation on the target if the rename target is a directory which is 
> excluded by the filter. But files/subdirs created or modified after the 
> old snapshot and prior to the Rename will still be present as 
> modified/created entries in the final copy list. Since the parent directory 
> is marked for deletion, these subsequent create/modify entries should be 
> ignored while building the final copy list. 
> In such cases, when the final copy list is built, distcp tries to do a 
> lookup for each created/modified file in the newer snapshot, which will fail 
> because the parent dir has already been moved to a new location in the later snapshot.
>  
> {code:java}
> sudo -u kms hadoop key create testkey
> hadoop fs -mkdir -p /data/gcgdlknnasg/
> hdfs crypto -createZone -keyName testkey -path /data/gcgdlknnasg/
> hadoop fs -mkdir -p /dest/gcgdlknnasg
> hdfs crypto -createZone -keyName testkey -path /dest/gcgdlknnasg
> hdfs dfs -mkdir /data/gcgdlknnasg/dir1
> hdfs dfsadmin -allowSnapshot /data/gcgdlknnasg/ 
> hdfs dfsadmin -allowSnapshot /dest/gcgdlknnasg/ 
> [root@nightly62x-1 logs]# hdfs dfs -ls -R /data/gcgdlknnasg/
> drwxrwxrwt   - hdfs supergroup  0 2021-07-16 14:05 
> /data/gcgdlknnasg/.Trash
> drwxr-xr-x   - hdfs supergroup  0 2021-07-16 13:07 
> /data/gcgdlknnasg/dir1
> [root@nightly62x-1 logs]# hdfs dfs -ls -R /dest/gcgdlknnasg/
> [root@nightly62x-1 logs]#
> hdfs dfs -put /etc/hosts /data/gcgdlknnasg/dir1/
> hdfs dfs -rm -r /data/gcgdlknnasg/dir1/
> hdfs dfs -mkdir /data/gcgdlknnasg/dir1/
> ===> Run BDR with “Abort on Snapshot Diff Failures” CHECKED now in the 
> replication schedule. You get into below error and failure of the BDR job.
> 21/07/16 15:02:30 INFO distcp.DistCp: Failed to use snapshot diff - 
> java.io.FileNotFoundException: File does not exist: 
> /data/gcgdlknnasg/.snapshot/distcp-5-46485360-new/dir1/hosts
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1494)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1487)
> ……..
> {code}



--

[jira] [Work logged] (HDFS-16145) CopyListing fails with FNF exception with snapshot diff

2021-07-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16145?focusedWorklogId=627803&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-627803
 ]

ASF GitHub Bot logged work on HDFS-16145:
-

Author: ASF GitHub Bot
Created on: 26/Jul/21 14:52
Start Date: 26/Jul/21 14:52
Worklog Time Spent: 10m 
  Work Description: sodonnel commented on a change in pull request #3234:
URL: https://github.com/apache/hadoop/pull/3234#discussion_r676680004



##
File path: 
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCpSync.java
##
@@ -515,6 +521,26 @@ private DiffInfo getRenameItem(DiffInfo diff, DiffInfo[] 
renameDiffArray) {
 return null;
   }
 
+  /**
+   * Checks if a parent dir is marked deleted as part of a dir rename happening
+   * to a path which is excluded by the filter.
+   * @return true if it's marked deleted
+   */
+  private boolean isParentOrSelfMarkedDeleted(DiffInfo diff,
+  DiffInfo[] deletedDirDiffArray) {
+    for (DiffInfo item : deletedDirDiffArray) {
+      if (item.getSource().equals(diff.getSource())) {
+        if (diff.getType() == SnapshotDiffReport.DiffType.MODIFY) {

Review comment:
   Could you copy the comment from getRenameItem here, as it helps explain 
why we are only filtering modified entries:
   
   > // The same path string may appear in:
   // 1. both renamed and modified snapshot diff entries.
   // 2. both renamed and created snapshot diff entries.
   // Case 1 is the about same file/directory, whereas case 2
   // is about two different files/directories.
   // We are finding case 1 here, thus we check against DiffType.MODIFY.
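The quoted rule can be sketched as follows; `DiffType` and `refersToRenamedObject` are illustrative stand-ins for the real `SnapshotDiffReport` types, showing why only MODIFY entries are treated as the same object as a rename source:

```java
// Sketch of the comment above: the same path string can appear twice in a
// snapshot diff. Paired with a RENAME, a MODIFY entry refers to the same
// file/directory (case 1), while a CREATE entry refers to a brand-new
// object that merely reuses the old name (case 2).
public class SameObjectRule {
  enum DiffType { CREATE, MODIFY, RENAME, DELETE }

  /** Case 1 only: same path string AND type MODIFY means the renamed object itself. */
  static boolean refersToRenamedObject(DiffType type, String path, String renameSource) {
    return path.equals(renameSource) && type == DiffType.MODIFY;
  }

  public static void main(String[] args) {
    String renamed = "/foo/dir";
    System.out.println(refersToRenamedObject(DiffType.MODIFY, "/foo/dir", renamed)); // case 1: true
    System.out.println(refersToRenamedObject(DiffType.CREATE, "/foo/dir", renamed)); // case 2: false
  }
}
```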




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 627803)
Time Spent: 0.5h  (was: 20m)

> CopyListing fails with FNF exception with snapshot diff
> ---
>
> Key: HDFS-16145
> URL: https://issues.apache.org/jira/browse/HDFS-16145
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: distcp
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Distcp with snapshot diff and filters marks a Rename as a delete 
> operation on the target if the rename target is a directory which is 
> excluded by the filter. But files/subdirs created or modified after the 
> old snapshot and prior to the Rename will still be present as 
> modified/created entries in the final copy list. Since the parent directory 
> is marked for deletion, these subsequent create/modify entries should be 
> ignored while building the final copy list. 
> In such cases, when the final copy list is built, distcp tries to do a 
> lookup for each created/modified file in the newer snapshot, which will fail 
> because the parent dir has already been moved to a new location in the later snapshot.
>  
> {code:java}
> sudo -u kms hadoop key create testkey
> hadoop fs -mkdir -p /data/gcgdlknnasg/
> hdfs crypto -createZone -keyName testkey -path /data/gcgdlknnasg/
> hadoop fs -mkdir -p /dest/gcgdlknnasg
> hdfs crypto -createZone -keyName testkey -path /dest/gcgdlknnasg
> hdfs dfs -mkdir /data/gcgdlknnasg/dir1
> hdfs dfsadmin -allowSnapshot /data/gcgdlknnasg/ 
> hdfs dfsadmin -allowSnapshot /dest/gcgdlknnasg/ 
> [root@nightly62x-1 logs]# hdfs dfs -ls -R /data/gcgdlknnasg/
> drwxrwxrwt   - hdfs supergroup  0 2021-07-16 14:05 
> /data/gcgdlknnasg/.Trash
> drwxr-xr-x   - hdfs supergroup  0 2021-07-16 13:07 
> /data/gcgdlknnasg/dir1
> [root@nightly62x-1 logs]# hdfs dfs -ls -R /dest/gcgdlknnasg/
> [root@nightly62x-1 logs]#
> hdfs dfs -put /etc/hosts /data/gcgdlknnasg/dir1/
> hdfs dfs -rm -r /data/gcgdlknnasg/dir1/
> hdfs dfs -mkdir /data/gcgdlknnasg/dir1/
> ===> Run BDR with “Abort on Snapshot Diff Failures” CHECKED now in the 
> replication schedule. You get into below error and failure of the BDR job.
> 21/07/16 15:02:30 INFO distcp.DistCp: Failed to use snapshot diff - 
> java.io.FileNotFoundException: File does not exist: 
> /data/gcgdlknnasg/.snapshot/distcp-5-46485360-new/dir1/hosts
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1494)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1487)
> ……..
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HDFS-16141) [FGL] Address permission related issues with File / Directory

2021-07-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16141?focusedWorklogId=627779&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-627779
 ]

ASF GitHub Bot logged work on HDFS-16141:
-

Author: ASF GitHub Bot
Created on: 26/Jul/21 14:18
Start Date: 26/Jul/21 14:18
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3232:
URL: https://github.com/apache/hadoop/pull/3232#issuecomment-886744475


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 45s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ fgl Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  33m 15s |  |  fgl passed  |
   | +1 :green_heart: |  compile  |   1m 26s |  |  fgl passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   1m 14s |  |  fgl passed with JDK Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   0m 59s |  |  fgl passed  |
   | +1 :green_heart: |  mvnsite  |   1m 23s |  |  fgl passed  |
   | +1 :green_heart: |  javadoc  |   0m 55s |  |  fgl passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 26s |  |  fgl passed with JDK Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | -1 :x: |  spotbugs  |   3m 15s | 
[/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3232/2/artifact/out/branch-spotbugs-hadoop-hdfs-project_hadoop-hdfs-warnings.html)
 |  hadoop-hdfs-project/hadoop-hdfs in fgl has 1 extant spotbugs warnings.  |
   | +1 :green_heart: |  shadedclient  |  19m  5s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 13s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 17s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |   1m 17s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  8s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |   1m  8s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 53s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 17s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 48s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 20s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | -1 :x: |  spotbugs  |   3m 25s | 
[/new-spotbugs-hadoop-hdfs-project_hadoop-hdfs.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3232/2/artifact/out/new-spotbugs-hadoop-hdfs-project_hadoop-hdfs.html)
 |  hadoop-hdfs-project/hadoop-hdfs generated 1 new + 0 unchanged - 1 fixed = 1 
total (was 1)  |
   | +1 :green_heart: |  shadedclient  |  18m 35s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 443m 49s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3232/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   1m 20s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 536m  1s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | SpotBugs | module:hadoop-hdfs-project/hadoop-hdfs |
   |  |  Dead store to parent in 
org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.createPathDirectories(FSDirectory,
 INodesInPath, PermissionStatus)  At 
FSDirMkdirOp.java:org.apache.hadoop.hdfs.server.namenode.FSDirMkdirOp.createPathDirectories(FSDirectory,
 INodesInPath, PermissionStatus)  At FSDirMkdirOp.java:[line 292] |
   | Failed junit tests | hadoop.hdfs.TestFileCreation |
   |   | hadoop.hdfs.server.namenode.TestStartup |
   |   | hadoop.hdfs.server.namenode.snapshot.TestSnapshot |
   |   | hadoop.hdfs.server.namenode.TestNameEditsConfigs |
   |   | hadoop.hdfs.TestPersistBlocks |
   |   | hadoop.hdfs.TestRollingUpgradeRollback |
   |   | 

[jira] [Work logged] (HDFS-16145) CopyListing fails with FNF exception with snapshot diff

2021-07-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16145?focusedWorklogId=627704&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-627704
 ]

ASF GitHub Bot logged work on HDFS-16145:
-

Author: ASF GitHub Bot
Created on: 26/Jul/21 12:25
Start Date: 26/Jul/21 12:25
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3234:
URL: https://github.com/apache/hadoop/pull/3234#issuecomment-886656255


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 34s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  30m 42s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 35s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   0m 32s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   0m 27s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 37s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 31s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 29s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   0m 49s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  14m  0s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 27s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 25s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |   0m 25s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 23s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |   0m 23s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 17s | 
[/results-checkstyle-hadoop-tools_hadoop-distcp.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3234/1/artifact/out/results-checkstyle-hadoop-tools_hadoop-distcp.txt)
 |  hadoop-tools/hadoop-distcp: The patch generated 50 new + 26 unchanged - 0 
fixed = 76 total (was 26)  |
   | +1 :green_heart: |  mvnsite  |   0m 25s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 19s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 19s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   0m 49s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  13m 41s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  21m 40s |  |  hadoop-distcp in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 37s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  90m 18s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3234/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3234 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 0b02cc4a8168 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 011fcb3da27b30c42c556800c67c513bf96af807 |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3234/1/testReport/ |
   | Max. process+thread count | 733 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-distcp U: hadoop-tools/hadoop-distcp 

[jira] [Commented] (HDFS-15886) Add a way to get protected dirs from a special configuration file

2021-07-26 Thread Stephen O'Donnell (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-15886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17387273#comment-17387273
 ] 

Stephen O'Donnell commented on HDFS-15886:
--

{quote}
As for reload the config contents, maybe we can add a new protocol like 
refreshProtectedDirectories (similar to refreshNodes command) instead of 
reconfigging fs.protected.directories by calling 
Namenode.reconfProtectedDirectories.
{quote}

I think it would be better and simpler from the user's perspective if we 
changed the reconfiguration framework to allow some parameters to always be 
refreshed. E.g., `fs.protected.directories` is already reconfigurable, but we 
could flag it somehow so that it always runs the refresh even if the value has 
not changed. That way, it could pick up changes in the file and we don't need a 
special extra command and two different ways to refresh protected directories.
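The "always refresh" idea could look roughly like the toy sketch below. This is only an illustration of the flag's behavior; the real Hadoop `ReconfigurableBase` API differs, and the class and field names here are hypothetical:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;
import java.util.Set;

// Toy sketch: keys in alwaysRefresh run their refresh action even when the
// submitted value equals the current one, so re-submitting the same
// fs.protected.directories value still re-reads the referenced file.
public class Reconfigurator {
  private final Map<String, String> conf;
  private final Set<String> alwaysRefresh;
  int refreshCount = 0; // stands in for "re-read the protected-dirs file"

  Reconfigurator(Map<String, String> conf, Set<String> alwaysRefresh) {
    this.conf = conf;
    this.alwaysRefresh = alwaysRefresh;
  }

  void reconfigure(String key, String newValue) {
    boolean changed = !Objects.equals(conf.get(key), newValue);
    // Normally a no-op when the value is unchanged; flagged keys refresh anyway.
    if (changed || alwaysRefresh.contains(key)) {
      conf.put(key, newValue);
      refreshCount++;
    }
  }

  public static void main(String[] args) {
    Reconfigurator r = new Reconfigurator(
        new HashMap<>(Map.of("fs.protected.directories", "file:///p.conf")),
        Set.of("fs.protected.directories"));
    r.reconfigure("fs.protected.directories", "file:///p.conf"); // unchanged value
    System.out.println(r.refreshCount); // still refreshed: 1
  }
}
```

With this, `hdfs dfsadmin -reconfig` on the unchanged value would suffice, and no separate refresh protocol would be needed.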

> Add a way to get protected dirs from a special configuration file
> -
>
> Key: HDFS-15886
> URL: https://issues.apache.org/jira/browse/HDFS-15886
> Project: Hadoop HDFS
>  Issue Type: New Feature
>  Components: namenode
>Affects Versions: 3.4.0
>Reporter: Max  Xie
>Assignee: Max  Xie
>Priority: Minor
>  Labels: pull-request-available
> Attachments: HDFS-15886.patch
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> We used protected dirs to ensure that important data directories cannot be 
> deleted by mistake. But protected dirs can only be configured in 
> hdfs-site.xml.
> For ease of management,  we add a way to get the list of protected dirs from 
> a special configuration file.
> How to use.
> 1. set the config in hdfs-site.xml
> ```
> <property>
>   <name>fs.protected.directories</name>
>   <value>/hdfs/path/1,/hdfs/path/2,file:///path/to/protected.dirs.config</value>
> </property>
> ```
> 2.  add some protected dirs to the config file 
> (file:///path/to/protected.dirs.config)
> ```
> /hdfs/path/4
> /hdfs/path/5
> ```
> 3. use command to refresh fs.protected.directories instead of 
> FSDirectory.setProtectedDirectories(..)
> ```
> hdfs dfsadmin -refreshProtectedDirectories
> ```
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16143) TestEditLogTailer#testStandbyTriggersLogRollsWhenTailInProgressEdits is flaky

2021-07-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDFS-16143:
--
Labels: pull-request-available  (was: )

> TestEditLogTailer#testStandbyTriggersLogRollsWhenTailInProgressEdits is flaky
> -
>
> Key: HDFS-16143
> URL: https://issues.apache.org/jira/browse/HDFS-16143
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Reporter: Akira Ajisaka
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
> Attachments: patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3229/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
> {quote}
> [ERROR] 
> testStandbyTriggersLogRollsWhenTailInProgressEdits[0](org.apache.hadoop.hdfs.server.namenode.ha.TestEditLogTailer)
>   Time elapsed: 6.862 s  <<< FAILURE!
> java.lang.AssertionError
>   at org.junit.Assert.fail(Assert.java:87)
>   at org.junit.Assert.assertTrue(Assert.java:42)
>   at org.junit.Assert.assertTrue(Assert.java:53)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.TestEditLogTailer.testStandbyTriggersLogRollsWhenTailInProgressEdits(TestEditLogTailer.java:444)
> {quote}
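A common way to deflake a single-shot assertion like the one above is to poll the condition until a deadline, in the spirit of Hadoop's GenericTestUtils.waitFor. The sketch below is illustrative of that pattern only, not the actual patch for this test:

```python
import time

def wait_for(check, timeout_s=10.0, interval_s=0.1):
    """Poll `check` until it returns True or `timeout_s` elapses.
    Returns the final result of `check` instead of asserting once,
    so slow log rolls get time to happen before the test gives up."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if check():
            return True
        time.sleep(interval_s)
    return check()  # one last look after the deadline

# Example: a condition that only becomes true after a short delay.
start = time.monotonic()
assert wait_for(lambda: time.monotonic() - start > 0.2, timeout_s=2.0)
```

The test would then assert `wait_for(...)` rather than `assertTrue(...)` on a single observation of the standby's log-roll state.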



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16143) TestEditLogTailer#testStandbyTriggersLogRollsWhenTailInProgressEdits is flaky

2021-07-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16143?focusedWorklogId=627655&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-627655
 ]

ASF GitHub Bot logged work on HDFS-16143:
-

Author: ASF GitHub Bot
Created on: 26/Jul/21 11:00
Start Date: 26/Jul/21 11:00
Worklog Time Spent: 10m 
  Work Description: virajjasani opened a new pull request #3235:
URL: https://github.com/apache/hadoop/pull/3235


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 627655)
Remaining Estimate: 0h
Time Spent: 10m

> TestEditLogTailer#testStandbyTriggersLogRollsWhenTailInProgressEdits is flaky
> -
>
> Key: HDFS-16143
> URL: https://issues.apache.org/jira/browse/HDFS-16143
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Reporter: Akira Ajisaka
>Assignee: Viraj Jasani
>Priority: Major
> Attachments: patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3229/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
> {quote}
> [ERROR] 
> testStandbyTriggersLogRollsWhenTailInProgressEdits[0](org.apache.hadoop.hdfs.server.namenode.ha.TestEditLogTailer)
>   Time elapsed: 6.862 s  <<< FAILURE!
> java.lang.AssertionError
>   at org.junit.Assert.fail(Assert.java:87)
>   at org.junit.Assert.assertTrue(Assert.java:42)
>   at org.junit.Assert.assertTrue(Assert.java:53)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.TestEditLogTailer.testStandbyTriggersLogRollsWhenTailInProgressEdits(TestEditLogTailer.java:444)
> {quote}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16145) CopyListing fails with FNF exception with snapshot diff

2021-07-26 Thread Shashikant Banerjee (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shashikant Banerjee updated HDFS-16145:
---
Description: 
Distcp with snapshot diff and with filters marks a Rename as a delete operation 
on the target if the rename target is a directory which is excluded by the 
filter. But files/subdirs created or modified after the old snapshot and prior 
to the Rename will still be present as modified/created entries in the final 
copy list. Since the parent directory is marked for deletion, these subsequent 
create/modify entries should be ignored while building the final copy list. 

In such cases, when the final copy list is built, distcp tries to do a lookup 
for each created/modified file in the newer snapshot, which fails because the 
parent dir has already been moved to a new location in the later snapshot.
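The entry-pruning rule described above can be sketched like this (illustrative Python with hypothetical entry shapes, not the actual DistCp CopyListing code):

```python
def prune_copy_list(diff_entries, deleted_dirs):
    """Drop create/modify entries whose parent directory is already
    marked for deletion in the snapshot diff, so the later lookup in
    the newer snapshot is never attempted for them. The (op, path)
    tuple shape and op names here are illustrative assumptions."""
    def under_deleted(path):
        # True if `path` equals, or lives beneath, any deleted directory.
        return any(path == d or path.startswith(d.rstrip('/') + '/')
                   for d in deleted_dirs)
    return [(op, path) for op, path in diff_entries
            if op == 'delete' or not under_deleted(path)]

entries = [('delete', '/data/dir1'),
           ('create', '/data/dir1/hosts'),   # parent deleted -> ignored
           ('modify', '/data/dir2/file')]
print(prune_copy_list(entries, ['/data/dir1']))
# -> [('delete', '/data/dir1'), ('modify', '/data/dir2/file')]
```

With the `create` entry under the deleted directory filtered out, the FileNotFoundException path in the repro below is never reached.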

 
{code:java}
sudo -u kms hadoop key create testkey
hadoop fs -mkdir -p /data/gcgdlknnasg/
hdfs crypto -createZone -keyName testkey -path /data/gcgdlknnasg/
hadoop fs -mkdir -p /dest/gcgdlknnasg
hdfs crypto -createZone -keyName testkey -path /dest/gcgdlknnasg
hdfs dfs -mkdir /data/gcgdlknnasg/dir1
hdfs dfsadmin -allowSnapshot /data/gcgdlknnasg/ 
hdfs dfsadmin -allowSnapshot /dest/gcgdlknnasg/ 

[root@nightly62x-1 logs]# hdfs dfs -ls -R /data/gcgdlknnasg/
drwxrwxrwt   - hdfs supergroup  0 2021-07-16 14:05 
/data/gcgdlknnasg/.Trash
drwxr-xr-x   - hdfs supergroup  0 2021-07-16 13:07 
/data/gcgdlknnasg/dir1
[root@nightly62x-1 logs]# hdfs dfs -ls -R /dest/gcgdlknnasg/
[root@nightly62x-1 logs]#

hdfs dfs -put /etc/hosts /data/gcgdlknnasg/dir1/
hdfs dfs -rm -r /data/gcgdlknnasg/dir1/
hdfs dfs -mkdir /data/gcgdlknnasg/dir1/

===> Run BDR with “Abort on Snapshot Diff Failures” CHECKED now in the 
replication schedule. You get into below error and failure of the BDR job.

21/07/16 15:02:30 INFO distcp.DistCp: Failed to use snapshot diff - 
java.io.FileNotFoundException: File does not exist: 
/data/gcgdlknnasg/.snapshot/distcp-5-46485360-new/dir1/hosts
at 
org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1494)
at 
org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1487)
……..
{code}

  was:
Distcp with snapshotdiff and with filters, marks a Rename as a delete opeartion 
on the target if the rename target is to a directory which is exluded by the 
filter. But, in cases, where files/subdirs created/modified prior to the Rename 
post the old snapshot will still be present as modified/created entries in the 
final copy list. Since, the parent diretory is marked for deletion, these 
subsequent create/modify entries should be ignored while building the final 
copy list. 

With such cases, when the final copy list is built, distcp tries to do a lookup 
for each create/modified file in the l\newer snapshot which will fail as, the 
parent dir is already moved to a new location in later snapshot.

 
{code:java}
sudo -u kms hadoop key create testkey
hadoop fs -mkdir -p /data/gcgdlknnasg/
hdfs crypto -createZone -keyName testkey -path /data/gcgdlknnasg/
hadoop fs -mkdir -p /dest/gcgdlknnasg
hdfs crypto -createZone -keyName testkey -path /dest/gcgdlknnasg
hdfs dfs -mkdir /data/gcgdlknnasg/dir1
hdfs dfsadmin -allowSnapshot /data/gcgdlknnasg/ 
hdfs dfsadmin -allowSnapshot /dest/gcgdlknnasg/ 

[root@nightly62x-1 logs]# hdfs dfs -ls -R /data/gcgdlknnasg/
drwxrwxrwt   - hdfs supergroup  0 2021-07-16 14:05 
/data/gcgdlknnasg/.Trash
drwxr-xr-x   - hdfs supergroup  0 2021-07-16 13:07 
/data/gcgdlknnasg/dir1
[root@nightly62x-1 logs]# hdfs dfs -ls -R /dest/gcgdlknnasg/
[root@nightly62x-1 logs]#

hdfs dfs -put /etc/hosts /data/gcgdlknnasg/dir1/
hdfs dfs -rm -r /data/gcgdlknnasg/dir1/
hdfs dfs -mkdir /data/gcgdlknnasg/dir1/

===> Run BDR with “Abort on Snapshot Diff Failures” CHECKED now in the 
replication schedule. You get into below error and failure of the BDR job.

21/07/16 15:02:30 INFO distcp.DistCp: Failed to use snapshot diff - 
java.io.FileNotFoundException: File does not exist: 
/data/gcgdlknnasg/.snapshot/distcp-5-46485360-new/dir1/hosts
at 
org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1494)
at 
org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1487)
……..
{code}


> CopyListing fails with FNF exception with snapshot diff
> ---
>
> Key: HDFS-16145
> URL: https://issues.apache.org/jira/browse/HDFS-16145
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: distcp
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Distcp with snapshotdiff and 

[jira] [Updated] (HDFS-16145) CopyListing fails with FNF exception with snapshot diff

2021-07-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16145?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HDFS-16145:
--
Labels: pull-request-available  (was: )

> CopyListing fails with FNF exception with snapshot diff
> ---
>
> Key: HDFS-16145
> URL: https://issues.apache.org/jira/browse/HDFS-16145
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: distcp
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Distcp with snapshot diff and with filters marks a Rename as a delete 
> operation on the target if the rename target is a directory which is 
> excluded by the filter. But files/subdirs created or modified after the old 
> snapshot and prior to the Rename will still be present as modified/created 
> entries in the final copy list. Since the parent directory is marked for 
> deletion, these subsequent create/modify entries should be ignored while 
> building the final copy list. 
> In such cases, when the final copy list is built, distcp tries to do a 
> lookup for each created/modified file in the newer snapshot, which fails 
> because the parent dir has already been moved to a new location in the 
> later snapshot.
>  
> {code:java}
> sudo -u kms hadoop key create testkey
> hadoop fs -mkdir -p /data/gcgdlknnasg/
> hdfs crypto -createZone -keyName testkey -path /data/gcgdlknnasg/
> hadoop fs -mkdir -p /dest/gcgdlknnasg
> hdfs crypto -createZone -keyName testkey -path /dest/gcgdlknnasg
> hdfs dfs -mkdir /data/gcgdlknnasg/dir1
> hdfs dfsadmin -allowSnapshot /data/gcgdlknnasg/ 
> hdfs dfsadmin -allowSnapshot /dest/gcgdlknnasg/ 
> [root@nightly62x-1 logs]# hdfs dfs -ls -R /data/gcgdlknnasg/
> drwxrwxrwt   - hdfs supergroup  0 2021-07-16 14:05 
> /data/gcgdlknnasg/.Trash
> drwxr-xr-x   - hdfs supergroup  0 2021-07-16 13:07 
> /data/gcgdlknnasg/dir1
> [root@nightly62x-1 logs]# hdfs dfs -ls -R /dest/gcgdlknnasg/
> [root@nightly62x-1 logs]#
> hdfs dfs -put /etc/hosts /data/gcgdlknnasg/dir1/
> hdfs dfs -rm -r /data/gcgdlknnasg/dir1/
> hdfs dfs -mkdir /data/gcgdlknnasg/dir1/
> ===> Run BDR with “Abort on Snapshot Diff Failures” CHECKED now in the 
> replication schedule. You get into below error and failure of the BDR job.
> 21/07/16 15:02:30 INFO distcp.DistCp: Failed to use snapshot diff - 
> java.io.FileNotFoundException: File does not exist: 
> /data/gcgdlknnasg/.snapshot/distcp-5-46485360-new/dir1/hosts
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1494)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1487)
> ……..
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16145) CopyListing fails with FNF exception with snapshot diff

2021-07-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16145?focusedWorklogId=627649&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-627649
 ]

ASF GitHub Bot logged work on HDFS-16145:
-

Author: ASF GitHub Bot
Created on: 26/Jul/21 10:53
Start Date: 26/Jul/21 10:53
Worklog Time Spent: 10m 
  Work Description: bshashikant opened a new pull request #3234:
URL: https://github.com/apache/hadoop/pull/3234


   please see https://issues.apache.org/jira/browse/HDFS-16145.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 627649)
Remaining Estimate: 0h
Time Spent: 10m

> CopyListing fails with FNF exception with snapshot diff
> ---
>
> Key: HDFS-16145
> URL: https://issues.apache.org/jira/browse/HDFS-16145
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: distcp
>Reporter: Shashikant Banerjee
>Assignee: Shashikant Banerjee
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Distcp with snapshot diff and with filters marks a Rename as a delete 
> operation on the target if the rename target is a directory which is 
> excluded by the filter. But files/subdirs created or modified after the old 
> snapshot and prior to the Rename will still be present as modified/created 
> entries in the final copy list. Since the parent directory is marked for 
> deletion, these subsequent create/modify entries should be ignored while 
> building the final copy list. 
> In such cases, when the final copy list is built, distcp tries to do a 
> lookup for each created/modified file in the newer snapshot, which fails 
> because the parent dir has already been moved to a new location in the 
> later snapshot.
>  
> {code:java}
> sudo -u kms hadoop key create testkey
> hadoop fs -mkdir -p /data/gcgdlknnasg/
> hdfs crypto -createZone -keyName testkey -path /data/gcgdlknnasg/
> hadoop fs -mkdir -p /dest/gcgdlknnasg
> hdfs crypto -createZone -keyName testkey -path /dest/gcgdlknnasg
> hdfs dfs -mkdir /data/gcgdlknnasg/dir1
> hdfs dfsadmin -allowSnapshot /data/gcgdlknnasg/ 
> hdfs dfsadmin -allowSnapshot /dest/gcgdlknnasg/ 
> [root@nightly62x-1 logs]# hdfs dfs -ls -R /data/gcgdlknnasg/
> drwxrwxrwt   - hdfs supergroup  0 2021-07-16 14:05 
> /data/gcgdlknnasg/.Trash
> drwxr-xr-x   - hdfs supergroup  0 2021-07-16 13:07 
> /data/gcgdlknnasg/dir1
> [root@nightly62x-1 logs]# hdfs dfs -ls -R /dest/gcgdlknnasg/
> [root@nightly62x-1 logs]#
> hdfs dfs -put /etc/hosts /data/gcgdlknnasg/dir1/
> hdfs dfs -rm -r /data/gcgdlknnasg/dir1/
> hdfs dfs -mkdir /data/gcgdlknnasg/dir1/
> ===> Run BDR with “Abort on Snapshot Diff Failures” CHECKED now in the 
> replication schedule. You get into below error and failure of the BDR job.
> 21/07/16 15:02:30 INFO distcp.DistCp: Failed to use snapshot diff - 
> java.io.FileNotFoundException: File does not exist: 
> /data/gcgdlknnasg/.snapshot/distcp-5-46485360-new/dir1/hosts
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1494)
>   at 
> org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1487)
> ……..
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Assigned] (HDFS-16143) TestEditLogTailer#testStandbyTriggersLogRollsWhenTailInProgressEdits is flaky

2021-07-26 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16143?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani reassigned HDFS-16143:
---

Assignee: Viraj Jasani

> TestEditLogTailer#testStandbyTriggersLogRollsWhenTailInProgressEdits is flaky
> -
>
> Key: HDFS-16143
> URL: https://issues.apache.org/jira/browse/HDFS-16143
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Reporter: Akira Ajisaka
>Assignee: Viraj Jasani
>Priority: Major
> Attachments: patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
>
>
> https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3229/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
> {quote}
> [ERROR] 
> testStandbyTriggersLogRollsWhenTailInProgressEdits[0](org.apache.hadoop.hdfs.server.namenode.ha.TestEditLogTailer)
>   Time elapsed: 6.862 s  <<< FAILURE!
> java.lang.AssertionError
>   at org.junit.Assert.fail(Assert.java:87)
>   at org.junit.Assert.assertTrue(Assert.java:42)
>   at org.junit.Assert.assertTrue(Assert.java:53)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.TestEditLogTailer.testStandbyTriggersLogRollsWhenTailInProgressEdits(TestEditLogTailer.java:444)
> {quote}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16144) Revert HDFS-15372 (Files in snapshots no longer see attribute provider permissions)

2021-07-26 Thread Stephen O'Donnell (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16144?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen O'Donnell updated HDFS-16144:
-
Status: Patch Available  (was: Open)

> Revert HDFS-15372 (Files in snapshots no longer see attribute provider 
> permissions)
> ---
>
> Key: HDFS-16144
> URL: https://issues.apache.org/jira/browse/HDFS-16144
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
> Attachments: HDFS-16144.001.patch
>
>
> In HDFS-15372, I noted a change in behaviour between Hadoop 2 and Hadoop 3. 
> When a user accesses a file in a snapshot and an attribute provider is 
> configured, the provider would see the original file path (i.e. no .snapshot 
> folder) in Hadoop 2, but the snapshot path in Hadoop 3.
> HDFS-15372 changed this back, but I noted at the time that it may make sense 
> for the provider to see the actual snapshot path instead.
> Recently we discovered HDFS-16132, where HDFS-15372 does not work correctly. 
> At this stage I believe it is better to revert HDFS-15372, as the fix for 
> this issue is probably not trivial, and to let providers see the actual path 
> the user accessed.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16144) Revert HDFS-15372 (Files in snapshots no longer see attribute provider permissions)

2021-07-26 Thread Stephen O'Donnell (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16144?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen O'Donnell updated HDFS-16144:
-
Attachment: HDFS-16144.001.patch

> Revert HDFS-15372 (Files in snapshots no longer see attribute provider 
> permissions)
> ---
>
> Key: HDFS-16144
> URL: https://issues.apache.org/jira/browse/HDFS-16144
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
> Attachments: HDFS-16144.001.patch
>
>
> In HDFS-15372, I noted a change in behaviour between Hadoop 2 and Hadoop 3. 
> When a user accesses a file in a snapshot and an attribute provider is 
> configured, the provider would see the original file path (i.e. no .snapshot 
> folder) in Hadoop 2, but the snapshot path in Hadoop 3.
> HDFS-15372 changed this back, but I noted at the time that it may make sense 
> for the provider to see the actual snapshot path instead.
> Recently we discovered HDFS-16132, where HDFS-15372 does not work correctly. 
> At this stage I believe it is better to revert HDFS-15372, as the fix for 
> this issue is probably not trivial, and to let providers see the actual path 
> the user accessed.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16144) Revert HDFS-15372 (Files in snapshots no longer see attribute provider permissions)

2021-07-26 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16144?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HDFS-16144:
---
Target Version/s: 3.4.0, 3.3.2  (was: 3.4.0)

> Revert HDFS-15372 (Files in snapshots no longer see attribute provider 
> permissions)
> ---
>
> Key: HDFS-16144
> URL: https://issues.apache.org/jira/browse/HDFS-16144
> Project: Hadoop HDFS
>  Issue Type: Bug
>Reporter: Stephen O'Donnell
>Assignee: Stephen O'Donnell
>Priority: Major
>
> In HDFS-15372, I noted a change in behaviour between Hadoop 2 and Hadoop 3. 
> When a user accesses a file in a snapshot and an attribute provider is 
> configured, the provider would see the original file path (i.e. no .snapshot 
> folder) in Hadoop 2, but the snapshot path in Hadoop 3.
> HDFS-15372 changed this back, but I noted at the time that it may make sense 
> for the provider to see the actual snapshot path instead.
> Recently we discovered HDFS-16132, where HDFS-15372 does not work correctly. 
> At this stage I believe it is better to revert HDFS-15372, as the fix for 
> this issue is probably not trivial, and to let providers see the actual path 
> the user accessed.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-16145) CopyListing fails with FNF exception with snapshot diff

2021-07-26 Thread Shashikant Banerjee (Jira)
Shashikant Banerjee created HDFS-16145:
--

 Summary: CopyListing fails with FNF exception with snapshot diff
 Key: HDFS-16145
 URL: https://issues.apache.org/jira/browse/HDFS-16145
 Project: Hadoop HDFS
  Issue Type: Bug
  Components: distcp
Reporter: Shashikant Banerjee
Assignee: Shashikant Banerjee


Distcp with snapshot diff and with filters marks a Rename as a delete operation 
on the target if the rename target is a directory which is excluded by the 
filter. But files/subdirs created or modified after the old snapshot and prior 
to the Rename will still be present as modified/created entries in the final 
copy list. Since the parent directory is marked for deletion, these subsequent 
create/modify entries should be ignored while building the final copy list. 

In such cases, when the final copy list is built, distcp tries to do a lookup 
for each created/modified file in the newer snapshot, which fails because the 
parent dir has already been moved to a new location in the later snapshot.

 
{code:java}
sudo -u kms hadoop key create testkey
hadoop fs -mkdir -p /data/gcgdlknnasg/
hdfs crypto -createZone -keyName testkey -path /data/gcgdlknnasg/
hadoop fs -mkdir -p /dest/gcgdlknnasg
hdfs crypto -createZone -keyName testkey -path /dest/gcgdlknnasg
hdfs dfs -mkdir /data/gcgdlknnasg/dir1
hdfs dfsadmin -allowSnapshot /data/gcgdlknnasg/ 
hdfs dfsadmin -allowSnapshot /dest/gcgdlknnasg/ 

[root@nightly62x-1 logs]# hdfs dfs -ls -R /data/gcgdlknnasg/
drwxrwxrwt   - hdfs supergroup  0 2021-07-16 14:05 
/data/gcgdlknnasg/.Trash
drwxr-xr-x   - hdfs supergroup  0 2021-07-16 13:07 
/data/gcgdlknnasg/dir1
[root@nightly62x-1 logs]# hdfs dfs -ls -R /dest/gcgdlknnasg/
[root@nightly62x-1 logs]#

hdfs dfs -put /etc/hosts /data/gcgdlknnasg/dir1/
hdfs dfs -rm -r /data/gcgdlknnasg/dir1/
hdfs dfs -mkdir /data/gcgdlknnasg/dir1/

===> Run BDR with “Abort on Snapshot Diff Failures” CHECKED now in the 
replication schedule. You get into below error and failure of the BDR job.

21/07/16 15:02:30 INFO distcp.DistCp: Failed to use snapshot diff - 
java.io.FileNotFoundException: File does not exist: 
/data/gcgdlknnasg/.snapshot/distcp-5-46485360-new/dir1/hosts
at 
org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1494)
at 
org.apache.hadoop.hdfs.DistributedFileSystem$29.doCall(DistributedFileSystem.java:1487)
……..
{code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Created] (HDFS-16144) Revert HDFS-15372 (Files in snapshots no longer see attribute provider permissions)

2021-07-26 Thread Stephen O'Donnell (Jira)
Stephen O'Donnell created HDFS-16144:


 Summary: Revert HDFS-15372 (Files in snapshots no longer see 
attribute provider permissions)
 Key: HDFS-16144
 URL: https://issues.apache.org/jira/browse/HDFS-16144
 Project: Hadoop HDFS
  Issue Type: Bug
Reporter: Stephen O'Donnell
Assignee: Stephen O'Donnell


In HDFS-15372, I noted a change in behaviour between Hadoop 2 and Hadoop 3. 
When a user accesses a file in a snapshot and an attribute provider is 
configured, the provider would see the original file path (i.e. no .snapshot 
folder) in Hadoop 2, but the snapshot path in Hadoop 3.

HDFS-15372 changed this back, but I noted at the time that it may make sense 
for the provider to see the actual snapshot path instead.

Recently we discovered HDFS-16132, where HDFS-15372 does not work correctly. 
At this stage I believe it is better to revert HDFS-15372, as the fix for this 
issue is probably not trivial, and to let providers see the actual path the 
user accessed.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Commented] (HDFS-12920) HDFS default value change (with adding time unit) breaks old version MR tarball work with Hadoop 3.x

2021-07-26 Thread Brahma Reddy Battula (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-12920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17387210#comment-17387210
 ] 

Brahma Reddy Battula commented on HDFS-12920:
-

I mean that, e.g., the 3.2.2 and 3.2.3 hdfs-default.xml files can differ. (If 
anybody writes a script or test case against this, it might fail.)

> HDFS default value change (with adding time unit) breaks old version MR 
> tarball work with Hadoop 3.x
> 
>
> Key: HDFS-12920
> URL: https://issues.apache.org/jira/browse/HDFS-12920
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: configuration, hdfs
>Reporter: Junping Du
>Assignee: Akira Ajisaka
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.2.3, 3.3.2
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> After HADOOP-15059 was resolved, I tried to deploy the 2.9.0 tarball with 
> 3.0.0 RC1 and ran a job, which failed with the following errors:
> {noformat}
> 2017-12-12 13:29:06,824 INFO [main] 
> org.apache.hadoop.service.AbstractService: Service 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster failed in state INITED; cause: 
> org.apache.hadoop.yarn.exceptions.YarnRuntimeException: 
> java.lang.NumberFormatException: For input string: "30s"
> org.apache.hadoop.yarn.exceptions.YarnRuntimeException: 
> java.lang.NumberFormatException: For input string: "30s"
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$2.call(MRAppMaster.java:542)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$2.call(MRAppMaster.java:522)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.callWithJobClassLoader(MRAppMaster.java:1764)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.createOutputCommitter(MRAppMaster.java:522)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.serviceInit(MRAppMaster.java:308)
>   at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$5.run(MRAppMaster.java:1722)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1886)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.initAndStartAppMaster(MRAppMaster.java:1719)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1650)
> {noformat}
> This is because of HDFS-10845: we added time units to hdfs-default.xml, but 
> they cannot be recognized by old-version MR jars. 
> This breaks our rolling-upgrade story, so it should be marked as a blocker.
> A quick workaround is to set the values in hdfs-site.xml with all time units 
> removed, but the right way may be to revert HDFS-10845 (and get rid of the 
> noisy warnings).
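The incompatibility comes down to duration parsing: Hadoop 3 accepts unit-suffixed values such as "30s", while older clients do a plain integer parse and throw NumberFormatException. A rough sketch of suffix-aware parsing follows; the unit table and fallback behaviour are assumptions for illustration, not Hadoop's exact Configuration.getTimeDuration semantics:

```python
import re

# Assumed unit table, expressed in seconds.
_UNITS = {'ns': 1e-9, 'us': 1e-6, 'ms': 1e-3,
          's': 1, 'm': 60, 'h': 3600, 'd': 86400}

def get_time_duration(value, default_unit='s'):
    """Parse a '30s'-style duration into `default_unit` units.
    A bare number is interpreted in `default_unit`, which is why a
    suffix-unaware client that calls plain int parsing on '30s' fails."""
    m = re.fullmatch(r'\s*(\d+)\s*([a-z]*)\s*', value.lower())
    if not m:
        raise ValueError(f"invalid duration: {value!r}")
    num, unit = int(m.group(1)), m.group(2) or default_unit
    if unit not in _UNITS:
        raise ValueError(f"unknown time unit in {value!r}")
    return num * _UNITS[unit] / _UNITS[default_unit]

print(get_time_duration('30s'))   # 30.0 -- new-style suffixed value
print(get_time_duration('30'))    # 30.0 -- the form old parsers expect
```

The workaround above amounts to rewriting every default so it stays in the second, bare-number form.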



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Work logged] (HDFS-16111) Add a configuration to RoundRobinVolumeChoosingPolicy to avoid failed volumes at datanodes.

2021-07-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16111?focusedWorklogId=627591&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-627591
 ]

ASF GitHub Bot logged work on HDFS-16111:
-

Author: ASF GitHub Bot
Created on: 26/Jul/21 08:53
Start Date: 26/Jul/21 08:53
Worklog Time Spent: 10m 
  Work Description: jojochuang commented on a change in pull request #3175:
URL: https://github.com/apache/hadoop/pull/3175#discussion_r676415640



##
File path: hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-default.xml
##
@@ -2657,6 +2657,17 @@
   
 
 
+
+  
dfs.datanode.round-robin-volume-choosing-policy.additional-available-space
+  0

Review comment:
   ok 1GB sounds fine to me.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 627591)
Time Spent: 2h 10m  (was: 2h)

> Add a configuration to RoundRobinVolumeChoosingPolicy to avoid failed volumes 
> at datanodes.
> ---
>
> Key: HDFS-16111
> URL: https://issues.apache.org/jira/browse/HDFS-16111
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: datanode
>Reporter: Zhihai Xu
>Assignee: Zhihai Xu
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 10m
>  Remaining Estimate: 0h
>
> When we upgraded our hadoop cluster from hadoop 2.6.0 to hadoop 3.2.2, we got 
> a failed volume on a lot of datanodes, which caused some missing blocks at 
> that time. Although later on we recovered all the missing blocks by 
> symlinking the path (dfs/dn/current) on the failed volume to a new directory 
> and copying all the data to the new directory, we missed our SLA and it 
> delayed our upgrade process on our production cluster for several hours.
> When this issue happened, we saw a lot of these exceptions before the volume 
> failed on the datanode:
>  [DataXceiver for client at /XX.XX.XX.XX:XXX 
> [Receiving block BP-XX-XX.XX.XX.XX-XX:blk_X_XXX]] 
> datanode.DataNode (BlockReceiver.java:<init>(289)) - IOException in 
> BlockReceiver constructor: Possible disk error: Failed to create 
> /XXX/dfs/dn/current/BP-XX-XX.XX.XX.XX-X/tmp/blk_XX. Cause 
> is
> java.io.IOException: No space left on device
>         at java.io.UnixFileSystem.createFileExclusively(Native Method)
>         at java.io.File.createNewFile(File.java:1012)
>         at 
> org.apache.hadoop.hdfs.server.datanode.FileIoProvider.createFile(FileIoProvider.java:302)
>         at 
> org.apache.hadoop.hdfs.server.datanode.DatanodeUtil.createFileWithExistsCheck(DatanodeUtil.java:69)
>         at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.createTmpFile(BlockPoolSlice.java:292)
>         at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.createTmpFile(FsVolumeImpl.java:532)
>         at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.createTemporary(FsVolumeImpl.java:1254)
>         at 
> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.createTemporary(FsDatasetImpl.java:1598)
>         at 
> org.apache.hadoop.hdfs.server.datanode.BlockReceiver.<init>(BlockReceiver.java:212)
>         at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.getBlockReceiver(DataXceiver.java:1314)
>         at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:768)
>         at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:173)
>         at 
> org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:107)
>         at 
> org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:291)
>         at java.lang.Thread.run(Thread.java:748)
>  
> We found this issue happened due to the following two reasons:
> First the upgrade process added some extra disk storage on the each disk 
> volume of the data node:
> BlockPoolSliceStorage.doUpgrade 
> (https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/BlockPoolSliceStorage.java#L445)
>  is the main upgrade function in the datanode; it will add some extra 
> storage. The extra storage added is all new directories created in 
> /current/<bpid>/current, although all block data files and block meta data 
> files are hard-linked with /current/<bpid>/previous after upgrade. Since there 
> 
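The hard-link behavior described above can be sketched with a short example. This is a minimal illustration on a POSIX filesystem, not the actual datanode layout; the directory and block-file names are made up. It shows why hard-linking block files into `previous` does not duplicate the block data itself (same inode, two names), so the extra space consumed by an upgrade comes from the newly created directories and metadata rather than copied blocks:

```python
import os
import tempfile

# Illustrative upgrade-style layout: "current" holds a block file,
# "previous" receives a hard link to it (not a copy).
root = tempfile.mkdtemp()
current = os.path.join(root, "current")
previous = os.path.join(root, "previous")
os.makedirs(current)
os.makedirs(previous)

blk = os.path.join(current, "blk_1001")
with open(blk, "wb") as f:
    f.write(b"x" * 1024)  # stand-in for block data

# Hard-link the block file into "previous", as the upgrade does.
os.link(blk, os.path.join(previous, "blk_1001"))

st_cur = os.stat(blk)
st_prev = os.stat(os.path.join(previous, "blk_1001"))
print(st_cur.st_ino == st_prev.st_ino)  # True: both names share one inode
print(st_cur.st_nlink)                  # 2: two names, one copy on disk
```

So after an upgrade the block data is not doubled; the additional usage comes from the new directory tree itself, which is consistent with the "No space left on device" failure appearing only once the volumes were already nearly full.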

[jira] [Work logged] (HDFS-16111) Add a configuration to RoundRobinVolumeChoosingPolicy to avoid failed volumes at datanodes.

2021-07-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16111?focusedWorklogId=627571&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-627571
 ]

ASF GitHub Bot logged work on HDFS-16111:
-

Author: ASF GitHub Bot
Created on: 26/Jul/21 07:58
Start Date: 26/Jul/21 07:58
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3175:
URL: https://github.com/apache/hadoop/pull/3175#issuecomment-886467804


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 44s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  38m 10s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 43s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   1m 30s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   1m 14s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 53s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  9s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 45s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   4m  3s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  18m  9s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 11s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 15s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |   1m 15s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  9s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |   1m  9s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 51s | 
[/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3175/2/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs-project/hadoop-hdfs: The patch generated 2 new + 199 unchanged 
- 0 fixed = 201 total (was 199)  |
   | +1 :green_heart: |  mvnsite  |   1m 15s |  |  the patch passed  |
   | +1 :green_heart: |  xml  |   0m  1s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  javadoc  |   0m 47s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 23s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   3m  9s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  15m 52s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 243m  7s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3175/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 46s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 338m 42s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdfs.server.diskbalancer.command.TestDiskBalancerCommand |
   |   | hadoop.hdfs.server.balancer.TestBalancer |
   |   | hadoop.hdfs.TestMiniDFSCluster |
   |   | hadoop.hdfs.server.namenode.TestStripedINodeFile |
   |   | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistFiles |
   |   | hadoop.hdfs.server.mover.TestMover |
   |   | 
hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyPersistReplicaPlacement |
   |   | 
hadoop.hdfs.server.namenode.sps.TestStoragePolicySatisfierWithStripedFile |
   |   | hadoop.hdfs.server.balancer.TestBalancerLongRunningTasks |
   |   | hadoop.hdfs.server.diskbalancer.TestDiskBalancer |
   |   | hadoop.hdfs.server.datanode.fsdataset.impl.TestLazyWriter |
   |   | hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier |
   
   
   | Subsystem | Report/Notes |
   

[jira] [Commented] (HDFS-12920) HDFS default value change (with adding time unit) breaks old version MR tarball work with Hadoop 3.x

2021-07-26 Thread Akira Ajisaka (Jira)


[ 
https://issues.apache.org/jira/browse/HDFS-12920?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17387144#comment-17387144
 ] 

Akira Ajisaka commented on HDFS-12920:
--

I don't think reverting HDFS-10845 is an incompatible change because the 
default values are treated as the same if the time unit exists or not.
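The compatibility point can be illustrated with a small sketch: a unit-aware parser treats "30" and "30s" as the same duration, while old-style code that expects a bare number fails on "30s" with exactly the kind of NumberFormatException seen in the stack trace below. This is a simplified Python analogue of Hadoop's Configuration.getTimeDuration, not the actual implementation, and only a few units are handled:

```python
def parse_time_seconds(value, default_unit="s"):
    """Unit-aware parse: '30' and '30s' both mean 30 seconds.

    Simplified analogue of a duration parser with unit suffixes;
    a bare number falls back to the default unit.
    """
    units = {"ms": 0.001, "s": 1, "m": 60, "h": 3600}
    # Check longer suffixes first so "ms" is not mistaken for "s".
    for suffix, factor in sorted(units.items(), key=lambda kv: -len(kv[0])):
        if value.endswith(suffix):
            return float(value[: -len(suffix)]) * factor
    return float(value) * units[default_unit]

# A unit-aware reader treats the two default forms identically:
print(parse_time_seconds("30") == parse_time_seconds("30s"))  # True

# Old code that parses the raw string as a number fails on "30s",
# analogous to the NumberFormatException in the MRAppMaster log:
try:
    int("30s")
except ValueError as e:
    print("old-style parse fails:", e)
```

This is why reverting the suffixed defaults is compatible for new code (both spellings parse to the same value) while the suffixed form breaks old jars that never learned to strip the unit.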

> HDFS default value change (with adding time unit) breaks old version MR 
> tarball work with Hadoop 3.x
> 
>
> Key: HDFS-12920
> URL: https://issues.apache.org/jira/browse/HDFS-12920
> Project: Hadoop HDFS
>  Issue Type: Bug
>  Components: configuration, hdfs
>Reporter: Junping Du
>Assignee: Akira Ajisaka
>Priority: Critical
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.2.3, 3.3.2
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> After HADOOP-15059 get resolved. I tried to deploy 2.9.0 tar ball with 3.0.0 
> RC1, and run the job with following errors:
> {noformat}
> 2017-12-12 13:29:06,824 INFO [main] 
> org.apache.hadoop.service.AbstractService: Service 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster failed in state INITED; cause: 
> org.apache.hadoop.yarn.exceptions.YarnRuntimeException: 
> java.lang.NumberFormatException: For input string: "30s"
> org.apache.hadoop.yarn.exceptions.YarnRuntimeException: 
> java.lang.NumberFormatException: For input string: "30s"
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$2.call(MRAppMaster.java:542)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$2.call(MRAppMaster.java:522)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.callWithJobClassLoader(MRAppMaster.java:1764)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.createOutputCommitter(MRAppMaster.java:522)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.serviceInit(MRAppMaster.java:308)
>   at 
> org.apache.hadoop.service.AbstractService.init(AbstractService.java:164)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster$5.run(MRAppMaster.java:1722)
>   at java.security.AccessController.doPrivileged(Native Method)
>   at javax.security.auth.Subject.doAs(Subject.java:422)
>   at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1886)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.initAndStartAppMaster(MRAppMaster.java:1719)
>   at 
> org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1650)
> {noformat}
> This is because HDFS-10845, we are adding time unit to hdfs-default.xml but 
> it cannot be recognized by old version MR jars. 
> This break our rolling upgrade story, so should mark as blocker.
> A quick workaround is to add values in hdfs-site.xml with removing all time 
> unit. But the right way may be to revert HDFS-10845 (and get rid of noisy 
> warnings).



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org



[jira] [Updated] (HDFS-16140) TestBootstrapAliasmap fails by BindException

2021-07-26 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HDFS-16140:
-
Fix Version/s: (was: 3.2.3)
   3.3.2

> TestBootstrapAliasmap fails by BindException
> 
>
> Key: HDFS-16140
> URL: https://issues.apache.org/jira/browse/HDFS-16140
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.3.2
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> TestBootstrapAliasmap fails if 50200 port is already in use.
> https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3227/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
> {quote}
> [ERROR] 
> testAliasmapBootstrap(org.apache.hadoop.hdfs.server.namenode.ha.TestBootstrapAliasmap)
>   Time elapsed: 0.472 s  <<< ERROR!
> java.net.BindException: Problem binding to [0.0.0.0:50200] 
> java.net.BindException: Address already in use; For more details see:  
> http://wiki.apache.org/hadoop/BindException
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:914)
>   at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:810)
>   at org.apache.hadoop.ipc.Server.bind(Server.java:642)
>   at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:1301)
>   at org.apache.hadoop.ipc.Server.<init>(Server.java:3199)
>   at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:1062)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine2$Server.<init>(ProtobufRpcEngine2.java:464)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine2.getServer(ProtobufRpcEngine2.java:371)
>   at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:853)
>   at 
> org.apache.hadoop.hdfs.server.aliasmap.InMemoryLevelDBAliasMapServer.start(InMemoryLevelDBAliasMapServer.java:98)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.startAliasMapServerIfNecessary(NameNode.java:801)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:761)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:1014)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:989)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1763)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1378)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1147)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:1020)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:952)
>   at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:576)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:518)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.TestBootstrapAliasmap.setup(TestBootstrapAliasmap.java:56)
> {quote}
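The usual fix pattern for this class of flaky test is to stop hard-coding the port: binding to port 0 lets the OS assign a free ephemeral port, so two test runs (or a test and an unrelated process) can never collide. A minimal sketch with plain sockets, not the Hadoop RPC server:

```python
import socket

def bind_ephemeral():
    """Bind a listening socket to an OS-chosen free port.

    Port 0 means "any free ephemeral port"; getsockname() then
    reveals which port was actually assigned.
    """
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind(("127.0.0.1", 0))
    _host, port = s.getsockname()
    return s, port

# Two simultaneous binds succeed and get distinct ports, where two
# binds to a fixed port like 50200 would raise "Address already in use".
s1, p1 = bind_ephemeral()
s2, p2 = bind_ephemeral()
print(p1, p2)
s1.close()
s2.close()
```

In a test like TestBootstrapAliasmap the same idea applies: configure the aliasmap server address with port 0 (or an otherwise dynamically chosen free port) instead of the fixed default 50200.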






[jira] [Updated] (HDFS-16140) TestBootstrapAliasmap fails by BindException

2021-07-26 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16140?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HDFS-16140:
-
Fix Version/s: 3.2.3
   3.4.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Committed to trunk and branch-3.3.

> TestBootstrapAliasmap fails by BindException
> 
>
> Key: HDFS-16140
> URL: https://issues.apache.org/jira/browse/HDFS-16140
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0, 3.2.3
>
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> TestBootstrapAliasmap fails if 50200 port is already in use.
> https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3227/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
> {quote}
> [ERROR] 
> testAliasmapBootstrap(org.apache.hadoop.hdfs.server.namenode.ha.TestBootstrapAliasmap)
>   Time elapsed: 0.472 s  <<< ERROR!
> java.net.BindException: Problem binding to [0.0.0.0:50200] 
> java.net.BindException: Address already in use; For more details see:  
> http://wiki.apache.org/hadoop/BindException
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:914)
>   at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:810)
>   at org.apache.hadoop.ipc.Server.bind(Server.java:642)
>   at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:1301)
>   at org.apache.hadoop.ipc.Server.<init>(Server.java:3199)
>   at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:1062)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine2$Server.<init>(ProtobufRpcEngine2.java:464)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine2.getServer(ProtobufRpcEngine2.java:371)
>   at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:853)
>   at 
> org.apache.hadoop.hdfs.server.aliasmap.InMemoryLevelDBAliasMapServer.start(InMemoryLevelDBAliasMapServer.java:98)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.startAliasMapServerIfNecessary(NameNode.java:801)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:761)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:1014)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:989)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1763)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1378)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1147)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:1020)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:952)
>   at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:576)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:518)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.TestBootstrapAliasmap.setup(TestBootstrapAliasmap.java:56)
> {quote}






[jira] [Work logged] (HDFS-16140) TestBootstrapAliasmap fails by BindException

2021-07-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16140?focusedWorklogId=627566&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-627566
 ]

ASF GitHub Bot logged work on HDFS-16140:
-

Author: ASF GitHub Bot
Created on: 26/Jul/21 07:46
Start Date: 26/Jul/21 07:46
Worklog Time Spent: 10m 
  Work Description: aajisaka merged pull request #3229:
URL: https://github.com/apache/hadoop/pull/3229


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 627566)
Time Spent: 40m  (was: 0.5h)

> TestBootstrapAliasmap fails by BindException
> 
>
> Key: HDFS-16140
> URL: https://issues.apache.org/jira/browse/HDFS-16140
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> TestBootstrapAliasmap fails if 50200 port is already in use.
> https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3227/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
> {quote}
> [ERROR] 
> testAliasmapBootstrap(org.apache.hadoop.hdfs.server.namenode.ha.TestBootstrapAliasmap)
>   Time elapsed: 0.472 s  <<< ERROR!
> java.net.BindException: Problem binding to [0.0.0.0:50200] 
> java.net.BindException: Address already in use; For more details see:  
> http://wiki.apache.org/hadoop/BindException
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:914)
>   at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:810)
>   at org.apache.hadoop.ipc.Server.bind(Server.java:642)
>   at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:1301)
>   at org.apache.hadoop.ipc.Server.<init>(Server.java:3199)
>   at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:1062)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine2$Server.<init>(ProtobufRpcEngine2.java:464)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine2.getServer(ProtobufRpcEngine2.java:371)
>   at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:853)
>   at 
> org.apache.hadoop.hdfs.server.aliasmap.InMemoryLevelDBAliasMapServer.start(InMemoryLevelDBAliasMapServer.java:98)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.startAliasMapServerIfNecessary(NameNode.java:801)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:761)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:1014)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:989)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1763)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1378)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1147)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:1020)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:952)
>   at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:576)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:518)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.TestBootstrapAliasmap.setup(TestBootstrapAliasmap.java:56)
> {quote}






[jira] [Work logged] (HDFS-16140) TestBootstrapAliasmap fails by BindException

2021-07-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16140?focusedWorklogId=627567&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-627567
 ]

ASF GitHub Bot logged work on HDFS-16140:
-

Author: ASF GitHub Bot
Created on: 26/Jul/21 07:46
Start Date: 26/Jul/21 07:46
Worklog Time Spent: 10m 
  Work Description: aajisaka commented on pull request #3229:
URL: https://github.com/apache/hadoop/pull/3229#issuecomment-886460677


   Merged. Thank you @ferhui, @tomscut, and @virajjasani 




Issue Time Tracking
---

Worklog Id: (was: 627567)
Time Spent: 50m  (was: 40m)

> TestBootstrapAliasmap fails by BindException
> 
>
> Key: HDFS-16140
> URL: https://issues.apache.org/jira/browse/HDFS-16140
> Project: Hadoop HDFS
>  Issue Type: Sub-task
>  Components: test
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> TestBootstrapAliasmap fails if 50200 port is already in use.
> https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3227/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
> {quote}
> [ERROR] 
> testAliasmapBootstrap(org.apache.hadoop.hdfs.server.namenode.ha.TestBootstrapAliasmap)
>   Time elapsed: 0.472 s  <<< ERROR!
> java.net.BindException: Problem binding to [0.0.0.0:50200] 
> java.net.BindException: Address already in use; For more details see:  
> http://wiki.apache.org/hadoop/BindException
>   at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
>   at 
> sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
>   at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
>   at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:914)
>   at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:810)
>   at org.apache.hadoop.ipc.Server.bind(Server.java:642)
>   at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:1301)
>   at org.apache.hadoop.ipc.Server.<init>(Server.java:3199)
>   at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:1062)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine2$Server.<init>(ProtobufRpcEngine2.java:464)
>   at 
> org.apache.hadoop.ipc.ProtobufRpcEngine2.getServer(ProtobufRpcEngine2.java:371)
>   at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:853)
>   at 
> org.apache.hadoop.hdfs.server.aliasmap.InMemoryLevelDBAliasMapServer.start(InMemoryLevelDBAliasMapServer.java:98)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.startAliasMapServerIfNecessary(NameNode.java:801)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:761)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:1014)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:989)
>   at 
> org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1763)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNode(MiniDFSCluster.java:1378)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.configureNameService(MiniDFSCluster.java:1147)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.createNameNodesAndSetConf(MiniDFSCluster.java:1020)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster.initMiniDFSCluster(MiniDFSCluster.java:952)
>   at org.apache.hadoop.hdfs.MiniDFSCluster.<init>(MiniDFSCluster.java:576)
>   at 
> org.apache.hadoop.hdfs.MiniDFSCluster$Builder.build(MiniDFSCluster.java:518)
>   at 
> org.apache.hadoop.hdfs.server.namenode.ha.TestBootstrapAliasmap.setup(TestBootstrapAliasmap.java:56)
> {quote}






[jira] [Work logged] (HDFS-16131) Show storage type for failed volumes on namenode web

2021-07-26 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HDFS-16131?focusedWorklogId=627544&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-627544
 ]

ASF GitHub Bot logged work on HDFS-16131:
-

Author: ASF GitHub Bot
Created on: 26/Jul/21 06:26
Start Date: 26/Jul/21 06:26
Worklog Time Spent: 10m 
  Work Description: tomscut commented on pull request #3211:
URL: https://github.com/apache/hadoop/pull/3211#issuecomment-886414534


   Thanks @Hexiaoqiao for your review and merge.




Issue Time Tracking
---

Worklog Id: (was: 627544)
Time Spent: 1h 10m  (was: 1h)

> Show storage type for failed volumes on namenode web
> 
>
> Key: HDFS-16131
> URL: https://issues.apache.org/jira/browse/HDFS-16131
> Project: Hadoop HDFS
>  Issue Type: Wish
>Reporter: tomscut
>Assignee: tomscut
>Priority: Minor
>  Labels: pull-request-available
> Fix For: 3.4.0
>
> Attachments: failed-volumes.jpg
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> To make it easy to check the storage type of failed volumes, we can display 
> it on the namenode web UI.


