[jira] [Commented] (HADOOP-17998) Enable get command run with multi-thread

2021-11-08 Thread Chengwei Wang (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17440891#comment-17440891
 ] 

Chengwei Wang commented on HADOOP-17998:


Submit patch v001  [^HADOOP-17998.001.patch] 

> Enable get command run with multi-thread
> 
>
> Key: HADOOP-17998
> URL: https://issues.apache.org/jira/browse/HADOOP-17998
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Chengwei Wang
>Priority: Major
> Attachments: HADOOP-17998.001.patch
>
>
> CopyFromLocal/Put was enabled to run with multiple threads by HDFS-11786 and 
> HADOOP-14698, which makes putting directories or multiple files faster.
> So it would also be useful to enable the get and copyToLocal commands to run 
> with multiple threads.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17998) Enable get command run with multi-thread

2021-11-08 Thread Chengwei Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chengwei Wang updated HADOOP-17998:
---
Attachment: HADOOP-17998.001.patch

> Enable get command run with multi-thread
> 
>
> Key: HADOOP-17998
> URL: https://issues.apache.org/jira/browse/HADOOP-17998
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Chengwei Wang
>Priority: Major
> Attachments: HADOOP-17998.001.patch
>
>
> CopyFromLocal/Put was enabled to run with multiple threads by HDFS-11786 and 
> HADOOP-14698, which makes putting directories or multiple files faster.
> So it would also be useful to enable the get and copyToLocal commands to run 
> with multiple threads.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17998) Enable get command run with multi-thread

2021-11-08 Thread Chengwei Wang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chengwei Wang updated HADOOP-17998:
---
Summary: Enable get command run with multi-thread  (was: Enable get command 
run with multi-thread.)

> Enable get command run with multi-thread
> 
>
> Key: HADOOP-17998
> URL: https://issues.apache.org/jira/browse/HADOOP-17998
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Chengwei Wang
>Priority: Major
>
> CopyFromLocal/Put was enabled to run with multiple threads by HDFS-11786 and 
> HADOOP-14698, which makes putting directories or multiple files faster.
> So it would also be useful to enable the get and copyToLocal commands to run 
> with multiple threads.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17997) RBF: Namespace usage of mount table with multi subclusters can exceed quota

2021-11-08 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HADOOP-17997:

Labels: pull-request-available  (was: )

> RBF: Namespace usage of mount table with multi subclusters can exceed quota
> ---
>
> Key: HADOOP-17997
> URL: https://issues.apache.org/jira/browse/HADOOP-17997
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Reporter: xichaomin
>Priority: Major
>  Labels: pull-request-available
> Attachments: 1636424307319.jpg
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> # mkdir on each subcluster /test10
>  # mount table: hdfs dfsrouteradmin -add /test10 ns1,ns2 /test10
>  # set quota: hdfs dfsrouteradmin -setQuota /test10 -nsQuota 4
>  # touch two files under hdfs://\{fed}/test10; now the namespace usage is 4 (2 
> directories and 2 files).
>  # refresh and put a new file; the put is successful, and the namespace 
> usage comes to 5.
> !1636424307319.jpg!
> The router checks the quota without considering the increment of the pending 
> operation. It is difficult to enforce the quota precisely without knowing the 
> increment, but throwing an exception once usage >= quota would be better.
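
A minimal sketch of the stricter check suggested above; the class, method, and 
exception below are illustrative assumptions, not taken from the actual Router 
quota code:

{code:java}
import java.io.IOException;

// Hypothetical helper illustrating the proposed ">=" check only.
public class NamespaceQuotaCheckSketch {
  /**
   * Fail fast once the aggregated usage across subclusters has reached the quota,
   * since the increment of the pending operation is not known at check time.
   */
  static void verifyNamespaceQuota(long usage, long quota) throws IOException {
    if (quota >= 0 && usage >= quota) {   // ">=" instead of ">" is the suggested change
      throw new IOException("NSQuota exceeded: quota=" + quota + " usage=" + usage);
    }
  }

  public static void main(String[] args) throws IOException {
    verifyNamespaceQuota(3, 4);  // still under quota: passes
    verifyNamespaceQuota(4, 4);  // usage has reached the quota: throws
  }
}
{code}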



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17997) RBF: Namespace usage of mount table with multi subclusters can exceed quota

2021-11-08 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17997?focusedWorklogId=678886&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-678886
 ]

ASF GitHub Bot logged work on HADOOP-17997:
---

Author: ASF GitHub Bot
Created on: 09/Nov/21 03:27
Start Date: 09/Nov/21 03:27
Worklog Time Spent: 10m 
  Work Description: xicm opened a new pull request #3634:
URL: https://github.com/apache/hadoop/pull/3634


   …ers can exceed quota
   
   
   
   ### Description of PR
   
   
   ### How was this patch tested?
   
   
   ### For code changes:
   
   - [ ] Does the title of this PR start with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 678886)
Remaining Estimate: 0h
Time Spent: 10m

> RBF: Namespace usage of mount table with multi subclusters can exceed quota
> ---
>
> Key: HADOOP-17997
> URL: https://issues.apache.org/jira/browse/HADOOP-17997
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Reporter: xichaomin
>Priority: Major
> Attachments: 1636424307319.jpg
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> # mkdir on each subcluster /test10
>  # mount table: hdfs dfsrouteradmin -add /test10 ns1,ns2 /test10
>  # set quota: hdfs dfsrouteradmin -setQuota /test10 -nsQuota 4
>  # touch two files under hdfs://\{fed}/test10; now the namespace usage is 4 (2 
> directories and 2 files).
>  # refresh and put a new file; the put is successful, and the namespace 
> usage comes to 5.
> !1636424307319.jpg!
> The router checks the quota without considering the increment of the pending 
> operation. It is difficult to enforce the quota precisely without knowing the 
> increment, but throwing an exception once usage >= quota would be better.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] xicm opened a new pull request #3634: HADOOP-17997. RBF: Namespace usage of mount table with multi subclust…

2021-11-08 Thread GitBox


xicm opened a new pull request #3634:
URL: https://github.com/apache/hadoop/pull/3634


   …ers can exceed quota
   
   
   
   ### Description of PR
   
   
   ### How was this patch tested?
   
   
   ### For code changes:
   
   - [ ] Does the title of this PR start with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-17998) Enable get command run with multi-thread.

2021-11-08 Thread Chengwei Wang (Jira)
Chengwei Wang created HADOOP-17998:
--

 Summary: Enable get command run with multi-thread.
 Key: HADOOP-17998
 URL: https://issues.apache.org/jira/browse/HADOOP-17998
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Chengwei Wang


CopyFromLocal/Put was enabled to run with multiple threads by HDFS-11786 and 
HADOOP-14698, which makes putting directories or multiple files faster.
So it would also be useful to enable the get and copyToLocal commands to run with 
multiple threads.
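
As a rough illustration of the idea only (not the attached patch; the paths, pool 
size, and class name below are hypothetical), the get/copyToLocal path could drive 
the existing single-file copy from a thread pool, much as the multi-threaded put 
does:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

import java.util.Arrays;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ParallelGetSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    // Hypothetical source files; a real implementation would enumerate a directory.
    List<String> sources = Arrays.asList("/data/part-0", "/data/part-1", "/data/part-2");
    ExecutorService pool = Executors.newFixedThreadPool(4);  // pool size is illustrative
    for (String src : sources) {
      pool.submit(() -> {
        // copyToLocalFile is the existing single-file primitive; the proposal is to
        // drive calls like this concurrently from the get/copyToLocal command.
        fs.copyToLocalFile(new Path(src), new Path("/tmp/local", new Path(src).getName()));
        return null;
      });
    }
    pool.shutdown();
    pool.awaitTermination(1, TimeUnit.HOURS);
    fs.close();
  }
}
{code}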



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17997) RBF: Namespace usage of mount table with multi subclusters can exceed quota

2021-11-08 Thread xichaomin (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xichaomin updated HADOOP-17997:
---
Description: 
# mkdir on each subcluster /test10
 # mount table: hdfs dfsrouteradmin -add /test10 ns1,ns2 /test10
 # set quota: hdfs dfsrouteradmin -setQuota /test10 -nsQuota 4
 # touch two files under hdfs://\{fed}/test10; now the namespace usage is 4 (2 
directories and 2 files).
 # refresh and put a new file; the put is successful, and the namespace usage 
comes to 5.
!1636424307319.jpg!


The router checks the quota without considering the increment of the pending 
operation. It is difficult to enforce the quota precisely without knowing the 
increment, but throwing an exception once usage >= quota would be better.

  was:
# mkdir on each subcluster /test10
 # mount table: hdfs dfsrouteradmin -add /test10 ns1,ns2 /test10
 # set quota: hdfs dfsrouteradmin -setQuota /test10 -nsQuota 4
 # touch two files under hdfs://\{fed}/test10; now the namespace usage is 4 (2 
directories and 2 files).
 # refresh and put a new file; the put is successful, and the namespace usage 
comes to 5.
!1636424307319.jpg!


> RBF: Namespace usage of mount table with multi subclusters can exceed quota
> ---
>
> Key: HADOOP-17997
> URL: https://issues.apache.org/jira/browse/HADOOP-17997
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Reporter: xichaomin
>Priority: Major
> Attachments: 1636424307319.jpg
>
>
> # mkdir on each subcluster /test10
>  # mount table: hdfs dfsrouteradmin -add /test10 ns1,ns2 /test10
>  # set quota: hdfs dfsrouteradmin -setQuota /test10 -nsQuota 4
>  # touch two files under hdfs://\{fed}/test10; now the namespace usage is 4 (2 
> directories and 2 files).
>  # refresh and put a new file; the put is successful, and the namespace 
> usage comes to 5.
> !1636424307319.jpg!
> The router checks the quota without considering the increment of the pending 
> operation. It is difficult to enforce the quota precisely without knowing the 
> increment, but throwing an exception once usage >= quota would be better.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17997) RBF: Namespace usage of mount table with multi subclusters can exceed quota

2021-11-08 Thread xichaomin (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

xichaomin updated HADOOP-17997:
---
Attachment: 1636424307319.jpg

> RBF: Namespace usage of mount table with multi subclusters can exceed quota
> ---
>
> Key: HADOOP-17997
> URL: https://issues.apache.org/jira/browse/HADOOP-17997
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Reporter: xichaomin
>Priority: Major
> Attachments: 1636424307319.jpg
>
>
> # mkdir on each subcluster /test10
>  # mount table: hdfs dfsrouteradmin -add /test10 ns1,ns2 /test10
>  # set quota: hdfs dfsrouteradmin -setQuota /test10 -nsQuota 4
>  # touch two files under hdfs://\{fed}/test10; now the namespace usage is 4 (2 
> directories and 2 files).
>  # refresh and put a new file; the put is successful, and the namespace 
> usage comes to 5.
> !1636424307319.jpg!



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-17997) RBF: Namespace usage of mount table with multi subclusters can exceed quota

2021-11-08 Thread xichaomin (Jira)
xichaomin created HADOOP-17997:
--

 Summary: RBF: Namespace usage of mount table with multi 
subclusters can exceed quota
 Key: HADOOP-17997
 URL: https://issues.apache.org/jira/browse/HADOOP-17997
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Reporter: xichaomin
 Attachments: 1636424307319.jpg

# mkdir on each subcluster /test10
 # mount table: hdfs dfsrouteradmin -add /test10 ns1,ns2 /test10
 # set quota: hdfs dfsrouteradmin -setQuota /test10 -nsQuota 4
 # touch two files under hdfs://\{fed}/test10; now the namespace usage is 4 (2 
directories and 2 files).
 # refresh and put a new file; the put is successful, and the namespace usage 
comes to 5.
!1636424307319.jpg!



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] manuzhang commented on pull request #3628: YARN-11001. Add docs on removing node-to-labels mapping

2021-11-08 Thread GitBox


manuzhang commented on pull request #3628:
URL: https://github.com/apache/hadoop/pull/3628#issuecomment-963754004


   @szilard-nemeth could you help review this PR?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] jianghuazhu commented on pull request #3629: HDFS-16305.Record the remote NameNode address when the rolling log is triggered.

2021-11-08 Thread GitBox


jianghuazhu commented on pull request #3629:
URL: https://github.com/apache/hadoop/pull/3629#issuecomment-963746132


   It looks like Jenkins failed; there are some exceptions, e.g.:
   TestWebHdfsFileSystemContract
   TestMover
   TestDataNodeUUID
   TestAddOverReplicatedStripedBlocks
   TestDecommission
   TestLeaseRecovery2
   TestBlockReaderLocal
   
   These failures don't seem to have much to do with the PR I submitted.
   @jojochuang @virajjasani @tomscut, would you like to spare some time to help 
review this PR?
   Thank you very much.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17979) Interface EtagSource to allow FileStatus subclasses to provide etags

2021-11-08 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17979?focusedWorklogId=678811&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-678811
 ]

ASF GitHub Bot logged work on HADOOP-17979:
---

Author: ASF GitHub Bot
Created on: 08/Nov/21 22:49
Start Date: 08/Nov/21 22:49
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3633:
URL: https://github.com/apache/hadoop/pull/3633#issuecomment-963644661


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m 51s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 3 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  12m 39s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  25m 26s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  28m 47s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |  23m 57s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   4m  3s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m 24s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   2m 15s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   2m 50s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   4m 55s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  23m 16s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 25s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 12s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  23m 41s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | -1 :x: |  javac  |  23m 41s | 
[/results-compile-javac-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3633/1/artifact/out/results-compile-javac-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt)
 |  root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 generated 49 new + 1887 unchanged - 2 
fixed = 1936 total (was 1889)  |
   | +1 :green_heart: |  compile  |  19m 59s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | -1 :x: |  javac  |  19m 59s | 
[/results-compile-javac-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3633/1/artifact/out/results-compile-javac-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt)
 |  root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 generated 48 new + 1763 
unchanged - 1 fixed = 1811 total (was 1764)  |
   | -1 :x: |  blanks  |   0m  0s | 
[/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3633/1/artifact/out/blanks-eol.txt)
 |  The patch has 2 line(s) that end in blanks. Use git apply --whitespace=fix 
<>. Refer https://git-scm.com/docs/git-apply  |
   | -0 :warning: |  checkstyle  |   3m 44s | 
[/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3633/1/artifact/out/results-checkstyle-root.txt)
 |  root: The patch generated 2 new + 9 unchanged - 0 fixed = 11 total (was 9)  
|
   | +1 :green_heart: |  mvnsite  |   3m  7s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   2m  9s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   2m 50s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | -1 :x: |  spotbugs  |   1m 22s | 
[/new-spotbugs-hadoop-tools_hadoop-azure.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3633/1/artifact/out/new-spotbugs-hadoop-tools_hadoop-azure.html)
 |  hadoop-tools/hadoop-azure generated 1 new + 0 unchanged - 0 fixed = 1 total 
(was 0)  |
   | +1 :green_heart: |  shadedclient  |  23m 39s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 

[GitHub] [hadoop] hadoop-yetus commented on pull request #3633: HADOOP-17979. Add Interface EtagSource to allow FileStatus subclasses to provide etags

2021-11-08 Thread GitBox


hadoop-yetus commented on pull request #3633:
URL: https://github.com/apache/hadoop/pull/3633#issuecomment-963644661


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m 51s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 3 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  12m 39s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  25m 26s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  28m 47s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |  23m 57s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   4m  3s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m 24s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   2m 15s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   2m 50s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   4m 55s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  23m 16s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 25s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 12s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  23m 41s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | -1 :x: |  javac  |  23m 41s | 
[/results-compile-javac-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3633/1/artifact/out/results-compile-javac-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt)
 |  root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 generated 49 new + 1887 unchanged - 2 
fixed = 1936 total (was 1889)  |
   | +1 :green_heart: |  compile  |  19m 59s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | -1 :x: |  javac  |  19m 59s | 
[/results-compile-javac-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3633/1/artifact/out/results-compile-javac-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt)
 |  root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 generated 48 new + 1763 
unchanged - 1 fixed = 1811 total (was 1764)  |
   | -1 :x: |  blanks  |   0m  0s | 
[/blanks-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3633/1/artifact/out/blanks-eol.txt)
 |  The patch has 2 line(s) that end in blanks. Use git apply --whitespace=fix 
<>. Refer https://git-scm.com/docs/git-apply  |
   | -0 :warning: |  checkstyle  |   3m 44s | 
[/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3633/1/artifact/out/results-checkstyle-root.txt)
 |  root: The patch generated 2 new + 9 unchanged - 0 fixed = 11 total (was 9)  
|
   | +1 :green_heart: |  mvnsite  |   3m  7s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   2m  9s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   2m 50s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | -1 :x: |  spotbugs  |   1m 22s | 
[/new-spotbugs-hadoop-tools_hadoop-azure.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3633/1/artifact/out/new-spotbugs-hadoop-tools_hadoop-azure.html)
 |  hadoop-tools/hadoop-azure generated 1 new + 0 unchanged - 0 fixed = 1 total 
(was 0)  |
   | +1 :green_heart: |  shadedclient  |  23m 39s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  17m 13s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   2m 46s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  unit  |   2m  4s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 50s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 246m 49s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
  

[jira] [Commented] (HADOOP-17006) Fix the CosCrendentials Provider in core-site.xml for unit tests.

2021-11-08 Thread Chao Sun (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17440764#comment-17440764
 ] 

Chao Sun commented on HADOOP-17006:
---

Hey [~yuyang733], any update on this? Do you have an ETA?

> Fix the CosCrendentials Provider in core-site.xml for unit tests.
> -
>
> Key: HADOOP-17006
> URL: https://issues.apache.org/jira/browse/HADOOP-17006
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/cos
>Reporter: Yang Yu
>Assignee: Yang Yu
>Priority: Blocker
>
> Fix the CosCredentials Provider classpath in core-site.xml for unit tests.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16752) ABFS: test failure testLastModifiedTime()

2021-11-08 Thread Chao Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Sun updated HADOOP-16752:
--
Target Version/s: 3.3.3  (was: 3.3.2)

> ABFS: test failure testLastModifiedTime()
> -
>
> Key: HADOOP-16752
> URL: https://issues.apache.org/jira/browse/HADOOP-16752
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Reporter: Da Zhou
>Assignee: Sneha Vijayarajan
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> java.lang.AssertionError: lastModifiedTime should be after minCreateStartTime
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at 
> org.apache.hadoop.fs.azurebfs.ITestAzureBlobFileSystemFileStatus.testLastModifiedTime(ITestAzureBlobFileSystemFileStatus.java:138)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:55)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:298)
>   at 
> org.junit.internal.runners.statements.FailOnTimeout$CallableStatement.call(FailOnTimeout.java:292)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at java.lang.Thread.run(Thread.java:748)



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16955) Umbrella Jira for improving the Hadoop-cos support in Hadoop

2021-11-08 Thread Chao Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16955?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Sun updated HADOOP-16955:
--
Target Version/s: 3.3.3  (was: 3.3.2)

> Umbrella Jira for improving the Hadoop-cos support in Hadoop
> 
>
> Key: HADOOP-16955
> URL: https://issues.apache.org/jira/browse/HADOOP-16955
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/cos
>Reporter: Yang Yu
>Assignee: Yang Yu
>Priority: Major
> Attachments: HADOOP-16955-branch-3.3.001.patch
>
>   Original Estimate: 48h
>  Time Spent: 4h
>  Remaining Estimate: 44h
>
> This Umbrella Jira focuses on fixing some known bugs and adding some important 
> features.
>  
> bugfix:
>  # resolve the dependency conflict;
>  # fix the failure to return the upload buffer when some exceptions occur;
>  # fix the issue that a single-file upload cannot be retried;
>  # fix the bug of checking whether a file exists by frequently listing the 
> file.
> features:
>  # support SessionCredentialsProvider and InstanceCredentialsProvider, which 
> allows users to specify the credentials in URI or get it from the CVM 
> (Tencent Cloud Virtual Machine) bound to the CAM role that can access the COS 
> bucket;
>  # support server-side encryption based on SSE-COS and SSE-C;
>  # support the HTTP proxy settings;
>  # support the storage class settings;
>  # support the CRC64 checksum.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16968) Optimizing the upload buffer in Hadoop-cos

2021-11-08 Thread Chao Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16968?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Sun updated HADOOP-16968:
--
Target Version/s: 3.3.3  (was: 3.3.2)

> Optimizing the upload buffer in Hadoop-cos
> --
>
> Key: HADOOP-16968
> URL: https://issues.apache.org/jira/browse/HADOOP-16968
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/cos
>Reporter: Yang Yu
>Assignee: Yang Yu
>Priority: Major
> Attachments: HADOOP-16968-branch-3.3.001.patch
>
>
> This task focuses on fixing the bug where returning an upload buffer fails when 
> some exceptions occur.
>  
> In addition, optimized upload buffer management will be provided.
>  



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16963) HADOOP-16582 changed mkdirs() behavior

2021-11-08 Thread Chao Sun (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16963?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17440763#comment-17440763
 ] 

Chao Sun commented on HADOOP-16963:
---

Hey [~weichiu], what's the status of this JIRA? I'm currently working on the 
3.3.2 release.

> HADOOP-16582 changed mkdirs() behavior
> --
>
> Key: HADOOP-16963
> URL: https://issues.apache.org/jira/browse/HADOOP-16963
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.10.0, 3.3.0, 2.8.6, 2.9.3, 3.1.3, 3.2.2
>Reporter: Wei-Chiu Chuang
>Priority: Critical
>
> HADOOP-16582 changed behavior of {{mkdirs()}}
> Some Hive tests depend on the old behavior and they fail miserably.
> {quote}
> earlier:
> all plain mkdirs(somePath) calls were fast-tracked to FileSystem.mkdirs, which 
> rerouted them to the mkdirs(somePath, somePerm) method with some defaults (which 
> were static)
> an implementation of FileSystem only needed to implement "mkdirs(somePath, 
> somePerm)" - because the other was not necessarily called if it was always 
> wrapped in a FilterFileSystem or something like that
> now:
> especially FilterFileSystem forwards the call of mkdirs(p) to the actual fs 
> implementation...which may skip overridden mkdirs(somePath, somePerm) methods
> ...and could cause issues for existing FileSystem implementations
> {quote}
> File this jira to address this problem.
> [~kgyrtkirk] [~ste...@apache.org] [~kihwal]
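
A small, hypothetical illustration of the quoted problem (the classes below are 
made-up examples, not Hive or Hadoop code): a FileSystem that customises only the 
two-argument mkdirs can be bypassed once a wrapper forwards the single-argument 
call directly to the inner filesystem.

{code:java}
import java.io.IOException;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FilterFileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.RawLocalFileSystem;
import org.apache.hadoop.fs.permission.FsPermission;

// A filesystem that customises only the two-argument mkdirs, relying on the old
// FileSystem.mkdirs(Path) default to route the one-argument call through it.
class AuditedLocalFileSystem extends RawLocalFileSystem {
  @Override
  public boolean mkdirs(Path f, FsPermission permission) throws IOException {
    System.out.println("audited mkdirs: " + f);   // the custom behaviour
    return super.mkdirs(f, permission);
  }
}

public class MkdirsBehaviourDemo {
  public static void main(String[] args) throws IOException {
    Configuration conf = new Configuration();
    AuditedLocalFileSystem inner = new AuditedLocalFileSystem();
    inner.initialize(URI.create("file:///"), conf);
    FileSystem wrapped = new FilterFileSystem(inner);
    // Before HADOOP-16582 this resolved to mkdirs(path, defaultPerms) and hit the
    // audit above; afterwards FilterFileSystem.mkdirs(Path) forwards straight to
    // the inner fs.mkdirs(Path), so the override may be skipped.
    wrapped.mkdirs(new Path("/tmp/mkdirs-demo"));
  }
}
{code}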



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16970) Supporting the new credentials provider in Hadoop-cos

2021-11-08 Thread Chao Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16970?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Sun updated HADOOP-16970:
--
Target Version/s: 3.3.3  (was: 3.3.2)

> Supporting the new credentials provider in Hadoop-cos
> -
>
> Key: HADOOP-16970
> URL: https://issues.apache.org/jira/browse/HADOOP-16970
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/cos
>Reporter: Yang Yu
>Assignee: Yang Yu
>Priority: Major
>
> This task aims to support three credentials providers in Hadoop-cos:
>  * SessionCredentialsProvider
>  * InstanceCredentialsProvider



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17063) S3A deleteObjects hanging/retrying forever

2021-11-08 Thread Chao Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17063?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Sun updated HADOOP-17063:
--
Target Version/s: 3.3.3  (was: 3.3.2)

> S3A deleteObjects hanging/retrying forever
> --
>
> Key: HADOOP-17063
> URL: https://issues.apache.org/jira/browse/HADOOP-17063
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.1
> Environment: hadoop 3.2.1
> spark 2.4.4
>  
>Reporter: Dyno
>Priority: Minor
> Attachments: jstack_exec-34.log, jstack_exec-40.log, 
> jstack_exec-74.log
>
>
> {code}
> sun.misc.Unsafe.park(Native Method) 
> java.util.concurrent.locks.LockSupport.park(LockSupport.java:175) 
> com.google.common.util.concurrent.AbstractFuture.get(AbstractFuture.java:523) 
> com.google.common.util.concurrent.FluentFuture$TrustedFuture.get(FluentFuture.java:82)
>  
> org.apache.hadoop.fs.s3a.S3ABlockOutputStream.putObject(S3ABlockOutputStream.java:446)
>  
> org.apache.hadoop.fs.s3a.S3ABlockOutputStream.close(S3ABlockOutputStream.java:365)
>  
> org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
>  org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:101) 
> org.apache.parquet.hadoop.util.HadoopPositionOutputStream.close(HadoopPositionOutputStream.java:64)
>  org.apache.parquet.hadoop.ParquetFileWriter.end(ParquetFileWriter.java:685) 
> org.apache.parquet.hadoop.InternalParquetRecordWriter.close(InternalParquetRecordWriter.java:122)
>  
> org.apache.parquet.hadoop.ParquetRecordWriter.close(ParquetRecordWriter.java:165)
>  
> org.apache.spark.sql.execution.datasources.parquet.ParquetOutputWriter.close(ParquetOutputWriter.scala:42)
>  
> org.apache.spark.sql.execution.datasources.FileFormatDataWriter.releaseResources(FileFormatDataWriter.scala:57)
>  
> org.apache.spark.sql.execution.datasources.FileFormatDataWriter.commit(FileFormatDataWriter.scala:74)
>  
> org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:247)
>  
> org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask$3.apply(FileFormatWriter.scala:242)
>  
> org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1394)
>  
> org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:248)
>  
> org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:170)
>  
> org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:169)
>  org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90) 
> org.apache.spark.scheduler.Task.run(Task.scala:123) 
> org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
>  org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360) 
> org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414) 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  java.lang.Thread.run(Thread.java:748)
> {code}
>  
> We are using Spark 2.4.4 with Hadoop 3.2.1 on Kubernetes/spark-operator, and 
> sometimes we see this hang with the stack trace above. It looks like the 
> putObject never returns, and we have to kill the executor to make the job move 
> forward. 
>  



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17583) Enable shelldoc check in GitHub PR

2021-11-08 Thread Chao Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Sun updated HADOOP-17583:
--
Target Version/s: 3.4.0, 3.2.4, 3.3.3  (was: 3.4.0, 3.3.2, 3.2.4)

> Enable shelldoc check in GitHub PR
> --
>
> Key: HADOOP-17583
> URL: https://issues.apache.org/jira/browse/HADOOP-17583
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira Ajisaka
>Priority: Major
>
> After HADOOP-17570, we can enable shelldoc check again because the commit 
> hash of Yetus includes YETUS-1099.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17682) ABFS: Support FileStatus input to OpenFileWithOptions() via OpenFileParameters

2021-11-08 Thread Chao Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Sun updated HADOOP-17682:
--
Target Version/s: 3.3.3  (was: 3.3.2)

> ABFS: Support FileStatus input to OpenFileWithOptions() via OpenFileParameters
> --
>
> Key: HADOOP-17682
> URL: https://issues.apache.org/jira/browse/HADOOP-17682
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Reporter: Sumangala Patki
>Assignee: Sumangala Patki
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 8h 10m
>  Remaining Estimate: 0h
>
> ABFS open methods require certain information (contentLength, eTag, etc.) to 
> create an InputStream for the file at the given path. This information is 
> retrieved via a GetFileStatus request to the backend.
> However, client applications may often have access to the FileStatus prior to 
> invoking the open API. Providing this FileStatus to the driver through the 
> OpenFileParameters argument of openFileWithOptions() can help avoid the call 
> to Store for FileStatus.
> This PR adds handling for the FileStatus instance (if any) provided via the 
> OpenFileParameters argument.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17446) Print the thread parker and lock information in stacks page

2021-11-08 Thread Chao Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17446?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Sun updated HADOOP-17446:
--
Target Version/s: 3.4.0, 3.2.4, 3.3.3  (was: 3.4.0, 3.3.2, 3.2.4)

> Print the thread parker and lock information in stacks page
> ---
>
> Key: HADOOP-17446
> URL: https://issues.apache.org/jira/browse/HADOOP-17446
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: common
>Affects Versions: 3.4.0
>Reporter: Baolong Mao
>Assignee: Baolong Mao
>Priority: Major
>  Labels: pull-request-available
> Attachments: image-2020-12-25-08-32-32-982.png
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Sometimes our service gets stuck because of a lock held by another thread, but 
> we can get nothing from "stacks" for a ReadWriteLock, which is widely used in 
> our services, such as the fslock, cplock, and dirlock of the namenode.
> Luckily, we can get the thread parker from the Thread object, which can help us 
> see the thread parker clearly.
>  !image-2020-12-25-08-32-32-982.png! 



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17500) S3A doesn't calculate Content-MD5 on uploads

2021-11-08 Thread Chao Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Sun updated HADOOP-17500:
--
Target Version/s: 3.3.3  (was: 3.3.2)

> S3A doesn't calculate Content-MD5 on uploads
> 
>
> Key: HADOOP-17500
> URL: https://issues.apache.org/jira/browse/HADOOP-17500
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Reporter: Pedro Tôrres
>Priority: Major
>
> Hadoop doesn't specify the Content-MD5 of an object when uploading it to an 
> S3 bucket. This prevents uploads to buckets with Object Lock, which require 
> the Content-MD5 to be specified.
>  
> {code:java}
> com.amazonaws.services.s3.model.AmazonS3Exception: Content-MD5 HTTP header is 
> required for Put Part requests with Object Lock parameters (Service: Amazon 
> S3; Status Code: 400; Error Code: InvalidRequest; Request ID: 
> ; S3 Extended Request ID: 
> ; 
> Proxy: null), S3 Extended Request ID: 
> 
>   at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1819)
>   at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleServiceErrorResponse(AmazonHttpClient.java:1403)
>   at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1372)
>   at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1145)
>   at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:802)
>   at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:770)
>   at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:744)
>   at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:704)
>   at 
> com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:686)
>   at 
> com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:550)
>   at 
> com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:530)
>   at 
> com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:5248)
>   at 
> com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:5195)
>   at 
> com.amazonaws.services.s3.AmazonS3Client.doUploadPart(AmazonS3Client.java:3768)
>   at 
> com.amazonaws.services.s3.AmazonS3Client.uploadPart(AmazonS3Client.java:3753)
>   at 
> org.apache.hadoop.fs.s3a.S3AFileSystem.uploadPart(S3AFileSystem.java:2230)
>   at 
> org.apache.hadoop.fs.s3a.WriteOperationHelper.lambda$uploadPart$8(WriteOperationHelper.java:558)
>   at org.apache.hadoop.fs.s3a.Invoker.once(Invoker.java:110)
>   ... 15 more{code}
>  
> Similar to https://issues.apache.org/jira/browse/JCLOUDS-1549
> Related to https://issues.apache.org/jira/browse/HADOOP-13076



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17543) HDFS Put was failed with IPV6 cluster

2021-11-08 Thread Chao Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17543?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Sun updated HADOOP-17543:
--
Target Version/s: 3.3.3  (was: 3.3.2)

> HDFS Put was failed with IPV6 cluster 
> --
>
> Key: HADOOP-17543
> URL: https://issues.apache.org/jira/browse/HADOOP-17543
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.1.1
>Reporter: ANANDA G B
>Priority: Minor
>  Labels: ipv6
>




--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17669) Port HADOOP-17079, HADOOP-17505 to branch-3.3

2021-11-08 Thread Chao Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Sun updated HADOOP-17669:
--
Target Version/s: 3.3.3  (was: 3.3.2)

> Port HADOOP-17079, HADOOP-17505  to branch-3.3
> --
>
> Key: HADOOP-17669
> URL: https://issues.apache.org/jira/browse/HADOOP-17669
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.3.1
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>




--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17673) IOStatistics API in branch-3.3 break compatibility

2021-11-08 Thread Chao Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17673?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Sun updated HADOOP-17673:
--
Target Version/s: 3.3.3  (was: 3.3.2)

> IOStatistics API in branch-3.3 break compatibility
> --
>
> Key: HADOOP-17673
> URL: https://issues.apache.org/jira/browse/HADOOP-17673
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Wei-Chiu Chuang
>Assignee: Steve Loughran
>Priority: Critical
>  Labels: release-blocker
>
> The S3 delegation token feature (3.3.0) added API 
> {code:java}
>   AmazonS3 createS3Client(URI name,
>   String bucket,
>   AWSCredentialsProvider credentialSet,
>   String userAgentSuffix) throws IOException;
>  {code}
> However, the IOStatistics API (HADOOP-17271, HADOOP-13551. in 3.3.1) changed 
> it to
> {code:java}
>   AmazonS3 createS3Client(URI name,
>   String bucket,
>   AWSCredentialsProvider credentialSet,
>   String userAgentSuffix) throws IOException; {code}
> The API is declared evolving, so we're not supposed to break compat between 
> maintenance releases.
> [~ste...@apache.org]



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17717) Update wildfly openssl to 1.1.3.Final

2021-11-08 Thread Chao Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Sun updated HADOOP-17717:
--
Target Version/s: 3.3.3  (was: 3.3.2)

> Update wildfly openssl to 1.1.3.Final
> -
>
> Key: HADOOP-17717
> URL: https://issues.apache.org/jira/browse/HADOOP-17717
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> HADOOP-17649 got stalled. IMO we can bump the version to 1.1.3.Final instead, 
> at least, for branch-3.3.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17915) ABFS AbfsDelegationTokenManager to generate canonicalServiceName if DT plugin doesn't

2021-11-08 Thread Chao Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17915?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Sun updated HADOOP-17915:
--
Target Version/s: 3.3.3  (was: 3.3.2)

> ABFS AbfsDelegationTokenManager to generate canonicalServiceName if DT plugin 
> doesn't
> -
>
> Key: HADOOP-17915
> URL: https://issues.apache.org/jira/browse/HADOOP-17915
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.3.1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Critical
>  Labels: pull-request-available
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Currently in {{AbfsDelegationTokenManager}}, any 
> {{CustomDelegationTokenManager}} only provides a canonical service name if it
> implements {{BoundDTExtension}} and its {{getCanonicalServiceName()}} method.
> If this doesn't hold, {{AbfsDelegationTokenManager}} returns null, which 
> causes {{AzureBlobFileSystem.getCanonicalServiceName()}}
> to call {{super.getCanonicalServiceName()}} *which resolves the IP address of 
> the abfs endpoint, and then the FQDN of that IPAddr
> If a storage account is served over >1 endpoint, then the DT will only have a 
> valid service name for one of the possible
> endpoints, so it will _only work if all processes get the same IP address when 
> they look up the storage account address_
> Fix
> # DT plugins SHOULD generate the canonical service name
> #  If they don't, and DTs are enabled: {{AbfsDelegationTokenManager}} to 
> create a default one
> # and {{AzureBlobFileSystem.getCanonicalServiceName()}} MUST NOT call 
> superclass.
> The default canonical service name of a store will be {{abfs:// + 
> FsURI.getHost() + "/"}}, so all containers in the same storage account have the 
> same service name
> {code}
> abfs://buc...@stevel-testing.dfs.core.windows.net/path
> {code}
> maps to 
> {code}
> abfs://stevel-testing.dfs.core.windows.net/ 
> {code}
> This will mean that only one DT will be created per storage a/c; Applications 
> will not need to list all containers which deployed processes will wish to 
> interact with. Today's behaviour, based on rDNS lookup of storage account, is 
> possibly slightly broader in that all storage accounts which map to the same 
> IPAddr share a DT. The proposed scheme will still be much broader than that 
> of S3A, where every bucket has its unique service name, so apps need to list 
> all target filesystems at launch time (easy for MR, source of trouble in 
> spark).
> Fix: straightforward. 
> Test
> * no DTs: service name == null
> * DTs: will match proposed pattern, even if extension returns null.
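
A minimal sketch of the proposed default, assuming the service name is derived 
purely from the filesystem URI; this is not the actual ABFS code, and the 
container name "container1" below is a made-up example (not the obfuscated one 
quoted above):

{code:java}
import java.net.URI;

public class CanonicalServiceNameSketch {
  /** One service name per storage account, regardless of container. */
  static String defaultCanonicalServiceName(URI fsUri) {
    return fsUri.getScheme() + "://" + fsUri.getHost() + "/";
  }

  public static void main(String[] args) {
    URI fsUri = URI.create("abfs://container1@stevel-testing.dfs.core.windows.net/path");
    // prints abfs://stevel-testing.dfs.core.windows.net/
    System.out.println(defaultCanonicalServiceName(fsUri));
  }
}
{code}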



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17928) s3a: set fs.s3a.downgrade.syncable.exceptions = true by default

2021-11-08 Thread Chao Sun (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Sun updated HADOOP-17928:
--
Target Version/s: 3.3.3  (was: 3.3.2)

> s3a: set fs.s3a.downgrade.syncable.exceptions = true by default
> ---
>
> Key: HADOOP-17928
> URL: https://issues.apache.org/jira/browse/HADOOP-17928
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3.5h
>  Remaining Estimate: 0h
>
> HADOOP-17597 set the policy for reacting to hsync() on an S3 output stream to 
> one of: fail or warn, with the default == fail.
> I propose downgrading this to warn. We've done it internally, after having it 
> on fail long enough to identify which processes were doing either of
> * having unrealistic expectations about the output stream (fix: move off S3)
> * using hflush() as a variant of flush(), with the failure being an 
> over-reaction
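
For reference, a minimal sketch of setting the flag explicitly in client code; the 
property name comes from this issue's title, and the surrounding class is purely 
illustrative:

{code:java}
import org.apache.hadoop.conf.Configuration;

public class SyncableDowngradeExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Warn instead of fail when hsync()/hflush() is called on an S3A output stream.
    conf.setBoolean("fs.s3a.downgrade.syncable.exceptions", true);
    System.out.println(conf.getBoolean("fs.s3a.downgrade.syncable.exceptions", false));
  }
}
{code}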



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17975) Fallback to simple auth does not work for a secondary DistributedFileSystem instance

2021-11-08 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17975?focusedWorklogId=678748&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-678748
 ]

ASF GitHub Bot logged work on HADOOP-17975:
---

Author: ASF GitHub Bot
Created on: 08/Nov/21 21:07
Start Date: 08/Nov/21 21:07
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3579:
URL: https://github.com/apache/hadoop/pull/3579#issuecomment-963573593


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 57s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  34m 46s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  23m 33s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |  20m 16s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   1m  1s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 32s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  2s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 36s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   2m 29s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  25m 25s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 57s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  28m  5s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |  28m  5s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  24m  2s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |  24m  2s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   1m  2s | 
[/results-checkstyle-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3579/6/artifact/out/results-checkstyle-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common-project/hadoop-common: The patch generated 5 new + 131 
unchanged - 0 fixed = 136 total (was 131)  |
   | +1 :green_heart: |  mvnsite  |   1m 36s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m  0s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 41s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   2m 49s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  25m 15s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  18m 18s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   1m 12s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 218m 12s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3579/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3579 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 458744339c29 4.15.0-147-generic #151-Ubuntu SMP Fri Jun 18 
19:21:19 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 4503c3fb8b55b32b3946eca8e0d73d64e6d1adb5 |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3579/6/testReport/ |
   | Max. process+thread count | 1654 (vs. ulimit of 5500) |
   | modules | C: 

[GitHub] [hadoop] hadoop-yetus commented on pull request #3579: HADOOP-17975 Fallback to simple auth does not work for a secondary DistributedFileSystem instance.

2021-11-08 Thread GitBox


hadoop-yetus commented on pull request #3579:
URL: https://github.com/apache/hadoop/pull/3579#issuecomment-963573593


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 57s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  34m 46s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  23m 33s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |  20m 16s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   1m  1s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 32s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  2s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 36s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   2m 29s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  25m 25s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 57s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  28m  5s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |  28m  5s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  24m  2s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |  24m  2s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   1m  2s | 
[/results-checkstyle-hadoop-common-project_hadoop-common.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3579/6/artifact/out/results-checkstyle-hadoop-common-project_hadoop-common.txt)
 |  hadoop-common-project/hadoop-common: The patch generated 5 new + 131 
unchanged - 0 fixed = 136 total (was 131)  |
   | +1 :green_heart: |  mvnsite  |   1m 36s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m  0s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 41s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   2m 49s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  25m 15s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  18m 18s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   1m 12s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 218m 12s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3579/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3579 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 458744339c29 4.15.0-147-generic #151-Ubuntu SMP Fri Jun 18 
19:21:19 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 4503c3fb8b55b32b3946eca8e0d73d64e6d1adb5 |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3579/6/testReport/ |
   | Max. process+thread count | 1654 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3579/6/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to 

[GitHub] [hadoop] hadoop-yetus commented on pull request #3632: YARN-10822. Containers going from New to Scheduled transition for kil…

2021-11-08 Thread GitBox


hadoop-yetus commented on pull request #3632:
URL: https://github.com/apache/hadoop/pull/3632#issuecomment-963527914


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 58s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  35m  7s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 28s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   1m 23s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   0m 32s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 44s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 38s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 32s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   1m 31s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  23m  4s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 38s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 36s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |   1m 36s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 40s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |   1m 40s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 26s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 37s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 33s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 31s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   1m 44s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  23m 50s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  22m 39s |  |  hadoop-yarn-server-nodemanager 
in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 30s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 120m 37s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3632/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3632 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux f66fcf996109 4.15.0-147-generic #151-Ubuntu SMP Fri Jun 18 
19:21:19 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / ffb53fa0eec67e2db78fd0d74977848e2e0a4dfe |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3632/1/testReport/ |
   | Max. process+thread count | 521 (vs. ulimit of 5500) |
   | modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3632/1/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this 

[jira] [Work logged] (HADOOP-17995) Stale record should be removed when DataNodePeerMetrics#dumpSendPacketDownstreamAvgInfoAsJson

2021-11-08 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17995?focusedWorklogId=678677=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-678677
 ]

ASF GitHub Bot logged work on HADOOP-17995:
---

Author: ASF GitHub Bot
Created on: 08/Nov/21 19:07
Start Date: 08/Nov/21 19:07
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3630:
URL: https://github.com/apache/hadoop/pull/3630#issuecomment-963483638


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  12m 21s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  12m 39s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  21m 20s |  |  trunk passed  |
   | -1 :x: |  compile  |   3m 34s | 
[/branch-compile-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3630/1/artifact/out/branch-compile-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt)
 |  root in trunk failed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.  |
   | -1 :x: |  compile  |   3m  7s | 
[/branch-compile-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3630/1/artifact/out/branch-compile-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt)
 |  root in trunk failed with JDK Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.  |
   | +1 :green_heart: |  checkstyle  |   3m 26s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 41s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 52s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   2m 58s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   5m 18s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  22m  1s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 29s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m  6s |  |  the patch passed  |
   | -1 :x: |  compile  |   3m 23s | 
[/patch-compile-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3630/1/artifact/out/patch-compile-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt)
 |  root in the patch failed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.  |
   | -1 :x: |  javac  |   3m 23s | 
[/patch-compile-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3630/1/artifact/out/patch-compile-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt)
 |  root in the patch failed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.  |
   | -1 :x: |  compile  |   2m 57s | 
[/patch-compile-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3630/1/artifact/out/patch-compile-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt)
 |  root in the patch failed with JDK Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.  |
   | -1 :x: |  javac  |   2m 57s | 
[/patch-compile-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3630/1/artifact/out/patch-compile-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt)
 |  root in the patch failed with JDK Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   3m 13s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   2m 31s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 32s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   2m 46s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | -1 :x: |  spotbugs  |   2m 15s | 
[/new-spotbugs-hadoop-common-project_hadoop-common.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3630/1/artifact/out/new-spotbugs-hadoop-common-project_hadoop-common.html)
 |  

[GitHub] [hadoop] hadoop-yetus commented on pull request #3630: HADOOP-17995. Stale record should be removed when DataNodePeerMetrics#dumpSendPacketDownstreamAvgInfoAsJson

2021-11-08 Thread GitBox


hadoop-yetus commented on pull request #3630:
URL: https://github.com/apache/hadoop/pull/3630#issuecomment-963483638


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  12m 21s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  12m 39s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  21m 20s |  |  trunk passed  |
   | -1 :x: |  compile  |   3m 34s | 
[/branch-compile-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3630/1/artifact/out/branch-compile-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt)
 |  root in trunk failed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.  |
   | -1 :x: |  compile  |   3m  7s | 
[/branch-compile-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3630/1/artifact/out/branch-compile-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt)
 |  root in trunk failed with JDK Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.  |
   | +1 :green_heart: |  checkstyle  |   3m 26s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 41s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 52s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   2m 58s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   5m 18s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  22m  1s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 29s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m  6s |  |  the patch passed  |
   | -1 :x: |  compile  |   3m 23s | 
[/patch-compile-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3630/1/artifact/out/patch-compile-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt)
 |  root in the patch failed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.  |
   | -1 :x: |  javac  |   3m 23s | 
[/patch-compile-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3630/1/artifact/out/patch-compile-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt)
 |  root in the patch failed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.  |
   | -1 :x: |  compile  |   2m 57s | 
[/patch-compile-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3630/1/artifact/out/patch-compile-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt)
 |  root in the patch failed with JDK Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.  |
   | -1 :x: |  javac  |   2m 57s | 
[/patch-compile-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3630/1/artifact/out/patch-compile-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt)
 |  root in the patch failed with JDK Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   3m 13s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   2m 31s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 32s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   2m 46s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | -1 :x: |  spotbugs  |   2m 15s | 
[/new-spotbugs-hadoop-common-project_hadoop-common.html](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3630/1/artifact/out/new-spotbugs-hadoop-common-project_hadoop-common.html)
 |  hadoop-common-project/hadoop-common generated 1 new + 0 unchanged - 0 fixed 
= 1 total (was 0)  |
   | +1 :green_heart: |  shadedclient  |  21m 44s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  17m  2s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  | 223m 48s |  |  hadoop-hdfs in the patch 
passed.  |
   | +1 :green_heart: |  asflicense 

[jira] [Work logged] (HADOOP-15566) Support OpenTelemetry

2021-11-08 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15566?focusedWorklogId=678666=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-678666
 ]

ASF GitHub Bot logged work on HADOOP-15566:
---

Author: ASF GitHub Bot
Created on: 08/Nov/21 18:52
Start Date: 08/Nov/21 18:52
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3445:
URL: https://github.com/apache/hadoop/pull/3445#issuecomment-963471894


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 57s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  shelldocs  |   0m  1s |  |  Shelldocs was not available.  |
   | +0 :ok: |  buf  |   0m  1s |  |  buf was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  12m 17s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  25m 21s |  |  trunk passed  |
   | -1 :x: |  compile  |   3m 46s | 
[/branch-compile-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3445/2/artifact/out/branch-compile-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt)
 |  root in trunk failed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.  |
   | -1 :x: |  compile  |   3m 12s | 
[/branch-compile-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3445/2/artifact/out/branch-compile-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt)
 |  root in trunk failed with JDK Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.  |
   | +1 :green_heart: |  checkstyle  |   3m 40s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m 57s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   2m 43s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   3m 43s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +0 :ok: |  spotbugs  |   0m 23s |  |  branch/hadoop-project no spotbugs 
output file (spotbugsXml.xml)  |
   | +1 :green_heart: |  shadedclient  |  24m 52s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  25m 12s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 27s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m 12s |  |  the patch passed  |
   | -1 :x: |  compile  |   3m 40s | 
[/patch-compile-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3445/2/artifact/out/patch-compile-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt)
 |  root in the patch failed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.  |
   | -1 :x: |  cc  |   3m 40s | 
[/patch-compile-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3445/2/artifact/out/patch-compile-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt)
 |  root in the patch failed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.  |
   | -1 :x: |  javac  |   3m 40s | 
[/patch-compile-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3445/2/artifact/out/patch-compile-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt)
 |  root in the patch failed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.  |
   | -1 :x: |  compile  |   3m  5s | 
[/patch-compile-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3445/2/artifact/out/patch-compile-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt)
 |  root in the patch failed with JDK Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.  |
   | -1 :x: |  cc  |   3m  5s | 
[/patch-compile-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3445/2/artifact/out/patch-compile-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt)
 |  root in the patch failed with JDK Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.  |
   | -1 :x: |  javac  |   3m  5s | 

[GitHub] [hadoop] hadoop-yetus commented on pull request #3445: HADOOP-15566 Opentelemetry changes using java agent

2021-11-08 Thread GitBox


hadoop-yetus commented on pull request #3445:
URL: https://github.com/apache/hadoop/pull/3445#issuecomment-963471894


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 57s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  shelldocs  |   0m  1s |  |  Shelldocs was not available.  |
   | +0 :ok: |  buf  |   0m  1s |  |  buf was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  12m 17s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  25m 21s |  |  trunk passed  |
   | -1 :x: |  compile  |   3m 46s | 
[/branch-compile-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3445/2/artifact/out/branch-compile-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt)
 |  root in trunk failed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.  |
   | -1 :x: |  compile  |   3m 12s | 
[/branch-compile-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3445/2/artifact/out/branch-compile-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt)
 |  root in trunk failed with JDK Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.  |
   | +1 :green_heart: |  checkstyle  |   3m 40s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m 57s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   2m 43s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   3m 43s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +0 :ok: |  spotbugs  |   0m 23s |  |  branch/hadoop-project no spotbugs 
output file (spotbugsXml.xml)  |
   | +1 :green_heart: |  shadedclient  |  24m 52s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | -0 :warning: |  patch  |  25m 12s |  |  Used diff version of patch file. 
Binary files and potentially other changes not applied. Please rebase and 
squash commits if necessary.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 27s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m 12s |  |  the patch passed  |
   | -1 :x: |  compile  |   3m 40s | 
[/patch-compile-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3445/2/artifact/out/patch-compile-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt)
 |  root in the patch failed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.  |
   | -1 :x: |  cc  |   3m 40s | 
[/patch-compile-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3445/2/artifact/out/patch-compile-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt)
 |  root in the patch failed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.  |
   | -1 :x: |  javac  |   3m 40s | 
[/patch-compile-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3445/2/artifact/out/patch-compile-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt)
 |  root in the patch failed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.  |
   | -1 :x: |  compile  |   3m  5s | 
[/patch-compile-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3445/2/artifact/out/patch-compile-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt)
 |  root in the patch failed with JDK Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.  |
   | -1 :x: |  cc  |   3m  5s | 
[/patch-compile-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3445/2/artifact/out/patch-compile-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt)
 |  root in the patch failed with JDK Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.  |
   | -1 :x: |  javac  |   3m  5s | 
[/patch-compile-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3445/2/artifact/out/patch-compile-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt)
 |  root in the patch failed with JDK Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  

[GitHub] [hadoop] hadoop-yetus commented on pull request #3631: HDFS-16307. Improve HdfsBlockPlacementPolicies docs readability

2021-11-08 Thread GitBox


hadoop-yetus commented on pull request #3631:
URL: https://github.com/apache/hadoop/pull/3631#issuecomment-963468683


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 43s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  32m  9s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 24s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  49m 27s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 12s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  mvnsite  |   1m 10s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  19m 44s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  asflicense  |   0m 34s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  73m 20s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3631/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3631 |
   | Optional Tests | dupname asflicense mvnsite codespell markdownlint |
   | uname | Linux 05f25b7b5fd0 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / d5cf473761424c62697f2875f90ef4905da6e33d |
   | Max. process+thread count | 548 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3631/1/console |
   | versions | git=2.25.1 maven=3.6.3 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17979) Interface EtagSource to allow FileStatus subclasses to provide etags

2021-11-08 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17979?focusedWorklogId=678656=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-678656
 ]

ASF GitHub Bot logged work on HADOOP-17979:
---

Author: ASF GitHub Bot
Created on: 08/Nov/21 18:42
Start Date: 08/Nov/21 18:42
Worklog Time Spent: 10m 
  Work Description: steveloughran commented on pull request #3633:
URL: https://github.com/apache/hadoop/pull/3633#issuecomment-963462053


   + @mukund-thakur @mehakmeet @snvijaya 
   
   New tests are happy; regression tests are in progress.
   
   Note that this also moves the abfs listFiles and listLocatedStatus API calls to 
being incremental; this is a good thing anyway.
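   
   For context, a rough sketch of what incremental listing means to a caller of the public FileSystem API; the directory path is hypothetical, and this illustrates the calling pattern rather than the abfs-internal change in this PR:
   
```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.LocatedFileStatus;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.RemoteIterator;

public class IncrementalListingExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Hypothetical ABFS path; any FileSystem path works the same way.
    Path dir = new Path("abfs://container@account.dfs.core.windows.net/data");
    FileSystem fs = dir.getFileSystem(conf);

    // listFiles returns a RemoteIterator, so results can be fetched from the
    // store page by page as the caller iterates, instead of being buffered up front.
    RemoteIterator<LocatedFileStatus> it = fs.listFiles(dir, true);
    while (it.hasNext()) {
      LocatedFileStatus status = it.next();
      System.out.println(status.getPath() + " " + status.getLen());
    }
  }
}
```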


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 678656)
Time Spent: 20m  (was: 10m)

> Interface EtagSource to allow FileStatus subclasses to provide etags
> 
>
> Key: HADOOP-17979
> URL: https://issues.apache.org/jira/browse/HADOOP-17979
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs, fs/azure, fs/s3
>Affects Versions: 3.3.1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Various object stores provide etags in their FileStatus implementations.
> Make these values accessible:
> * new interface {{EtagFromFileStatus}} to be implemented when provided
> * filesystem.md to declare the requirements of etags (constant between LIST and 
> HEAD)...
> * path capabilities for (a) etag support and (b) etags consistent across rename
> Add an implementation for abfs, later s3a (and google gcs).
> This is initially to handle recovery from certain failures in job commit 
> against abfs, but it would also allow a cloud-ready version of distcp to track 
> etags of uploaded files, so it can diff properly.
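
For readers unfamiliar with the shape of such an interface, a minimal sketch of what an etag-exposing FileStatus could look like; apart from the EtagSource name in the issue title, the method and class names here are illustrative assumptions, not the committed API:

```java
import java.io.IOException;

import org.apache.hadoop.fs.FileStatus;

// Sketch of the marker interface: a FileStatus that can report an etag.
public interface EtagSource {
  /** @return the object's etag, or null if the store did not supply one. */
  String getEtag();
}

// Hypothetical FileStatus subclass opting in (for illustration only).
class DemoEtagFileStatus extends FileStatus implements EtagSource {
  private final String etag;

  DemoEtagFileStatus(FileStatus base, String etag) throws IOException {
    super(base); // copy constructor keeps path, length, times, etc.
    this.etag = etag;
  }

  @Override
  public String getEtag() {
    return etag;
  }
}

// Callers can then probe any FileStatus for etag support:
//   if (status instanceof EtagSource) {
//     String etag = ((EtagSource) status).getEtag();
//   }
```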



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on pull request #3633: HADOOP-17979. Add Interface EtagSource to allow FileStatus subclasses to provide etags

2021-11-08 Thread GitBox


steveloughran commented on pull request #3633:
URL: https://github.com/apache/hadoop/pull/3633#issuecomment-963462053


   + @mukund-thakur @mehakmeet @snvijaya 
   
   New tests are happy; regression tests are in progress.
   
   Note that this also moves the abfs listFiles and listLocatedStatus API calls to 
being incremental; this is a good thing anyway.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17979) Interface EtagSource to allow FileStatus subclasses to provide etags

2021-11-08 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17979?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HADOOP-17979:

Labels: pull-request-available  (was: )

> Interface EtagSource to allow FileStatus subclasses to provide etags
> 
>
> Key: HADOOP-17979
> URL: https://issues.apache.org/jira/browse/HADOOP-17979
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs, fs/azure, fs/s3
>Affects Versions: 3.3.1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Various object stores provide etags in their FileStatus implementations.
> Make these values accessible:
> * new interface {{EtagFromFileStatus}} to be implemented when provided
> * filesystem.md to declare the requirements of etags (constant between LIST and 
> HEAD)...
> * path capabilities for (a) etag support and (b) etags consistent across rename
> Add an implementation for abfs, later s3a (and google gcs).
> This is initially to handle recovery from certain failures in job commit 
> against abfs, but it would also allow a cloud-ready version of distcp to track 
> etags of uploaded files, so it can diff properly.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17979) Interface EtagSource to allow FileStatus subclasses to provide etags

2021-11-08 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17979?focusedWorklogId=678655=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-678655
 ]

ASF GitHub Bot logged work on HADOOP-17979:
---

Author: ASF GitHub Bot
Created on: 08/Nov/21 18:40
Start Date: 08/Nov/21 18:40
Worklog Time Spent: 10m 
  Work Description: steveloughran opened a new pull request #3633:
URL: https://github.com/apache/hadoop/pull/3633


   
   * Pull out from HADOOP-17981/#3611
   * Add S3A support
   * Add requirement that located status API calls MUST also support the API,
   and do this for ABFS.
   
   The s3a code retains the now-deprecated getETag entries; there are lots of 
   references in the s3guard code which I am leaving alone.
   
   
   ### How was this patch tested?
   
   new integration tests for abfs and s3a
   
   ### For code changes:
   
   - [X] Does the title of this PR start with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 678655)
Remaining Estimate: 0h
Time Spent: 10m

> Interface EtagSource to allow FileStatus subclasses to provide etags
> 
>
> Key: HADOOP-17979
> URL: https://issues.apache.org/jira/browse/HADOOP-17979
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs, fs/azure, fs/s3
>Affects Versions: 3.3.1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Various object stores provide etags in their FileStatus implementations.
> Make these values accessible:
> * new interface {{EtagFromFileStatus}} to be implemented when provided
> * filesystem.md to declare the requirements of etags (constant between LIST and 
> HEAD)...
> * path capabilities for (a) etag support and (b) etags consistent across rename
> Add an implementation for abfs, later s3a (and google gcs).
> This is initially to handle recovery from certain failures in job commit 
> against abfs, but it would also allow a cloud-ready version of distcp to track 
> etags of uploaded files, so it can diff properly.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran opened a new pull request #3633: HADOOP-17979. Add Interface EtagSource to allow FileStatus subclasses to provide etags

2021-11-08 Thread GitBox


steveloughran opened a new pull request #3633:
URL: https://github.com/apache/hadoop/pull/3633


   
   * Pull out from HADOOP-17981/#3611
   * Add S3A support
   * Add requirement that located status API calls MUST also support the API,
   and do this for ABFS.
   
   The s3a code retains the now-deprecated getETag entries; there are lots of 
   references in the s3guard code which I am leaving alone.
   
   
   ### How was this patch tested?
   
   new integration tests for abfs and s3a
   
   ### For code changes:
   
   - [X] Does the title of this PR start with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] goiri merged pull request #3631: HDFS-16307. Improve HdfsBlockPlacementPolicies docs readability

2021-11-08 Thread GitBox


goiri merged pull request #3631:
URL: https://github.com/apache/hadoop/pull/3631


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] minni31 opened a new pull request #3632: YARN-10822. Containers going from New to Scheduled transition for kil…

2021-11-08 Thread GitBox


minni31 opened a new pull request #3632:
URL: https://github.com/apache/hadoop/pull/3632


   …led container on recovery
   
   
   
   ### Description of PR
   
   
   ### How was this patch tested?
   
   
   ### For code changes:
   
   - [ ] Does the title of this PR start with the corresponding JIRA issue id 
(e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the 
endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies 
licensed in a way that is compatible for inclusion under [ASF 
2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, 
`NOTICE-binary` files?
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17981) Support etag-assisted renames in FileOutputCommitter

2021-11-08 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17981?focusedWorklogId=678641=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-678641
 ]

ASF GitHub Bot logged work on HADOOP-17981:
---

Author: ASF GitHub Bot
Created on: 08/Nov/21 17:59
Start Date: 08/Nov/21 17:59
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3611:
URL: https://github.com/apache/hadoop/pull/3611#issuecomment-963418250


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 54s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 7 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  12m 21s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  24m 31s |  |  trunk passed  |
   | -1 :x: |  compile  |   3m 46s | 
[/branch-compile-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3611/7/artifact/out/branch-compile-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt)
 |  root in trunk failed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.  |
   | -1 :x: |  compile  |   3m 11s | 
[/branch-compile-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3611/7/artifact/out/branch-compile-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt)
 |  root in trunk failed with JDK Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.  |
   | +1 :green_heart: |  checkstyle  |   3m 46s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m 12s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   2m  8s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   2m 37s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   5m  8s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  22m 43s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 22s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 28s |  |  the patch passed  |
   | -1 :x: |  compile  |   3m 41s | 
[/patch-compile-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3611/7/artifact/out/patch-compile-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt)
 |  root in the patch failed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.  |
   | -1 :x: |  javac  |   3m 41s | 
[/patch-compile-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3611/7/artifact/out/patch-compile-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt)
 |  root in the patch failed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.  |
   | -1 :x: |  compile  |   3m  5s | 
[/patch-compile-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3611/7/artifact/out/patch-compile-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt)
 |  root in the patch failed with JDK Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.  |
   | -1 :x: |  javac  |   3m  5s | 
[/patch-compile-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3611/7/artifact/out/patch-compile-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt)
 |  root in the patch failed with JDK Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   3m 32s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   2m 52s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 48s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   2m 19s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   5m 42s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  22m 41s |  |  patch has no errors 
when building and testing our 

[GitHub] [hadoop] hadoop-yetus commented on pull request #3611: HADOOP-17981. resilient commit through etag validation

2021-11-08 Thread GitBox


hadoop-yetus commented on pull request #3611:
URL: https://github.com/apache/hadoop/pull/3611#issuecomment-963418250


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 54s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 7 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  12m 21s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  24m 31s |  |  trunk passed  |
   | -1 :x: |  compile  |   3m 46s | 
[/branch-compile-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3611/7/artifact/out/branch-compile-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt)
 |  root in trunk failed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.  |
   | -1 :x: |  compile  |   3m 11s | 
[/branch-compile-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3611/7/artifact/out/branch-compile-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt)
 |  root in trunk failed with JDK Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.  |
   | +1 :green_heart: |  checkstyle  |   3m 46s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m 12s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   2m  8s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   2m 37s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   5m  8s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  22m 43s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 22s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 28s |  |  the patch passed  |
   | -1 :x: |  compile  |   3m 41s | 
[/patch-compile-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3611/7/artifact/out/patch-compile-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt)
 |  root in the patch failed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.  |
   | -1 :x: |  javac  |   3m 41s | 
[/patch-compile-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3611/7/artifact/out/patch-compile-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt)
 |  root in the patch failed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.  |
   | -1 :x: |  compile  |   3m  5s | 
[/patch-compile-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3611/7/artifact/out/patch-compile-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt)
 |  root in the patch failed with JDK Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.  |
   | -1 :x: |  javac  |   3m  5s | 
[/patch-compile-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3611/7/artifact/out/patch-compile-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt)
 |  root in the patch failed with JDK Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   3m 32s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   2m 52s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 48s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   2m 19s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   5m 42s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  22m 41s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  16m 54s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   5m 47s |  |  hadoop-mapreduce-client-core in 
the patch passed.  |
   | +1 :green_heart: |  unit  | 138m 26s |  |  
hadoop-mapreduce-client-jobclient in the patch passed.  |
   | +1 :green_heart: |  unit  |   1m 56s |  |  hadoop-azure in the patch 
passed.  |
   

[GitHub] [hadoop] GuoPhilipse commented on pull request #3631: HDFS-16307. improve docs readability

2021-11-08 Thread GitBox


GuoPhilipse commented on pull request #3631:
URL: https://github.com/apache/hadoop/pull/3631#issuecomment-963407262


   > @GuoPhilipse thanks for the work. Can you link it with a JIRA?
   
   Thanks @goiri, I have just updated it.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] goiri commented on pull request #3631: improve docs readability

2021-11-08 Thread GitBox


goiri commented on pull request #3631:
URL: https://github.com/apache/hadoop/pull/3631#issuecomment-963402778


   @GuoPhilipse thanks for the work.
   Can you link it with a JIRA?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] GuoPhilipse opened a new pull request #3631: improve docs readability

2021-11-08 Thread GitBox


GuoPhilipse opened a new pull request #3631:
URL: https://github.com/apache/hadoop/pull/3631


   improve docs readability


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17996) UserGroupInformation#unprotectedRelogin sets the last login time before logging in

2021-11-08 Thread Prabhu Joseph (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Prabhu Joseph updated HADOOP-17996:
---
Description: 
UserGroupInformation#unprotectedRelogin sets the last login time before logging 
in. IPC#Client calls reloginFromKeytab when there is a connection reset failure 
from AD; the relogin logs out, sets the last login time to now, and then tries 
to log in again. That login also fails because AD cannot be reached, and no 
further reattempt happens because the kerberosMinSecondsBeforeRelogin check 
fails. All Client and Server operations then fail with *GSS initiate failed*
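
To make the ordering problem concrete, here is a minimal, hypothetical Java 
sketch of the flow described above; the class, field and method names are 
simplified illustrations and do not mirror the real UserGroupInformation 
internals or any eventual patch.

{code}
public class ReloginSketch {

  // Time of the last login attempt, compared against by the guard below.
  private long lastLoginTime;
  // Analogue of the kerberosMinSecondsBeforeRelogin setting, in milliseconds.
  private final long minMillisBeforeRelogin = 60_000L;

  /** Ordering described in this issue: the timestamp advances even if login() throws. */
  void reloginTimestampFirst() throws Exception {
    logout();
    lastLoginTime = System.currentTimeMillis(); // set BEFORE the login attempt
    login();                                    // a "Connection reset" here still leaves
                                                // the timestamp advanced, so shouldRelogin()
                                                // rejects the next retry
  }

  /** One possible reordering: record the time only once login() has succeeded. */
  void reloginTimestampAfterSuccess() throws Exception {
    logout();
    login();
    lastLoginTime = System.currentTimeMillis(); // set AFTER a successful login
  }

  /** Analogue of the min-seconds-before-relogin guard that blocks rapid retries. */
  boolean shouldRelogin() {
    return System.currentTimeMillis() - lastLoginTime >= minMillisBeforeRelogin;
  }

  private void login() throws Exception { /* contact the KDC / AD */ }

  private void logout() { /* drop the current credentials */ }
}
{code}

With the second ordering a failed relogin leaves lastLoginTime untouched, so 
the guard does not suppress the retry. The stack trace reported on the issue 
follows.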

{code}
2021-10-31 09:50:53,546 WARN  ha.EditLogTailer - Unable to trigger a roll of 
the active NN
java.util.concurrent.ExecutionException: 
org.apache.hadoop.security.KerberosAuthException:  DestHost:destPort 
namenode0:8020 , LocalHost:localPort namenode1/1.2.3.4:0. Failed on local 
exception: org.apache.hadoop.security.KerberosAuthException: Login failure for 
user: nn/nameno...@example.com javax.security.auth.login.LoginException: 
Connection reset
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:206)
at 
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.triggerActiveLogRoll(EditLogTailer.java:382)
at 
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:441)
at 
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$400(EditLogTailer.java:410)
at 
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:427)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:360)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1712)
at 
org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:480)
at 
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:423)
Caused by: org.apache.hadoop.security.KerberosAuthException:  DestHost:destPort 
namenode0:8020 , LocalHost:localPort namenode1/1.2.3.4:0. Failed on local 
exception: org.apache.hadoop.security.KerberosAuthException: Login failure for 
user: nn/nameno...@example.com javax.security.auth.login.LoginException: 
Connection reset
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:831)
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:806)
at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1501)
at org.apache.hadoop.ipc.Client.call(Client.java:1443)
at org.apache.hadoop.ipc.Client.call(Client.java:1353)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
at com.sun.proxy.$Proxy21.rollEditLog(Unknown Source)
at 
org.apache.hadoop.hdfs.protocolPB.NamenodeProtocolTranslatorPB.rollEditLog(NamenodeProtocolTranslatorPB.java:150)
at 
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$2.doWork(EditLogTailer.java:367)
at 
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$2.doWork(EditLogTailer.java:364)
at 
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$MultipleNameNodeProxy.call(EditLogTailer.java:514)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.hadoop.security.KerberosAuthException: Login failure for 
user: nn/nameno...@example.com javax.security.auth.login.LoginException: 
Connection reset
at 
org.apache.hadoop.security.UserGroupInformation.unprotectedRelogin(UserGroupInformation.java:1193)
at 
org.apache.hadoop.security.UserGroupInformation.relogin(UserGroupInformation.java:1159)
at 
org.apache.hadoop.security.UserGroupInformation.reloginFromKeytab(UserGroupInformation.java:1128)
at 
org.apache.hadoop.security.UserGroupInformation.reloginFromKeytab(UserGroupInformation.java:1110)
at org.apache.hadoop.ipc.Client$Connection$1.run(Client.java:734)
at java.security.AccessController.doPrivileged(Native Method)
at 

[jira] [Created] (HADOOP-17996) UserGroupInformation#unprotectedRelogin sets the last login time before logging in

2021-11-08 Thread Prabhu Joseph (Jira)
Prabhu Joseph created HADOOP-17996:
--

 Summary: UserGroupInformation#unprotectedRelogin sets the last 
login time before logging in
 Key: HADOOP-17996
 URL: https://issues.apache.org/jira/browse/HADOOP-17996
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 3.3.1
Reporter: Prabhu Joseph
Assignee: Prabhu Joseph


UserGroupInformation#unprotectedRelogin sets the last login time before logging 
in. IPC#Client calls reloginFromKeytab when there is a connection reset failure 
from AD; the relogin logs out, sets the last login time to now, and then tries 
to log in again. That login also fails because AD cannot be reached, and no 
further reattempt happens because the kerberosMinSecondsBeforeRelogin check 
fails. All Client and Server operations then fail with "GSS initiate failed".

{code}
2021-10-31 09:50:53,546 WARN  ha.EditLogTailer - Unable to trigger a roll of 
the active NN
java.util.concurrent.ExecutionException: 
org.apache.hadoop.security.KerberosAuthException:  DestHost:destPort 
namenode0:8020 , LocalHost:localPort namenode1/1.2.3.4:0. Failed on local 
exception: org.apache.hadoop.security.KerberosAuthException: Login failure for 
user: nn/nameno...@example.com javax.security.auth.login.LoginException: 
Connection reset
at java.util.concurrent.FutureTask.report(FutureTask.java:122)
at java.util.concurrent.FutureTask.get(FutureTask.java:206)
at 
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer.triggerActiveLogRoll(EditLogTailer.java:382)
at 
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.doWork(EditLogTailer.java:441)
at 
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.access$400(EditLogTailer.java:410)
at 
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread$1.run(EditLogTailer.java:427)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:360)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1712)
at 
org.apache.hadoop.security.SecurityUtil.doAsLoginUserOrFatal(SecurityUtil.java:480)
at 
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$EditLogTailerThread.run(EditLogTailer.java:423)
Caused by: org.apache.hadoop.security.KerberosAuthException:  DestHost:destPort 
namenode0:8020 , LocalHost:localPort namenode1/1.2.3.4:0. Failed on local 
exception: org.apache.hadoop.security.KerberosAuthException: Login failure for 
user: nn/nameno...@example.com javax.security.auth.login.LoginException: 
Connection reset
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:831)
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:806)
at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1501)
at org.apache.hadoop.ipc.Client.call(Client.java:1443)
at org.apache.hadoop.ipc.Client.call(Client.java:1353)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)
at com.sun.proxy.$Proxy21.rollEditLog(Unknown Source)
at 
org.apache.hadoop.hdfs.protocolPB.NamenodeProtocolTranslatorPB.rollEditLog(NamenodeProtocolTranslatorPB.java:150)
at 
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$2.doWork(EditLogTailer.java:367)
at 
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$2.doWork(EditLogTailer.java:364)
at 
org.apache.hadoop.hdfs.server.namenode.ha.EditLogTailer$MultipleNameNodeProxy.call(EditLogTailer.java:514)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.hadoop.security.KerberosAuthException: Login failure for 
user: nn/nameno...@example.com javax.security.auth.login.LoginException: 
Connection reset
at 
org.apache.hadoop.security.UserGroupInformation.unprotectedRelogin(UserGroupInformation.java:1193)
at 
org.apache.hadoop.security.UserGroupInformation.relogin(UserGroupInformation.java:1159)
at 
org.apache.hadoop.security.UserGroupInformation.reloginFromKeytab(UserGroupInformation.java:1128)
   

[GitHub] [hadoop] goiri merged pull request #3625: HDFS-16304. Locate OpenSSL libs for libhdfspp

2021-11-08 Thread GitBox


goiri merged pull request #3625:
URL: https://github.com/apache/hadoop/pull/3625


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] GauthamBanasandra edited a comment on pull request #3625: HDFS-16304. Locate OpenSSL libs for libhdfspp

2021-11-08 Thread GitBox


GauthamBanasandra edited a comment on pull request #3625:
URL: https://github.com/apache/hadoop/pull/3625#issuecomment-963348257


   > Is the build currently breaking for all PRs without this change?
   
   @sodonnel yes, please see the JIRA of this PR for more details.
   
   > We are getting native compile issues on #3579 too. Seems this fix may be 
related.
   
   Yes, this PR fixes it.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] GauthamBanasandra commented on pull request #3625: HDFS-16304. Locate OpenSSL libs for libhdfspp

2021-11-08 Thread GitBox


GauthamBanasandra commented on pull request #3625:
URL: https://github.com/apache/hadoop/pull/3625#issuecomment-963348257


   > Is the build currently breaking for all PRs without this change?
   @sodonnel yes, please see the JIRA of this PR for more details.
   
   > We are getting native compile issues on #3579 too. Seems this fix may be 
related.
   Yes, this PR fixes it.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17990) Failing concurrent FS.initialize commands when fs.azure.createRemoteFileSystemDuringInitialization is enabled on hadoop-azure ABFS

2021-11-08 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17990?focusedWorklogId=678584=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-678584
 ]

ASF GitHub Bot logged work on HADOOP-17990:
---

Author: ASF GitHub Bot
Created on: 08/Nov/21 16:11
Start Date: 08/Nov/21 16:11
Worklog Time Spent: 10m 
  Work Description: majdyz commented on pull request #3620:
URL: https://github.com/apache/hadoop/pull/3620#issuecomment-963312509


   I saw this compilation error coming from Yetus, is this expected? 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 678584)
Time Spent: 3.5h  (was: 3h 20m)

> Failing concurrent FS.initialize commands when 
> fs.azure.createRemoteFileSystemDuringInitialization is enabled on 
> hadoop-azure ABFS
> --
>
> Key: HADOOP-17990
> URL: https://issues.apache.org/jira/browse/HADOOP-17990
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 3.3.1
>Reporter: Zamil Majdy
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3.5h
>  Remaining Estimate: 0h
>
> *Bug description:*
> When {{fs.azure.createRemoteFileSystemDuringInitialization}} is enabled, the 
> filesystem will create a container if it does not already exist inside the 
> {{initialize}} method. The current flow of creating the container will fail 
> in the case of concurrent {{initialize}} methods being executed 
> simultaneously (only one request can create the container, the rest will fail 
> instead of moving on). This happens because the `checkException` method does 
> not catch the Hadoop `FileAlreadyExists` exception.
> Stacktrace:
> {{Caused by: org.apache.hadoop.fs.FileAlreadyExistsException: Operation 
> failed: "The specified filesystem already exists.", 409, PUT, 
> https://.dfs.core.windows.net/project?resource=filesystem, 
> FilesystemAlreadyExists, "The specified filesystem already exists. 
> RequestId: Time:2021-10-18T13:46:05.7504906Z"}}
>  {{ {{at 
> org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem.checkException(AzureBlobFileSystem.java:1182)
>  {{ {{at 
> org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem.createFileSystem(AzureBlobFileSystem.java:1067)
>  {{ {{at 
> org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem.initialize(AzureBlobFileSystem.java:126)
>  {{ {{at 
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2669)
> *To reproduce:*
>  * Set `fs.azure.createRemoteFileSystemDuringInitialization` to `true`
>  * Run two concurrent `initialize` commands with the root to the non existing 
> container/filesystem.
>  
> *Proposed fix:*
> [https://github.com/apache/hadoop/pull/3620]
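
Purely as an illustration of the proposed behaviour (not the code of the 
linked PR), the container-creation step could tolerate the race by treating 
Hadoop's FileAlreadyExistsException as success; the store stub and method 
names below are hypothetical.

{code}
import java.io.IOException;

import org.apache.hadoop.fs.FileAlreadyExistsException;

// Illustrative only: a simplified container-creation step that treats a
// concurrent "filesystem already exists" (HTTP 409) response as success.
// AbfsStoreStub and the method names are hypothetical, not ABFS internals.
class ContainerCreator {

  private final AbfsStoreStub store;

  ContainerCreator(AbfsStoreStub store) {
    this.store = store;
  }

  void createFileSystemIfNeeded() throws IOException {
    try {
      store.createFilesystem();      // PUT ...?resource=filesystem
    } catch (FileAlreadyExistsException e) {
      // A concurrent initialize() already created the container; carry on.
    }
  }

  /** Stand-in for the ABFS store client, used only for this sketch. */
  interface AbfsStoreStub {
    void createFilesystem() throws IOException;
  }
}
{code}

The key point is that a 409 from a racing create means the container now 
exists, which is exactly the state initialize() needs.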



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] majdyz commented on pull request #3620: HADOOP-17990. Fix failing concurrent FS.initialize commands when fs.azure.createRemoteFileSystemDuringInitialization is enabled.

2021-11-08 Thread GitBox


majdyz commented on pull request #3620:
URL: https://github.com/apache/hadoop/pull/3620#issuecomment-963312509


   I saw this compilation error coming from Yetus, is this expected? 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] sodonnel commented on pull request #3625: HDFS-16304. Locate OpenSSL libs for libhdfspp

2021-11-08 Thread GitBox


sodonnel commented on pull request #3625:
URL: https://github.com/apache/hadoop/pull/3625#issuecomment-963309508


   Is the build currently breaking for all PRs without this change? We are 
getting native compile issues on #3579 too. Seems this fix may be related.
   
   The last CI run is giving errors like below on the step 
hadoop-hdfs-native-client:
   
   ```
   [WARNING] Cannot find a usable OpenSSL library. 
OPENSSL_LIBRARY=OPENSSL_LIBRARY-NOTFOUND, OPENSSL_INCLUDE_DIR=/usr/include, 
CUSTOM_OPENSSL_LIB=, CUSTOM_OPENSSL_PREFIX=, CUSTOM_OPENSSL_INCLUDE=
   [WARNING] CMake Error at CMakeLists.txt:146 (message):
   [WARNING]   Terminating build because require.openssl was specified.
   ```
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17990) Failing concurrent FS.initialize commands when fs.azure.createRemoteFileSystemDuringInitialization is enabled on hadoop-azure ABFS

2021-11-08 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17990?focusedWorklogId=678573=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-678573
 ]

ASF GitHub Bot logged work on HADOOP-17990:
---

Author: ASF GitHub Bot
Created on: 08/Nov/21 15:59
Start Date: 08/Nov/21 15:59
Worklog Time Spent: 10m 
  Work Description: majdyz commented on pull request #3620:
URL: https://github.com/apache/hadoop/pull/3620#issuecomment-963299887


   Thanks for the review, I have addressed the comments.
   
   1. The annotation used in AzureBlobFileSystem is 
`org.apache.hadoop.thirdparty.com.google.common.annotations.VisibleForTesting`
   2. I ran the `dev-support/testrun-scripts/runtests.sh` script using my own 
HNS-enabled storage.
   
   Test result output
   
   ```
   AppendBlob-HNS-OAuth
   
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 106, Failures: 0, Errors: 0, Skipped: 26
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 559, Failures: 0, Errors: 0, Skipped: 559
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 257, Failures: 0, Errors: 0, Skipped: 257
   
   HNS-OAuth
   
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 106, Failures: 0, Errors: 0, Skipped: 26
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 559, Failures: 0, Errors: 0, Skipped: 559
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 257, Failures: 0, Errors: 0, Skipped: 257
   
   HNS-SharedKey
   
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 106, Failures: 0, Errors: 0, Skipped: 26
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 559, Failures: 0, Errors: 0, Skipped: 559
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 257, Failures: 0, Errors: 0, Skipped: 257
   
   NonHNS-SharedKey
   
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 106, Failures: 0, Errors: 0, Skipped: 26
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 559, Failures: 0, Errors: 0, Skipped: 559
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 257, Failures: 0, Errors: 0, Skipped: 257
   ```


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 678573)
Time Spent: 3h 20m  (was: 3h 10m)

> Failing concurrent FS.initialize commands when 
> fs.azure.createRemoteFileSystemDuringInitialization is enabled on 
> hadoop-azure ABFS
> --
>
> Key: HADOOP-17990
> URL: https://issues.apache.org/jira/browse/HADOOP-17990
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Affects Versions: 3.3.1
>Reporter: Zamil Majdy
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 20m
>  Remaining Estimate: 0h
>
> *Bug description:*
> When {{fs.azure.createRemoteFileSystemDuringInitialization}} is enabled, the 
> filesystem will create a container if it does not already exist inside the 
> {{initialize}} method. The current flow of creating the container will fail 
> in the case of concurrent {{initialize}} methods being executed 
> simultaneously (only one request can create the container, the rest will fail 
> instead of moving on). This is happen due to the `checkException` method that 
> is not catching the Hadoop `FileAlreadyExists` exception.
> Stacktrace:
> {{Caused by: org.apache.hadoop.fs.FileAlreadyExistsException: Operation 
> failed: "The specified filesystem already exists.", 409, PUT, 
> https://.dfs.core.windows.net/project?resource=filesystem, 
> FilesystemAlreadyExists, "The specified filesystem already exists. 
> RequestId: Time:2021-10-18T13:46:05.7504906Z"}}
>  {{ {{at 
> org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem.checkException(AzureBlobFileSystem.java:1182)
>  {{ {{at 
> org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem.createFileSystem(AzureBlobFileSystem.java:1067)
>  {{ {{at 
> org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem.initialize(AzureBlobFileSystem.java:126)
>  {{ {{at 
> org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2669)
> *To reproduce:*
>  * Set `fs.azure.createRemoteFileSystemDuringInitialization` to `true`
>  * Run two concurrent `initialize` commands with the root to the non existing 
> container/filesystem.
>  
> *Proposed fix:*
> [https://github.com/apache/hadoop/pull/3620]



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[GitHub] [hadoop] majdyz commented on pull request #3620: HADOOP-17990. Fix failing concurrent FS.initialize commands when fs.azure.createRemoteFileSystemDuringInitialization is enabled.

2021-11-08 Thread GitBox


majdyz commented on pull request #3620:
URL: https://github.com/apache/hadoop/pull/3620#issuecomment-963299887


   Thanks for the review, I have addressed the comments.
   
   1. The annotation used in AzureBlobFileSystem is 
`org.apache.hadoop.thirdparty.com.google.common.annotations.VisibleForTesting`
   2. I ran the `dev-support/testrun-scripts/runtests.sh` script using my own 
HNS-enabled storage.
   
   Test result output
   
   ```
   AppendBlob-HNS-OAuth
   
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 106, Failures: 0, Errors: 0, Skipped: 26
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 559, Failures: 0, Errors: 0, Skipped: 559
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 257, Failures: 0, Errors: 0, Skipped: 257
   
   HNS-OAuth
   
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 106, Failures: 0, Errors: 0, Skipped: 26
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 559, Failures: 0, Errors: 0, Skipped: 559
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 257, Failures: 0, Errors: 0, Skipped: 257
   
   HNS-SharedKey
   
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 106, Failures: 0, Errors: 0, Skipped: 26
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 559, Failures: 0, Errors: 0, Skipped: 559
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 257, Failures: 0, Errors: 0, Skipped: 257
   
   NonHNS-SharedKey
   
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 106, Failures: 0, Errors: 0, Skipped: 26
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 559, Failures: 0, Errors: 0, Skipped: 559
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 257, Failures: 0, Errors: 0, Skipped: 257
   ```


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17691) Abfs directory delete times out on large directory tree w/ Oauth: OperationTimedOut

2021-11-08 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17440564#comment-17440564
 ] 

Steve Loughran commented on HADOOP-17691:
-

happens with oauth as it does permissions checks.

rename doesn't have this problem, so moving under trash is a possible recovery 
mechanism.

the MAPREDUCE-7341 manifest committer will work around this problem by
* deleting individual task attempt dirs before the whole job attempt
* supporting rename to trash as the sole/fallback strategy (a rough sketch of 
that fallback follows below)
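
A rough, hypothetical sketch of that fallback using only the public Hadoop 
FileSystem API, with an assumed trash location and the timeout assumed to 
surface as an IOException; the real MAPREDUCE-7341 logic is more involved.

{code}
import java.io.IOException;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Sketch of "delete, and on failure rename under a trash dir" cleanup.
// The trash path and error handling here are simplifying assumptions.
final class CleanupWithTrashFallback {

  static void cleanup(FileSystem fs, Path jobAttemptDir) throws IOException {
    try {
      fs.delete(jobAttemptDir, true);   // recursive delete; may time out
    } catch (IOException deleteFailure) {
      // Renames are cheap on abfs, so park the tree under a trash directory
      // for later asynchronous removal instead of failing the job cleanup.
      Path trashDir = new Path("/.committer-trash", jobAttemptDir.getName());
      fs.mkdirs(trashDir.getParent());
      if (!fs.rename(jobAttemptDir, trashDir)) {
        throw deleteFailure;            // surface the original failure
      }
    }
  }
}
{code}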

> Abfs directory delete times out on large directory tree w/ Oauth: 
> OperationTimedOut
> ---
>
> Key: HADOOP-17691
> URL: https://issues.apache.org/jira/browse/HADOOP-17691
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Priority: Major
>
> Timeouts surfacing on abfs when a delete of a large directory tree is invoked.
> {code}
> StatusDescription=Operation could not be completed within the specified time.
> ErrorCode=OperationTimedOut
> ErrorMessage=Operation could not be completed within the specified time.
> {code}
> This has surfaced in v1 FileOutputCommitter cleanups, implying that the 
> directories created there (many, many dirs, no files remaining after the job 
> commit) are sufficient to create the problem.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17691) Abfs directory delete times out on large directory tree w/ Oauth: OperationTimedOut

2021-11-08 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-17691:

Summary: Abfs directory delete times out on large directory tree w/ Oauth: 
OperationTimedOut  (was: Abfs directory delete times out on large directory 
tree: OperationTimedOut)

> Abfs directory delete times out on large directory tree w/ Oauth: 
> OperationTimedOut
> ---
>
> Key: HADOOP-17691
> URL: https://issues.apache.org/jira/browse/HADOOP-17691
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Priority: Major
>
> Timeouts surfacing on abfs when a delete of a large directory tree is invoked.
> {code}
> StatusDescription=Operation could not be completed within the specified time.
> ErrorCode=OperationTimedOut
> ErrorMessage=Operation could not be completed within the specified time.
> {code}
> This has surfaced in v1 FileOutputCommitter cleanups, implying that the 
> directories created there (many, many dirs, no files remaining after the job 
> commit) are sufficient to create the problem.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17990) Failing concurrent FS.initialize commands when fs.azure.createRemoteFileSystemDuringInitialization is enabled on hadoop-azure ABFS

2021-11-08 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17990?focusedWorklogId=678549=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-678549
 ]

ASF GitHub Bot logged work on HADOOP-17990:
---

Author: ASF GitHub Bot
Created on: 08/Nov/21 15:25
Start Date: 08/Nov/21 15:25
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3620:
URL: https://github.com/apache/hadoop/pull/3620#issuecomment-963269058


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 52s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  35m  8s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 38s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   0m 33s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   0m 24s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 38s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 30s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   1m  0s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  22m 50s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | -1 :x: |  mvninstall  |   0m 26s | 
[/patch-mvninstall-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3620/2/artifact/out/patch-mvninstall-hadoop-tools_hadoop-azure.txt)
 |  hadoop-azure in the patch failed.  |
   | -1 :x: |  compile  |   0m 30s | 
[/patch-compile-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3620/2/artifact/out/patch-compile-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt)
 |  hadoop-azure in the patch failed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.  |
   | -1 :x: |  javac  |   0m 30s | 
[/patch-compile-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3620/2/artifact/out/patch-compile-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt)
 |  hadoop-azure in the patch failed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.  |
   | -1 :x: |  compile  |   0m 26s | 
[/patch-compile-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3620/2/artifact/out/patch-compile-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt)
 |  hadoop-azure in the patch failed with JDK Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.  |
   | -1 :x: |  javac  |   0m 26s | 
[/patch-compile-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3620/2/artifact/out/patch-compile-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt)
 |  hadoop-azure in the patch failed with JDK Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 17s | 
[/results-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3620/2/artifact/out/results-checkstyle-hadoop-tools_hadoop-azure.txt)
 |  hadoop-tools/hadoop-azure: The patch generated 2 new + 2 unchanged - 0 
fixed = 4 total (was 2)  |
   | -1 :x: |  mvnsite  |   0m 27s | 
[/patch-mvnsite-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3620/2/artifact/out/patch-mvnsite-hadoop-tools_hadoop-azure.txt)
 |  hadoop-azure in the patch failed.  |
   | +1 :green_heart: |  javadoc  |   0m 22s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 20s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | -1 :x: |  spotbugs  |   0m 26s | 

[GitHub] [hadoop] hadoop-yetus commented on pull request #3620: HADOOP-17990. Fix failing concurrent FS.initialize commands when fs.azure.createRemoteFileSystemDuringInitialization is enabled.

2021-11-08 Thread GitBox


hadoop-yetus commented on pull request #3620:
URL: https://github.com/apache/hadoop/pull/3620#issuecomment-963269058


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 52s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  35m  8s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 38s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   0m 33s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   0m 24s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 38s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 30s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   1m  0s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  22m 50s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | -1 :x: |  mvninstall  |   0m 26s | 
[/patch-mvninstall-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3620/2/artifact/out/patch-mvninstall-hadoop-tools_hadoop-azure.txt)
 |  hadoop-azure in the patch failed.  |
   | -1 :x: |  compile  |   0m 30s | 
[/patch-compile-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3620/2/artifact/out/patch-compile-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt)
 |  hadoop-azure in the patch failed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.  |
   | -1 :x: |  javac  |   0m 30s | 
[/patch-compile-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3620/2/artifact/out/patch-compile-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt)
 |  hadoop-azure in the patch failed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.  |
   | -1 :x: |  compile  |   0m 26s | 
[/patch-compile-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3620/2/artifact/out/patch-compile-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt)
 |  hadoop-azure in the patch failed with JDK Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.  |
   | -1 :x: |  javac  |   0m 26s | 
[/patch-compile-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3620/2/artifact/out/patch-compile-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt)
 |  hadoop-azure in the patch failed with JDK Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 17s | 
[/results-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3620/2/artifact/out/results-checkstyle-hadoop-tools_hadoop-azure.txt)
 |  hadoop-tools/hadoop-azure: The patch generated 2 new + 2 unchanged - 0 
fixed = 4 total (was 2)  |
   | -1 :x: |  mvnsite  |   0m 27s | 
[/patch-mvnsite-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3620/2/artifact/out/patch-mvnsite-hadoop-tools_hadoop-azure.txt)
 |  hadoop-azure in the patch failed.  |
   | +1 :green_heart: |  javadoc  |   0m 22s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 20s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | -1 :x: |  spotbugs  |   0m 26s | 
[/patch-spotbugs-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3620/2/artifact/out/patch-spotbugs-hadoop-tools_hadoop-azure.txt)
 |  hadoop-azure in the patch failed.  |
   | +1 :green_heart: |  shadedclient  |  24m 44s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  |   0m 29s | 

[jira] [Work logged] (HADOOP-17981) Support etag-assisted renames in FileOutputCommitter

2021-11-08 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17981?focusedWorklogId=678548=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-678548
 ]

ASF GitHub Bot logged work on HADOOP-17981:
---

Author: ASF GitHub Bot
Created on: 08/Nov/21 15:23
Start Date: 08/Nov/21 15:23
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3611:
URL: https://github.com/apache/hadoop/pull/3611#issuecomment-963267230


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m  2s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 7 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  12m 24s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  24m 37s |  |  trunk passed  |
   | -1 :x: |  compile  |   3m 50s | 
[/branch-compile-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3611/6/artifact/out/branch-compile-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt)
 |  root in trunk failed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.  |
   | -1 :x: |  compile  |   3m 12s | 
[/branch-compile-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3611/6/artifact/out/branch-compile-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt)
 |  root in trunk failed with JDK Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.  |
   | +1 :green_heart: |  checkstyle  |   3m 42s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m 13s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   2m  9s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   2m 35s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   5m 12s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  22m 57s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 22s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 31s |  |  the patch passed  |
   | -1 :x: |  compile  |   3m 40s | 
[/patch-compile-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3611/6/artifact/out/patch-compile-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt)
 |  root in the patch failed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.  |
   | -1 :x: |  javac  |   3m 40s | 
[/patch-compile-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3611/6/artifact/out/patch-compile-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt)
 |  root in the patch failed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.  |
   | -1 :x: |  compile  |   3m  7s | 
[/patch-compile-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3611/6/artifact/out/patch-compile-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt)
 |  root in the patch failed with JDK Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.  |
   | -1 :x: |  javac  |   3m  7s | 
[/patch-compile-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3611/6/artifact/out/patch-compile-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt)
 |  root in the patch failed with JDK Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   3m 38s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   2m 55s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 49s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   2m 20s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   5m 39s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  22m 47s |  |  patch has no errors 
when building and testing our 

[GitHub] [hadoop] hadoop-yetus commented on pull request #3611: HADOOP-17981. resilient commit through etag validation

2021-11-08 Thread GitBox


hadoop-yetus commented on pull request #3611:
URL: https://github.com/apache/hadoop/pull/3611#issuecomment-963267230


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m  2s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 7 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  12m 24s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  24m 37s |  |  trunk passed  |
   | -1 :x: |  compile  |   3m 50s | 
[/branch-compile-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3611/6/artifact/out/branch-compile-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt)
 |  root in trunk failed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.  |
   | -1 :x: |  compile  |   3m 12s | 
[/branch-compile-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3611/6/artifact/out/branch-compile-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt)
 |  root in trunk failed with JDK Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.  |
   | +1 :green_heart: |  checkstyle  |   3m 42s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m 13s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   2m  9s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   2m 35s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   5m 12s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  22m 57s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 22s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   2m 31s |  |  the patch passed  |
   | -1 :x: |  compile  |   3m 40s | 
[/patch-compile-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3611/6/artifact/out/patch-compile-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt)
 |  root in the patch failed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.  |
   | -1 :x: |  javac  |   3m 40s | 
[/patch-compile-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3611/6/artifact/out/patch-compile-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt)
 |  root in the patch failed with JDK Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.  |
   | -1 :x: |  compile  |   3m  7s | 
[/patch-compile-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3611/6/artifact/out/patch-compile-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt)
 |  root in the patch failed with JDK Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.  |
   | -1 :x: |  javac  |   3m  7s | 
[/patch-compile-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3611/6/artifact/out/patch-compile-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt)
 |  root in the patch failed with JDK Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   3m 38s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   2m 55s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 49s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   2m 20s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   5m 39s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  22m 47s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  17m  2s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   5m 48s |  |  hadoop-mapreduce-client-core in 
the patch passed.  |
   | +1 :green_heart: |  unit  | 140m 43s |  |  
hadoop-mapreduce-client-jobclient in the patch passed.  |
   | +1 :green_heart: |  unit  |   1m 57s |  |  hadoop-azure in the patch 
passed.  |
   

[GitHub] [hadoop] hadoop-yetus commented on pull request #3629: HDFS-16305.Record the remote NameNode address when the rolling log is triggered.

2021-11-08 Thread GitBox


hadoop-yetus commented on pull request #3629:
URL: https://github.com/apache/hadoop/pull/3629#issuecomment-963132135


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m  8s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  34m 41s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 23s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   1m 17s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   0m 59s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 25s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 56s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 25s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   3m 19s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  25m 33s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 15s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 17s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |   1m 17s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  9s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |   1m  9s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 51s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 16s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 48s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 20s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   3m 23s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  25m  5s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 370m 54s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3629/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 39s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 477m 29s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.web.TestWebHdfsFileSystemContract |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3629/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3629 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux a74a6a57576d 4.15.0-143-generic #147-Ubuntu SMP Wed Apr 14 
16:10:11 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 09d0cea925b2451cdae9e7d1b12d6f0b05ab |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3629/1/testReport/ |
   | Max. process+thread count | 2027 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3629/1/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message 

[jira] [Work logged] (HADOOP-17981) Support etag-assisted renames in FileOutputCommitter

2021-11-08 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17981?focusedWorklogId=678451=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-678451
 ]

ASF GitHub Bot logged work on HADOOP-17981:
---

Author: ASF GitHub Bot
Created on: 08/Nov/21 12:53
Start Date: 08/Nov/21 12:53
Worklog Time Spent: 10m 
  Work Description: steveloughran commented on pull request #3611:
URL: https://github.com/apache/hadoop/pull/3611#issuecomment-963120046


   Thanks, updated the javadocs.
   
   Bear in mind this is not going to be merged in as-is; once the review is in,
   I plan to:
   * isolate the etag changes into their own PR and add S3A support
   * use the resilient commit helper to commit in the manifest committer.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 678451)
Time Spent: 11h 10m  (was: 11h)

> Support etag-assisted renames in FileOutputCommitter
> 
>
> Key: HADOOP-17981
> URL: https://issues.apache.org/jira/browse/HADOOP-17981
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs, fs/azure
>Affects Versions: 3.4.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 11h 10m
>  Remaining Estimate: 0h
>
> To deal with some throttling/retry issues in object stores,
> pass the FileStatus entries retrieved during listing
> into a private interface, ResilientCommitByRename, which filesystems
> may implement to use extra attributes from the listing (etag, version)
> to constrain and validate the operation.
> Although this targets Azure, GCS and others could use it; there is no point
> for S3A, as it shouldn't use this committer.
> # We are not going to make any changes to FileSystem, as there are explicit
> guarantees of public use and stability; I am not going to make that change
> only to have Hive suddenly start expecting it to work forever.
> # I'm not planning to merge this in, as the manifest committer is going to
> include this and more (MAPREDUCE-7341).
> However, I do need to get this in on a branch, so I am doing this work on
> trunk for dev & test and for others to review.
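
As a purely illustrative aside (not the committed API), the interface described
above might take roughly the following shape. The interface name and the idea of
passing listing attributes (etag, version) come from the description; the method
name and signature below are assumptions.

{code}
import java.io.IOException;

import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.Path;

/**
 * Sketch only: a filesystem implementing this can use the etag/version
 * carried in the source FileStatus to validate a commit-by-rename.
 */
public interface ResilientCommitByRename {

  /** Rename source to dest, validating against the listed etag/version. */
  void commitSingleFileByRename(FileStatus sourceStatus, Path dest)
      throws IOException;
}
{code}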



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on pull request #3611: HADOOP-17981. resilient commit through etag validation

2021-11-08 Thread GitBox


steveloughran commented on pull request #3611:
URL: https://github.com/apache/hadoop/pull/3611#issuecomment-963120046


   Thanks, updated the javadocs.
   
   Bear in mind this is not going to be merged in as-is; once the review is in,
   I plan to:
   * isolate the etag changes into their own PR and add S3A support
   * use the resilient commit helper to commit in the manifest committer.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17995) Stale record should be remove when DataNodePeerMetrics#dumpSendPacketDownstreamAvgInfoAsJson

2021-11-08 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HADOOP-17995:

Labels: pull-request-available  (was: )

> Stale record should be remove when 
> DataNodePeerMetrics#dumpSendPacketDownstreamAvgInfoAsJson
> 
>
> Key: HADOOP-17995
> URL: https://issues.apache.org/jira/browse/HADOOP-17995
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Haiyang Hu
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> As with the problem described in
> [HADOOP-16947|https://issues.apache.org/jira/browse/HADOOP-16947],
> stale SumAndCount records should also be removed when
> DataNodePeerMetrics#dumpSendPacketDownstreamAvgInfoAsJson runs,
> to ensure the SendPacketDownstreamAvgInfo metrics exposed via DataNode JMX
> are accurate.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17995) Stale record should be remove when DataNodePeerMetrics#dumpSendPacketDownstreamAvgInfoAsJson

2021-11-08 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17995?focusedWorklogId=678447=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-678447
 ]

ASF GitHub Bot logged work on HADOOP-17995:
---

Author: ASF GitHub Bot
Created on: 08/Nov/21 12:48
Start Date: 08/Nov/21 12:48
Worklog Time Spent: 10m 
  Work Description: haiyang1987 opened a new pull request #3630:
URL: https://github.com/apache/hadoop/pull/3630


   **Description of PR**
   As with the problem described in HADOOP-16947,
   stale SumAndCount records should also be removed when
   DataNodePeerMetrics#dumpSendPacketDownstreamAvgInfoAsJson runs.
   This ensures the SendPacketDownstreamAvgInfo metrics exposed via DataNode
   JMX are accurate.
   
   Details: HADOOP-17995
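
   A generic sketch of the idea follows, with hypothetical class and field names
   (the real DataNodePeerMetrics bookkeeping differs): prune per-peer records
   that have not been updated within the reporting window before serializing the
   averages to JSON.

{code}
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/** Hypothetical sketch: drop stale per-peer records before a JSON dump. */
class PeerAvgDumpSketch {

  static class SumAndCount {
    long sum;
    long count;
    long lastUpdateMs; // assumption: a timestamp is kept per record
  }

  private final Map<String, SumAndCount> peerStats = new ConcurrentHashMap<>();
  private final long windowMs = 5 * 60 * 1000L;

  /** Remove peers whose records are older than the window, then dump. */
  String dumpAvgInfoAsJson() {
    long cutoff = System.currentTimeMillis() - windowMs;
    peerStats.entrySet().removeIf(e -> e.getValue().lastUpdateMs < cutoff);

    StringBuilder sb = new StringBuilder("{");
    peerStats.forEach((peer, sc) -> sb.append('"').append(peer).append("\":")
        .append(sc.count == 0 ? 0.0 : (double) sc.sum / sc.count).append(','));
    if (sb.charAt(sb.length() - 1) == ',') {
      sb.setLength(sb.length() - 1);
    }
    return sb.append('}').toString();
  }
}
{code}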


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 678447)
Remaining Estimate: 0h
Time Spent: 10m

> Stale record should be remove when 
> DataNodePeerMetrics#dumpSendPacketDownstreamAvgInfoAsJson
> 
>
> Key: HADOOP-17995
> URL: https://issues.apache.org/jira/browse/HADOOP-17995
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Haiyang Hu
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> As with the problem described in
> [HADOOP-16947|https://issues.apache.org/jira/browse/HADOOP-16947],
> stale SumAndCount records should also be removed when
> DataNodePeerMetrics#dumpSendPacketDownstreamAvgInfoAsJson runs,
> to ensure the SendPacketDownstreamAvgInfo metrics exposed via DataNode JMX
> are accurate.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] haiyang1987 opened a new pull request #3630: HADOOP-17995. Stale record should be remove when DataNodePeerMetrics#dumpSendPacketDownstreamAvgInfoAsJson

2021-11-08 Thread GitBox


haiyang1987 opened a new pull request #3630:
URL: https://github.com/apache/hadoop/pull/3630


   **Description of PR**
   As with the problem described in HADOOP-16947,
   stale SumAndCount records should also be removed when
   DataNodePeerMetrics#dumpSendPacketDownstreamAvgInfoAsJson runs.
   This ensures the SendPacketDownstreamAvgInfo metrics exposed via DataNode
   JMX are accurate.
   
   Details: HADOOP-17995


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] mariosmeim-db commented on a change in pull request #3440: ABFS: Support for Encryption Context

2021-11-08 Thread GitBox


mariosmeim-db commented on a change in pull request #3440:
URL: https://github.com/apache/hadoop/pull/3440#discussion_r744679833



##
File path: 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azurebfs/AbfsConfiguration.java
##
@@ -906,6 +907,36 @@ public SASTokenProvider getSASTokenProvider() throws 
AzureBlobFileSystemExceptio
 }
   }
 
+  public EncryptionContextProvider initializeEncryptionContextProvider() {

Review comment:
   Since the actual call to `initialize()` takes place in 
https://github.com/apache/hadoop/pull/3440/files#diff-94925ffd3b21968d7e6b476f7e85f68f5ea326f186262017fad61a5a6a3815cbR1630,
 maybe this should be renamed to `createEncryptionContextProvider`.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17880) Build Hadoop on Centos 7

2021-11-08 Thread Gautham Banasandra (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17440425#comment-17440425
 ] 

Gautham Banasandra commented on HADOOP-17880:
-

Thanks for the PR [~baizhendong]. I've merged your PR 3535 to branch-2.10.

> Build Hadoop on Centos 7
> 
>
> Key: HADOOP-17880
> URL: https://issues.apache.org/jira/browse/HADOOP-17880
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.10.2
> Environment: mac os x86_64
>Reporter: baizhendong
>Priority: Major
>  Labels: pull-request-available
> Fix For: 2.10.2
>
>  Time Spent: 13h 50m
>  Remaining Estimate: 0h
>
> Getting Hadoop to build on Centos 7 will greatly benefit the community. Here, 
> we aim to provide a Dockerfile that builds out the image with all the 
> dependencies needed to build Hadoop on Centos 7.
>  



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-17880) Build Hadoop on Centos 7

2021-11-08 Thread Gautham Banasandra (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gautham Banasandra resolved HADOOP-17880.
-
Fix Version/s: 2.10.2
   Resolution: Fixed

> Build Hadoop on Centos 7
> 
>
> Key: HADOOP-17880
> URL: https://issues.apache.org/jira/browse/HADOOP-17880
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.10.2
> Environment: mac os x86_64
>Reporter: baizhendong
>Priority: Major
>  Labels: pull-request-available
> Fix For: 2.10.2
>
>  Time Spent: 13h 50m
>  Remaining Estimate: 0h
>
> Getting Hadoop to build on Centos 7 will greatly benefit the community. Here, 
> we aim to provide a Dockerfile that builds out the image with all the 
> dependencies needed to build Hadoop on Centos 7.
>  



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17880) Build Hadoop on Centos 7

2021-11-08 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17880?focusedWorklogId=678439=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-678439
 ]

ASF GitHub Bot logged work on HADOOP-17880:
---

Author: ASF GitHub Bot
Created on: 08/Nov/21 12:15
Start Date: 08/Nov/21 12:15
Worklog Time Spent: 10m 
  Work Description: GauthamBanasandra merged pull request #3535:
URL: https://github.com/apache/hadoop/pull/3535


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 678439)
Time Spent: 13h 50m  (was: 13h 40m)

> Build Hadoop on Centos 7
> 
>
> Key: HADOOP-17880
> URL: https://issues.apache.org/jira/browse/HADOOP-17880
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.10.2
> Environment: mac os x86_64
>Reporter: baizhendong
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 13h 50m
>  Remaining Estimate: 0h
>
> Getting Hadoop to build on Centos 7 will greatly benefit the community. Here, 
> we aim to provide a Dockerfile that builds out the image with all the 
> dependencies needed to build Hadoop on Centos 7.
>  



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] GauthamBanasandra merged pull request #3535: HADOOP-17880. Build Hadoop on Centos 7

2021-11-08 Thread GitBox


GauthamBanasandra merged pull request #3535:
URL: https://github.com/apache/hadoop/pull/3535


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17995) Stale record should be remove when DataNodePeerMetrics#dumpSendPacketDownstreamAvgInfoAsJson

2021-11-08 Thread Haiyang Hu (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haiyang Hu updated HADOOP-17995:

Description: 
As [HADOOP-16947|https://issues.apache.org/jira/browse/HADOOP-16947] problem 
with description, 
Stale SumAndCount also should be remove when 
DataNodePeerMetrics#dumpSendPacketDownstreamAvgInfoAsJson. 
Ensure the DataNode JMX get SendPacketDownstreamAvgInfo Metrics is accurate

  was:As [HADOOP-16947|https://issues.apache.org/jira/browse/HADOOP-16947] 
problem with description, Stale SumAndCount also should be remove when 
DataNodePeerMetrics#dumpSendPacketDownstreamAvgInfoAsJson. Ensure the DataNode 
JMX get SendPacketDownstreamAvgInfo Metrics is accurate


> Stale record should be remove when 
> DataNodePeerMetrics#dumpSendPacketDownstreamAvgInfoAsJson
> 
>
> Key: HADOOP-17995
> URL: https://issues.apache.org/jira/browse/HADOOP-17995
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Haiyang Hu
>Priority: Major
>
> As [HADOOP-16947|https://issues.apache.org/jira/browse/HADOOP-16947] problem 
> with description, 
> Stale SumAndCount also should be remove when 
> DataNodePeerMetrics#dumpSendPacketDownstreamAvgInfoAsJson. 
> Ensure the DataNode JMX get SendPacketDownstreamAvgInfo Metrics is accurate



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17995) Stale record should be remove when DataNodePeerMetrics#dumpSendPacketDownstreamAvgInfoAsJson

2021-11-08 Thread Haiyang Hu (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haiyang Hu updated HADOOP-17995:

Description: As 
[HADOOP-16947|https://issues.apache.org/jira/browse/HADOOP-16947] problem with 
description, Stale SumAndCount also should be remove when 
DataNodePeerMetrics#dumpSendPacketDownstreamAvgInfoAsJson. Ensure the DataNode 
JMX get SendPacketDownstreamAvgInfo Metrics is accurate  (was: As 
[HADOOP-16947|https://issues.apache.org/jira/browse/HADOOP-16947] problem with 
description, Stale SumAndCount also should be remove when 
DataNodePeerMetrics#dumpSendPacketDownstreamAvgInfoAsJson)

> Stale record should be remove when 
> DataNodePeerMetrics#dumpSendPacketDownstreamAvgInfoAsJson
> 
>
> Key: HADOOP-17995
> URL: https://issues.apache.org/jira/browse/HADOOP-17995
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Haiyang Hu
>Priority: Major
>
> As [HADOOP-16947|https://issues.apache.org/jira/browse/HADOOP-16947] problem 
> with description, Stale SumAndCount also should be remove when 
> DataNodePeerMetrics#dumpSendPacketDownstreamAvgInfoAsJson. Ensure the 
> DataNode JMX get SendPacketDownstreamAvgInfo Metrics is accurate



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17713) Update apache/hadoop:3 docker image to 3.3.1 release

2021-11-08 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17713?focusedWorklogId=678432=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-678432
 ]

ASF GitHub Bot logged work on HADOOP-17713:
---

Author: ASF GitHub Bot
Created on: 08/Nov/21 12:00
Start Date: 08/Nov/21 12:00
Worklog Time Spent: 10m 
  Work Description: adoroszlai commented on pull request #3577:
URL: https://github.com/apache/hadoop/pull/3577#issuecomment-963080996


   Thanks @ayushtkn and @aajisaka for the reviews.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 678432)
Time Spent: 2h  (was: 1h 50m)

> Update apache/hadoop:3 docker image to 3.3.1 release
> 
>
> Key: HADOOP-17713
> URL: https://issues.apache.org/jira/browse/HADOOP-17713
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Wei-Chiu Chuang
>Assignee: Attila Doroszlai
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> After the release passes the vote, update apache/hadoop:3 docker image by 
> pointing it to 3.3.1 release bits.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] adoroszlai commented on pull request #3577: HADOOP-17713. Update apache/hadoop:3 docker image to 3.3.1 release

2021-11-08 Thread GitBox


adoroszlai commented on pull request #3577:
URL: https://github.com/apache/hadoop/pull/3577#issuecomment-963080996


   Thanks @ayushtkn and @aajisaka for the reviews.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17995) Stale record should be remove when DataNodePeerMetrics#dumpSendPacketDownstreamAvgInfoAsJson

2021-11-08 Thread Haiyang Hu (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haiyang Hu updated HADOOP-17995:

Description: As 
[HADOOP-16947|https://issues.apache.org/jira/browse/HADOOP-16947] problem with 
description, Stale SumAndCount also should be remove when 
DataNodePeerMetrics#dumpSendPacketDownstreamAvgInfoAsJson  (was: As 
[HADOOP-16947|https://issues.apache.org/jira/browse/HADOOP-16947]Problem with 
description, Stale SumAndCount also should be remove when 
DataNodePeerMetrics#dumpSendPacketDownstreamAvgInfoAsJson)

> Stale record should be remove when 
> DataNodePeerMetrics#dumpSendPacketDownstreamAvgInfoAsJson
> 
>
> Key: HADOOP-17995
> URL: https://issues.apache.org/jira/browse/HADOOP-17995
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Haiyang Hu
>Priority: Major
>
> As [HADOOP-16947|https://issues.apache.org/jira/browse/HADOOP-16947] problem 
> with description, Stale SumAndCount also should be remove when 
> DataNodePeerMetrics#dumpSendPacketDownstreamAvgInfoAsJson



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17995) Stale record should be remove when DataNodePeerMetrics#dumpSendPacketDownstreamAvgInfoAsJson

2021-11-08 Thread Haiyang Hu (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haiyang Hu updated HADOOP-17995:

Description: As 
[HADOOP-16947|https://issues.apache.org/jira/browse/HADOOP-16947]Problem with 
description, Stale SumAndCount also should be remove when 
DataNodePeerMetrics#dumpSendPacketDownstreamAvgInfoAsJson  (was: As 
https://issues.apache.org/jira/browse/HADOOP-16947, Problem with description, 
Stale SumAndCount also should be remove when 
DataNodePeerMetrics#dumpSendPacketDownstreamAvgInfoAsJson)

> Stale record should be remove when 
> DataNodePeerMetrics#dumpSendPacketDownstreamAvgInfoAsJson
> 
>
> Key: HADOOP-17995
> URL: https://issues.apache.org/jira/browse/HADOOP-17995
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Haiyang Hu
>Priority: Major
>
> As [HADOOP-16947|https://issues.apache.org/jira/browse/HADOOP-16947]Problem 
> with description, Stale SumAndCount also should be remove when 
> DataNodePeerMetrics#dumpSendPacketDownstreamAvgInfoAsJson



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17995) Stale record should be remove when DataNodePeerMetrics#dumpSendPacketDownstreamAvgInfoAsJson

2021-11-08 Thread Haiyang Hu (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haiyang Hu updated HADOOP-17995:

Description: As https://issues.apache.org/jira/browse/HADOOP-16947, Problem 
with description, Stale SumAndCount also should be remove when 
DataNodePeerMetrics#dumpSendPacketDownstreamAvgInfoAsJson  (was: As 
https://issues.apache.org/jira/browse/HADOOP-16947 Problem with description, 
Stale SumAndCount also should be remove when 
DataNodePeerMetrics#dumpSendPacketDownstreamAvgInfoAsJson)

> Stale record should be remove when 
> DataNodePeerMetrics#dumpSendPacketDownstreamAvgInfoAsJson
> 
>
> Key: HADOOP-17995
> URL: https://issues.apache.org/jira/browse/HADOOP-17995
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Haiyang Hu
>Priority: Major
>
> As https://issues.apache.org/jira/browse/HADOOP-16947, Problem with 
> description, Stale SumAndCount also should be remove when 
> DataNodePeerMetrics#dumpSendPacketDownstreamAvgInfoAsJson



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17995) Stale record should be remove when DataNodePeerMetrics#dumpSendPacketDownstreamAvgInfoAsJson

2021-11-08 Thread Haiyang Hu (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haiyang Hu updated HADOOP-17995:

Description: As https://issues.apache.org/jira/browse/HADOOP-16947 Problem 
with description, Stale SumAndCount also should be remove when 
DataNodePeerMetrics#dumpSendPacketDownstreamAvgInfoAsJson

> Stale record should be remove when 
> DataNodePeerMetrics#dumpSendPacketDownstreamAvgInfoAsJson
> 
>
> Key: HADOOP-17995
> URL: https://issues.apache.org/jira/browse/HADOOP-17995
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Haiyang Hu
>Priority: Major
>
> As https://issues.apache.org/jira/browse/HADOOP-16947 Problem with 
> description, Stale SumAndCount also should be remove when 
> DataNodePeerMetrics#dumpSendPacketDownstreamAvgInfoAsJson



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-17995) Stale record should be remove when DataNodePeerMetrics#dumpSendPacketDownstreamAvgInfoAsJson

2021-11-08 Thread Haiyang Hu (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haiyang Hu reassigned HADOOP-17995:
---

 Key: HADOOP-17995  (was: HDFS-16306)
Assignee: (was: Haiyang Hu)
 Project: Hadoop Common  (was: Hadoop HDFS)

> Stale record should be remove when 
> DataNodePeerMetrics#dumpSendPacketDownstreamAvgInfoAsJson
> 
>
> Key: HADOOP-17995
> URL: https://issues.apache.org/jira/browse/HADOOP-17995
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Haiyang Hu
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ferhui commented on a change in pull request #3596: HDFS-16287. Support to make dfs.namenode.avoid.read.slow.datanode reconfigurable

2021-11-08 Thread GitBox


ferhui commented on a change in pull request #3596:
URL: https://github.com/apache/hadoop/pull/3596#discussion_r744641344



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/DatanodeManager.java
##
@@ -260,17 +257,14 @@
 final Timer timer = new Timer();
 this.slowPeerTracker = dataNodePeerStatsEnabled ?
 new SlowPeerTracker(conf, timer) : null;
-this.excludeSlowNodesEnabled = conf.getBoolean(

Review comment:
   It is used in BlockPlacementPolicyDefault




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #3596: HDFS-16287. Support to make dfs.namenode.avoid.read.slow.datanode reconfigurable

2021-11-08 Thread GitBox


hadoop-yetus commented on pull request #3596:
URL: https://github.com/apache/hadoop/pull/3596#issuecomment-963013131


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 54s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  35m 46s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 23s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   1m 11s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   1m  2s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 21s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  4s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 24s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   3m 20s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  25m 19s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 13s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 19s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |   1m 19s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  9s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |   1m  9s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 53s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 14s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 49s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 17s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   3m 24s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  24m 54s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  | 330m 15s |  |  hadoop-hdfs in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 41s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 436m 59s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3596/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3596 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 3388666e1f8e 4.15.0-147-generic #151-Ubuntu SMP Fri Jun 18 
19:21:19 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 87920fce0c2d28bf28fa2d0ff7589b142e0d3a31 |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3596/5/testReport/ |
   | Max. process+thread count | 2491 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3596/5/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Work logged] (HADOOP-15566) Support OpenTelemetry

2021-11-08 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15566?focusedWorklogId=678406=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-678406
 ]

ASF GitHub Bot logged work on HADOOP-15566:
---

Author: ASF GitHub Bot
Created on: 08/Nov/21 10:22
Start Date: 08/Nov/21 10:22
Worklog Time Spent: 10m 
  Work Description: ArkenKiran commented on pull request #3445:
URL: https://github.com/apache/hadoop/pull/3445#issuecomment-963008044


   @ndimiduk Thanks for sharing your approach. I was able to generate the
traces for HBase. I could see the Hadoop components in only a few paths. Please
let me know the next steps for HADOOP-15566.
   Hbase meta  scan block cache miss. 
   
![meta-scan-block-cache-miss](https://user-images.githubusercontent.com/1924534/140724417-6964d94d-3497-4d9c-b215-3fc14d6c69fa.png)
   Hbase meta scan block cache hit
   
![meta-scan-block-cache-hit](https://user-images.githubusercontent.com/1924534/140724491-e050a128-8ecd-4a49-b1e2-50677fc5c200.png)
   hbase put 
   
![hbase-put](https://user-images.githubusercontent.com/1924534/140724661-a7b8647c-d51a-4ac8-9ece-ffd8ca170328.png)
   hbase scan
   
![hbase-scan](https://user-images.githubusercontent.com/1924534/140724868-f02d0aef-7e75-4e27-ab44-f4876a1ab1f1.png)
   hadoop shell ls 
   
![hadoop-shell-ls](https://user-images.githubusercontent.com/1924534/140725036-fadfdc8e-0398-4adb-86c9-892521ea9e14.png)
   hadoop shell mkdir
   
![hadoop-shell-mkdir](https://user-images.githubusercontent.com/1924534/140725120-bbe70088-0311-449d-83b0-85256e0db5a8.png)
   
   
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 678406)
Time Spent: 6.5h  (was: 6h 20m)

> Support OpenTelemetry
> -
>
> Key: HADOOP-15566
> URL: https://issues.apache.org/jira/browse/HADOOP-15566
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: metrics, tracing
>Affects Versions: 3.1.0
>Reporter: Todd Lipcon
>Assignee: Siyao Meng
>Priority: Major
>  Labels: pull-request-available, security
> Attachments: HADOOP-15566-WIP.1.patch, HADOOP-15566.000.WIP.patch, 
> OpenTelemetry Support Scope Doc v2.pdf, OpenTracing Support Scope Doc.pdf, 
> Screen Shot 2018-06-29 at 11.59.16 AM.png, ss-trace-s3a.png
>
>  Time Spent: 6.5h
>  Remaining Estimate: 0h
>
> The HTrace incubator project has voted to retire itself and won't be making 
> further releases. The Hadoop project currently has various hooks with HTrace. 
> It seems in some cases (eg HDFS-13702) these hooks have had measurable 
> performance overhead. Given these two factors, I think we should consider 
> removing the HTrace integration. If there is someone willing to do the work, 
> replacing it with OpenTracing might be a better choice since there is an 
> active community.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ArkenKiran commented on pull request #3445: HADOOP-15566 Opentelemetry changes using java agent

2021-11-08 Thread GitBox


ArkenKiran commented on pull request #3445:
URL: https://github.com/apache/hadoop/pull/3445#issuecomment-963008044


   @ndimiduk Thanks for sharing your approach. I was able to generate the
traces for HBase. I could see the Hadoop components in only a few paths. Please
let me know the next steps for HADOOP-15566.
   Hbase meta  scan block cache miss. 
   
![meta-scan-block-cache-miss](https://user-images.githubusercontent.com/1924534/140724417-6964d94d-3497-4d9c-b215-3fc14d6c69fa.png)
   Hbase meta scan block cache hit
   
![meta-scan-block-cache-hit](https://user-images.githubusercontent.com/1924534/140724491-e050a128-8ecd-4a49-b1e2-50677fc5c200.png)
   hbase put 
   
![hbase-put](https://user-images.githubusercontent.com/1924534/140724661-a7b8647c-d51a-4ac8-9ece-ffd8ca170328.png)
   hbase scan
   
![hbase-scan](https://user-images.githubusercontent.com/1924534/140724868-f02d0aef-7e75-4e27-ab44-f4876a1ab1f1.png)
   hadoop shell ls 
   
![hadoop-shell-ls](https://user-images.githubusercontent.com/1924534/140725036-fadfdc8e-0398-4adb-86c9-892521ea9e14.png)
   hadoop shell mkdir
   
![hadoop-shell-mkdir](https://user-images.githubusercontent.com/1924534/140725120-bbe70088-0311-449d-83b0-85256e0db5a8.png)
   
   
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17991) Improve CustomTokenProviderAdapter to import VisibleForTesting

2021-11-08 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17440339#comment-17440339
 ] 

Steve Loughran commented on HADOOP-17991:
-

I'm chasing up getting a new PR for HADOOP-17183 with the right import.

> Improve CustomTokenProviderAdapter to import VisibleForTesting
> --
>
> Key: HADOOP-17991
> URL: https://issues.apache.org/jira/browse/HADOOP-17991
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: JiangHua Zhu
>Priority: Major
>
> Recently, some new features have been added to CustomTokenProviderAdapter,
> which is of course very good.
> However, they introduce
> 'org.apache.hadoop.thirdparty.com.google.common.annotations.VisibleForTesting'.
> As a result, compiling the Hadoop project with Maven fails the banned-imports
> enforcer check, as expected.
> E.g.:
> org.apache.maven.enforcer.rule.api.EnforcerRuleException: 
> Banned imports detected:
> Reason: Use hadoop-annotation provided VisibleForTesting rather than the one 
> provided by Guava
>   in file: 
> org/apache/hadoop/fs/azurebfs/oauth2/CustomTokenProviderAdapter.java
>   
> org.apache.hadoop.thirdparty.com.google.common.annotations.VisibleForTesting 
> (Line: 25, Matched by: 
> org.apache.hadoop.thirdparty.com.google.common.annotations.VisibleForTesting)
> Analysis took 0 seconds
> at de.skuzzle.enforcer.restrictimports.rule.RestrictImports.execute 
> (RestrictImports.java:70)
> at org.apache.maven.plugins.enforcer.EnforceMojo.execute 
> (EnforceMojo.java:202)
> at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo 
> (DefaultBuildPluginManager.java:137)
> at org.apache.maven.lifecycle.internal.MojoExecutor.execute 
> (MojoExecutor.java:210)
> at org.apache.maven.lifecycle.internal.MojoExecutor.execute 
> (MojoExecutor.java:156)
> at org.apache.maven.lifecycle.internal.MojoExecutor.execute 
> (MojoExecutor.java:148)
> at 
> org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject 
> (LifecycleModuleBuilder.java:117)
> at 
> org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject 
> (LifecycleModuleBuilder.java:81)
> at 
> org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build
>  (SingleThreadedBuilder.java:56)
> at org.apache.maven.lifecycle.internal.LifecycleStarter.execute 
> (LifecycleStarter.java:128)
> at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:305)
> at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:192)
> at org.apache.maven.DefaultMaven.execute (DefaultMaven.java:105)
> at org.apache.maven.cli.MavenCli.execute (MavenCli.java:957)
> at org.apache.maven.cli.MavenCli.doMain (MavenCli.java:289)
> at org.apache.maven.cli.MavenCli.main (MavenCli.java:193)
> at sun.reflect.NativeMethodAccessorImpl.invoke0 (Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke 
> (NativeMethodAccessorImpl.java:62)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke 
> (DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke (Method.java:498)
> at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced 
> (Launcher.java:282)
> at org.codehaus.plexus.classworlds.launcher.Launcher.launch 
> (Launcher.java:225)
> at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode 
> (Launcher.java:406)
> at org.codehaus.plexus.classworlds.launcher.Launcher.main 
> (Launcher.java:347)
> In the end, the build was unsuccessful.
> For context, this happened in a Mac environment.
> Obviously, we should use
> 'org.apache.hadoop.classification.VisibleForTesting' instead.
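
The fix described above amounts to a one-line import change in
CustomTokenProviderAdapter.java, along the lines of:

{code}
// Before: flagged by the banned-imports enforcer rule
// import org.apache.hadoop.thirdparty.com.google.common.annotations.VisibleForTesting;

// After: use the Hadoop-provided annotation
import org.apache.hadoop.classification.VisibleForTesting;
{code}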



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] symious commented on pull request #3625: HDFS-16304. Locate OpenSSL libs for libhdfspp

2021-11-08 Thread GitBox


symious commented on pull request #3625:
URL: https://github.com/apache/hadoop/pull/3625#issuecomment-962989469


   +1. The fix works fine.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17873) ABFS: Fix transient failures in ITestAbfsStreamStatistics and ITestAbfsRestOperationException

2021-11-08 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17873?focusedWorklogId=678398=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-678398
 ]

ASF GitHub Bot logged work on HADOOP-17873:
---

Author: ASF GitHub Bot
Created on: 08/Nov/21 10:01
Start Date: 08/Nov/21 10:01
Worklog Time Spent: 10m 
  Work Description: steveloughran commented on pull request #3341:
URL: https://github.com/apache/hadoop/pull/3341#issuecomment-962989351


   Afraid I had to revert this change, as the build now insists that the
@VisibleForTesting annotation refers to the Hadoop one.
   Can you create a new PR with the updated import?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 678398)
Time Spent: 6h 40m  (was: 6.5h)

> ABFS: Fix transient failures in ITestAbfsStreamStatistics and 
> ITestAbfsRestOperationException
> -
>
> Key: HADOOP-17873
> URL: https://issues.apache.org/jira/browse/HADOOP-17873
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.3.1
>Reporter: Sumangala Patki
>Assignee: Sumangala Patki
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.2
>
>  Time Spent: 6h 40m
>  Remaining Estimate: 0h
>
> To address transient failures in the following test classes:
>  * ITestAbfsStreamStatistics: uses a filesystem-level instance to record
> read/write statistics, which also picks up these operations from other tests
> running in parallel. To be marked for sequential run only, to avoid transient
> failures.
>  * ITestAbfsRestOperationException: the use of a static member to track the
> retry count causes transient failures when two tests of this class happen to
> run together. Switch to a non-static variable for assertions on the retry
> count (a minimal sketch of this change follows below).
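
As a hedged illustration of the second fix (class and method names here are
hypothetical, not the actual test code), keeping the counter per instance stops
parallel test runs from interfering with each other's assertions:

{code}
/** Hypothetical sketch: per-instance retry tracking instead of a shared static. */
class RetryCountSketch {

  // Before (problematic when tests run in parallel):
  // private static int retryCount;

  // After: each test instance tracks its own retries.
  private int retryCount;

  void onRetry() {
    retryCount++;
  }

  int getRetryCount() {
    return retryCount;
  }
}
{code}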



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on pull request #3341: HADOOP-17873. ABFS: Fix transient failures in ITestAbfsStreamStatistics and ITestAbfsRestOperationException

2021-11-08 Thread GitBox


steveloughran commented on pull request #3341:
URL: https://github.com/apache/hadoop/pull/3341#issuecomment-962989351


   Afraid I had to revert this change, as the build now insists that the
@VisibleForTesting annotation refers to the Hadoop one.
   Can you create a new PR with the updated import?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17982) OpensslCipher initialization error should log a WARN message

2021-11-08 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17982?focusedWorklogId=678390=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-678390
 ]

ASF GitHub Bot logged work on HADOOP-17982:
---

Author: ASF GitHub Bot
Created on: 08/Nov/21 09:56
Start Date: 08/Nov/21 09:56
Worklog Time Spent: 10m 
  Work Description: steveloughran commented on pull request #3599:
URL: https://github.com/apache/hadoop/pull/3599#issuecomment-962985062


   thanks for the feedback
   
   +1 from me


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 678390)
Time Spent: 50m  (was: 40m)

> OpensslCipher initialization error should log a WARN message
> 
>
> Key: HADOOP-17982
> URL: https://issues.apache.org/jira/browse/HADOOP-17982
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: kms, security
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Trivial
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> We spent months troubleshooting a RangerKMS performance problem, only to 
> realize that the openssl library wasn't even loaded properly.
> The failure to load openssl lib is currently logged as a debug message during 
> initialization. We really should upgrade it to at least INFO/WARN. 
> {code}
> static {
> String loadingFailure = null;
> try {
>   if (!NativeCodeLoader.buildSupportsOpenssl()) {
> PerformanceAdvisory.LOG.debug("Build does not support openssl");
> loadingFailure = "build does not support openssl.";
>   } else {
> initIDs();
>   }
> } catch (Throwable t) {
>   loadingFailure = t.getMessage();
>   LOG.debug("Failed to load OpenSSL Cipher.", t);
> } finally {
>   loadingFailureReason = loadingFailure;
> }
>   }
> {code}
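
A minimal sketch of the kind of change requested (not necessarily the committed
patch): the same static block as quoted above, with the log calls raised from
debug to warn so a missing OpenSSL library is visible in normal logs.

{code}
static {
  String loadingFailure = null;
  try {
    if (!NativeCodeLoader.buildSupportsOpenssl()) {
      // surfaced at WARN instead of DEBUG
      PerformanceAdvisory.LOG.warn("Build does not support openssl");
      loadingFailure = "build does not support openssl.";
    } else {
      initIDs();
    }
  } catch (Throwable t) {
    loadingFailure = t.getMessage();
    // surfaced at WARN instead of DEBUG
    LOG.warn("Failed to load OpenSSL Cipher.", t);
  } finally {
    loadingFailureReason = loadingFailure;
  }
}
{code}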



--
This message was sent by Atlassian Jira
(v8.20.1#820001)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on pull request #3599: HADOOP-17982. OpensslCipher initialization error should log a WARN message.

2021-11-08 Thread GitBox


steveloughran commented on pull request #3599:
URL: https://github.com/apache/hadoop/pull/3599#issuecomment-962985062


   thanks for the feedback
   
   +1 from me


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop-site] GauthamBanasandra commented on pull request #29: Remove hugo binary

2021-11-08 Thread GitBox


GauthamBanasandra commented on pull request #29:
URL: https://github.com/apache/hadoop-site/pull/29#issuecomment-962944622


   Thanks @aajisaka. I'll keep it in mind next time.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17981) Support etag-assisted renames in FileOutputCommitter

2021-11-08 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17981?focusedWorklogId=678374=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-678374
 ]

ASF GitHub Bot logged work on HADOOP-17981:
---

Author: ASF GitHub Bot
Created on: 08/Nov/21 09:01
Start Date: 08/Nov/21 09:01
Worklog Time Spent: 10m 
  Work Description: mukund-thakur commented on a change in pull request 
#3611:
URL: https://github.com/apache/hadoop/pull/3611#discussion_r744464883



##
File path: 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemResilientCommit.java
##
@@ -0,0 +1,294 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs;
+
+import java.io.FileNotFoundException;
+import java.io.IOException;
+import java.util.Arrays;
+import java.util.Collection;
+
+import org.assertj.core.api.Assertions;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.Parameterized;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.contract.ContractTestUtils;
+import org.apache.hadoop.io.IOUtils;
+import org.apache.hadoop.mapreduce.lib.output.ResilientCommitByRenameHelper;
+
+import static 
org.apache.hadoop.fs.azurebfs.constants.ConfigurationKeys.FS_AZURE_RENAME_RAISES_EXCEPTIONS;
+import static org.apache.hadoop.fs.contract.ContractTestUtils.toAsciiByteArray;
+import static org.apache.hadoop.test.LambdaTestUtils.intercept;
+
+/**
+ * Test the commit helper; parameterized on whether or not the FS
+ * raises exceptions on rename failures.
+ * The outcome must be the same through the commit helper;
+ * exceptions and error messages will be different.
+ */
+@RunWith(Parameterized.class)
+public class ITestAzureBlobFileSystemResilientCommit
+extends AbstractAbfsIntegrationTest {
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(ITestAzureBlobFileSystemResilientCommit.class);
+  private static final byte[] DATA = toAsciiByteArray("hello");
+  private static final byte[] DATA2 = toAsciiByteArray("world");
+
+  private final boolean raiseExceptions;
+
+  /**
+   * error keyword from azure storage when exceptions are being
+   * raised.
+   */
+  public static final String E_NO_SOURCE = "SourcePathNotFound";
+
+  public ITestAzureBlobFileSystemResilientCommit(
+  final boolean raiseExceptions) throws Exception {
+this.raiseExceptions = raiseExceptions;
+  }
+
+  /**
+   * Does FS raise exceptions?
+   * @return test params
+   */
+  @Parameterized.Parameters(name = "raising-{0}")
+  public static Collection getParameters() {
+// -1 is covered in separate test case
+return Arrays.asList(true, false);
+  }
+
+  /**
+   * FS raising exceptions on rename.
+   */
+  private AzureBlobFileSystem targetFS;
+  private Path outputPath;
+  private ResilientCommitByRenameHelper commitHelper;
+  private Path sourcePath;
+  private Path destPath;
+
+  @Override
+  public void setup() throws Exception {
+super.setup();
+final AzureBlobFileSystem currentFs = getFileSystem();
+Configuration conf = new Configuration(this.getRawConfiguration());
+conf.setBoolean(FS_AZURE_RENAME_RAISES_EXCEPTIONS, raiseExceptions);
+
+targetFS = (AzureBlobFileSystem) FileSystem.newInstance(
+currentFs.getUri(),
+conf);
+Assertions.assertThat(
+targetFS.getConf().getBoolean(FS_AZURE_RENAME_RAISES_EXCEPTIONS, 
false))
+.describedAs("FS raises exceptions on rename %s", targetFS)
+.isEqualTo(raiseExceptions);
+outputPath = path(getMethodName());
+sourcePath = new Path(outputPath, "source");
+destPath = new Path(outputPath, "dest");
+targetFS.mkdirs(outputPath);
+
+commitHelper = new ResilientCommitByRenameHelper(
+targetFS,
+outputPath, true);
+
+  }
+
+  @Override
+  public void teardown() throws Exception {
+IOUtils.cleanupWithLogger(LOG, targetFS);
+

[GitHub] [hadoop] mukund-thakur commented on a change in pull request #3611: HADOOP-17981. resilient commit through etag validation

2021-11-08 Thread GitBox


mukund-thakur commented on a change in pull request #3611:
URL: https://github.com/apache/hadoop/pull/3611#discussion_r744464883



##
File path: 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/ITestAzureBlobFileSystemResilientCommit.java
##
@@ -0,0 +1,294 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs;
+
+import java.io.FileNotFoundException;
+import java.io.IOException;
+import java.util.Arrays;
+import java.util.Collection;
+
+import org.assertj.core.api.Assertions;
+import org.junit.Test;
+import org.junit.runner.RunWith;
+import org.junit.runners.Parameterized;
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.fs.contract.ContractTestUtils;
+import org.apache.hadoop.io.IOUtils;
+import org.apache.hadoop.mapreduce.lib.output.ResilientCommitByRenameHelper;
+
+import static 
org.apache.hadoop.fs.azurebfs.constants.ConfigurationKeys.FS_AZURE_RENAME_RAISES_EXCEPTIONS;
+import static org.apache.hadoop.fs.contract.ContractTestUtils.toAsciiByteArray;
+import static org.apache.hadoop.test.LambdaTestUtils.intercept;
+
+/**
+ * Test the commit helper; parameterized on whether or not the FS
+ * raises exceptions on rename failures.
+ * The outcome must be the same through the commit helper;
+ * exceptions and error messages will be different.
+ */
+@RunWith(Parameterized.class)
+public class ITestAzureBlobFileSystemResilientCommit
+extends AbstractAbfsIntegrationTest {
+
+  private static final Logger LOG =
+  LoggerFactory.getLogger(ITestAzureBlobFileSystemResilientCommit.class);
+  private static final byte[] DATA = toAsciiByteArray("hello");
+  private static final byte[] DATA2 = toAsciiByteArray("world");
+
+  private final boolean raiseExceptions;
+
+  /**
+   * error keyword from azure storage when exceptions are being
+   * raised.
+   */
+  public static final String E_NO_SOURCE = "SourcePathNotFound";
+
+  public ITestAzureBlobFileSystemResilientCommit(
+  final boolean raiseExceptions) throws Exception {
+this.raiseExceptions = raiseExceptions;
+  }
+
+  /**
+   * Does FS raise exceptions?
+   * @return test params
+   */
+  @Parameterized.Parameters(name = "raising-{0}")
+  public static Collection getParameters() {
+// -1 is covered in separate test case
+return Arrays.asList(true, false);
+  }
+
+  /**
+   * FS raising exceptions on rename.
+   */
+  private AzureBlobFileSystem targetFS;
+  private Path outputPath;
+  private ResilientCommitByRenameHelper commitHelper;
+  private Path sourcePath;
+  private Path destPath;
+
+  @Override
+  public void setup() throws Exception {
+super.setup();
+final AzureBlobFileSystem currentFs = getFileSystem();
+Configuration conf = new Configuration(this.getRawConfiguration());
+conf.setBoolean(FS_AZURE_RENAME_RAISES_EXCEPTIONS, raiseExceptions);
+
+targetFS = (AzureBlobFileSystem) FileSystem.newInstance(
+currentFs.getUri(),
+conf);
+Assertions.assertThat(
+targetFS.getConf().getBoolean(FS_AZURE_RENAME_RAISES_EXCEPTIONS, 
false))
+.describedAs("FS raises exceptions on rename %s", targetFS)
+.isEqualTo(raiseExceptions);
+outputPath = path(getMethodName());
+sourcePath = new Path(outputPath, "source");
+destPath = new Path(outputPath, "dest");
+targetFS.mkdirs(outputPath);
+
+commitHelper = new ResilientCommitByRenameHelper(
+targetFS,
+outputPath, true);
+
+  }
+
+  @Override
+  public void teardown() throws Exception {
+IOUtils.cleanupWithLogger(LOG, targetFS);
+super.teardown();
+  }
+
+  /**
+   * Create a file; return the status.
+   * @param path file path
+   * @param data text of file
+   * @return the status
+   * @throws IOException creation failure
+   */
+  FileStatus file(Path path, byte[] data) throws IOException {
+ContractTestUtils.createFile(targetFS, path, true,
+data);
+return targetFS.getFileStatus(path);
+  }
+
+  /**
+   * make sure the filesystem 
