[jira] [Updated] (HADOOP-17825) Add BuiltInGzipCompressor

2021-07-30 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HADOOP-17825:

Labels: pull-request-available  (was: )

> Add BuiltInGzipCompressor
> -------------------------
>
> Key: HADOOP-17825
> URL: https://issues.apache.org/jira/browse/HADOOP-17825
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: L. C. Hsieh
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Currently, GzipCodec only supports BuiltInGzipDecompressor when native zlib is 
> not loaded. So, without the Hadoop native codec installed, saving a SequenceFile 
> with GzipCodec throws an exception like "SequenceFile doesn't work with 
> GzipCodec without native-hadoop code!"
> As with the other codecs we migrated to prepared packages (lz4, snappy), it 
> would be better to support GzipCodec generally without the Hadoop native codec 
> installed. Similar to BuiltInGzipDecompressor, we can use the Java Deflater to 
> implement a BuiltInGzipCompressor.
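For illustration, here is a minimal, self-contained sketch (class and method names below are invented; this is not the actual HADOOP-17825 patch) of what a pure-Java gzip compressor has to do: write the fixed gzip header, run a raw ("nowrap") java.util.zip.Deflater over the data, and append the CRC32/ISIZE trailer, so no native zlib is required.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.charset.StandardCharsets;
import java.util.zip.CRC32;
import java.util.zip.Deflater;

/** Illustrative sketch only: gzip output built purely with the JDK Deflater. */
public class PureJavaGzip {

  // 10-byte gzip header: magic 0x1f 0x8b, CM=deflate, no flags, mtime=0, XFL=0, OS=unknown.
  private static final byte[] GZIP_HEADER = {
      (byte) 0x1f, (byte) 0x8b, 8, 0, 0, 0, 0, 0, 0, (byte) 0xff};

  public static byte[] compress(byte[] input) throws IOException {
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    out.write(GZIP_HEADER);

    // nowrap=true produces a raw deflate stream, as the gzip format requires.
    Deflater deflater = new Deflater(Deflater.DEFAULT_COMPRESSION, true);
    deflater.setInput(input);
    deflater.finish();
    byte[] buf = new byte[8192];
    while (!deflater.finished()) {
      int n = deflater.deflate(buf);
      out.write(buf, 0, n);
    }
    deflater.end();

    // 8-byte gzip trailer: CRC32 of the uncompressed data, then its length, little-endian.
    CRC32 crc = new CRC32();
    crc.update(input, 0, input.length);
    ByteBuffer trailer = ByteBuffer.allocate(8).order(ByteOrder.LITTLE_ENDIAN);
    trailer.putInt((int) crc.getValue());
    trailer.putInt(input.length);
    out.write(trailer.array());
    return out.toByteArray();
  }

  public static void main(String[] args) throws IOException {
    byte[] gz = compress("hello gzip".getBytes(StandardCharsets.UTF_8));
    System.out.println("compressed to " + gz.length + " bytes");
  }
}
```

The output of compress() is a complete gzip member, readable by gunzip or java.util.zip.GZIPInputStream, which is essentially the behaviour a BuiltInGzipCompressor needs to provide behind the Compressor interface.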



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17825) Add BuiltInGzipCompressor

2021-07-30 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17825?focusedWorklogId=631938&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-631938
 ]

ASF GitHub Bot logged work on HADOOP-17825:
---

Author: ASF GitHub Bot
Created on: 31/Jul/21 04:35
Start Date: 31/Jul/21 04:35
Worklog Time Spent: 10m 
  Work Description: viirya opened a new pull request #3250:
URL: https://github.com/apache/hadoop/pull/3250


   Currently, GzipCodec only supports BuiltInGzipDecompressor when native zlib 
is not loaded. So, without the Hadoop native codec installed, saving a 
SequenceFile with GzipCodec throws an exception like "SequenceFile doesn't work 
with GzipCodec without native-hadoop code!"
   
   As with the other codecs we migrated to prepared packages (lz4, snappy), it 
would be better to support GzipCodec generally without the Hadoop native codec 
installed. Similar to BuiltInGzipDecompressor, we can use the Java Deflater to 
implement a BuiltInGzipCompressor.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 631938)
Remaining Estimate: 0h
Time Spent: 10m

> Add BuiltInGzipCompressor
> -------------------------
>
> Key: HADOOP-17825
> URL: https://issues.apache.org/jira/browse/HADOOP-17825
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: L. C. Hsieh
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Currently, GzipCodec only supports BuiltInGzipDecompressor when native zlib is 
> not loaded. So, without the Hadoop native codec installed, saving a SequenceFile 
> with GzipCodec throws an exception like "SequenceFile doesn't work with 
> GzipCodec without native-hadoop code!"
> As with the other codecs we migrated to prepared packages (lz4, snappy), it 
> would be better to support GzipCodec generally without the Hadoop native codec 
> installed. Similar to BuiltInGzipDecompressor, we can use the Java Deflater to 
> implement a BuiltInGzipCompressor.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] viirya opened a new pull request #3250: HADOOP-17825. Add BuiltInGzipCompressor

2021-07-30 Thread GitBox


viirya opened a new pull request #3250:
URL: https://github.com/apache/hadoop/pull/3250


   Currently, GzipCodec only supports BuiltInGzipDecompressor when native zlib 
is not loaded. So, without the Hadoop native codec installed, saving a 
SequenceFile with GzipCodec throws an exception like "SequenceFile doesn't work 
with GzipCodec without native-hadoop code!"
   
   As with the other codecs we migrated to prepared packages (lz4, snappy), it 
would be better to support GzipCodec generally without the Hadoop native codec 
installed. Similar to BuiltInGzipDecompressor, we can use the Java Deflater to 
implement a BuiltInGzipCompressor.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17612) Upgrade Zookeeper to 3.6.3 and Curator to 5.2.0

2021-07-30 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17612?focusedWorklogId=631937&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-631937
 ]

ASF GitHub Bot logged work on HADOOP-17612:
---

Author: ASF GitHub Bot
Created on: 31/Jul/21 04:30
Start Date: 31/Jul/21 04:30
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3241:
URL: https://github.com/apache/hadoop/pull/3241#issuecomment-890289286


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |  14m 21s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  shelldocs  |   0m  0s |  |  Shelldocs was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  12m 54s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  24m 49s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  27m 11s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | -1 :x: |  compile  |  20m 49s | 
[/branch-compile-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3241/6/artifact/out/branch-compile-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt)
 |  root in trunk failed with JDK Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.  |
   | -0 :warning: |  checkstyle  |   0m 41s | 
[/buildtool-branch-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3241/6/artifact/out/buildtool-branch-checkstyle-root.txt)
 |  The patch fails to run checkstyle in root  |
   | +1 :green_heart: |  mvnsite  |  29m 29s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   9m 24s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   7m 58s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +0 :ok: |  spotbugs  |   0m 21s |  |  branch/hadoop-project no spotbugs 
output file (spotbugsXml.xml)  |
   | +1 :green_heart: |  shadedclient  |  45m 28s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 37s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |  25m 34s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m 39s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | -1 :x: |  javac  |  20m 39s | 
[/results-compile-javac-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3241/6/artifact/out/results-compile-javac-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt)
 |  root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 generated 10 new + 1921 unchanged - 0 
fixed = 1931 total (was 1921)  |
   | +1 :green_heart: |  compile  |  18m 29s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | -1 :x: |  javac  |  18m 29s | 
[/results-compile-javac-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3241/6/artifact/out/results-compile-javac-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt)
 |  root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 generated 196 new + 1611 
unchanged - 0 fixed = 1807 total (was 1611)  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   3m 38s | 
[/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3241/6/artifact/out/results-checkstyle-root.txt)
 |  root: The patch generated 357 new + 0 unchanged - 0 fixed = 357 total (was 
0)  |
   | +1 :green_heart: |  mvnsite  |  20m 32s |  |  the patch passed  |
   | +1 :green_heart: |  shellcheck  |   0m  1s |  |  No new issues.  |
   | +1 :green_heart: |  xml  |   0m 13s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  javadoc  |   7m 30s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   7m 41s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +0 :ok: |  spotbugs  |   0m 21s |  |  

[GitHub] [hadoop] hadoop-yetus commented on pull request #3241: HADOOP-17612. Upgrade Zookeeper to 3.6.3 and Curator to 5.2.0

2021-07-30 Thread GitBox


hadoop-yetus commented on pull request #3241:
URL: https://github.com/apache/hadoop/pull/3241#issuecomment-890289286


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |  14m 21s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  shelldocs  |   0m  0s |  |  Shelldocs was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  12m 54s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  24m 49s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  27m 11s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | -1 :x: |  compile  |  20m 49s | 
[/branch-compile-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3241/6/artifact/out/branch-compile-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt)
 |  root in trunk failed with JDK Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.  |
   | -0 :warning: |  checkstyle  |   0m 41s | 
[/buildtool-branch-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3241/6/artifact/out/buildtool-branch-checkstyle-root.txt)
 |  The patch fails to run checkstyle in root  |
   | +1 :green_heart: |  mvnsite  |  29m 29s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   9m 24s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   7m 58s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +0 :ok: |  spotbugs  |   0m 21s |  |  branch/hadoop-project no spotbugs 
output file (spotbugsXml.xml)  |
   | +1 :green_heart: |  shadedclient  |  45m 28s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 37s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |  25m 34s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m 39s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | -1 :x: |  javac  |  20m 39s | 
[/results-compile-javac-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3241/6/artifact/out/results-compile-javac-root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04.txt)
 |  root-jdkUbuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 generated 10 new + 1921 unchanged - 0 
fixed = 1931 total (was 1921)  |
   | +1 :green_heart: |  compile  |  18m 29s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | -1 :x: |  javac  |  18m 29s | 
[/results-compile-javac-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3241/6/artifact/out/results-compile-javac-root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10.txt)
 |  root-jdkPrivateBuild-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 generated 196 new + 1611 
unchanged - 0 fixed = 1807 total (was 1611)  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   3m 38s | 
[/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3241/6/artifact/out/results-checkstyle-root.txt)
 |  root: The patch generated 357 new + 0 unchanged - 0 fixed = 357 total (was 
0)  |
   | +1 :green_heart: |  mvnsite  |  20m 32s |  |  the patch passed  |
   | +1 :green_heart: |  shellcheck  |   0m  1s |  |  No new issues.  |
   | +1 :green_heart: |  xml  |   0m 13s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  javadoc  |   7m 30s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   7m 41s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +0 :ok: |  spotbugs  |   0m 21s |  |  hadoop-project has no data from 
spotbugs  |
   | +1 :green_heart: |  shadedclient  |  45m 37s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 770m 28s | 
[/patch-unit-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3241/6/artifact/out/patch-unit-root.txt)
 |  root in the patch passed.  |
   | +1 :green_heart: |  asflicense  

[jira] [Commented] (HADOOP-17825) Add BuiltInGzipCompressor

2021-07-30 Thread L. C. Hsieh (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17390845#comment-17390845
 ] 

L. C. Hsieh commented on HADOOP-17825:
--

The patch is ready locally. Waiting for internal approval; I will submit it later.

> Add BuiltInGzipCompressor
> -------------------------
>
> Key: HADOOP-17825
> URL: https://issues.apache.org/jira/browse/HADOOP-17825
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: L. C. Hsieh
>Priority: Major
>
> Currently, GzipCodec only supports BuiltInGzipDecompressor when native zlib is 
> not loaded. So, without the Hadoop native codec installed, saving a SequenceFile 
> with GzipCodec throws an exception like "SequenceFile doesn't work with 
> GzipCodec without native-hadoop code!"
> As with the other codecs we migrated to prepared packages (lz4, snappy), it 
> would be better to support GzipCodec generally without the Hadoop native codec 
> installed. Similar to BuiltInGzipDecompressor, we can use the Java Deflater to 
> implement a BuiltInGzipCompressor.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17825) Add BuiltInGzipCompressor

2021-07-30 Thread L. C. Hsieh (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17825?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17390846#comment-17390846
 ] 

L. C. Hsieh commented on HADOOP-17825:
--

cc [~sunchao],[~dbtsai].

> Add BuiltInGzipCompressor
> -------------------------
>
> Key: HADOOP-17825
> URL: https://issues.apache.org/jira/browse/HADOOP-17825
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: L. C. Hsieh
>Priority: Major
>
> Currently, GzipCodec only supports BuiltInGzipDecompressor when native zlib is 
> not loaded. So, without the Hadoop native codec installed, saving a SequenceFile 
> with GzipCodec throws an exception like "SequenceFile doesn't work with 
> GzipCodec without native-hadoop code!"
> As with the other codecs we migrated to prepared packages (lz4, snappy), it 
> would be better to support GzipCodec generally without the Hadoop native codec 
> installed. Similar to BuiltInGzipDecompressor, we can use the Java Deflater to 
> implement a BuiltInGzipCompressor.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-17825) Add BuiltInGzipCompressor

2021-07-30 Thread L. C. Hsieh (Jira)
L. C. Hsieh created HADOOP-17825:


 Summary: Add BuiltInGzipCompressor
 Key: HADOOP-17825
 URL: https://issues.apache.org/jira/browse/HADOOP-17825
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: L. C. Hsieh


Currently, GzipCodec only supports BuiltInGzipDecompressor when native zlib is 
not loaded. So, without the Hadoop native codec installed, saving a SequenceFile 
with GzipCodec throws an exception like "SequenceFile doesn't work with 
GzipCodec without native-hadoop code!"

As with the other codecs we migrated to prepared packages (lz4, snappy), it 
would be better to support GzipCodec generally without the Hadoop native codec 
installed. Similar to BuiltInGzipDecompressor, we can use the Java Deflater to 
implement a BuiltInGzipCompressor.





--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17628) Distcp contract test is really slow with ABFS and S3A; timing out

2021-07-30 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17628?focusedWorklogId=631910&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-631910
 ]

ASF GitHub Bot logged work on HADOOP-17628:
---

Author: ASF GitHub Bot
Created on: 30/Jul/21 23:05
Start Date: 30/Jul/21 23:05
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3240:
URL: https://github.com/apache/hadoop/pull/3240#issuecomment-890243250


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 56s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 7 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  12m 34s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  23m 40s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  27m 37s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |  24m 10s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   4m 57s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   4m 40s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   3m 22s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   4m  4s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   6m 51s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  19m 37s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 27s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m  4s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  28m 41s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |  28m 41s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  19m 29s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |  19m 29s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  1s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   3m 34s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   4m 19s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   3m 19s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   3m 59s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   6m 49s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  14m 58s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  17m  9s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |  20m 16s |  |  hadoop-distcp in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   2m 23s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  unit  |   2m 17s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   1m  1s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 268m 47s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3240/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3240 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux e83c31a8977e 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 6e0e4d90e9bc3864c73df95376f3af5056eb044b |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | 

[GitHub] [hadoop] hadoop-yetus commented on pull request #3240: HADOOP-17628. Distcp contract test is really slow with ABFS and S3A; timing out.

2021-07-30 Thread GitBox


hadoop-yetus commented on pull request #3240:
URL: https://github.com/apache/hadoop/pull/3240#issuecomment-890243250


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 56s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 7 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  12m 34s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  23m 40s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  27m 37s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |  24m 10s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   4m 57s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   4m 40s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   3m 22s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   4m  4s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   6m 51s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  19m 37s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 27s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   3m  4s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  28m 41s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |  28m 41s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  19m 29s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |  19m 29s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  1s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   3m 34s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   4m 19s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   3m 19s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   3m 59s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   6m 49s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  14m 58s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  17m  9s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |  20m 16s |  |  hadoop-distcp in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   2m 23s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  unit  |   2m 17s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   1m  1s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 268m 47s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3240/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3240 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux e83c31a8977e 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 6e0e4d90e9bc3864c73df95376f3af5056eb044b |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3240/5/testReport/ |
   | Max. process+thread count | 1559 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common 
hadoop-tools/hadoop-distcp hadoop-tools/hadoop-aws hadoop-tools/hadoop-azure U: 
. |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3240/5/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT 

[jira] [Work logged] (HADOOP-17822) fs.s3a.acl.default not working after S3A Audit feature added

2021-07-30 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17822?focusedWorklogId=631898&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-631898
 ]

ASF GitHub Bot logged work on HADOOP-17822:
---

Author: ASF GitHub Bot
Created on: 30/Jul/21 21:58
Start Date: 30/Jul/21 21:58
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3249:
URL: https://github.com/apache/hadoop/pull/3249#issuecomment-890177678


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |   1m  6s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 4 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  12m 26s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  22m 39s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  22m 57s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |  19m 19s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   3m 47s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 22s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 33s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   2m 17s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   3m 40s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  17m 13s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 22s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 33s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  22m  1s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |  22m  1s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  19m 24s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |  19m 24s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  1s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   3m 46s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   2m 21s |  |  the patch passed  |
   | +1 :green_heart: |  xml  |   0m  1s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  javadoc  |   1m 31s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   2m 17s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   4m  5s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  17m 10s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  16m 58s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   2m 28s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 48s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 206m 52s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3249/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3249 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient codespell xml spotbugs checkstyle markdownlint |
   | uname | Linux fff2241c118c 4.15.0-128-generic #131-Ubuntu SMP Wed Dec 9 
06:57:35 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 0af71d6570a12521c89e2c8d7359fd2c3cd81532 |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | 

[GitHub] [hadoop] hadoop-yetus commented on pull request #3249: HADOOP-17822. fs.s3a.acl.default not working after S3A Audit feature

2021-07-30 Thread GitBox


hadoop-yetus commented on pull request #3249:
URL: https://github.com/apache/hadoop/pull/3249#issuecomment-890177678


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|:--------|:--------:|:-------:|
   | +0 :ok: |  reexec  |   1m  6s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 4 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  12m 26s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  22m 39s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  22m 57s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |  19m 19s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   3m 47s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 22s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 33s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   2m 17s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   3m 40s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  17m 13s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 22s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 33s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  22m  1s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |  22m  1s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  19m 24s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |  19m 24s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  1s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   3m 46s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   2m 21s |  |  the patch passed  |
   | +1 :green_heart: |  xml  |   0m  1s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  javadoc  |   1m 31s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   2m 17s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   4m  5s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  17m 10s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  16m 58s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   2m 28s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 48s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 206m 52s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3249/6/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3249 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient codespell xml spotbugs checkstyle markdownlint |
   | uname | Linux fff2241c118c 4.15.0-128-generic #131-Ubuntu SMP Wed Dec 9 
06:57:35 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 0af71d6570a12521c89e2c8d7359fd2c3cd81532 |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3249/6/testReport/ |
   | Max. process+thread count | 1952 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws 
U: . |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3249/6/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
  

[GitHub] [hadoop] xkrogen commented on a change in pull request #3235: HDFS-16143. De-flake TestEditLogTailer#testStandbyTriggersLogRollsWhenTailInProgressEdits

2021-07-30 Thread GitBox


xkrogen commented on a change in pull request #3235:
URL: https://github.com/apache/hadoop/pull/3235#discussion_r680158212



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestEditLogTailer.java
##
@@ -452,26 +472,20 @@ public void 
testStandbyTriggersLogRollsWhenTailInProgressEdits()
   private static void waitForStandbyToCatchUpWithInProgressEdits(
   final NameNode standby, final long activeTxId,
   int maxWaitSec) throws Exception {
-GenericTestUtils.waitFor(new Supplier<Boolean>() {
-  @Override
-  public Boolean get() {
-long standbyTxId = standby.getNamesystem().getFSImage()
-.getLastAppliedTxId();
-return (standbyTxId >= activeTxId);
-  }
-}, 100, maxWaitSec * 1000);
+GenericTestUtils.waitFor(() -> {
+  long standbyTxId = standby.getNamesystem().getFSImage()
+  .getLastAppliedTxId();
+  return (standbyTxId >= activeTxId);
+}, 100, TimeUnit.SECONDS.toMillis(maxWaitSec));
   }
 
   private static void checkForLogRoll(final NameNode active,
   final long origTxId, int maxWaitSec) throws Exception {
-GenericTestUtils.waitFor(new Supplier<Boolean>() {
-  @Override
-  public Boolean get() {
-long curSegmentTxId = active.getNamesystem().getFSImage().getEditLog()
-.getCurSegmentTxId();
-return (origTxId != curSegmentTxId);
-  }
-}, 100, maxWaitSec * 1000);
+GenericTestUtils.waitFor(() -> {
+  long curSegmentTxId = active.getNamesystem().getFSImage().getEditLog()
+  .getCurSegmentTxId();
+  return (origTxId != curSegmentTxId);
+}, 500, TimeUnit.SECONDS.toMillis(maxWaitSec));

Review comment:
   why is the check interval increased from 100ms to 500ms? 

##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/EditLogTailer.java
##
@@ -423,21 +423,22 @@ void triggerActiveLogRoll() {
 try {
   future = rollEditsRpcExecutor.submit(getNameNodeProxy());
   future.get(rollEditsTimeoutMs, TimeUnit.MILLISECONDS);
-  lastRollTimeMs = monotonicNow();
+  resetLastRollTimeMs();
   lastRollTriggerTxId = lastLoadedTxnId;
-} catch (ExecutionException e) {
+} catch (ExecutionException | InterruptedException e) {
   LOG.warn("Unable to trigger a roll of the active NN", e);
 } catch (TimeoutException e) {
-  if (future != null) {
-future.cancel(true);
-  }
+  future.cancel(true);
   LOG.warn(String.format(
   "Unable to finish rolling edits in %d ms", rollEditsTimeoutMs));
-} catch (InterruptedException e) {
-  LOG.warn("Unable to trigger a roll of the active NN", e);
 }
   }
 
+  @VisibleForTesting
+  public void resetLastRollTimeMs() {

Review comment:
   package-private instead of public?

##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestEditLogTailer.java
##
@@ -433,15 +440,28 @@ public void 
testStandbyTriggersLogRollsWhenTailInProgressEdits()
 NameNodeAdapter.mkdirs(active, getDirPath(i),
 new PermissionStatus("test", "test",
 new FsPermission((short)00755)), true);
+// reset lastRollTimeMs in EditLogTailer.
+active.getNamesystem().getEditLogTailer().resetLastRollTimeMs();
   }
 
-  boolean exceptionThrown = false;
+  // We should explicitly update lastRollTimeMs in EditLogTailer
+  // so that our timeout test provided just below can take advantage
+  // of validation: (monotonicNow() - lastRollTimeMs) > logRollPeriodMs
+  // provided in EditLogTailer#tooLongSinceLastLoad().
+  active.getNamesystem().getEditLogTailer().resetLastRollTimeMs();

Review comment:
   if you just updated the last roll time on L444 above, why do we need to 
do it again?

##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/EditLogTailer.java
##
@@ -423,21 +423,22 @@ void triggerActiveLogRoll() {
 try {
   future = rollEditsRpcExecutor.submit(getNameNodeProxy());
   future.get(rollEditsTimeoutMs, TimeUnit.MILLISECONDS);
-  lastRollTimeMs = monotonicNow();
+  resetLastRollTimeMs();
   lastRollTriggerTxId = lastLoadedTxnId;
-} catch (ExecutionException e) {
+} catch (ExecutionException | InterruptedException e) {
   LOG.warn("Unable to trigger a roll of the active NN", e);
 } catch (TimeoutException e) {
-  if (future != null) {
-future.cancel(true);
-  }
+  future.cancel(true);

Review comment:
   why is the null-check removed here?




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please 

[jira] [Work started] (HADOOP-17822) fs.s3a.acl.default not working after S3A Audit feature added

2021-07-30 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-17822 started by Steve Loughran.
---
> fs.s3a.acl.default not working after S3A Audit feature added
> 
>
> Key: HADOOP-17822
> URL: https://issues.apache.org/jira/browse/HADOOP-17822
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> After HADOOP-17511 the fs.s3a.acl.default property isn't being passed 
> through to S3 PUT/COPY requests.
> The new RequestFactory is given the ACL values from the S3A FS 
> instance, but the factory is created before the ACL settings are loaded 
> from the configuration.
> Fix, and ideally, add a test (if getXAttr lets us see this now).
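As a rough illustration only (all names below are invented; this is not the S3AFileSystem code), the ordering problem looks like this: a request factory constructed before the configuration is read snapshots an unset ACL value.

```java
// All names here are invented; this only illustrates the initialization-order
// bug described above, where the factory captures the ACL value before it has
// been loaded from the configuration.
class StoreClientSketch {
  private String cannedAcl;              // not yet loaded from config
  private RequestFactorySketch factory;

  void initialize(java.util.Properties conf) {
    // BUG: the factory is built first, so it snapshots cannedAcl == null
    factory = new RequestFactorySketch(cannedAcl);
    // ...and only afterwards is the ACL setting read from the configuration.
    cannedAcl = conf.getProperty("fs.s3a.acl.default");
    // Fix: load cannedAcl before constructing the factory, or have the factory
    // read the current value at request-build time instead of capturing it.
  }
}

class RequestFactorySketch {
  private final String acl;              // frozen at construction time
  RequestFactorySketch(String acl) { this.acl = acl; }
  String aclForPutRequest() { return acl; }  // stays null in the buggy ordering
}
```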



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-17812) NPE in S3AInputStream read() after failure to reconnect to store

2021-07-30 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17812?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran reassigned HADOOP-17812:
---

Assignee: Bobby Wang

> NPE in S3AInputStream read() after failure to reconnect to store
> 
>
> Key: HADOOP-17812
> URL: https://issues.apache.org/jira/browse/HADOOP-17812
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.2.2, 3.3.1
>Reporter: Bobby Wang
>Assignee: Bobby Wang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
> Attachments: failsafe-report.html.gz, s3a-test.tar.gz
>
>  Time Spent: 3h 50m
>  Remaining Estimate: 0h
>
> When [reading from S3a storage|https://github.com/apache/hadoop/blob/rel/release-3.2.0/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java#L450], 
> an SSLException (which extends IOException) can occur, which triggers 
> [onReadFailure|https://github.com/apache/hadoop/blob/rel/release-3.2.0/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java#L458].
> onReadFailure calls "reopen", which first closes the original *wrappedStream*, 
> sets *wrappedStream = null*, and then tries to 
> [re-get *wrappedStream*|https://github.com/apache/hadoop/blob/rel/release-3.2.0/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java#L184].
> However, if the preceding code that [obtains the S3Object|https://github.com/apache/hadoop/blob/rel/release-3.2.0/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java#L183] 
> throws an exception, *wrappedStream* stays null.
> The [retry|https://github.com/apache/hadoop/blob/rel/release-3.2.0/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java#L446] 
> mechanism may then re-execute 
> [wrappedStream.read|https://github.com/apache/hadoop/blob/rel/release-3.2.0/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java#L450] 
> and cause an NPE.
>
> For more details, please refer to 
> [https://github.com/NVIDIA/spark-rapids/issues/2915]
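To make the failure sequence concrete, here is a simplified, hypothetical sketch (invented names and a toy retry policy; not the actual S3AInputStream code) of how a retry wrapper can re-enter the read path while the wrapped stream is still null after a failed reopen.

```java
import java.io.IOException;
import java.io.InputStream;
import java.util.concurrent.Callable;

// Invented names; this only illustrates the failure mode described above:
// after a read failure the stream is closed and nulled before being re-fetched,
// and if the re-fetch throws, the surrounding retry re-enters the read lambda
// and dereferences the null stream, producing the NPE.
class RetryReadSketch {
  private InputStream wrappedStream;
  private final Callable<InputStream> objectStore;   // stands in for getObject()

  RetryReadSketch(Callable<InputStream> objectStore) throws Exception {
    this.objectStore = objectStore;
    this.wrappedStream = objectStore.call();
  }

  int read() throws Exception {
    return retry(() -> {
      try {
        return wrappedStream.read();       // NPE on the retry if reopen() failed
      } catch (IOException e) {
        reopen();                          // may throw, leaving wrappedStream null
        return wrappedStream.read();
      }
    });
  }

  private void reopen() throws Exception {
    if (wrappedStream != null) {
      wrappedStream.close();
    }
    wrappedStream = null;                  // cleared before the re-fetch...
    wrappedStream = objectStore.call();    // ...which can itself throw
  }

  // Minimal stand-in for the retry policy: just attempt the operation twice.
  private int retry(Callable<Integer> op) throws Exception {
    try {
      return op.call();
    } catch (IOException retriable) {
      return op.call();                    // second attempt re-enters the read lambda
    }
  }
}
```

A null check (or re-initialization of the stream) before the retried read is the kind of guard the fix adds.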



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-17812) NPE in S3AInputStream read() after failure to reconnect to store

2021-07-30 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17812?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-17812.
-
Fix Version/s: 3.4.0
   Resolution: Fixed

fixed in 3.4; backport to 3.3.2 planned

> NPE in S3AInputStream read() after failure to reconnect to store
> 
>
> Key: HADOOP-17812
> URL: https://issues.apache.org/jira/browse/HADOOP-17812
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.2.2, 3.3.1
>Reporter: Bobby Wang
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
> Attachments: failsafe-report.html.gz, s3a-test.tar.gz
>
>  Time Spent: 3h 50m
>  Remaining Estimate: 0h
>
> When [reading from S3a storage|https://github.com/apache/hadoop/blob/rel/release-3.2.0/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java#L450], 
> an SSLException (which extends IOException) can occur, which triggers 
> [onReadFailure|https://github.com/apache/hadoop/blob/rel/release-3.2.0/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java#L458].
> onReadFailure calls "reopen", which first closes the original *wrappedStream*, 
> sets *wrappedStream = null*, and then tries to 
> [re-get *wrappedStream*|https://github.com/apache/hadoop/blob/rel/release-3.2.0/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java#L184].
> However, if the preceding code that [obtains the S3Object|https://github.com/apache/hadoop/blob/rel/release-3.2.0/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java#L183] 
> throws an exception, *wrappedStream* stays null.
> The [retry|https://github.com/apache/hadoop/blob/rel/release-3.2.0/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java#L446] 
> mechanism may then re-execute 
> [wrappedStream.read|https://github.com/apache/hadoop/blob/rel/release-3.2.0/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java#L450] 
> and cause an NPE.
>
> For more details, please refer to 
> [https://github.com/NVIDIA/spark-rapids/issues/2915]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17812) NPE in S3AInputStream read() after failure to reconnect to store

2021-07-30 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17812?focusedWorklogId=631844&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-631844
 ]

ASF GitHub Bot logged work on HADOOP-17812:
---

Author: ASF GitHub Bot
Created on: 30/Jul/21 19:06
Start Date: 30/Jul/21 19:06
Worklog Time Spent: 10m 
  Work Description: steveloughran commented on pull request #3222:
URL: https://github.com/apache/hadoop/pull/3222#issuecomment-890095880


   +1, merged to trunk. Thank you for this.
   
   Bobby - can you cherry-pick the test from trunk, apply it to branch-3.3 and 
retest? I just need to know that it works. There'll be no extra review unless 
there are merge problems.
   
   There would be major merge pain going back to 3.2; I'd like to avoid that. 
If you do want to go that way, it would be a separate patch, and we'd skip the 
test changes because that code is so radically different.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 631844)
Time Spent: 3h 50m  (was: 3h 40m)

> NPE in S3AInputStream read() after failure to reconnect to store
> 
>
> Key: HADOOP-17812
> URL: https://issues.apache.org/jira/browse/HADOOP-17812
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.2.2, 3.3.1
>Reporter: Bobby Wang
>Priority: Major
>  Labels: pull-request-available
> Attachments: failsafe-report.html.gz, s3a-test.tar.gz
>
>  Time Spent: 3h 50m
>  Remaining Estimate: 0h
>
> When [reading from S3a storage|https://github.com/apache/hadoop/blob/rel/release-3.2.0/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java#L450], 
> an SSLException (which extends IOException) can occur, which triggers 
> [onReadFailure|https://github.com/apache/hadoop/blob/rel/release-3.2.0/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java#L458].
> onReadFailure calls "reopen", which first closes the original *wrappedStream*, 
> sets *wrappedStream = null*, and then tries to 
> [re-get *wrappedStream*|https://github.com/apache/hadoop/blob/rel/release-3.2.0/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java#L184].
> However, if the preceding code that [obtains the S3Object|https://github.com/apache/hadoop/blob/rel/release-3.2.0/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java#L183] 
> throws an exception, *wrappedStream* stays null.
> The [retry|https://github.com/apache/hadoop/blob/rel/release-3.2.0/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java#L446] 
> mechanism may then re-execute 
> [wrappedStream.read|https://github.com/apache/hadoop/blob/rel/release-3.2.0/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java#L450] 
> and cause an NPE.
>
> For more details, please refer to 
> [https://github.com/NVIDIA/spark-rapids/issues/2915]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on pull request #3222: HADOOP-17812. NPE in S3AInputStream read() after failure to reconnect to store

2021-07-30 Thread GitBox


steveloughran commented on pull request #3222:
URL: https://github.com/apache/hadoop/pull/3222#issuecomment-890095880


   +1, merged to trunk. Thank you for this.
   
   Bobby - can you cherry-pick the test from trunk, apply it to branch-3.3 and 
retest? I just need to know that it works. There'll be no extra review unless 
there are merge problems.
   
   There would be major merge pain going back to 3.2; I'd like to avoid that. 
If you do want to go that way, it would be a separate patch, and we'd skip the 
test changes because that code is so radically different.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17812) NPE in S3AInputStream read() after failure to reconnect to store

2021-07-30 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17812?focusedWorklogId=631842&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-631842
 ]

ASF GitHub Bot logged work on HADOOP-17812:
---

Author: ASF GitHub Bot
Created on: 30/Jul/21 19:04
Start Date: 30/Jul/21 19:04
Worklog Time Spent: 10m 
  Work Description: steveloughran merged pull request #3222:
URL: https://github.com/apache/hadoop/pull/3222


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 631842)
Time Spent: 3h 40m  (was: 3.5h)

> NPE in S3AInputStream read() after failure to reconnect to store
> 
>
> Key: HADOOP-17812
> URL: https://issues.apache.org/jira/browse/HADOOP-17812
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.2.2, 3.3.1
>Reporter: Bobby Wang
>Priority: Major
>  Labels: pull-request-available
> Attachments: failsafe-report.html.gz, s3a-test.tar.gz
>
>  Time Spent: 3h 40m
>  Remaining Estimate: 0h
>
> When [reading from S3a 
> storage|https://github.com/apache/hadoop/blob/rel/release-3.2.0/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java#L450],
>  an SSLException (which extends IOException) can occur, triggering 
> [onReadFailure|https://github.com/apache/hadoop/blob/rel/release-3.2.0/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java#L458].
> onReadFailure calls "reopen", which first closes the original 
> *wrappedStream*, sets *wrappedStream = null*, and then tries to 
> [re-get 
> *wrappedStream*|https://github.com/apache/hadoop/blob/rel/release-3.2.0/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java#L184].
>  But if the preceding code [obtaining the 
> S3Object|https://github.com/apache/hadoop/blob/rel/release-3.2.0/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java#L183]
>  throws an exception, "wrappedStream" is left null.
> The 
> [retry|https://github.com/apache/hadoop/blob/rel/release-3.2.0/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java#L446]
>  mechanism may then re-execute 
> [wrappedStream.read|https://github.com/apache/hadoop/blob/rel/release-3.2.0/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java#L450]
>  on the null stream and cause an NPE.
>  
> For more details, please refer to 
> [https://github.com/NVIDIA/spark-rapids/issues/2915]
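
For illustration, a minimal self-contained sketch of the failure pattern 
described above (class, field and method names are hypothetical; this is not 
the actual S3AInputStream code):

{code}
import java.io.IOException;
import java.io.InputStream;
import javax.net.ssl.SSLException;

/** Hypothetical model of the failure mode; not the real S3AInputStream. */
public class NpeOnRetrySketch {
  private InputStream wrappedStream = failingStream();  // first read fails

  public int readWithRetry() throws IOException {
    IOException last = null;
    for (int attempt = 0; attempt < 2; attempt++) {      // stand-in for the retry policy
      try {
        return readOnce();
      } catch (IOException e) {
        last = e;                                        // retry re-executes readOnce()
      }
    }
    throw last;
  }

  private int readOnce() throws IOException {
    try {
      return wrappedStream.read();                       // 2nd attempt: NPE, field still null
    } catch (IOException e) {
      reopen();                                          // closes + nulls, then fails to re-open
      return wrappedStream.read();
    }
  }

  private void reopen() throws IOException {
    wrappedStream.close();
    wrappedStream = null;                                // stays null because the next line throws
    wrappedStream = reopenConnection();
  }

  private InputStream reopenConnection() throws IOException {
    throw new SSLException("simulated failure to re-get the object");
  }

  private static InputStream failingStream() {
    return new InputStream() {
      @Override
      public int read() throws IOException {
        throw new SSLException("simulated read failure");
      }
    };
  }

  public static void main(String[] args) throws IOException {
    new NpeOnRetrySketch().readWithRetry();              // ends with a NullPointerException
  }
}
{code}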



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17812) NPE in S3AInputStream read() after failure to reconnect to store

2021-07-30 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17812?focusedWorklogId=631841&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-631841
 ]

ASF GitHub Bot logged work on HADOOP-17812:
---

Author: ASF GitHub Bot
Created on: 30/Jul/21 19:03
Start Date: 30/Jul/21 19:03
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus removed a comment on pull request #3222:
URL: https://github.com/apache/hadoop/pull/3222#issuecomment-887930908


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m  0s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  35m 19s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 56s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   0m 35s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   0m 26s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 42s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 22s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 30s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   1m 10s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m 31s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 36s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 40s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |   0m 40s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 31s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 19s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 34s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 15s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   1m 11s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  16m 35s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 35s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 30s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  82m 22s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3222/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3222 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 8fce11371b73 4.15.0-128-generic #131-Ubuntu SMP Wed Dec 9 
06:57:35 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / d7659fbc5e6883579e759c40a5faf6f000326c3a |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3222/4/testReport/ |
   | Max. process+thread count | 521 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3222/4/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.

[GitHub] [hadoop] steveloughran merged pull request #3222: HADOOP-17812. NPE in S3AInputStream read() after failure to reconnect to store

2021-07-30 Thread GitBox


steveloughran merged pull request #3222:
URL: https://github.com/apache/hadoop/pull/3222


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus removed a comment on pull request #3222: HADOOP-17812. NPE in S3AInputStream read() after failure to reconnect to store

2021-07-30 Thread GitBox


hadoop-yetus removed a comment on pull request #3222:
URL: https://github.com/apache/hadoop/pull/3222#issuecomment-887930908


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m  0s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  35m 19s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 56s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   0m 35s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   0m 26s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 42s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 22s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 30s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   1m 10s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m 31s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 36s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 40s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |   0m 40s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 31s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 19s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 34s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 15s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   1m 11s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  16m 35s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 35s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 30s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  82m 22s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3222/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3222 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 8fce11371b73 4.15.0-128-generic #131-Ubuntu SMP Wed Dec 9 
06:57:35 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / d7659fbc5e6883579e759c40a5faf6f000326c3a |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3222/4/testReport/ |
   | Max. process+thread count | 521 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3222/4/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Work logged] (HADOOP-17812) NPE in S3AInputStream read() after failure to reconnect to store

2021-07-30 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17812?focusedWorklogId=631840&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-631840
 ]

ASF GitHub Bot logged work on HADOOP-17812:
---

Author: ASF GitHub Bot
Created on: 30/Jul/21 19:02
Start Date: 30/Jul/21 19:02
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus removed a comment on pull request #3222:
URL: https://github.com/apache/hadoop/pull/3222#issuecomment-886131286


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 36s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  30m 38s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 47s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   0m 40s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   0m 32s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 47s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 28s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 35s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   1m 10s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  14m 13s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 37s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 37s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |   0m 37s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 31s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 20s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 37s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 17s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   1m 11s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  13m 58s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 20s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 36s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  72m 53s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3222/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3222 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 7709b361eeea 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 68755aa092e84a2b8994c9d5e1f35a9c76d7e614 |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3222/3/testReport/ |
   | Max. process+thread count | 641 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3222/3/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.

[GitHub] [hadoop] hadoop-yetus removed a comment on pull request #3222: HADOOP-17812. NPE in S3AInputStream read() after failure to reconnect to store

2021-07-30 Thread GitBox


hadoop-yetus removed a comment on pull request #3222:
URL: https://github.com/apache/hadoop/pull/3222#issuecomment-886131286


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 36s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  30m 38s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 47s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   0m 40s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   0m 32s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 47s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 28s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 35s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   1m 10s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  14m 13s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 37s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 37s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |   0m 37s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 31s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 20s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 37s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 17s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   1m 11s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  13m 58s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 20s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 36s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  72m 53s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3222/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3222 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 7709b361eeea 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 68755aa092e84a2b8994c9d5e1f35a9c76d7e614 |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3222/3/testReport/ |
   | Max. process+thread count | 641 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3222/3/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Commented] (HADOOP-17812) NPE in S3AInputStream read() after failure to reconnect to store

2021-07-30 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17812?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17390751#comment-17390751
 ] 

Steve Loughran commented on HADOOP-17812:
-

Thanks. The distcp test has a fix in progress.

The NPE is new; we should just disable that test against a custom endpoint: 
HADOOP-17824.
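
One possible way to do that (a hedged sketch only, not the actual HADOOP-17824 
change; it assumes the test can hand its Hadoop Configuration to the helper):

{code}
import org.apache.hadoop.conf.Configuration;
import org.junit.Assume;

public final class EndpointAssumptions {
  /** Skip the calling test when a non-default S3 endpoint is configured. */
  public static void skipIfCustomEndpoint(Configuration conf) {
    String endpoint = conf.getTrimmed("fs.s3a.endpoint", "");
    Assume.assumeTrue(
        "Skipping test: custom endpoint configured: " + endpoint,
        endpoint.isEmpty());
  }

  private EndpointAssumptions() {
  }
}
{code}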

> NPE in S3AInputStream read() after failure to reconnect to store
> 
>
> Key: HADOOP-17812
> URL: https://issues.apache.org/jira/browse/HADOOP-17812
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.2.2, 3.3.1
>Reporter: Bobby Wang
>Priority: Major
>  Labels: pull-request-available
> Attachments: failsafe-report.html.gz, s3a-test.tar.gz
>
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> When [reading from S3a 
> storage|https://github.com/apache/hadoop/blob/rel/release-3.2.0/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java#L450],
>  an SSLException (which extends IOException) can occur, triggering 
> [onReadFailure|https://github.com/apache/hadoop/blob/rel/release-3.2.0/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java#L458].
> onReadFailure calls "reopen", which first closes the original 
> *wrappedStream*, sets *wrappedStream = null*, and then tries to 
> [re-get 
> *wrappedStream*|https://github.com/apache/hadoop/blob/rel/release-3.2.0/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java#L184].
>  But if the preceding code [obtaining the 
> S3Object|https://github.com/apache/hadoop/blob/rel/release-3.2.0/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java#L183]
>  throws an exception, "wrappedStream" is left null.
> The 
> [retry|https://github.com/apache/hadoop/blob/rel/release-3.2.0/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java#L446]
>  mechanism may then re-execute 
> [wrappedStream.read|https://github.com/apache/hadoop/blob/rel/release-3.2.0/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AInputStream.java#L450]
>  on the null stream and cause an NPE.
>  
> For more details, please refer to 
> [https://github.com/NVIDIA/spark-rapids/issues/2915]



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-17824) ITestCustomSigner fails with NPE against private endpoint

2021-07-30 Thread Steve Loughran (Jira)
Steve Loughran created HADOOP-17824:
---

 Summary: ITestCustomSigner fails with NPE against private endpoint
 Key: HADOOP-17824
 URL: https://issues.apache.org/jira/browse/HADOOP-17824
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3, test
Affects Versions: 3.3.1
Reporter: Steve Loughran


ITestCustomSigner fails when the tester is pointed at a private endpoint



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17824) ITestCustomSigner fails with NPE against private endpoint

2021-07-30 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17824?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17390750#comment-17390750
 ] 

Steve Loughran commented on HADOOP-17824:
-

{code}

java.lang.NullPointerException
  at org.apache.hadoop.fs.s3a.auth.ITestCustomSigner$CustomSignerInitializer$StoreValue.access$200(ITestCustomSigner.java:255)
  at org.apache.hadoop.fs.s3a.auth.ITestCustomSigner$CustomSigner.sign(ITestCustomSigner.java:187)
  at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1305)
  at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1145)
  at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:802)
  at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:770)
  at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:744)
  at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:704)
  at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:686)
  at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:550)
  at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:530)
  at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:5437)
  at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:5384)
  at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:5378)
  at com.amazonaws.services.s3.AmazonS3Client.listObjectsV2(AmazonS3Client.java:970)
  at org.apache.hadoop.fs.s3a.S3AFileSystem.lambda$listObjects$11(S3AFileSystem.java:2490)
  at org.apache.hadoop.fs.statistics.impl.IOStatisticsBinding.lambda$trackDurationOfOperation$5(IOStatisticsBinding.java:499)
  at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:414)
  at org.apache.hadoop.fs.s3a.Invoker.retryUntranslated(Invoker.java:377)
  at org.apache.hadoop.fs.s3a.S3AFileSystem.listObjects(S3AFileSystem.java:2481)
  at org.apache.hadoop.fs.s3a.S3AFileSystem.s3GetFileStatus(S3AFileSystem.java:3720)
  at org.apache.hadoop.fs.s3a.S3AFileSystem.innerGetFileStatus(S3AFileSystem.java:3583)
  at org.apache.hadoop.fs.s3a.S3AFileSystem$MkdirOperationCallbacksImpl.probePathStatus(S3AFileSystem.java:3350)
  at org.apache.hadoop.fs.s3a.impl.MkdirOperation.probePathStatusOrNull(MkdirOperation.java:135)
  at org.apache.hadoop.fs.s3a.impl.MkdirOperation.getPathStatusExpectingDir(MkdirOperation.java:150)
  at org.apache.hadoop.fs.s3a.impl.MkdirOperation.execute(MkdirOperation.java:80)
  at org.apache.hadoop.fs.s3a.impl.MkdirOperation.execute(MkdirOperation.java:45)

{code}


> ITestCustomSigner fails with NPE against private endpoint
> -
>
> Key: HADOOP-17824
> URL: https://issues.apache.org/jira/browse/HADOOP-17824
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.3.1
>Reporter: Steve Loughran
>Priority: Minor
>
> ITestCustomSigner fails when the tester is pointed at a private endpoint



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17139) Re-enable optimized copyFromLocal implementation in S3AFileSystem

2021-07-30 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17139?focusedWorklogId=631836&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-631836
 ]

ASF GitHub Bot logged work on HADOOP-17139:
---

Author: ASF GitHub Bot
Created on: 30/Jul/21 18:46
Start Date: 30/Jul/21 18:46
Worklog Time Spent: 10m 
  Work Description: steveloughran commented on pull request #3101:
URL: https://github.com/apache/hadoop/pull/3101#issuecomment-890084772


   Merged to trunk. Thank you for this @bogthe 
   
   It started off small, but you've now been sucked into the world of trying 
to strictly define the behavior of filesystems based on what HDFS does, and 
then trying to reimplement a radically different version for cloud storage 
performance. Yes, it's hard, but it means that compatibility is almost always 
guaranteed!
   
   If you cherrypick to branch-3.3, do a test run, and push up the new PR, 
I'll merge it without doing any more code reviews. The code is done; all we 
need is a retest.
   
   Now, take the rest of the month off!


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 631836)
Time Spent: 9h 50m  (was: 9h 40m)

> Re-enable optimized copyFromLocal implementation in S3AFileSystem
> -
>
> Key: HADOOP-17139
> URL: https://issues.apache.org/jira/browse/HADOOP-17139
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0, 3.2.1
>Reporter: Sahil Takiar
>Assignee: Bogdan Stolojan
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 9h 50m
>  Remaining Estimate: 0h
>
> It looks like HADOOP-15932 disabled the optimized copyFromLocal 
> implementation in S3A for correctness reasons.  innerCopyFromLocalFile should 
> be fixed and re-enabled. The current implementation uses 
> FileSystem.copyFromLocal which will open an input stream from the local fs 
> and an output stream to the destination fs, and then call IOUtils.copyBytes. 
> With default configs, this will cause S3A to read the file into memory, write 
> it back to a file on the local fs, and then when the file is closed, upload 
> it to S3.
> The optimized version of copyFromLocal in innerCopyFromLocalFile, directly 
> creates a PutObjectRequest request with the local file as the input.
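
For illustration, a rough sketch of the two upload paths described above (the 
helper class and method names are hypothetical; FileSystem, IOUtils.copyBytes 
and the AWS SDK PutObjectRequest are the real APIs involved):

{code}
import java.io.File;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.model.PutObjectRequest;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

/** Hypothetical helper contrasting the two paths; not the Hadoop code itself. */
public final class CopyFromLocalSketch {

  /**
   * Generic path: stream the local bytes through an output stream. With
   * default configs the S3A output stream buffers the data and only uploads
   * it when close() is called.
   */
  public static void streamingCopy(FileSystem localFs, FileSystem s3aFs,
      Path src, Path dst) throws IOException {
    try (InputStream in = localFs.open(src);
         OutputStream out = s3aFs.create(dst, true)) {
      IOUtils.copyBytes(in, out, 65536);
    }
  }

  /** Optimized path: hand the local file straight to the S3 client as a PUT. */
  public static void directPut(AmazonS3 s3, String bucket, String key,
      File localFile) {
    s3.putObject(new PutObjectRequest(bucket, key, localFile));
  }

  private CopyFromLocalSketch() {
  }
}
{code}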



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on pull request #3101: HADOOP-17139 Re-enable optimized copyFromLocal implementation in S3AFileSystem

2021-07-30 Thread GitBox


steveloughran commented on pull request #3101:
URL: https://github.com/apache/hadoop/pull/3101#issuecomment-890084772


   Merged to trunk. Thank you for this @bogthe 
   
   It started off small, but you've now been sucked into the world of trying 
to strictly define the behavior of filesystems based on what HDFS does, and 
then trying to reimplement a radically different version for cloud storage 
performance. Yes, it's hard, but it means that compatibility is almost always 
guaranteed!
   
   If you cherrypick to branch-3.3, do a test run, and push up the new PR, 
I'll merge it without doing any more code reviews. The code is done; all we 
need is a retest.
   
   Now, take the rest of the month off!


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17139) Re-enable optimized copyFromLocal implementation in S3AFileSystem

2021-07-30 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17139?focusedWorklogId=631835&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-631835
 ]

ASF GitHub Bot logged work on HADOOP-17139:
---

Author: ASF GitHub Bot
Created on: 30/Jul/21 18:43
Start Date: 30/Jul/21 18:43
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus removed a comment on pull request #3101:
URL: https://github.com/apache/hadoop/pull/3101#issuecomment-881719216


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 45s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 3 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  12m 45s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  27m 10s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  27m 12s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |  23m 56s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   4m 18s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 44s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 48s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   2m 31s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   3m 58s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  17m 29s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 26s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 41s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  21m 37s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |  21m 37s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  18m  9s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |  18m  9s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   3m 43s | 
[/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3101/10/artifact/out/results-checkstyle-root.txt)
 |  root: The patch generated 29 new + 133 unchanged - 0 fixed = 162 total (was 
133)  |
   | +1 :green_heart: |  mvnsite  |   2m 34s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 47s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   2m 32s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   4m  9s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  14m 57s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  17m  8s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   2m 21s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 59s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 220m 47s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3101/10/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3101 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell markdownlint |
   | uname | Linux 6e63e76b6d77 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / bff3d889d5218bfe69a5d3c17bcd1db68e37eef1 |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 

[jira] [Work logged] (HADOOP-17139) Re-enable optimized copyFromLocal implementation in S3AFileSystem

2021-07-30 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17139?focusedWorklogId=631834&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-631834
 ]

ASF GitHub Bot logged work on HADOOP-17139:
---

Author: ASF GitHub Bot
Created on: 30/Jul/21 18:43
Start Date: 30/Jul/21 18:43
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus removed a comment on pull request #3101:
URL: https://github.com/apache/hadoop/pull/3101#issuecomment-871605445






-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 631834)
Time Spent: 9.5h  (was: 9h 20m)

> Re-enable optimized copyFromLocal implementation in S3AFileSystem
> -
>
> Key: HADOOP-17139
> URL: https://issues.apache.org/jira/browse/HADOOP-17139
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0, 3.2.1
>Reporter: Sahil Takiar
>Assignee: Bogdan Stolojan
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 9.5h
>  Remaining Estimate: 0h
>
> It looks like HADOOP-15932 disabled the optimized copyFromLocal 
> implementation in S3A for correctness reasons.  innerCopyFromLocalFile should 
> be fixed and re-enabled. The current implementation uses 
> FileSystem.copyFromLocal which will open an input stream from the local fs 
> and an output stream to the destination fs, and then call IOUtils.copyBytes. 
> With default configs, this will cause S3A to read the file into memory, write 
> it back to a file on the local fs, and then when the file is closed, upload 
> it to S3.
> The optimized version of copyFromLocal in innerCopyFromLocalFile, directly 
> creates a PutObjectRequest request with the local file as the input.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus removed a comment on pull request #3101: HADOOP-17139 Re-enable optimized copyFromLocal implementation in S3AFileSystem

2021-07-30 Thread GitBox


hadoop-yetus removed a comment on pull request #3101:
URL: https://github.com/apache/hadoop/pull/3101#issuecomment-881719216


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 45s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 3 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  12m 45s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  27m 10s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  27m 12s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |  23m 56s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   4m 18s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 44s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 48s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   2m 31s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   3m 58s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  17m 29s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 26s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 41s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  21m 37s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |  21m 37s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  18m  9s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |  18m  9s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   3m 43s | 
[/results-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3101/10/artifact/out/results-checkstyle-root.txt)
 |  root: The patch generated 29 new + 133 unchanged - 0 fixed = 162 total (was 
133)  |
   | +1 :green_heart: |  mvnsite  |   2m 34s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m 47s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   2m 32s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   4m  9s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  14m 57s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  17m  8s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   2m 21s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 59s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 220m 47s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3101/10/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3101 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell markdownlint |
   | uname | Linux 6e63e76b6d77 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / bff3d889d5218bfe69a5d3c17bcd1db68e37eef1 |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3101/10/testReport/ |
   | Max. process+thread count | 3046 (vs. ulimit of 5500) |
   | modules | C: hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws 
U: . |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3101/10/console |
   | versions | 

[GitHub] [hadoop] hadoop-yetus removed a comment on pull request #3101: HADOOP-17139 Re-enable optimized copyFromLocal implementation in S3AFileSystem

2021-07-30 Thread GitBox


hadoop-yetus removed a comment on pull request #3101:
URL: https://github.com/apache/hadoop/pull/3101#issuecomment-871605445






-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17139) Re-enable optimized copyFromLocal implementation in S3AFileSystem

2021-07-30 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17139?focusedWorklogId=631832&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-631832
 ]

ASF GitHub Bot logged work on HADOOP-17139:
---

Author: ASF GitHub Bot
Created on: 30/Jul/21 18:42
Start Date: 30/Jul/21 18:42
Worklog Time Spent: 10m 
  Work Description: steveloughran merged pull request #3101:
URL: https://github.com/apache/hadoop/pull/3101


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 631832)
Time Spent: 9h 10m  (was: 9h)

> Re-enable optimized copyFromLocal implementation in S3AFileSystem
> -
>
> Key: HADOOP-17139
> URL: https://issues.apache.org/jira/browse/HADOOP-17139
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0, 3.2.1
>Reporter: Sahil Takiar
>Assignee: Bogdan Stolojan
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 9h 10m
>  Remaining Estimate: 0h
>
> It looks like HADOOP-15932 disabled the optimized copyFromLocal 
> implementation in S3A for correctness reasons.  innerCopyFromLocalFile should 
> be fixed and re-enabled. The current implementation uses 
> FileSystem.copyFromLocal which will open an input stream from the local fs 
> and an output stream to the destination fs, and then call IOUtils.copyBytes. 
> With default configs, this will cause S3A to read the file into memory, write 
> it back to a file on the local fs, and then when the file is closed, upload 
> it to S3.
> The optimized version of copyFromLocal in innerCopyFromLocalFile, directly 
> creates a PutObjectRequest request with the local file as the input.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17139) Re-enable optimized copyFromLocal implementation in S3AFileSystem

2021-07-30 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17139?focusedWorklogId=631833&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-631833
 ]

ASF GitHub Bot logged work on HADOOP-17139:
---

Author: ASF GitHub Bot
Created on: 30/Jul/21 18:42
Start Date: 30/Jul/21 18:42
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus removed a comment on pull request #3101:
URL: https://github.com/apache/hadoop/pull/3101#issuecomment-884414854






-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 631833)
Time Spent: 9h 20m  (was: 9h 10m)

> Re-enable optimized copyFromLocal implementation in S3AFileSystem
> -
>
> Key: HADOOP-17139
> URL: https://issues.apache.org/jira/browse/HADOOP-17139
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.0, 3.2.1
>Reporter: Sahil Takiar
>Assignee: Bogdan Stolojan
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 9h 20m
>  Remaining Estimate: 0h
>
> It looks like HADOOP-15932 disabled the optimized copyFromLocal 
> implementation in S3A for correctness reasons.  innerCopyFromLocalFile should 
> be fixed and re-enabled. The current implementation uses 
> FileSystem.copyFromLocal which will open an input stream from the local fs 
> and an output stream to the destination fs, and then call IOUtils.copyBytes. 
> With default configs, this will cause S3A to read the file into memory, write 
> it back to a file on the local fs, and then when the file is closed, upload 
> it to S3.
> The optimized version of copyFromLocal in innerCopyFromLocalFile, directly 
> creates a PutObjectRequest request with the local file as the input.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus removed a comment on pull request #3101: HADOOP-17139 Re-enable optimized copyFromLocal implementation in S3AFileSystem

2021-07-30 Thread GitBox


hadoop-yetus removed a comment on pull request #3101:
URL: https://github.com/apache/hadoop/pull/3101#issuecomment-884414854






-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran merged pull request #3101: HADOOP-17139 Re-enable optimized copyFromLocal implementation in S3AFileSystem

2021-07-30 Thread GitBox


steveloughran merged pull request #3101:
URL: https://github.com/apache/hadoop/pull/3101


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17628) Distcp contract test is really slow with ABFS and S3A; timing out

2021-07-30 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17628?focusedWorklogId=631827&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-631827
 ]

ASF GitHub Bot logged work on HADOOP-17628:
---

Author: ASF GitHub Bot
Created on: 30/Jul/21 18:33
Start Date: 30/Jul/21 18:33
Worklog Time Spent: 10m 
  Work Description: steveloughran commented on a change in pull request 
#3240:
URL: https://github.com/apache/hadoop/pull/3240#discussion_r680145818



##
File path: 
hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/contract/AbstractContractDistCpTest.java
##
@@ -659,6 +683,41 @@ private int getTotalFiles() {
 return totalFiles;
   }
 
+  /**
+   * Override point: should direct write always be used?
+   * false by default; enable for stores where rename is slow.
+   * @return true if direct write should be used in all tests.
+   */
+  protected boolean directWriteAlways() {

Review comment:
   changed to shouldUseDirectWrite
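
   For illustration, a hypothetical store-specific subclass using the renamed 
override point (the class name, the LocalFSContract stand-in and the 
createContract signature are assumptions, not part of this patch):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.contract.AbstractFSContract;
import org.apache.hadoop.fs.contract.localfs.LocalFSContract;
import org.apache.hadoop.tools.contract.AbstractContractDistCpTest;

// Hypothetical contract test for a store with slow rename: every distcp
// test in the class runs with -direct, skipping the rename-based commit.
public class ITestSlowRenameStoreDistCp extends AbstractContractDistCpTest {

  @Override
  protected boolean shouldUseDirectWrite() {
    return true;
  }

  @Override
  protected AbstractFSContract createContract(Configuration conf) {
    return new LocalFSContract(conf);  // stand-in; a real store returns its own contract
  }
}
{code}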




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 631827)
Time Spent: 3h 10m  (was: 3h)

> Distcp contract test is really slow with ABFS and S3A; timing out
> -
>
> Key: HADOOP-17628
> URL: https://issues.apache.org/jira/browse/HADOOP-17628
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure, fs/s3, test, tools/distcp
>Affects Versions: 3.4.0
>Reporter: Bilahari T H
>Priority: Minor
>  Labels: pull-request-available
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> The test case testDistCpWithIterator in AbstractContractDistCpTest is 
> consistently timing out.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on a change in pull request #3240: HADOOP-17628. Distcp contract test is really slow with ABFS and S3A; timing out.

2021-07-30 Thread GitBox


steveloughran commented on a change in pull request #3240:
URL: https://github.com/apache/hadoop/pull/3240#discussion_r680145818



##
File path: 
hadoop-tools/hadoop-distcp/src/test/java/org/apache/hadoop/tools/contract/AbstractContractDistCpTest.java
##
@@ -659,6 +683,41 @@ private int getTotalFiles() {
 return totalFiles;
   }
 
+  /**
+   * Override point: should direct write always be used?
+   * false by default; enable for stores where rename is slow.
+   * @return true if direct write should be used in all tests.
+   */
+  protected boolean directWriteAlways() {

Review comment:
   changed to shouldUseDirectWrite




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17822) fs.s3a.acl.default not working after S3A Audit feature added

2021-07-30 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17822?focusedWorklogId=631820&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-631820
 ]

ASF GitHub Bot logged work on HADOOP-17822:
---

Author: ASF GitHub Bot
Created on: 30/Jul/21 18:18
Start Date: 30/Jul/21 18:18
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus removed a comment on pull request #3249:
URL: https://github.com/apache/hadoop/pull/3249#issuecomment-890027714


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  17m 51s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  34m  4s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 44s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   0m 35s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   0m 26s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 43s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 21s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 28s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   1m 12s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m 58s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 36s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 40s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |   0m 40s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 31s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 19s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 35s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 15s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   1m 14s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  16m 56s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 22s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 30s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  98m 28s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3249/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3249 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 7dd55b86dde6 4.15.0-128-generic #131-Ubuntu SMP Wed Dec 9 
06:57:35 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 1ed1e66ec1be4de01bbdb03004ef5ffddb857237 |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3249/2/testReport/ |
   | Max. process+thread count | 563 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3249/2/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message 

[jira] [Work logged] (HADOOP-17822) fs.s3a.acl.default not working after S3A Audit feature added

2021-07-30 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17822?focusedWorklogId=631819&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-631819
 ]

ASF GitHub Bot logged work on HADOOP-17822:
---

Author: ASF GitHub Bot
Created on: 30/Jul/21 18:18
Start Date: 30/Jul/21 18:18
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus removed a comment on pull request #3249:
URL: https://github.com/apache/hadoop/pull/3249#issuecomment-889943076


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m  1s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  31m 26s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 45s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   0m 39s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   0m 30s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 45s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 35s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   1m 10s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  14m  1s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 40s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 39s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |   0m 39s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 32s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |   0m 32s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 21s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 36s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 17s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   1m 14s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  14m 28s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 10s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 34s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  74m 12s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3249/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3249 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 12fff8484361 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 3ffb85eadf2dbf8f7ef52bf5b362be63e1dba247 |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3249/1/testReport/ |
   | Max. process+thread count | 691 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3249/1/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was 

[GitHub] [hadoop] hadoop-yetus removed a comment on pull request #3249: HADOOP-17822. fs.s3a.acl.default not working after S3A Audit feature

2021-07-30 Thread GitBox


hadoop-yetus removed a comment on pull request #3249:
URL: https://github.com/apache/hadoop/pull/3249#issuecomment-890027714


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  17m 51s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  34m  4s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 44s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   0m 35s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   0m 26s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 43s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 21s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 28s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   1m 12s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m 58s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 36s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 40s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |   0m 40s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 31s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 19s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 35s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 15s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   1m 14s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  16m 56s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 22s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 30s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  98m 28s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3249/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3249 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 7dd55b86dde6 4.15.0-128-generic #131-Ubuntu SMP Wed Dec 9 
06:57:35 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 1ed1e66ec1be4de01bbdb03004ef5ffddb857237 |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3249/2/testReport/ |
   | Max. process+thread count | 563 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3249/2/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hadoop] hadoop-yetus removed a comment on pull request #3249: HADOOP-17822. fs.s3a.acl.default not working after S3A Audit feature

2021-07-30 Thread GitBox


hadoop-yetus removed a comment on pull request #3249:
URL: https://github.com/apache/hadoop/pull/3249#issuecomment-889943076


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m  1s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  31m 26s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 45s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   0m 39s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   0m 30s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 45s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 35s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   1m 10s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  14m  1s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 40s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 39s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |   0m 39s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 32s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |   0m 32s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 21s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 36s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 17s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   1m 14s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  14m 28s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 10s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 34s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  74m 12s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3249/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3249 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 12fff8484361 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 3ffb85eadf2dbf8f7ef52bf5b362be63e1dba247 |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3249/1/testReport/ |
   | Max. process+thread count | 691 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3249/1/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hadoop] virajjasani commented on a change in pull request #3235: HDFS-16143. De-flake TestEditLogTailer#testStandbyTriggersLogRollsWhenTailInProgressEdits

2021-07-30 Thread GitBox


virajjasani commented on a change in pull request #3235:
URL: https://github.com/apache/hadoop/pull/3235#discussion_r680111909



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestEditLogTailer.java
##
@@ -433,15 +440,28 @@ public void 
testStandbyTriggersLogRollsWhenTailInProgressEdits()
 NameNodeAdapter.mkdirs(active, getDirPath(i),
 new PermissionStatus("test", "test",
 new FsPermission((short)00755)), true);
+// reset lastRollTimeMs in EditLogTailer.
+active.getNamesystem().getEditLogTailer().resetLastRollTimeMs();

Review comment:
   FYI @xkrogen @ayushtkn if you have some bandwidth and would like to take 
a look.
   Thanks
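
   For anyone skimming: the reset call in the diff above gives the tailer's
   roll bookkeeping a known starting point inside the test loop, so the timing
   check isn't skewed by whatever happened earlier in the test run. A rough,
   self-contained sketch of that idea is below; the class, fields and method
   bodies are illustrative only, not the actual EditLogTailer code.

   ```java
   // Illustrative stand-in for roll-timer bookkeeping; not HDFS internals.
   class RollTimerSketch {
     private final long rollPeriodMs;
     private volatile long lastRollTimeMs;

     RollTimerSketch(long rollPeriodMs) {
       this.rollPeriodMs = rollPeriodMs;
       this.lastRollTimeMs = nowMs();
     }

     /** Test hook in the spirit of resetLastRollTimeMs(): restart the clock. */
     void resetLastRollTimeMs() {
       lastRollTimeMs = nowMs();
     }

     /** True once a full roll period has elapsed since the last (re)set. */
     boolean shouldTriggerRoll() {
       return nowMs() - lastRollTimeMs > rollPeriodMs;
     }

     private static long nowMs() {
       return System.nanoTime() / 1_000_000L;  // monotonic clock, in millis
     }
   }
   ```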




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17822) fs.s3a.acl.default not working after S3A Audit feature added

2021-07-30 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17822?focusedWorklogId=631794&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-631794
 ]

ASF GitHub Bot logged work on HADOOP-17822:
---

Author: ASF GitHub Bot
Created on: 30/Jul/21 17:20
Start Date: 30/Jul/21 17:20
Worklog Time Spent: 10m 
  Work Description: steveloughran commented on pull request #3249:
URL: https://github.com/apache/hadoop/pull/3249#issuecomment-890039357


   I've done a full test run with 
   ```
   <property>
     <name>fs.s3a.acl.default</name>
     <value>LogDeliveryWrite</value>
   </property>
   ```
   It breaks all assumed role tests, because the roles aren't being created 
with the `"s3:PutObjectAcl"` permission. Fixing this and updating the docs as 
appropriate. Will need to update the assumed role I test with too.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 631794)
Time Spent: 1h 10m  (was: 1h)

> fs.s3a.acl.default not working after S3A Audit feature added
> 
>
> Key: HADOOP-17822
> URL: https://issues.apache.org/jira/browse/HADOOP-17822
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> After HADOOP-17511 the fs.s3a.acl.default property isn't being passed 
> through to S3 PUT/COPY requests.
> The new RequestFactory is being given the ACL values from the S3A FS 
> instance, but the factory is being created before the ACL settings are 
> loaded from the configuration.
> Fix, and ideally, add a test (if getXAttr lets us see this now).
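
The description above boils down to a construction-order bug: the request 
factory captures the canned ACL before the filesystem has read it from its 
Configuration. A generic sketch of that pattern follows (plain Java with 
illustrative names only, not the real S3AFileSystem/RequestFactory code):

```java
import java.util.Map;

// Illustrative only: shows the initialization-order bug, not the S3A classes.
class RequestFactorySketch {
  private final String cannedAcl;
  RequestFactorySketch(String cannedAcl) {
    this.cannedAcl = cannedAcl;        // frozen at construction time
  }
  String cannedAcl() { return cannedAcl; }
}

class FileSystemSketch {
  private String cannedAcl;            // e.g. value of fs.s3a.acl.default
  private RequestFactorySketch factory;

  void initializeBuggy(Map<String, String> conf) {
    // BUG: the factory is built before the ACL setting is read, so it
    // permanently sees null and never applies the ACL to requests.
    factory = new RequestFactorySketch(cannedAcl);
    cannedAcl = conf.get("fs.s3a.acl.default");
  }

  void initializeFixed(Map<String, String> conf) {
    // FIX: read the configuration first, then build the factory.
    cannedAcl = conf.get("fs.s3a.acl.default");
    factory = new RequestFactorySketch(cannedAcl);
  }
}
```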



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on pull request #3249: HADOOP-17822. fs.s3a.acl.default not working after S3A Audit feature

2021-07-30 Thread GitBox


steveloughran commented on pull request #3249:
URL: https://github.com/apache/hadoop/pull/3249#issuecomment-890039357


   I've done a full test run with 
   ```
   <property>
     <name>fs.s3a.acl.default</name>
     <value>LogDeliveryWrite</value>
   </property>
   ```
   It breaks all assumed role tests, because the roles aren't being created 
with the `"s3:PutObjectAcl"` permission. Fixing this and updating the docs as 
appropriate. Will need to update the assumed role I test with too.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17822) fs.s3a.acl.default not working after S3A Audit feature added

2021-07-30 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17822?focusedWorklogId=631787&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-631787
 ]

ASF GitHub Bot logged work on HADOOP-17822:
---

Author: ASF GitHub Bot
Created on: 30/Jul/21 16:59
Start Date: 30/Jul/21 16:59
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3249:
URL: https://github.com/apache/hadoop/pull/3249#issuecomment-890027714


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  17m 51s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  34m  4s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 44s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   0m 35s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   0m 26s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 43s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 21s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 28s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   1m 12s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m 58s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 36s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 40s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |   0m 40s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 31s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 19s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 35s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 15s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   1m 14s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  16m 56s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 22s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 30s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  98m 28s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3249/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3249 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 7dd55b86dde6 4.15.0-128-generic #131-Ubuntu SMP Wed Dec 9 
06:57:35 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 1ed1e66ec1be4de01bbdb03004ef5ffddb857237 |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3249/2/testReport/ |
   | Max. process+thread count | 563 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3249/2/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was 

[GitHub] [hadoop] hadoop-yetus commented on pull request #3249: HADOOP-17822. fs.s3a.acl.default not working after S3A Audit feature

2021-07-30 Thread GitBox


hadoop-yetus commented on pull request #3249:
URL: https://github.com/apache/hadoop/pull/3249#issuecomment-890027714


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  17m 51s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  34m  4s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 44s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   0m 35s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   0m 26s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 43s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 21s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 28s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   1m 12s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m 58s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 36s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 40s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |   0m 40s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 31s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 19s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 35s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 15s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   1m 14s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  16m 56s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 22s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 30s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  98m 28s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3249/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3249 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 7dd55b86dde6 4.15.0-128-generic #131-Ubuntu SMP Wed Dec 9 
06:57:35 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 1ed1e66ec1be4de01bbdb03004ef5ffddb857237 |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3249/2/testReport/ |
   | Max. process+thread count | 563 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3249/2/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-

[GitHub] [hadoop] hadoop-yetus commented on pull request #3209: HDFS-16129. Fixing the signature secret file misusage in HttpFS.

2021-07-30 Thread GitBox


hadoop-yetus commented on pull request #3209:
URL: https://github.com/apache/hadoop/pull/3209#issuecomment-889985548


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  27m 51s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  12m 51s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  24m 29s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  21m 51s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |  19m 38s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   4m 47s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   3m 30s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   2m 30s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   2m 59s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   4m 24s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  18m 43s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 29s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 56s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  27m 43s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |  27m 43s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  23m 41s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |  23m 41s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   4m 26s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   3m 23s |  |  the patch passed  |
   | +1 :green_heart: |  xml  |   0m  2s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  javadoc  |   2m 31s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   3m 15s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   5m 38s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  19m 13s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  19m 18s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   3m 46s |  |  hadoop-kms in the patch passed. 
 |
   | -1 :x: |  unit  |  13m 41s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs-httpfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3209/4/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs-httpfs.txt)
 |  hadoop-hdfs-httpfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 50s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 277m 59s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.fs.http.server.TestHttpFSServerNoACLs |
   |   | hadoop.fs.http.server.TestHttpFSAccessControlled |
   |   | hadoop.fs.http.server.TestHttpFSServer |
   |   | hadoop.fs.http.client.TestHttpFSFWithWebhdfsFileSystem |
   |   | hadoop.fs.http.client.TestHttpFSFileSystemLocalFileSystem |
   |   | hadoop.fs.http.client.TestHttpFSFWithSWebhdfsFileSystem |
   |   | hadoop.fs.http.client.TestHttpFSWithHttpFSFileSystem |
   |   | hadoop.fs.http.server.TestHttpFSServerNoXAttrs |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3209/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3209 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell xml |
   | uname | Linux c05dcdf3ca77 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 63a847c98d1829c8786efbd7e6553c427e87d93b |
   | Default Java | 

[GitHub] [hadoop] shuzirra commented on pull request #3245: Some changes

2021-07-30 Thread GitBox


shuzirra commented on pull request #3245:
URL: https://github.com/apache/hadoop/pull/3245#issuecomment-889980747


   The indentation is fine as is. If you want to contribute, please read 
https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute .


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] shuzirra closed pull request #3245: Some changes

2021-07-30 Thread GitBox


shuzirra closed pull request #3245:
URL: https://github.com/apache/hadoop/pull/3245


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] jojochuang commented on pull request #3243: HDFS-14529. SetTimes to throw FileNotFoundException if inode is not found

2021-07-30 Thread GitBox


jojochuang commented on pull request #3243:
URL: https://github.com/apache/hadoop/pull/3243#issuecomment-889970672


   Thanks for the review. This is merged now.
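
   For context (a hedged sketch, not the committed test): the intent of the 
   change is that setTimes on a missing path surfaces a FileNotFoundException 
   instead of failing in a less obvious way. Against the public FileSystem API 
   that expectation looks roughly like this; the helper class and method names 
   below are assumed, and the cluster setup is omitted.

   ```java
   import java.io.FileNotFoundException;
   import org.apache.hadoop.fs.FileSystem;
   import org.apache.hadoop.fs.Path;
   import static org.junit.Assert.fail;

   // Sketch of the behaviour HDFS-14529 asks for.
   public final class SetTimesExpectation {
     static void expectFnfe(FileSystem fs) throws Exception {
       try {
         fs.setTimes(new Path("/no/such/file"), 0L, 0L);
         fail("expected FileNotFoundException for a missing inode");
       } catch (FileNotFoundException expected) {
         // desired outcome: a clear FNFE rather than a silent no-op
       }
     }
   }
   ```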


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] jojochuang merged pull request #3243: HDFS-14529. SetTimes to throw FileNotFoundException if inode is not found

2021-07-30 Thread GitBox


jojochuang merged pull request #3243:
URL: https://github.com/apache/hadoop/pull/3243


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #3248: YARN-10874. Refactor NM ContainerLaunch#getEnvDependencies's unit tests

2021-07-30 Thread GitBox


hadoop-yetus commented on pull request #3248:
URL: https://github.com/apache/hadoop/pull/3248#issuecomment-889948121


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 48s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  12m 39s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  20m 56s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  24m 18s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |  23m 14s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   4m 38s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 36s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 26s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 21s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +0 :ok: |  spotbugs  |   0m 40s |  |  branch/hadoop-project no spotbugs 
output file (spotbugsXml.xml)  |
   | +1 :green_heart: |  shadedclient  |  20m  9s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 30s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m  3s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  27m 14s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |  27m 14s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  23m 46s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |  23m 46s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   4m 19s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 32s |  |  the patch passed  |
   | +1 :green_heart: |  xml  |   0m  2s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  javadoc  |   1m 20s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 17s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +0 :ok: |  spotbugs  |   0m 33s |  |  hadoop-project has no data from 
spotbugs  |
   | +1 :green_heart: |  shadedclient  |  20m  8s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   0m 32s |  |  hadoop-project in the patch 
passed.  |
   | +1 :green_heart: |  unit  |  23m  9s |  |  hadoop-yarn-server-nodemanager 
in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 55s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 224m 15s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3248/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3248 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient codespell xml spotbugs checkstyle |
   | uname | Linux b2f3a39bc975 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / b0ec4d2ac4318f726419cc16f7034b8e94c7d61a |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3248/1/testReport/ |
   | Max. process+thread count | 614 (vs. ulimit of 5500) |
   | modules | C: hadoop-project 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: . |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3248/1/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT 

[jira] [Work logged] (HADOOP-17822) fs.s3a.acl.default not working after S3A Audit feature added

2021-07-30 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17822?focusedWorklogId=631686&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-631686
 ]

ASF GitHub Bot logged work on HADOOP-17822:
---

Author: ASF GitHub Bot
Created on: 30/Jul/21 14:48
Start Date: 30/Jul/21 14:48
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #3249:
URL: https://github.com/apache/hadoop/pull/3249#issuecomment-889943076


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m  1s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  31m 26s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 45s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   0m 39s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   0m 30s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 45s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 35s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   1m 10s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  14m  1s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 40s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 39s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |   0m 39s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 32s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |   0m 32s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 21s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 36s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 17s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   1m 14s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  14m 28s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 10s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 34s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  74m 12s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3249/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3249 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 12fff8484361 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 3ffb85eadf2dbf8f7ef52bf5b362be63e1dba247 |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3249/1/testReport/ |
   | Max. process+thread count | 691 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3249/1/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was 

[GitHub] [hadoop] hadoop-yetus commented on pull request #3249: HADOOP-17822. fs.s3a.acl.default not working after S3A Audit feature

2021-07-30 Thread GitBox


hadoop-yetus commented on pull request #3249:
URL: https://github.com/apache/hadoop/pull/3249#issuecomment-889943076


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m  1s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to 
include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  31m 26s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 45s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   0m 39s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   0m 30s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 45s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 35s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   1m 10s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  14m  1s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 40s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 39s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |   0m 39s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 32s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |   0m 32s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 21s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 36s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 17s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   1m 14s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  14m 28s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   2m 10s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 34s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  74m 12s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3249/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3249 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 12fff8484361 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 3ffb85eadf2dbf8f7ef52bf5b362be63e1dba247 |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3249/1/testReport/ |
   | Max. process+thread count | 691 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3249/1/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To 

[jira] [Work logged] (HADOOP-17822) fs.s3a.acl.default not working after S3A Audit feature added

2021-07-30 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17822?focusedWorklogId=631683&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-631683
 ]

ASF GitHub Bot logged work on HADOOP-17822:
---

Author: ASF GitHub Bot
Created on: 30/Jul/21 14:43
Start Date: 30/Jul/21 14:43
Worklog Time Spent: 10m 
  Work Description: steveloughran commented on pull request #3249:
URL: https://github.com/apache/hadoop/pull/3249#issuecomment-889940032


   Rebased onto trunk from a few patches back, before the CSE support went in. 
That is triggering failures, and I need to differentiate CSE-related 
regressions from anything introduced by this PR.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 631683)
Time Spent: 40m  (was: 0.5h)

> fs.s3a.acl.default not working after S3A Audit feature added
> 
>
> Key: HADOOP-17822
> URL: https://issues.apache.org/jira/browse/HADOOP-17822
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> After HADOOP-17511 the fs.s3a.acl.default property isn't being passed 
> through to S3 PUT/COPY requests.
> The new RequestFactory is being given the acl values from the S3A FS 
> instance, but the factory is being created before the acl settings are loaded 
> from the configuration.
> Fix, and ideally, test (if the getXAttr lets us see this now)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on pull request #3249: HADOOP-17822. fs.s3a.acl.default not working after S3A Audit feature

2021-07-30 Thread GitBox


steveloughran commented on pull request #3249:
URL: https://github.com/apache/hadoop/pull/3249#issuecomment-889940032


   Rebased onto trunk from a few patches back, before the CSE support went in. 
That is triggering failures, and I need to differentiate CSE-related 
regressions from anything introduced by this PR.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #3220: YARN-10355. Refactor NM ContainerLaunch.java#orderEnvByDependencies

2021-07-30 Thread GitBox


hadoop-yetus commented on pull request #3220:
URL: https://github.com/apache/hadoop/pull/3220#issuecomment-889909629


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  12m 57s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  31m 26s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 35s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  compile  |   1m 23s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  checkstyle  |   0m 34s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 48s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   0m 41s |  |  trunk passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 33s |  |  trunk passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   1m 27s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  14m 34s |  |  branch has no errors 
when building and testing our client artifacts.  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 39s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 22s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javac  |   1m 22s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 16s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  javac  |   1m 16s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks 
issues.  |
   | -0 :warning: |  checkstyle  |   0m 24s | 
[/results-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3220/7/artifact/out/results-checkstyle-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager.txt)
 |  
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager:
 The patch generated 2 new + 35 unchanged - 0 fixed = 37 total (was 35)  |
   | +1 :green_heart: |  mvnsite  |   0m 39s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 30s |  |  the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04  |
   | +1 :green_heart: |  javadoc  |   0m 29s |  |  the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10  |
   | +1 :green_heart: |  spotbugs  |   1m 28s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  14m 33s |  |  patch has no errors 
when building and testing our client artifacts.  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  23m  1s |  |  hadoop-yarn-server-nodemanager 
in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 34s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 111m 10s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3220/7/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3220 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 726052fe5100 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / f7a369f7e97a2435026d26769ce87a80d84c1b2c |
   | Default Java | Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3220/7/testReport/ |
   | Max. process+thread count | 676 (vs. ulimit of 5500) |
   | modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 

[jira] [Updated] (HADOOP-17612) Upgrade Zookeeper to 3.6.3 and Curator to 5.2.0

2021-07-30 Thread Viraj Jasani (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Viraj Jasani updated HADOOP-17612:
--
Description: 
Let's upgrade Zookeeper and Curator to 3.6.3 and 5.2.0 respectively.

Curator 5.2 also supports Zookeeper 3.5 servers.

  was:We can bump Zookeeper version to 3.7.0 for trunk.


> Upgrade Zookeeper to 3.6.3 and Curator to 5.2.0
> ---
>
> Key: HADOOP-17612
> URL: https://issues.apache.org/jira/browse/HADOOP-17612
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Viraj Jasani
>Assignee: Viraj Jasani
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 2h 50m
>  Remaining Estimate: 0h
>
> Let's upgrade Zookeeper and Curator to 3.6.3 and 5.2.0 respectively.
> Curator 5.2 also supports Zookeeper 3.5 servers.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] virajjasani commented on a change in pull request #3235: HDFS-16143. De-flake TestEditLogTailer#testStandbyTriggersLogRollsWhenTailInProgressEdits

2021-07-30 Thread GitBox


virajjasani commented on a change in pull request #3235:
URL: https://github.com/apache/hadoop/pull/3235#discussion_r679753606



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestEditLogTailer.java
##
@@ -433,15 +440,28 @@ public void 
testStandbyTriggersLogRollsWhenTailInProgressEdits()
 NameNodeAdapter.mkdirs(active, getDirPath(i),
 new PermissionStatus("test", "test",
 new FsPermission((short)00755)), true);
+// reset lastRollTimeMs in EditLogTailer.
+active.getNamesystem().getEditLogTailer().resetLastRollTimeMs();

Review comment:
   Thanks for taking a look @jojochuang. 
   `EditLogTailer` has a thread that keeps running to decide when it is the 
right time to trigger a log roll by calling the Active NameNode's 
rollEditLog() API.
   ```
   private void doWork() {
     long currentSleepTimeMs = sleepTimeMs;
     while (shouldRun) {
       long editsTailed = 0;
       try {
         // There's no point in triggering a log roll if the Standby hasn't
         // read any more transactions since the last time a roll was
         // triggered.
         boolean triggeredLogRoll = false;
         if (tooLongSinceLastLoad() &&
             lastRollTriggerTxId < lastLoadedTxnId) {
           triggerActiveLogRoll();
           triggeredLogRoll = true;
         }
         ...
         ...
   ```
   
   What happens with this test is that, while we create new dirs in this for 
loop, the tailer thread keeps checking and intermittently triggers a log roll 
by making RPC calls to the Active NameNode. That makes the test flaky, because 
it expects the Standby NameNode's last applied txn id to be less than the 
Active NameNode's last written txn id within a specific time window (this is 
the only reason behind its flakiness). How long EditLogTailer's thread waits 
before triggering a log roll depends on `lastRollTimeMs`.
   
   In the above code, tooLongSinceLastLoad() refers to:
   ```
   /**
    * @return true if the configured log roll period has elapsed.
    */
   private boolean tooLongSinceLastLoad() {
     return logRollPeriodMs >= 0 &&
         (monotonicNow() - lastRollTimeMs) > logRollPeriodMs;
   }
   ```
   Hence, until the configured roll period has elapsed since `lastRollTimeMs`, 
no log roll is triggered. However, we have no control over how long the mkdir 
calls in this for loop take, and in that time the roll period can easily 
elapse, which is exactly what makes this test flaky: when we expect the 
Standby NameNode's txnId to be less than the Active NameNode's, that no longer 
holds, because the log has already been rolled by the above thread in 
`EditLogTailer`.
   
   Hence, it is important for this test to keep resetting `lastRollTimeMs` 
while the mkdir calls are executing, so that `tooLongSinceLastLoad()` does not 
become true until we want it to.
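   
   For reference, a minimal, self-contained sketch of the timing logic above 
(illustration only, not the actual `EditLogTailer` code; the clock and the 
body of the reset hook are assumptions made for this sketch):
   ```java
   // Illustrative sketch: models lastRollTimeMs / tooLongSinceLastLoad() with
   // a plain monotonic clock; the real EditLogTailer fields and timer differ.
   public class RollTimerSketch {
     private final long logRollPeriodMs;
     private volatile long lastRollTimeMs;

     public RollTimerSketch(long logRollPeriodMs) {
       this.logRollPeriodMs = logRollPeriodMs;
       this.lastRollTimeMs = monotonicNowMs();
     }

     private static long monotonicNowMs() {
       return System.nanoTime() / 1_000_000L;
     }

     /** True once the roll period has elapsed since the last reset. */
     boolean tooLongSinceLastLoad() {
       return logRollPeriodMs >= 0
           && (monotonicNowMs() - lastRollTimeMs) > logRollPeriodMs;
     }

     /** What the test calls between mkdir batches to keep the check false. */
     void resetLastRollTimeMs() {
       lastRollTimeMs = monotonicNowMs();
     }
   }
   ```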




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] virajjasani commented on a change in pull request #3235: HDFS-16143. De-flake TestEditLogTailer#testStandbyTriggersLogRollsWhenTailInProgressEdits

2021-07-30 Thread GitBox


virajjasani commented on a change in pull request #3235:
URL: https://github.com/apache/hadoop/pull/3235#discussion_r679753606



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestEditLogTailer.java
##
@@ -433,15 +440,28 @@ public void 
testStandbyTriggersLogRollsWhenTailInProgressEdits()
 NameNodeAdapter.mkdirs(active, getDirPath(i),
 new PermissionStatus("test", "test",
 new FsPermission((short)00755)), true);
+// reset lastRollTimeMs in EditLogTailer.
+active.getNamesystem().getEditLogTailer().resetLastRollTimeMs();

Review comment:
   Thanks for taking a look @jojochuang. 
   `EditLogTailer` has a thread that keeps running to decide when it is the 
right time to trigger a log roll by calling the Active NameNode's 
rollEditLog() API.
   ```
   private void doWork() {
     long currentSleepTimeMs = sleepTimeMs;
     while (shouldRun) {
       long editsTailed = 0;
       try {
         // There's no point in triggering a log roll if the Standby hasn't
         // read any more transactions since the last time a roll was
         // triggered.
         boolean triggeredLogRoll = false;
         if (tooLongSinceLastLoad() &&
             lastRollTriggerTxId < lastLoadedTxnId) {
           triggerActiveLogRoll();
           triggeredLogRoll = true;
         }
         ...
         ...
   ```
   
   What happens with this test is that, while we create new dirs in this for 
loop, the tailer thread keeps checking and intermittently triggers a log roll 
by making RPC calls to the Active NameNode. That makes the test flaky, because 
it expects the Standby NameNode's last applied txn id to be less than the 
Active NameNode's last written txn id within a limited time window. How long 
EditLogTailer's thread waits before triggering a log roll depends on 
`lastRollTimeMs`.
   In the above code, tooLongSinceLastLoad() refers to:
   ```
   /**
    * @return true if the configured log roll period has elapsed.
    */
   private boolean tooLongSinceLastLoad() {
     return logRollPeriodMs >= 0 &&
         (monotonicNow() - lastRollTimeMs) > logRollPeriodMs;
   }
   ```
   Hence, until the configured roll period has elapsed since `lastRollTimeMs`, 
no log roll is triggered. However, we have no control over how long the mkdir 
calls in this for loop take, and in that time the roll period can easily 
elapse, which is exactly what makes this test flaky: when we expect the 
Standby NameNode's txnId to be less than the Active NameNode's, that no longer 
holds, because the log has already been rolled by the above thread in 
`EditLogTailer`.
   
   Hence, it is important for this test to keep resetting `lastRollTimeMs` 
while the mkdir calls are executing, so that `tooLongSinceLastLoad()` does not 
become true until we want it to.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17821) Move Ozone to related projects section

2021-07-30 Thread Yi-Sheng Lien (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17821?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi-Sheng Lien updated HADOOP-17821:
---
Summary: Move Ozone to related projects section  (was: Update link of Ozone 
on Hadoop Website)

> Move Ozone to related projects section
> --
>
> Key: HADOOP-17821
> URL: https://issues.apache.org/jira/browse/HADOOP-17821
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Yi-Sheng Lien
>Assignee: Yi-Sheng Lien
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Hi all, as Ozone was spun off to a TLP, it has its own web site.
> Now, in the Modules part of the Hadoop [website|https://hadoop.apache.org/], 
> the link to the Ozone website points to the old page.
> IMHO there are two ways to fix it:
> 1. update it to the new page.
> 2. move Ozone to the Related projects part of the Hadoop website
> Please feel free to give me some feedback, thanks



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12670) Fix TestNetUtils and TestSecurityUtil when localhost is ipv6 only

2021-07-30 Thread Hemanth Boyina (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-12670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hemanth Boyina updated HADOOP-12670:

Attachment: HADOOP-12670-HADOOP-17800.002.patch

> Fix TestNetUtils and TestSecurityUtil when localhost is ipv6 only
> -
>
> Key: HADOOP-12670
> URL: https://issues.apache.org/jira/browse/HADOOP-12670
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: net
>Affects Versions: HADOOP-11890
>Reporter: Elliott Neil Clark
>Assignee: Elliott Neil Clark
>Priority: Major
> Attachments: HADOOP-12670-HADOOP-11890.0.patch, 
> HADOOP-12670-HADOOP-11890.2.patch, HADOOP-12670-HADOOP-11890.3.patch, 
> HADOOP-12670-HADOOP-17800.001.patch, HADOOP-12670-HADOOP-17800.002.patch
>
>
> {code}
>   TestSecurityUtil.testBuildTokenServiceSockAddr:165 
> expected:<[127.0.0.]1:123> but was:<[0:0:0:0:0:0:0:]1:123>
>   TestSecurityUtil.testBuildDTServiceName:148 expected:<[127.0.0.]1:123> but 
> was:<[0:0:0:0:0:0:0:]1:123>
>   
> TestSecurityUtil.testSocketAddrWithName:326->verifyServiceAddr:304->verifyAddress:284->verifyValues:251
>  expected:<[127.0.0.]1> but was:<[0:0:0:0:0:0:0:]1>
>   
> TestSecurityUtil.testSocketAddrWithIP:333->verifyServiceAddr:304->verifyAddress:284->verifyValues:251
>  expected:<[127.0.0.]1> but was:<[0:0:0:0:0:0:0:]1>
>   
> TestSecurityUtil.testSocketAddrWithNameToStaticName:340->verifyServiceAddr:304->verifyAddress:284->verifyValues:251
>  expected:<[127.0.0.]1> but was:<[0:0:0:0:0:0:0:]1>
>   TestNetUtils.testNormalizeHostName:639 expected:<[0:0:0:0:0:0:0:]1> but 
> was:<[127.0.0.]1>
>   TestNetUtils.testResolverLoopback:533->verifyInetAddress:496 
> expected:<[127.0.0.]1> but was:<[0:0:0:0:0:0:0:]1>
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] szilard-nemeth commented on a change in pull request #3206: YARN-10814. Fallback to RandomSecretProvider if the secret file is empty

2021-07-30 Thread GitBox


szilard-nemeth commented on a change in pull request #3206:
URL: https://github.com/apache/hadoop/pull/3206#discussion_r679913926



##
File path: 
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationFilter.java
##
@@ -237,8 +237,8 @@ public static SignerSecretProvider constructSecretProvider(
 provider.init(config, ctx, validity);
   } catch (Exception e) {
 if (!disallowFallbackToRandomSecretProvider) {
-  LOG.info("Unable to initialize FileSignerSecretProvider, " +
-   "falling back to use random secrets.");
+  LOG.warn("Unable to initialize FileSignerSecretProvider, " +
+  "falling back to use random secrets. Reason: " + e.getMessage());

Review comment:
   Good that we have a reason string now :) 

##
File path: 
hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/util/TestFileSignerSecretProvider.java
##
@@ -48,4 +52,27 @@ public void testGetSecrets() throws Exception {
 Assert.assertEquals(1, allSecrets.length);
 Assert.assertArrayEquals(secretValue.getBytes(), allSecrets[0]);
   }
+
+  @Test
+  public void testEmptySecretFileThrows() throws Exception {
+File secretFile = File.createTempFile("test_empty_secret", ".txt");
+assertTrue(secretFile.exists());
+
+FileSignerSecretProvider secretProvider
+= new FileSignerSecretProvider();
+Properties secretProviderProps = new Properties();
+secretProviderProps.setProperty(
+AuthenticationFilter.SIGNATURE_SECRET_FILE,
+secretFile.getAbsolutePath());
+
+Exception exception =
+assertThrows(RuntimeException.class, new ThrowingRunnable() {
+  @Override
+  public void run() throws Throwable {
+secretProvider.init(secretProviderProps, null, -1);
+  }
+});
+assertTrue(exception.getMessage().startsWith(
+"No secret in signature secret file:"));

Review comment:
   Minor nit: No filename after colon.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17821) Update link of Ozone on Hadoop Website

2021-07-30 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17821?focusedWorklogId=631621&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-631621
 ]

ASF GitHub Bot logged work on HADOOP-17821:
---

Author: ASF GitHub Bot
Created on: 30/Jul/21 13:12
Start Date: 30/Jul/21 13:12
Worklog Time Spent: 10m 
  Work Description: cxorm commented on a change in pull request #26:
URL: https://github.com/apache/hadoop-site/pull/26#discussion_r679913701



##
File path: src/modules.md
##
@@ -20,4 +20,4 @@ The project includes these modules:
   - __Hadoop Distributed File System (HDFS™)__: A distributed file system that 
provides high-throughput access to application data.
   - __Hadoop YARN__: A framework for job scheduling and cluster resource 
management.
   - __Hadoop MapReduce__: A YARN-based system for parallel processing of large 
data sets.
-  - __[Hadoop Ozone](https://hadoop.apache.org/ozone/)__: An object store for 
Hadoop.
+  - __[Hadoop Ozone](https://ozone.apache.org)__: An object store for Hadoop.

Review comment:
   Thanks @jojochuang for raising it.
   IMHO +1; let me change the title of this Jira and move Ozone to the related 
projects section.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 631621)
Time Spent: 0.5h  (was: 20m)

> Update link of Ozone on Hadoop Website
> --
>
> Key: HADOOP-17821
> URL: https://issues.apache.org/jira/browse/HADOOP-17821
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Yi-Sheng Lien
>Assignee: Yi-Sheng Lien
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Hi all, as Ozone was spun off to a TLP, it has its own web site.
> Now, in the Modules part of the Hadoop [website|https://hadoop.apache.org/], 
> the link to the Ozone website points to the old page.
> IMHO there are two ways to fix it:
> 1. update it to the new page.
> 2. move Ozone to the Related projects part of the Hadoop website
> Please feel free to give me some feedback, thanks



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop-site] cxorm commented on a change in pull request #26: HADOOP-17821. Update link of Ozone on Hadoop Website

2021-07-30 Thread GitBox


cxorm commented on a change in pull request #26:
URL: https://github.com/apache/hadoop-site/pull/26#discussion_r679913701



##
File path: src/modules.md
##
@@ -20,4 +20,4 @@ The project includes these modules:
   - __Hadoop Distributed File System (HDFS™)__: A distributed file system that 
provides high-throughput access to application data.
   - __Hadoop YARN__: A framework for job scheduling and cluster resource 
management.
   - __Hadoop MapReduce__: A YARN-based system for parallel processing of large 
data sets.
-  - __[Hadoop Ozone](https://hadoop.apache.org/ozone/)__: An object store for 
Hadoop.
+  - __[Hadoop Ozone](https://ozone.apache.org)__: An object store for Hadoop.

Review comment:
   Thanks @jojochuang for raising it.
   IMHO +1; let me change the title of this Jira and move Ozone to the related 
projects section.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12670) Fix TestNetUtils and TestSecurityUtil when localhost is ipv6 only

2021-07-30 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-12670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17390562#comment-17390562
 ] 

Hadoop QA commented on HADOOP-12670:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime ||  Logfile || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 19m 
50s{color} | {color:blue}{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} || ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green}{color} | {color:green} No case conflicting files 
found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green}{color} | {color:green} The patch does not contain any 
@author tags. {color} |
| {color:green}+1{color} | {color:green} {color} | {color:green}  0m  0s{color} 
| {color:green}test4tests{color} | {color:green} The patch appears to include 3 
new or modified test files. {color} |
|| || || || {color:brown} HADOOP-17800 Compile Tests {color} || ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 24m 
57s{color} | {color:green}{color} | {color:green} HADOOP-17800 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 25m 
56s{color} | {color:green}{color} | {color:green} HADOOP-17800 passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 21m 
19s{color} | {color:green}{color} | {color:green} HADOOP-17800 passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 3s{color} | {color:green}{color} | {color:green} HADOOP-17800 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
40s{color} | {color:green}{color} | {color:green} HADOOP-17800 passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
19m  2s{color} | {color:green}{color} | {color:green} branch has no errors when 
building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
7s{color} | {color:green}{color} | {color:green} HADOOP-17800 passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
41s{color} | {color:green}{color} | {color:green} HADOOP-17800 passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue} 24m 
31s{color} | {color:blue}{color} | {color:blue} Both FindBugs and SpotBugs are 
enabled, using SpotBugs. {color} |
| {color:green}+1{color} | {color:green} spotbugs {color} | {color:green}  2m 
41s{color} | {color:green}{color} | {color:green} HADOOP-17800 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} || ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
25s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 27m 
48s{color} | {color:green}{color} | {color:green} the patch passed with JDK 
Ubuntu-11.0.11+9-Ubuntu-0ubuntu2.20.04 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 27m 
48s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 25m 
55s{color} | {color:green}{color} | {color:green} the patch passed with JDK 
Private Build-1.8.0_292-8u292-b10-0ubuntu1~20.04-b10 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 25m 
55s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 30s{color} | 
{color:orange}https://ci-hadoop.apache.org/job/PreCommit-HADOOP-Build/223/artifact/out/diff-checkstyle-hadoop-common-project_hadoop-common.txt{color}
 | {color:orange} hadoop-common-project/hadoop-common: The patch generated 2 
new + 85 unchanged - 2 fixed = 87 total (was 87) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
17s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green}{color} | {color:green} The patch has no whitespace 
issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
19m 13s{color} | {color:green}{color} | {color:green} patch has no errors when 
building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} 

[jira] [Work logged] (HADOOP-17822) fs.s3a.acl.default not working after S3A Audit feature added

2021-07-30 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17822?focusedWorklogId=631598&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-631598
 ]

ASF GitHub Bot logged work on HADOOP-17822:
---

Author: ASF GitHub Bot
Created on: 30/Jul/21 12:54
Start Date: 30/Jul/21 12:54
Worklog Time Spent: 10m 
  Work Description: steveloughran commented on pull request #3249:
URL: https://github.com/apache/hadoop/pull/3249#issuecomment-889872989


   + @mukund-thakur @mehakmeet @bogthe 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 631598)
Time Spent: 0.5h  (was: 20m)

> fs.s3a.acl.default not working after S3A Audit feature added
> 
>
> Key: HADOOP-17822
> URL: https://issues.apache.org/jira/browse/HADOOP-17822
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> After HADOOP-17511 the fs.s3a.acl.default property isn't being passed 
> through to S3 PUT/COPY requests.
> The new RequestFactory is being given the acl values from the S3A FS 
> instance, but the factory is being created before the acl settings are loaded 
> from the configuration.
> Fix, and ideally, test (if the getXAttr lets us see this now)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on pull request #3249: HADOOP-17822. fs.s3a.acl.default not working after S3A Audit feature

2021-07-30 Thread GitBox


steveloughran commented on pull request #3249:
URL: https://github.com/apache/hadoop/pull/3249#issuecomment-889872989


   + @mukund-thakur @mehakmeet @bogthe 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17822) fs.s3a.acl.default not working after S3A Audit feature added

2021-07-30 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17822?focusedWorklogId=631597&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-631597
 ]

ASF GitHub Bot logged work on HADOOP-17822:
---

Author: ASF GitHub Bot
Created on: 30/Jul/21 12:53
Start Date: 30/Jul/21 12:53
Worklog Time Spent: 10m 
  Work Description: steveloughran commented on pull request #3249:
URL: https://github.com/apache/hadoop/pull/3249#issuecomment-889872576


   Testing: S3 london; full suite in progress with acls set in auth-keys.xml
   
   Also verified via a "hadoop fs -touchz s3a://stevel-london/acl4" call
   
   ```xml
   <property>
     <name>fs.s3a.acl.default</name>
     <value>LogDeliveryWrite</value>
   </property>
   ```
   
   then verify through AWS CLI
   ```
   >  aws s3api get-object-acl --bucket stevel-london --key acl4
   GRANTS  FULL_CONTROL
   GRANTEE CanonicalUser
   GRANTS  WRITE
   GRANTEE Group   http://acs.amazonaws.com/groups/s3/LogDelivery
   GRANTS  READ_ACP
   GRANTEE Group   http://acs.amazonaws.com/groups/s3/LogDelivery
   OWNER   
   ```


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 631597)
Time Spent: 20m  (was: 10m)

> fs.s3a.acl.default not working after S3A Audit feature added
> 
>
> Key: HADOOP-17822
> URL: https://issues.apache.org/jira/browse/HADOOP-17822
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> After HADOOP-17511 the fs.s3a.acl.default property isn't being passed 
> through to S3 PUT/COPY requests.
> The new RequestFactory is being given the acl values from the S3A FS 
> instance, but the factory is being created before the acl settings are loaded 
> from the configuration.
> Fix, and ideally, test (if the getXAttr lets us see this now)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on pull request #3249: HADOOP-17822. fs.s3a.acl.default not working after S3A Audit feature

2021-07-30 Thread GitBox


steveloughran commented on pull request #3249:
URL: https://github.com/apache/hadoop/pull/3249#issuecomment-889872576


   Testing: S3 london; full suite in progress with acls set in auth-keys.xml
   
   Also verified via a "hadoop fs -touchz s3a://stevel-london/acl4" call
   
   ```xml
   <property>
     <name>fs.s3a.acl.default</name>
     <value>LogDeliveryWrite</value>
   </property>
   ```
   
   then verify through AWS CLI
   ```
   >  aws s3api get-object-acl --bucket stevel-london --key acl4
   GRANTS  FULL_CONTROL
   GRANTEE CanonicalUser
   GRANTS  WRITE
   GRANTEE Group   http://acs.amazonaws.com/groups/s3/LogDelivery
   GRANTS  READ_ACP
   GRANTEE Group   http://acs.amazonaws.com/groups/s3/LogDelivery
   OWNER   
   ```


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17822) fs.s3a.acl.default not working after S3A Audit feature added

2021-07-30 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HADOOP-17822:

Labels: pull-request-available  (was: )

> fs.s3a.acl.default not working after S3A Audit feature added
> 
>
> Key: HADOOP-17822
> URL: https://issues.apache.org/jira/browse/HADOOP-17822
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> After HADOOP-17511 the fs.s3a.acl.default property isn't being passed 
> through to S3 PUT/COPY requests.
> The new RequestFactory is being given the acl values from the S3A FS 
> instance, but the factory is being created before the acl settings are loaded 
> from the configuration.
> Fix, and ideally, test (if the getXAttr lets us see this now)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran opened a new pull request #3249: HADOOP-17822. fs.s3a.acl.default not working after S3A Audit feature

2021-07-30 Thread GitBox


steveloughran opened a new pull request #3249:
URL: https://github.com/apache/hadoop/pull/3249


   
   Fixes the regression caused by HADOOP-17511 by moving where the
   cannedACL properties are inited -so guaranteeing that they are valid
   before the RequestFactory is created.
   
   Adds
   
   * A unit test in TestRequestFactory to verify the ACLs are set
 on all file write operations
   * A new ITestS3ACannedACLs test which verifies that ACLs really
 do get all the way through.
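   
   For anyone skimming, the underlying bug class is plain initialization 
ordering. A minimal, self-contained illustration (no S3A classes; every name 
below is invented for the example) of why the factory must be built only 
after the configuration has been read:
   ```java
   // Illustration only; not the S3AFileSystem/RequestFactory code.
   public class InitOrderSketch {
     static final class Factory {
       private final String cannedAcl;       // snapshotted at construction
       Factory(String cannedAcl) { this.cannedAcl = cannedAcl; }
       String aclForNewRequests() { return cannedAcl; }
     }

     private String cannedAcl = "";          // filled in from configuration
     private Factory factory;

     void initBroken() {
       factory = new Factory(cannedAcl);     // snapshot taken too early: ""
       cannedAcl = loadAclFromConf();        // too late, factory never sees it
     }

     void initFixed() {
       cannedAcl = loadAclFromConf();        // read fs.s3a.acl.default first...
       factory = new Factory(cannedAcl);     // ...then build the factory
     }

     private String loadAclFromConf() {
       return "LogDeliveryWrite";            // stand-in for the config lookup
     }
   }
   ```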
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17822) fs.s3a.acl.default not working after S3A Audit feature added

2021-07-30 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17822?focusedWorklogId=631590&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-631590
 ]

ASF GitHub Bot logged work on HADOOP-17822:
---

Author: ASF GitHub Bot
Created on: 30/Jul/21 12:50
Start Date: 30/Jul/21 12:50
Worklog Time Spent: 10m 
  Work Description: steveloughran opened a new pull request #3249:
URL: https://github.com/apache/hadoop/pull/3249


   
   Fixes the regression caused by HADOOP-17511 by moving where the
   cannedACL properties are inited -so guaranteeing that they are valid
   before the RequestFactory is created.
   
   Adds
   
   * A unit test in TestRequestFactory to verify the ACLs are set
 on all file write operations
   * A new ITestS3ACannedACLs test which verifies that ACLs really
 do get all the way through.
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 631590)
Remaining Estimate: 0h
Time Spent: 10m

> fs.s3a.acl.default not working after S3A Audit feature added
> 
>
> Key: HADOOP-17822
> URL: https://issues.apache.org/jira/browse/HADOOP-17822
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.3.2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> After HADOOP-17511 the fs.s3a.acl.default property isn't being passed 
> through to S3 PUT/COPY requests.
> The new RequestFactory is being given the acl values from the S3A FS 
> instance, but the factory is being created before the acl settings are loaded 
> from the configuration.
> Fix, and ideally, test (if the getXAttr lets us see this now)



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17682) ABFS: Support FileStatus input to OpenFileWithOptions() via OpenFileParameters

2021-07-30 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17682?focusedWorklogId=631583&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-631583
 ]

ASF GitHub Bot logged work on HADOOP-17682:
---

Author: ASF GitHub Bot
Created on: 30/Jul/21 12:32
Start Date: 30/Jul/21 12:32
Worklog Time Spent: 10m 
  Work Description: sumangala-patki commented on pull request #2975:
URL: https://github.com/apache/hadoop/pull/2975#issuecomment-889861923


   TEST RESULTS
   
   HNS Account Location: East US 2
   NonHNS Account Location: East US 2, Central US
   
   ```
   HNS OAuth
   
   [ERROR] Failures: 
   [ERROR]   
TestAbfsClientThrottlingAnalyzer.testManySuccessAndErrorsAndWaiting:171->fuzzyValidate:49
 The actual value 11 is not within the expected range: [5.60, 8.40].
   [ERROR] Tests run: 103, Failures: 1, Errors: 0, Skipped: 1
   [ERROR] Errors: 
   [ERROR]   ITestAzureBlobFileSystemLease.testWriteAfterBreakLease:240 » 
TestTimedOut test...
   [ERROR] Tests run: 558, Failures: 0, Errors: 1, Skipped: 98 
   [ERROR] Errors: 
   [ERROR]   
ITestAbfsFileSystemContractSecureDistCp>AbstractContractDistCpTest.testDistCpWithIterator:635
 » TestTimedOut 
   [ERROR] Tests run: 271, Failures: 0, Errors: 1, Skipped: 52
   
   AppendBlob HNS-OAuth
   
   [ERROR] Failures: 
   [ERROR]   
TestAbfsClientThrottlingAnalyzer.testManySuccessAndErrorsAndWaiting:171->fuzzyValidate:49
 The actual value 9 is not within the expected range: [5.60, 8.40].
   [ERROR] Tests run: 103, Failures: 1, Errors: 0, Skipped: 1
   [ERROR] Failures: 
   [ERROR]   
ITestAbfsStreamStatistics.testAbfsStreamOps:140->Assert.assertTrue:42->Assert.fail:89
 The actual value of 99 was not equal to the expected value
   [ERROR] Errors: 
   [ERROR]   
ITestAzureBlobFileSystemLease.testTwoWritersCreateAppendNoInfiniteLease:178->twoWriters:166
 » IO
   [ERROR] Tests run: 558, Failures: 1, Errors: 1, Skipped: 98] 
   [ERROR] Errors: 
   [ERROR]   
ITestAzureBlobFileSystemE2EScale.testWriteHeavyBytesToFileAcrossThreads:77 » 
TestTimedOut
   [ERROR]   
ITestAbfsFileSystemContractSecureDistCp>AbstractContractDistCpTest.testDistCpWithIterator:635
 » TestTimedOut 
   [ERROR] Tests run: 271, Failures: 0, Errors: 2, Skipped: 76
   
   HNS-SharedKey
   
   [WARNING] Tests run: 103, Failures: 0, Errors: 0, Skipped: 2
   [WARNING] Tests run: 558, Failures: 0, Errors: 0, Skipped: 54
   [ERROR] Errors: 
   [ERROR]   
ITestAbfsFileSystemContractDistCp>AbstractContractDistCpTest.testDistCpWithIterator:635
 » TestTimedOut
   [ERROR]   
ITestAbfsFileSystemContractSecureDistCp>AbstractContractDistCpTest.testDistCpWithIterator:635
 » TestTimedOut 
   [ERROR] Tests run: 271, Failures: 0, Errors: 2, Skipped: 40
   
   NonHNS-SharedKey
   
   [WARNING] Tests run: 103, Failures: 0, Errors: 0, Skipped: 2
   [WARNING] Tests run: 558, Failures: 0, Errors: 0, Skipped: 276
   [ERROR] Errors: 
   [ERROR]   
ITestAbfsFileSystemContractDistCp>AbstractContractDistCpTest.testDistCpWithIterator:635
 » TestTimedOut
   [ERROR]   
ITestAbfsFileSystemContractSecureDistCp>AbstractContractDistCpTest.testDistCpWithIterator:635
 » TestTimedOut
   [ERROR] Tests run: 271, Failures: 0, Errors: 2, Skipped: 40
   ```


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 631583)
Time Spent: 4.5h  (was: 4h 20m)

> ABFS: Support FileStatus input to OpenFileWithOptions() via OpenFileParameters
> --
>
> Key: HADOOP-17682
> URL: https://issues.apache.org/jira/browse/HADOOP-17682
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Reporter: Sumangala Patki
>Assignee: Sumangala Patki
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 4.5h
>  Remaining Estimate: 0h
>
> ABFS open methods require certain information (contentLength, eTag, etc.) 
> to create an InputStream for the file at the given path. This information is 
> retrieved via a GetFileStatus request to the backend.
> However, client applications may often have access to the FileStatus prior to 
> invoking the open API. Providing this FileStatus to the driver through the 
> OpenFileParameters argument of openFileWithOptions() can help avoid the call 
> to the store for the FileStatus.
> This PR adds handling for the FileStatus instance (if any) provided via the 
> OpenFileParameters argument.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [hadoop] sumangala-patki commented on pull request #2975: HADOOP-17682. ABFS: Support FileStatus input to OpenFileWithOptions() via OpenFileParameters

2021-07-30 Thread GitBox


sumangala-patki commented on pull request #2975:
URL: https://github.com/apache/hadoop/pull/2975#issuecomment-889861923


   TEST RESULTS
   
   HNS Account Location: East US 2
   NonHNS Account Location: East US 2, Central US
   
   ```
   HNS OAuth
   
   [ERROR] Failures: 
   [ERROR]   
TestAbfsClientThrottlingAnalyzer.testManySuccessAndErrorsAndWaiting:171->fuzzyValidate:49
 The actual value 11 is not within the expected range: [5.60, 8.40].
   [ERROR] Tests run: 103, Failures: 1, Errors: 0, Skipped: 1
   [ERROR] Errors: 
   [ERROR]   ITestAzureBlobFileSystemLease.testWriteAfterBreakLease:240 » 
TestTimedOut test...
   [ERROR] Tests run: 558, Failures: 0, Errors: 1, Skipped: 98 
   [ERROR] Errors: 
   [ERROR]   
ITestAbfsFileSystemContractSecureDistCp>AbstractContractDistCpTest.testDistCpWithIterator:635
 » TestTimedOut 
   [ERROR] Tests run: 271, Failures: 0, Errors: 1, Skipped: 52
   
   AppendBlob HNS-OAuth
   
   [ERROR] Failures: 
   [ERROR]   
TestAbfsClientThrottlingAnalyzer.testManySuccessAndErrorsAndWaiting:171->fuzzyValidate:49
 The actual value 9 is not within the expected range: [5.60, 8.40].
   [ERROR] Tests run: 103, Failures: 1, Errors: 0, Skipped: 1
   [ERROR] Failures: 
   [ERROR]   
ITestAbfsStreamStatistics.testAbfsStreamOps:140->Assert.assertTrue:42->Assert.fail:89
 The actual value of 99 was not equal to the expected value
   [ERROR] Errors: 
   [ERROR]   
ITestAzureBlobFileSystemLease.testTwoWritersCreateAppendNoInfiniteLease:178->twoWriters:166
 » IO
   [ERROR] Tests run: 558, Failures: 1, Errors: 1, Skipped: 98] 
   [ERROR] Errors: 
   [ERROR]   
ITestAzureBlobFileSystemE2EScale.testWriteHeavyBytesToFileAcrossThreads:77 » 
TestTimedOut
   [ERROR]   
ITestAbfsFileSystemContractSecureDistCp>AbstractContractDistCpTest.testDistCpWithIterator:635
 » TestTimedOut 
   [ERROR] Tests run: 271, Failures: 0, Errors: 2, Skipped: 76
   
   HNS-SharedKey
   
   [WARNING] Tests run: 103, Failures: 0, Errors: 0, Skipped: 2
   [WARNING] Tests run: 558, Failures: 0, Errors: 0, Skipped: 54
   [ERROR] Errors: 
   [ERROR]   
ITestAbfsFileSystemContractDistCp>AbstractContractDistCpTest.testDistCpWithIterator:635
 » TestTimedOut
   [ERROR]   
ITestAbfsFileSystemContractSecureDistCp>AbstractContractDistCpTest.testDistCpWithIterator:635
 » TestTimedOut 
   [ERROR] Tests run: 271, Failures: 0, Errors: 2, Skipped: 40
   
   NonHNS-SharedKey
   
   [WARNING] Tests run: 103, Failures: 0, Errors: 0, Skipped: 2
   [WARNING] Tests run: 558, Failures: 0, Errors: 0, Skipped: 276
   [ERROR] Errors: 
   [ERROR]   
ITestAbfsFileSystemContractDistCp>AbstractContractDistCpTest.testDistCpWithIterator:635
 » TestTimedOut
   [ERROR]   
ITestAbfsFileSystemContractSecureDistCp>AbstractContractDistCpTest.testDistCpWithIterator:635
 » TestTimedOut
   [ERROR] Tests run: 271, Failures: 0, Errors: 2, Skipped: 40
   ```


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] tomicooler commented on a change in pull request #3220: YARN-10355. Refactor NM ContainerLaunch.java#orderEnvByDependencies

2021-07-30 Thread GitBox


tomicooler commented on a change in pull request #3220:
URL: https://github.com/apache/hadoop/pull/3220#discussion_r679887150



##
File path: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/launcher/ContainerLaunch.java
##
@@ -1449,6 +1451,9 @@ public void setExitOnFailure() {
   private static final class WindowsShellScriptBuilder
   extends ShellScriptBuilder {
 
+private static final String PATTERN_VARIABLE = "%(.*?)%";
+private Pattern pattern;

Review comment:
   Done. I also refactored the split on ':' to use a pattern.

##
File path: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/launcher/ContainerLaunch.java
##
@@ -1539,34 +1544,17 @@ public void listDebugInformation(Path output) throws 
IOException {
   if (envVal == null || envVal.isEmpty()) {
 return Collections.emptySet();
   }
+  if (pattern == null) {
+pattern = Pattern.compile(PATTERN_VARIABLE);
+  }

Review comment:
   Done.

##
File path: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/launcher/ContainerLaunch.java
##
@@ -1539,34 +1544,17 @@ public void listDebugInformation(Path output) throws 
IOException {
   if (envVal == null || envVal.isEmpty()) {
 return Collections.emptySet();
   }
+  if (pattern == null) {
+pattern = Pattern.compile(PATTERN_VARIABLE);
+  }
+  Matcher matcher = pattern.matcher(envVal);
   final Set<String> deps = new HashSet<>();
-  final int len = envVal.length();
-  int i = 0;
-  while (i < len) {
-i = envVal.indexOf('%', i); // find beginning of variable
-if (i < 0 || i == (len - 1)) {
-  break;
-}
-i++;
-// 3 cases: %var%, %var:...% or %%
-final int j = envVal.indexOf('%', i); // find end of variable
-if (j == i) {
-  // %% case, just skip it
-  i++;
-  continue;
-}
-if (j < 0) {
-  break; // even %var:...% syntax ends with a %, so j cannot be 
negative
-}
-final int k = envVal.indexOf(':', i);
-if (k >= 0 && k < j) {
-  // %var:...% syntax
-  deps.add(envVal.substring(i, k));
-} else {
-  // %var% syntax
-  deps.add(envVal.substring(i, j));
+  while (matcher.find()) {
+String match = matcher.group(1);
+if (match.length() > 0) {
+  String[] split = match.split(":", 2);
+  deps.add(split[0]);

Review comment:
   Done, added comments.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] tomicooler commented on pull request #3220: YARN-10355. Refactor NM ContainerLaunch.java#orderEnvByDependencies

2021-07-30 Thread GitBox


tomicooler commented on pull request #3220:
URL: https://github.com/apache/hadoop/pull/3220#issuecomment-889861061


   Force-pushed the branch.
- Created a separate Jira for the unit test refactor (YARN-10874).
- Extracting the variable names from the bash version is not feasible with a 
regexp, so that change is abandoned.

   NOTE: the new regex-based batch variable name extraction matches the %:% 
variable and will extract the ':' as the name, whereas the original code did 
not treat the ':' character as a variable name 
(https://stackoverflow.com/questions/37973141/colon-in-batch-variable-name/37992843).
 Since the test refactor has moved to a separate pull request, I will add an 
extra test case with %:% to this patch if the other pull request is merged 
before this change.
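   
   For illustration, here is a standalone sketch of the regex-based extraction 
described above (the class and method names are invented for this example, and 
the %:% edge case from the NOTE is deliberately not modelled):
   ```java
   import java.util.HashSet;
   import java.util.Set;
   import java.util.regex.Matcher;
   import java.util.regex.Pattern;

   public class WinEnvDepsSketch {
     // Non-greedy match over %...% pairs, mirroring PATTERN_VARIABLE above.
     private static final Pattern VAR = Pattern.compile("%(.*?)%");

     static Set<String> getEnvDependencies(String envVal) {
       Set<String> deps = new HashSet<>();
       if (envVal == null || envVal.isEmpty()) {
         return deps;
       }
       Matcher m = VAR.matcher(envVal);
       while (m.find()) {
         String match = m.group(1);          // empty for the literal %% escape
         if (!match.isEmpty()) {
           deps.add(match.split(":", 2)[0]); // strip any %var:...% suffix
         }
       }
       return deps;
     }

     public static void main(String[] args) {
       // Prints HADOOP_HOME and PATH (set order is unspecified).
       System.out.println(getEnvDependencies("%HADOOP_HOME%\\bin;%PATH:~0,5%"));
     }
   }
   ```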
   
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-17823) S3A Tests to skip if S3Guard and S3-CSE are enabled.

2021-07-30 Thread Mehakmeet Singh (Jira)
Mehakmeet Singh created HADOOP-17823:


 Summary: S3A Tests to skip if S3Guard and S3-CSE are enabled.
 Key: HADOOP-17823
 URL: https://issues.apache.org/jira/browse/HADOOP-17823
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Mehakmeet Singh
Assignee: Mehakmeet Singh


Skip S3A tests when S3Guard and S3-CSE are enabled since it causes PathIOE 
otherwise.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] tomicooler opened a new pull request #3248: YARN-10874. Refactor NM ContainerLaunch#getEnvDependencies's unit tests

2021-07-30 Thread GitBox


tomicooler opened a new pull request #3248:
URL: https://github.com/apache/hadoop/pull/3248


   ## NOTICE
   
   Please create an issue in ASF JIRA before opening a pull request,
   and you need to set the title of the pull request which starts with
   the corresponding JIRA issue number. (e.g. HADOOP-X. Fix a typo in YYY.)
   For more details, please see 
https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] shuzirra merged pull request #3206: YARN-10814. Fallback to RandomSecretProvider if the secret file is empty

2021-07-30 Thread GitBox


shuzirra merged pull request #3206:
URL: https://github.com/apache/hadoop/pull/3206


   





[jira] [Created] (HADOOP-17822) fs.s3a.acl.default not working after S3A Audit feature added

2021-07-30 Thread Steve Loughran (Jira)
Steve Loughran created HADOOP-17822:
---

 Summary: fs.s3a.acl.default not working after S3A Audit feature 
added
 Key: HADOOP-17822
 URL: https://issues.apache.org/jira/browse/HADOOP-17822
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 3.3.2
Reporter: Steve Loughran
Assignee: Steve Loughran



After HADOOP-17511 the fs.s3a.acl.default property isn't being passed through 
to S3 PUT/COPY requests.

The new RequestFactory is given the ACL values from the S3A FS instance, 
but the factory is created before the ACL settings are loaded from the 
configuration.

Fix this, and ideally add a test (if getXAttr lets us see this now).
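A self-contained sketch of the ordering bug (the class names here are illustrative stand-ins, not the real S3A types):

{code}
import java.util.HashMap;
import java.util.Map;

public class AclOrderingDemo {
  static class RequestFactory {
    private final String cannedAcl;
    RequestFactory(String cannedAcl) { this.cannedAcl = cannedAcl; }
    String aclForPut() { return cannedAcl; }
  }

  public static void main(String[] args) {
    Map<String, String> conf = new HashMap<>();
    conf.put("fs.s3a.acl.default", "bucket-owner-full-control");

    // Buggy order: the factory captures the ACL before it is read from conf,
    // so every PUT/COPY request it builds sees a null ACL.
    String acl = null;
    RequestFactory factory = new RequestFactory(acl);
    acl = conf.get("fs.s3a.acl.default");
    System.out.println("ACL seen by factory: " + factory.aclForPut()); // prints null
  }
}
{code}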






[GitHub] [hadoop] jojochuang commented on a change in pull request #3247: HDFS-16146. All three replicas are lost due to not adding a new DataN…

2021-07-30 Thread GitBox


jojochuang commented on a change in pull request #3247:
URL: https://github.com/apache/hadoop/pull/3247#discussion_r679795972



##
File path: 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DataStreamer.java
##
@@ -1386,19 +1386,11 @@ private void addDatanode2ExistingPipeline() throws 
IOException {
* Case 2: Failure in Streaming
* - Append/Create:
*+ transfer RBW
-   *
-   * Case 3: Failure in Close
-   * - Append/Create:
-   *+ no transfer, let NameNode replicates the block.
*/
 if (!isAppend && lastAckedSeqno < 0
 && stage == BlockConstructionStage.PIPELINE_SETUP_CREATE) {
   //no data have been written
   return;
-} else if (stage == BlockConstructionStage.PIPELINE_CLOSE

Review comment:
   The stage is in the PIPELINE_CLOSE state when the packet is the last in the block and all transferred data has been acknowledged.
   
   With this change, the re-replication that was previously triggered periodically by the NameNode block manager is moved to the client side.
   
   It looks fine to me, but HDFS re-replication logic is complex. We should validate it with a test.







[GitHub] [hadoop] shuzirra commented on a change in pull request #3220: YARN-10355. Refactor NM ContainerLaunch.java#orderEnvByDependencies

2021-07-30 Thread GitBox

shuzirra commented on a change in pull request #3220:
URL: https://github.com/apache/hadoop/pull/3220#discussion_r676708584



##
File path: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/launcher/ContainerLaunch.java
##
@@ -1277,6 +1279,36 @@ void resolve() {
   }
 
   private static final class UnixShellScriptBuilder extends ShellScriptBuilder 
{
+// Visualization for the regex: https://regex101.com/r/Q7DxG1/1
+private static final String VARIABLE_NAME = "variableName";
+private static final String SHELL_EXPANSION = "shellExpansion";
+private static final String DOUBLE_QUOTED = "doubleQuoted";
+private static final String SINGLE_QUOTED = "singleQuoted";
+
+private static final String PATTERN_VARIABLE_NAME =
+"\\$(?<" + VARIABLE_NAME + ">[_a-zA-Z][_a-zA-Z0-9]*)";
+private static final String PATTERN_NO_ESCAPE = "(?<!\\\\)";
+private static final String PATTERN_SHELL_EXPANSION =
+"\\$\\{(?<" + SHELL_EXPANSION + ">[_a-zA-Z#][_a-zA-Z0-9\\[\\]\\*\\:\\-\\$#=\\?+/%^,@]*)\\}";
+private static final String PATTERN_DOUBLE_QUOTED =
+"\"(?<" + DOUBLE_QUOTED + ">.*?[^\\\\])\"";
+private static final String PATTERN_SINGLE_QUOTED =
+"'(?<" + SINGLE_QUOTED + ">.*?[^\\\\])'";
+private static final String PATTERN_SPLIT_SHELL_EXPANSION =
+"[^_a-zA-Z0-9]";

Review comment:
   Please add java doc comments where you explain what the given regex 
matches. Also examples can help. It's quite hard to follow what's going on 
here, and if we are making a refactor for readability then let's make it as 
readable as possible.

##
File path: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/launcher/ContainerLaunch.java
##
@@ -1277,6 +1279,36 @@ void resolve() {
   }
 
   private static final class UnixShellScriptBuilder extends ShellScriptBuilder 
{
+// Visualization for the regex: https://regex101.com/r/Q7DxG1/1
+private static final String VARIABLE_NAME = "variableName";
+private static final String SHELL_EXPANSION = "shellExpansion";
+private static final String DOUBLE_QUOTED = "doubleQuoted";
+private static final String SINGLE_QUOTED = "singleQuoted";
+
+private static final String PATTERN_VARIABLE_NAME =
+"\\$(?<" + VARIABLE_NAME + ">[_a-zA-Z][_a-zA-Z0-9]*)";
+private static final String PATTERN_NO_ESCAPE = "(?<!\\\\)";
+private static final String PATTERN_SHELL_EXPANSION =
+"\\$\\{(?<" + SHELL_EXPANSION + ">[_a-zA-Z#][_a-zA-Z0-9\\[\\]\\*\\:\\-\\$#=\\?+/%^,@]*)\\}";
+private static final String PATTERN_DOUBLE_QUOTED =
+"\"(?<" + DOUBLE_QUOTED + ">.*?[^\\\\])\"";
+private static final String PATTERN_SINGLE_QUOTED =
+"'(?<" + SINGLE_QUOTED + ">.*?[^\\\\])'";
+private static final String PATTERN_SPLIT_SHELL_EXPANSION =
+"[^_a-zA-Z0-9]";
+
+private final Pattern mainPattern = Pattern
+.compile("((" + PATTERN_NO_ESCAPE + PATTERN_VARIABLE_NAME + ")|(" +
+PATTERN_NO_ESCAPE + PATTERN_SHELL_EXPANSION + ")|(" +
+PATTERN_DOUBLE_QUOTED + "))");

Review comment:
   A few examples here can also help, just to see what this regexp is 
looking for.
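   The kind of examples being asked for, using a simplified reconstruction of the pattern rather than the exact constants from the patch (the group names below are assumptions):

   ```java
   import java.util.regex.Matcher;
   import java.util.regex.Pattern;

   public class EnvDependencyRegexDemo {
     public static void main(String[] args) {
       // "$VAR" or "${VAR}" not preceded by a backslash; simplified sketch only.
       Pattern p = Pattern.compile(
           "(?<!\\\\)\\$(\\{(?<braced>[_a-zA-Z][_a-zA-Z0-9]*)\\}"
               + "|(?<plain>[_a-zA-Z][_a-zA-Z0-9]*))");
       Matcher m = p.matcher(
           "export PATH=\"$JAVA_HOME/bin:${HADOOP_HOME}/bin:\\$NOT_A_DEP\"");
       while (m.find()) {
         String name = m.group("braced") != null ? m.group("braced") : m.group("plain");
         // Prints JAVA_HOME and HADOOP_HOME; the escaped \$NOT_A_DEP is skipped.
         System.out.println("dependency: " + name);
       }
     }
   }
   ```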

##
File path: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/launcher/ContainerLaunch.java
##
@@ -1449,6 +1451,9 @@ public void setExitOnFailure() {
   private static final class WindowsShellScriptBuilder
   extends ShellScriptBuilder {
 
+private static final String PATTERN_VARIABLE = "%(.*?)%";
+private Pattern pattern;

Review comment:
   Can be static and initialized here, so we don't have to do the null check 
each time in getEnvDependencies.
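   Roughly what the suggestion amounts to (a sketch, not the exact patch): compile the pattern once as a constant so `getEnvDependencies` needs no lazy null check:

   ```java
   import java.util.regex.Matcher;
   import java.util.regex.Pattern;

   final class WindowsEnvDepsSketch {
     private static final Pattern VARIABLE = Pattern.compile("%(.*?)%");

     static void printDeps(String envVal) {
       Matcher m = VARIABLE.matcher(envVal);
       while (m.find()) {
         System.out.println("dependency: " + m.group(1));
       }
     }

     public static void main(String[] args) {
       // Prints HADOOP_HOME and PATH.
       printDeps("%HADOOP_HOME%\\bin;%PATH%");
     }
   }
   ```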

##
File path: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/launcher/ContainerLaunch.java
##
@@ -1382,73 +1414,68 @@ public void setExitOnFailure() {
 return Collections.emptySet();
   }
  final Set<String> deps = new HashSet<>();
-  // env/whitelistedEnv dump values inside double quotes
-  boolean inDoubleQuotes = true;
-  char c;
-  int i = 0;
-  final int len = envVal.length();
-  while (i < len) {
-c = envVal.charAt(i);
-if (c == '"') {
-  inDoubleQuotes = !inDoubleQuotes;
-} else if (c == '\'' && !inDoubleQuotes) {

[jira] [Commented] (HADOOP-12670) Fix TestNetUtils and TestSecurityUtil when localhost is ipv6 only

2021-07-30 Thread Hemanth Boyina (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-12670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17390434#comment-17390434
 ] 

Hemanth Boyina commented on HADOOP-12670:
-

Reopened and uploaded a patch against trunk, as the current patch has conflicts. 
Please see 
https://issues.apache.org/jira/browse/HADOOP-11890?focusedCommentId=17379845&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-17379845
 for more details.

> Fix TestNetUtils and TestSecurityUtil when localhost is ipv6 only
> -
>
> Key: HADOOP-12670
> URL: https://issues.apache.org/jira/browse/HADOOP-12670
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: net
>Affects Versions: HADOOP-11890
>Reporter: Elliott Neil Clark
>Assignee: Elliott Neil Clark
>Priority: Major
> Attachments: HADOOP-12670-HADOOP-11890.0.patch, 
> HADOOP-12670-HADOOP-11890.2.patch, HADOOP-12670-HADOOP-11890.3.patch, 
> HADOOP-12670-HADOOP-17800.001.patch
>
>
> {code}
>   TestSecurityUtil.testBuildTokenServiceSockAddr:165 
> expected:<[127.0.0.]1:123> but was:<[0:0:0:0:0:0:0:]1:123>
>   TestSecurityUtil.testBuildDTServiceName:148 expected:<[127.0.0.]1:123> but 
> was:<[0:0:0:0:0:0:0:]1:123>
>   
> TestSecurityUtil.testSocketAddrWithName:326->verifyServiceAddr:304->verifyAddress:284->verifyValues:251
>  expected:<[127.0.0.]1> but was:<[0:0:0:0:0:0:0:]1>
>   
> TestSecurityUtil.testSocketAddrWithIP:333->verifyServiceAddr:304->verifyAddress:284->verifyValues:251
>  expected:<[127.0.0.]1> but was:<[0:0:0:0:0:0:0:]1>
>   
> TestSecurityUtil.testSocketAddrWithNameToStaticName:340->verifyServiceAddr:304->verifyAddress:284->verifyValues:251
>  expected:<[127.0.0.]1> but was:<[0:0:0:0:0:0:0:]1>
>   TestNetUtils.testNormalizeHostName:639 expected:<[0:0:0:0:0:0:0:]1> but 
> was:<[127.0.0.]1>
>   TestNetUtils.testResolverLoopback:533->verifyInetAddress:496 
> expected:<[127.0.0.]1> but was:<[0:0:0:0:0:0:0:]1>
> {code}
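For context, the divergence above comes down to what "localhost" resolves to; a quick check (plain Java, no Hadoop dependencies):

{code}
import java.net.InetAddress;

public class LoopbackCheck {
  public static void main(String[] args) throws Exception {
    // On a dual-stack host this typically prints 127.0.0.1; on an IPv6-only
    // host (or with -Djava.net.preferIPv6Addresses=true) it can print
    // 0:0:0:0:0:0:0:1, which is what the expected/actual mismatches above show.
    InetAddress addr = InetAddress.getByName("localhost");
    System.out.println(addr.getHostAddress());
  }
}
{code}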






[jira] [Updated] (HADOOP-12670) Fix TestNetUtils and TestSecurityUtil when localhost is ipv6 only

2021-07-30 Thread Hemanth Boyina (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-12670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hemanth Boyina updated HADOOP-12670:

Attachment: HADOOP-12670-HADOOP-17800.001.patch
Status: Patch Available  (was: Reopened)

> Fix TestNetUtils and TestSecurityUtil when localhost is ipv6 only
> -
>
> Key: HADOOP-12670
> URL: https://issues.apache.org/jira/browse/HADOOP-12670
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: net
>Affects Versions: HADOOP-11890
>Reporter: Elliott Neil Clark
>Assignee: Elliott Neil Clark
>Priority: Major
> Attachments: HADOOP-12670-HADOOP-11890.0.patch, 
> HADOOP-12670-HADOOP-11890.2.patch, HADOOP-12670-HADOOP-11890.3.patch, 
> HADOOP-12670-HADOOP-17800.001.patch
>
>
> {code}
>   TestSecurityUtil.testBuildTokenServiceSockAddr:165 
> expected:<[127.0.0.]1:123> but was:<[0:0:0:0:0:0:0:]1:123>
>   TestSecurityUtil.testBuildDTServiceName:148 expected:<[127.0.0.]1:123> but 
> was:<[0:0:0:0:0:0:0:]1:123>
>   
> TestSecurityUtil.testSocketAddrWithName:326->verifyServiceAddr:304->verifyAddress:284->verifyValues:251
>  expected:<[127.0.0.]1> but was:<[0:0:0:0:0:0:0:]1>
>   
> TestSecurityUtil.testSocketAddrWithIP:333->verifyServiceAddr:304->verifyAddress:284->verifyValues:251
>  expected:<[127.0.0.]1> but was:<[0:0:0:0:0:0:0:]1>
>   
> TestSecurityUtil.testSocketAddrWithNameToStaticName:340->verifyServiceAddr:304->verifyAddress:284->verifyValues:251
>  expected:<[127.0.0.]1> but was:<[0:0:0:0:0:0:0:]1>
>   TestNetUtils.testNormalizeHostName:639 expected:<[0:0:0:0:0:0:0:]1> but 
> was:<[127.0.0.]1>
>   TestNetUtils.testResolverLoopback:533->verifyInetAddress:496 
> expected:<[127.0.0.]1> but was:<[0:0:0:0:0:0:0:]1>
> {code}






[jira] [Reopened] (HADOOP-12670) Fix TestNetUtils and TestSecurityUtil when localhost is ipv6 only

2021-07-30 Thread Hemanth Boyina (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-12670?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hemanth Boyina reopened HADOOP-12670:
-

> Fix TestNetUtils and TestSecurityUtil when localhost is ipv6 only
> -
>
> Key: HADOOP-12670
> URL: https://issues.apache.org/jira/browse/HADOOP-12670
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: net
>Affects Versions: HADOOP-11890
>Reporter: Elliott Neil Clark
>Assignee: Elliott Neil Clark
>Priority: Major
> Attachments: HADOOP-12670-HADOOP-11890.0.patch, 
> HADOOP-12670-HADOOP-11890.2.patch, HADOOP-12670-HADOOP-11890.3.patch, 
> HADOOP-12670-HADOOP-17800.001.patch
>
>
> {code}
>   TestSecurityUtil.testBuildTokenServiceSockAddr:165 
> expected:<[127.0.0.]1:123> but was:<[0:0:0:0:0:0:0:]1:123>
>   TestSecurityUtil.testBuildDTServiceName:148 expected:<[127.0.0.]1:123> but 
> was:<[0:0:0:0:0:0:0:]1:123>
>   
> TestSecurityUtil.testSocketAddrWithName:326->verifyServiceAddr:304->verifyAddress:284->verifyValues:251
>  expected:<[127.0.0.]1> but was:<[0:0:0:0:0:0:0:]1>
>   
> TestSecurityUtil.testSocketAddrWithIP:333->verifyServiceAddr:304->verifyAddress:284->verifyValues:251
>  expected:<[127.0.0.]1> but was:<[0:0:0:0:0:0:0:]1>
>   
> TestSecurityUtil.testSocketAddrWithNameToStaticName:340->verifyServiceAddr:304->verifyAddress:284->verifyValues:251
>  expected:<[127.0.0.]1> but was:<[0:0:0:0:0:0:0:]1>
>   TestNetUtils.testNormalizeHostName:639 expected:<[0:0:0:0:0:0:0:]1> but 
> was:<[127.0.0.]1>
>   TestNetUtils.testResolverLoopback:533->verifyInetAddress:496 
> expected:<[127.0.0.]1> but was:<[0:0:0:0:0:0:0:]1>
> {code}






[GitHub] [hadoop] virajjasani commented on a change in pull request #3235: HDFS-16143. De-flake TestEditLogTailer#testStandbyTriggersLogRollsWhenTailInProgressEdits

2021-07-30 Thread GitBox


virajjasani commented on a change in pull request #3235:
URL: https://github.com/apache/hadoop/pull/3235#discussion_r679753606



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestEditLogTailer.java
##
@@ -433,15 +440,28 @@ public void 
testStandbyTriggersLogRollsWhenTailInProgressEdits()
 NameNodeAdapter.mkdirs(active, getDirPath(i),
 new PermissionStatus("test", "test",
 new FsPermission((short)00755)), true);
+// reset lastRollTimeMs in EditLogTailer.
+active.getNamesystem().getEditLogTailer().resetLastRollTimeMs();

Review comment:
   Thanks for taking a look @jojochuang. 
   `EditLogTailer` has a thread that keeps running to identify the right time 
to ask the active NameNode to roll the edit logs.
   ```
   private void doWork() {
 long currentSleepTimeMs = sleepTimeMs;
 while (shouldRun) {
   long editsTailed  = 0;
   try {
 // There's no point in triggering a log roll if the Standby hasn't
 // read any more transactions since the last time a roll was
 // triggered.
 boolean triggeredLogRoll = false;
 if (tooLongSinceLastLoad() &&
 lastRollTriggerTxId < lastLoadedTxnId) {
   triggerActiveLogRoll();
   triggeredLogRoll = true;
 }
   ...
   ...
   ```
   
   What happens with this test is that while we create new dirs in this for 
loop, this tailer thread keeps checking and intermittently triggers a log roll 
by making RPC calls to the active NameNode. This makes the test flaky, because 
the test expects the standby NameNode's last applied txn id to be less than 
the active NameNode's last written txn id within a bounded time window. How 
long EditLogTailer's thread waits before triggering a log roll depends on 
`lastRollTimeMs`.
   In the above code, tooLongSinceLastLoad() refers to:
   ```
 /**
  * @return true if the configured log roll period has elapsed.
  */
 private boolean tooLongSinceLastLoad() {
   return logRollPeriodMs >= 0 && 
 (monotonicNow() - lastRollTimeMs) > logRollPeriodMs;
 }
   ```
   Hence, a log roll is not triggered until `logRollPeriodMs` has elapsed since 
`lastRollTimeMs`. However, we have no control over how long the mkdir calls in 
this for loop take, and that period can easily elapse in the meantime, so the 
test is flaky: when we expect the standby NameNode's txnId to be less than that 
of the active NameNode, it is not, because the log has already been rolled by 
the above thread in `EditLogTailer`.
   
   Hence, it is important for this test to keep resetting `lastRollTimeMs` 
while the mkdir calls are executing, so that `tooLongSinceLastLoad()` cannot 
succeed until we want it to.
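   A minimal sketch of what the reset does, assuming `lastRollTimeMs` is the monotonic timestamp consulted by `tooLongSinceLastLoad()` (the method body below is an assumption for illustration, not a quote from the patch):

   ```java
   import java.util.concurrent.TimeUnit;

   class LogRollTimerSketch {
     private final long logRollPeriodMs = TimeUnit.MINUTES.toMillis(2);
     private volatile long lastRollTimeMs = monotonicNow();

     private static long monotonicNow() {
       return TimeUnit.NANOSECONDS.toMillis(System.nanoTime());
     }

     boolean tooLongSinceLastLoad() {
       return logRollPeriodMs >= 0
           && (monotonicNow() - lastRollTimeMs) > logRollPeriodMs;
     }

     // What the test calls between mkdir operations: push the deadline forward
     // so the tailer thread cannot trigger a roll while the test is still writing.
     void resetLastRollTimeMs() {
       lastRollTimeMs = monotonicNow();
     }
   }
   ```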







[GitHub] [hadoop] jojochuang commented on a change in pull request #3235: HDFS-16143. De-flake TestEditLogTailer#testStandbyTriggersLogRollsWhenTailInProgressEdits

2021-07-30 Thread GitBox


jojochuang commented on a change in pull request #3235:
URL: https://github.com/apache/hadoop/pull/3235#discussion_r679742430



##
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/namenode/ha/TestEditLogTailer.java
##
@@ -433,15 +440,28 @@ public void 
testStandbyTriggersLogRollsWhenTailInProgressEdits()
 NameNodeAdapter.mkdirs(active, getDirPath(i),
 new PermissionStatus("test", "test",
 new FsPermission((short)00755)), true);
+// reset lastRollTimeMs in EditLogTailer.
+active.getNamesystem().getEditLogTailer().resetLastRollTimeMs();

Review comment:
   mind to explain when this is needed?







[GitHub] [hadoop-site] jojochuang commented on a change in pull request #26: HADOOP-17821. Update link of Ozone on Hadoop Website

2021-07-30 Thread GitBox


jojochuang commented on a change in pull request #26:
URL: https://github.com/apache/hadoop-site/pull/26#discussion_r679679922



##
File path: src/modules.md
##
@@ -20,4 +20,4 @@ The project includes these modules:
   - __Hadoop Distributed File System (HDFS™)__: A distributed file system that 
provides high-throughput access to application data.
   - __Hadoop YARN__: A framework for job scheduling and cluster resource 
management.
   - __Hadoop MapReduce__: A YARN-based system for parallel processing of large 
data sets.
-  - __[Hadoop Ozone](https://hadoop.apache.org/ozone/)__: An object store for 
Hadoop.
+  - __[Hadoop Ozone](https://ozone.apache.org)__: An object store for Hadoop.

Review comment:
   Should we move Ozone to the related projects section?







[jira] [Work logged] (HADOOP-17821) Update link of Ozone on Hadoop Website

2021-07-30 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17821?focusedWorklogId=631482&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-631482
 ]

ASF GitHub Bot logged work on HADOOP-17821:
---

Author: ASF GitHub Bot
Created on: 30/Jul/21 06:27
Start Date: 30/Jul/21 06:27
Worklog Time Spent: 10m 
  Work Description: jojochuang commented on a change in pull request #26:
URL: https://github.com/apache/hadoop-site/pull/26#discussion_r679679922



##
File path: src/modules.md
##
@@ -20,4 +20,4 @@ The project includes these modules:
   - __Hadoop Distributed File System (HDFS™)__: A distributed file system that 
provides high-throughput access to application data.
   - __Hadoop YARN__: A framework for job scheduling and cluster resource 
management.
   - __Hadoop MapReduce__: A YARN-based system for parallel processing of large 
data sets.
-  - __[Hadoop Ozone](https://hadoop.apache.org/ozone/)__: An object store for 
Hadoop.
+  - __[Hadoop Ozone](https://ozone.apache.org)__: An object store for Hadoop.

Review comment:
   Should we move Ozone to the related projects section?






Issue Time Tracking
---

Worklog Id: (was: 631482)
Time Spent: 20m  (was: 10m)

> Update link of Ozone on Hadoop Website
> --
>
> Key: HADOOP-17821
> URL: https://issues.apache.org/jira/browse/HADOOP-17821
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Yi-Sheng Lien
>Assignee: Yi-Sheng Lien
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Hi all, since Ozone was spun off as a TLP, it now has its own website.
> Currently, in the Modules section of the Hadoop [website|https://hadoop.apache.org/], 
> the Ozone link points to the old page.
> IMHO there are two ways to fix it:
> 1. Update the link to the new page.
> 2. Move Ozone to the Related projects section of the Hadoop website.
> Please feel free to give me some feedback, thanks.


