[jira] [Work logged] (HADOOP-17288) Use shaded guava from thirdparty

2020-12-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17288?focusedWorklogId=521557&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-521557
 ]

ASF GitHub Bot logged work on HADOOP-17288:
---

Author: ASF GitHub Bot
Created on: 08/Dec/20 06:21
Start Date: 08/Dec/20 06:21
Worklog Time Spent: 10m 
  Work Description: ayushtkn commented on pull request #2505:
URL: https://github.com/apache/hadoop/pull/2505#issuecomment-740406872


   @saintstack Thanks for the review.
   To be precise, I didn't change anything in the pom.xml files. I just removed 
the Java code changes from the trunk patch, applied it to branch-3.3 (luckily 
with no conflicts), and then compiled so that the Java code was regenerated 
against branch-3.3.
   
   There is a javadoc build failure for the aws module, but it is present in 
branch-3.3 without my patch as well.
   The test failures don't look related.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 521557)
Time Spent: 4h 40m  (was: 4.5h)

> Use shaded guava from thirdparty
> 
>
> Key: HADOOP-17288
> URL: https://issues.apache.org/jira/browse/HADOOP-17288
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 4h 40m
>  Remaining Estimate: 0h
>
> Use the shaded version of guava in hadoop-thirdparty
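
A minimal illustration of the kind of import rewrite this change implies, assuming the
hadoop-shaded-guava artifact from hadoop-thirdparty is on the classpath; the class name
ShadedGuavaExample is ours, not part of the patch:

{code:java}
// Before: Guava classes imported directly.
// import com.google.common.base.Preconditions;

// After: the same classes, relocated into hadoop-thirdparty's shaded package.
import org.apache.hadoop.thirdparty.com.google.common.base.Preconditions;

public class ShadedGuavaExample {
  public static void main(String[] args) {
    Preconditions.checkArgument(args.length > 0, "expected at least one argument");
  }
}
{code}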



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ayushtkn commented on pull request #2505: HADOOP-17288. Use shaded guava from thirdparty. (branch-3.3)

2020-12-07 Thread GitBox


ayushtkn commented on pull request #2505:
URL: https://github.com/apache/hadoop/pull/2505#issuecomment-740406872


   @saintstack Thanks for the review.
   To be precise, I didn't change anything in the pom.xml files. I just removed 
the Java code changes from the trunk patch, applied it to branch-3.3 (luckily 
with no conflicts), and then compiled so that the Java code was regenerated 
against branch-3.3.
   
   There is a javadoc build failure for the aws module, but it is present in 
branch-3.3 without my patch as well.
   The test failures don't look related.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] iwasakims commented on pull request #2528: HDFS-15716. WaitforReplication in TestUpgradeDomainBlockPlacementPolicy

2020-12-07 Thread GitBox


iwasakims commented on pull request #2528:
URL: https://github.com/apache/hadoop/pull/2528#issuecomment-740400919


   > On branch-2.10, this was fixed by waiting for the replication to be 
complete.
   
   Is this done outside TestUpgradeDomainBlockPlacementPolicy? Your patch 
looks applicable to branch-2.10 too. Since branch-2.10 still supports Java 7, 
cherry-picking would be easy if we avoid the lambda here. I don't think the 
lambda improves readability here anyway. @amahussein 
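
   A minimal sketch of the two equivalent forms being discussed, assuming the 
test waits via GenericTestUtils.waitFor; getReplicationCount and 
EXPECTED_REPLICATION are hypothetical stand-ins, and on branch-2.10 the 
Supplier type would come from Guava rather than java.util.function:

   ```java
   import java.util.concurrent.TimeoutException;
   import java.util.function.Supplier;
   import org.apache.hadoop.test.GenericTestUtils;

   class ReplicationWaitSketch {
     static final int EXPECTED_REPLICATION = 3;

     // Hypothetical stand-in for however the test counts current replicas.
     static int getReplicationCount() { return 3; }

     // Lambda form: fine on trunk and branch-3.x, which build with Java 8+.
     static void waitWithLambda() throws TimeoutException, InterruptedException {
       GenericTestUtils.waitFor(
           () -> getReplicationCount() >= EXPECTED_REPLICATION, 100, 60000);
     }

     // Java 7 form for branch-2.10: the same wait as an anonymous class.
     static void waitJava7Style() throws TimeoutException, InterruptedException {
       GenericTestUtils.waitFor(new Supplier<Boolean>() {
         @Override
         public Boolean get() {
           return getReplicationCount() >= EXPECTED_REPLICATION;
         }
       }, 100, 60000);
     }
   }
   ```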



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17288) Use shaded guava from thirdparty

2020-12-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17288?focusedWorklogId=521551&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-521551
 ]

ASF GitHub Bot logged work on HADOOP-17288:
---

Author: ASF GitHub Bot
Created on: 08/Dec/20 06:03
Start Date: 08/Dec/20 06:03
Worklog Time Spent: 10m 
  Work Description: ayushtkn commented on a change in pull request #2505:
URL: https://github.com/apache/hadoop/pull/2505#discussion_r538058287



##
File path: Jenkinsfile
##
@@ -23,7 +23,7 @@ pipeline {
 
 options {
 buildDiscarder(logRotator(numToKeepStr: '5'))
-timeout (time: 20, unit: 'HOURS')
+timeout (time: 35, unit: 'HOURS')

Review comment:
   I increased it. Luckily, we got the result this time.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 521551)
Time Spent: 4.5h  (was: 4h 20m)

> Use shaded guava from thirdparty
> 
>
> Key: HADOOP-17288
> URL: https://issues.apache.org/jira/browse/HADOOP-17288
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.4.0
>
>  Time Spent: 4.5h
>  Remaining Estimate: 0h
>
> Use the shaded version of guava in hadoop-thirdparty



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ayushtkn commented on a change in pull request #2505: HADOOP-17288. Use shaded guava from thirdparty. (branch-3.3)

2020-12-07 Thread GitBox


ayushtkn commented on a change in pull request #2505:
URL: https://github.com/apache/hadoop/pull/2505#discussion_r538058287



##
File path: Jenkinsfile
##
@@ -23,7 +23,7 @@ pipeline {
 
 options {
 buildDiscarder(logRotator(numToKeepStr: '5'))
-timeout (time: 20, unit: 'HOURS')
+timeout (time: 35, unit: 'HOURS')

Review comment:
   I increased it. Luckily, we got the result this time.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17288) Use shaded guava from thirdparty

2020-12-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17288?focusedWorklogId=521547&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-521547
 ]

ASF GitHub Bot logged work on HADOOP-17288:
---

Author: ASF GitHub Bot
Created on: 08/Dec/20 05:55
Start Date: 08/Dec/20 05:55
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2505:
URL: https://github.com/apache/hadoop/pull/2505#issuecomment-740396279


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 49s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   1m 13s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  1s |  The patch does not contain any 
@author tags.  |
   | -1 :x: |  pathlen  |   0m  0s |  The patch appears to contain 5 files with 
names longer than 240  |
   | +1 :green_heart: |  test4tests  |   0m  1s |  The patch appears to include 
413 new or modified test files.  |
   ||| _ branch-3.3 Compile Tests _ |
   | +0 :ok: |  mvndep  |   4m  1s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  28m 14s |  branch-3.3 passed  |
   | +1 :green_heart: |  compile  |  15m 40s |  branch-3.3 passed  |
   | +1 :green_heart: |  checkstyle  |  27m 21s |  branch-3.3 passed  |
   | +1 :green_heart: |  mvnsite  |  22m  7s |  branch-3.3 passed  |
   | +1 :green_heart: |  shadedclient  |  15m  0s |  branch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   5m 27s |  root in branch-3.3 failed.  |
   | +0 :ok: |  spotbugs  |   0m 46s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +0 :ok: |  findbugs  |   0m 26s |  branch/hadoop-project no findbugs 
output file (findbugsXml.xml)  |
   | -1 :x: |  findbugs  |  29m 46s |  root in branch-3.3 has 3 extant findbugs 
warnings.  |
   | +0 :ok: |  findbugs  |   0m 32s |  
branch/hadoop-client-modules/hadoop-client-minicluster no findbugs output file 
(findbugsXml.xml)  |
   | -1 :x: |  findbugs  |   0m 44s |  hadoop-cloud-storage-project/hadoop-cos 
in branch-3.3 has 1 extant findbugs warnings.  |
   | +0 :ok: |  findbugs  |   0m 36s |  
branch/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests
 no findbugs output file (findbugsXml.xml)  |
   | -1 :x: |  findbugs  |   0m 49s |  
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests
 in branch-3.3 has 2 extant findbugs warnings.  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 38s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |  59m 41s |  the patch passed  |
   | +1 :green_heart: |  compile  |  15m 29s |  the patch passed  |
   | +1 :green_heart: |  javac  |  15m 29s |  the patch passed  |
   | -0 :warning: |  checkstyle  |  27m 20s |  root: The patch generated 191 
new + 25239 unchanged - 13 fixed = 25430 total (was 25252)  |
   | +1 :green_heart: |  mvnsite  |  20m 32s |  the patch passed  |
   | +1 :green_heart: |  shellcheck  |   0m  0s |  There were no new shellcheck 
issues.  |
   | +1 :green_heart: |  shelldocs  |   0m 22s |  There were no new shelldocs 
issues.  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  xml  |   0m 49s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  15m 51s |  patch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   5m 28s |  root in the patch failed.  |
   | +0 :ok: |  findbugs  |   0m 25s |  hadoop-project has no data from 
findbugs  |
   | +0 :ok: |  findbugs  |   0m 31s |  
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests has 
no data from findbugs  |
   | +0 :ok: |  findbugs  |   0m 29s |  
hadoop-client-modules/hadoop-client-minicluster has no data from findbugs  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 555m 38s |  root in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   1m 45s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 1029m 57s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.yarn.applications.distributedshell.TestDistributedShell |
   |   | 
hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption |
   |   | hadoop.yarn.server.resourcemanager.TestRMHATimelineCollectors |
   |   | hadoop.tools.dynamometer.TestDynamometerInfra |
   |   | hadoop.hdfs.server.namenode.ha.TestHAAppend |
   |   | hadoop.hdfs.server.datanode.TestBPOfferService |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 

[GitHub] [hadoop] hadoop-yetus commented on pull request #2505: HADOOP-17288. Use shaded guava from thirdparty. (branch-3.3)

2020-12-07 Thread GitBox


hadoop-yetus commented on pull request #2505:
URL: https://github.com/apache/hadoop/pull/2505#issuecomment-740396279


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | +0 :ok: |  reexec  |   0m 49s |  Docker mode activated.  |
   ||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   1m 13s |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  1s |  The patch does not contain any 
@author tags.  |
   | -1 :x: |  pathlen  |   0m  0s |  The patch appears to contain 5 files with 
names longer than 240  |
   | +1 :green_heart: |  test4tests  |   0m  1s |  The patch appears to include 
413 new or modified test files.  |
   ||| _ branch-3.3 Compile Tests _ |
   | +0 :ok: |  mvndep  |   4m  1s |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  28m 14s |  branch-3.3 passed  |
   | +1 :green_heart: |  compile  |  15m 40s |  branch-3.3 passed  |
   | +1 :green_heart: |  checkstyle  |  27m 21s |  branch-3.3 passed  |
   | +1 :green_heart: |  mvnsite  |  22m  7s |  branch-3.3 passed  |
   | +1 :green_heart: |  shadedclient  |  15m  0s |  branch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   5m 27s |  root in branch-3.3 failed.  |
   | +0 :ok: |  spotbugs  |   0m 46s |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +0 :ok: |  findbugs  |   0m 26s |  branch/hadoop-project no findbugs 
output file (findbugsXml.xml)  |
   | -1 :x: |  findbugs  |  29m 46s |  root in branch-3.3 has 3 extant findbugs 
warnings.  |
   | +0 :ok: |  findbugs  |   0m 32s |  
branch/hadoop-client-modules/hadoop-client-minicluster no findbugs output file 
(findbugsXml.xml)  |
   | -1 :x: |  findbugs  |   0m 44s |  hadoop-cloud-storage-project/hadoop-cos 
in branch-3.3 has 1 extant findbugs warnings.  |
   | +0 :ok: |  findbugs  |   0m 36s |  
branch/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests
 no findbugs output file (findbugsXml.xml)  |
   | -1 :x: |  findbugs  |   0m 49s |  
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-timelineservice-hbase-tests
 in branch-3.3 has 2 extant findbugs warnings.  |
   ||| _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 38s |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |  59m 41s |  the patch passed  |
   | +1 :green_heart: |  compile  |  15m 29s |  the patch passed  |
   | +1 :green_heart: |  javac  |  15m 29s |  the patch passed  |
   | -0 :warning: |  checkstyle  |  27m 20s |  root: The patch generated 191 
new + 25239 unchanged - 13 fixed = 25430 total (was 25252)  |
   | +1 :green_heart: |  mvnsite  |  20m 32s |  the patch passed  |
   | +1 :green_heart: |  shellcheck  |   0m  0s |  There were no new shellcheck 
issues.  |
   | +1 :green_heart: |  shelldocs  |   0m 22s |  There were no new shelldocs 
issues.  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  The patch has no whitespace 
issues.  |
   | +1 :green_heart: |  xml  |   0m 49s |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  15m 51s |  patch has no errors when 
building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   5m 28s |  root in the patch failed.  |
   | +0 :ok: |  findbugs  |   0m 25s |  hadoop-project has no data from 
findbugs  |
   | +0 :ok: |  findbugs  |   0m 31s |  
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-tests has 
no data from findbugs  |
   | +0 :ok: |  findbugs  |   0m 29s |  
hadoop-client-modules/hadoop-client-minicluster has no data from findbugs  |
   ||| _ Other Tests _ |
   | -1 :x: |  unit  | 555m 38s |  root in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   1m 45s |  The patch does not generate 
ASF License warnings.  |
   |  |   | 1029m 57s |   |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.yarn.applications.distributedshell.TestDistributedShell |
   |   | 
hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption |
   |   | hadoop.yarn.server.resourcemanager.TestRMHATimelineCollectors |
   |   | hadoop.tools.dynamometer.TestDynamometerInfra |
   |   | hadoop.hdfs.server.namenode.ha.TestHAAppend |
   |   | hadoop.hdfs.server.datanode.TestBPOfferService |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2505/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2505 |
   | Optional Tests | dupname asflicense shellcheck shelldocs compile javac 
javadoc mvninstall mvnsite unit shadedclient xml findbugs checkstyle |
   | uname | Linux 948f20ecfece 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | 

[jira] [Updated] (HADOOP-17389) KMS should log full UGI principal

2020-12-07 Thread Xiaoqiao He (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17389?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoqiao He updated HADOOP-17389:
-
Fix Version/s: 3.2.2

cherry-pick to branch-3.2.2

> KMS should log full UGI principal
> -
>
> Key: HADOOP-17389
> URL: https://issues.apache.org/jira/browse/HADOOP-17389
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.2.2, 3.3.1, 3.4.0, 3.1.5, 3.2.3
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> [~daryn] reported that the kms-audit log only logs the short username:
> {{OK[op=GENERATE_EEK, key=key1, user=hdfs, accessCount=4206, 
> interval=10427ms]}}
> In this example, it's impossible to tell which NN(s) requested EDEKs when 
> they are all lumped together.
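
For reference, the short/full distinction described above corresponds to the two
accessors on UserGroupInformation; a minimal sketch, where the principal shown in the
comments is a made-up example:

{code:java}
import java.io.IOException;
import org.apache.hadoop.security.UserGroupInformation;

public class UgiNameSketch {
  public static void main(String[] args) throws IOException {
    UserGroupInformation ugi = UserGroupInformation.getCurrentUser();
    // Short name: what the kms-audit log records today, e.g. "hdfs".
    System.out.println("short = " + ugi.getShortUserName());
    // Full name: includes the Kerberos principal, e.g.
    // "hdfs/nn1.example.com@EXAMPLE.COM", which is what distinguishes the NameNodes.
    System.out.println("full  = " + ugi.getUserName());
  }
}
{code}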



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] tangzhankun commented on pull request #2494: YARN-10380: Import logic of multi-node allocation in CapacityScheduler

2020-12-07 Thread GitBox


tangzhankun commented on pull request #2494:
URL: https://github.com/apache/hadoop/pull/2494#issuecomment-740361831


   @qizhu-lucas Thanks a lot! I'll merge it if there are no more comments. @jiwq 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13571) ServerSocketUtil.getPort() should use loopback address, not 0.0.0.0

2020-12-07 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-13571?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17245654#comment-17245654
 ] 

Hadoop QA commented on HADOOP-13571:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime ||  Logfile || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
33s{color} | {color:blue}{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} || ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green}{color} | {color:green} No case conflicting files 
found. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green}{color} | {color:green} The patch does not contain any 
@author tags. {color} |
| {color:green}+1{color} | {color:green} {color} | {color:green}  0m  0s{color} 
| {color:green}test4tests{color} | {color:green} The patch appears to include 1 
new or modified test files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} || ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 35m 
54s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 22m 
11s{color} | {color:green}{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m  
9s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
46s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
26s{color} | {color:green}{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
19m 14s{color} | {color:green}{color} | {color:green} branch has no errors when 
building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green}{color} | {color:green} trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
25s{color} | {color:green}{color} | {color:green} trunk passed with JDK Private 
Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 {color} |
| {color:blue}0{color} | {color:blue} spotbugs {color} | {color:blue}  2m 
19s{color} | {color:blue}{color} | {color:blue} Used deprecated FindBugs 
config; considering switching to SpotBugs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
17s{color} | {color:green}{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} || ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
53s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 21m  
2s{color} | {color:green}{color} | {color:green} the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 21m  
2s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 18m 
17s{color} | {color:green}{color} | {color:green} the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 18m 
17s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
44s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
25s{color} | {color:green}{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green}{color} | {color:green} The patch has no whitespace 
issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m  3s{color} | {color:green}{color} | {color:green} patch has no errors when 
building and testing our client artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green}{color} | {color:green} the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
29s{color} | {color:green}{color} | 

[GitHub] [hadoop] hadoop-yetus commented on pull request #2529: HDFS-15717. Improve fsck logging.

2020-12-07 Thread GitBox


hadoop-yetus commented on pull request #2529:
URL: https://github.com/apache/hadoop/pull/2529#issuecomment-740349080


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 30s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  32m 34s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 20s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  compile  |   1m 12s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 54s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 20s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  17m 27s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 56s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   1m 29s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +0 :ok: |  spotbugs  |   3m  5s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m  1s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 15s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 10s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javac  |   1m 10s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  6s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  javac  |   1m  6s |  |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 44s |  |  
hadoop-hdfs-project/hadoop-hdfs: The patch generated 0 new + 270 unchanged - 1 
fixed = 270 total (was 271)  |
   | +1 :green_heart: |  mvnsite  |   1m 12s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  15m  0s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 49s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   1m 22s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   3m  2s |  |  the patch passed  |
    _ Other Tests _ |
   | -1 :x: |  unit  |  97m 49s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2529/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 43s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 187m  2s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdfs.TestDFSStripedOutputStreamWithRandomECPolicy |
   |   | hadoop.hdfs.TestRollingUpgrade |
   |   | hadoop.hdfs.server.datanode.TestBlockScanner |
   |   | hadoop.hdfs.TestDecommissionWithStriped |
   |   | hadoop.hdfs.TestFileChecksumCompositeCrc |
   |   | hadoop.hdfs.TestDFSInotifyEventInputStreamKerberized |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2529/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2529 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 0027ea038624 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 40f7543a6d5 |
   | Default Java | Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2529/2/testReport/ |
   | Max. process+thread count | 4326 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 

[GitHub] [hadoop] qizhu-lucas commented on pull request #2494: YARN-10380: Import logic of multi-node allocation in CapacityScheduler

2020-12-07 Thread GitBox


qizhu-lucas commented on pull request #2494:
URL: https://github.com/apache/hadoop/pull/2494#issuecomment-740337104


   
![image](https://user-images.githubusercontent.com/12184649/101432821-882ad580-3944-11eb-8d5a-06d59dbeac87.png)
   @tangzhankun 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] qizhu-lucas commented on pull request #2494: YARN-10380: Import logic of multi-node allocation in CapacityScheduler

2020-12-07 Thread GitBox


qizhu-lucas commented on pull request #2494:
URL: https://github.com/apache/hadoop/pull/2494#issuecomment-740334523


   @tangzhankun Thanks for your review. It passed in my local test, and it is 
unrelated to this change.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #2528: HDFS-15716. WaitforReplication in TestUpgradeDomainBlockPlacementPolicy

2020-12-07 Thread GitBox


hadoop-yetus commented on pull request #2528:
URL: https://github.com/apache/hadoop/pull/2528#issuecomment-740323672


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 33s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  32m 26s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 20s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  compile  |   1m 11s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 49s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 20s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  17m 24s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 54s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   1m 28s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +0 :ok: |  spotbugs  |   3m  8s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m  6s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 10s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 10s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javac  |   1m 10s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  5s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  javac  |   1m  5s |  |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 39s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 10s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  14m 58s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 49s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   1m 22s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   3m  4s |  |  the patch passed  |
    _ Other Tests _ |
   | -1 :x: |  unit  |  64m 48s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2528/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 41s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 153m 41s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | 
hadoop.hdfs.server.namenode.snapshot.TestFSImageWithOrderedSnapshotDeletion |
   |   | hadoop.hdfs.server.namenode.TestFSImageWithAcl |
   |   | hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks |
   |   | hadoop.hdfs.server.namenode.snapshot.TestRandomOpsWithSnapshots |
   |   | hadoop.hdfs.server.namenode.snapshot.TestGetContentSummaryWithSnapshot 
|
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2528/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2528 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux b8affc6392c7 4.15.0-60-generic #67-Ubuntu SMP Thu Aug 22 
16:55:30 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 40f7543a6d5 |
   | Default Java | Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2528/2/testReport/ |
   | Max. process+thread count | 4478 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console output | 

[GitHub] [hadoop] tangzhankun commented on pull request #2494: YARN-10380: Import logic of multi-node allocation in CapacityScheduler

2020-12-07 Thread GitBox


tangzhankun commented on pull request #2494:
URL: https://github.com/apache/hadoop/pull/2494#issuecomment-740320479


   @jiwq Thanks for the review. @qizhu-lucas Thanks for the hard work!
   From the Yetus result, there's a unit test failure which seems unrelated to 
this change. I'm +1 on the latest patch.
   ```
   [ERROR] Tests run: 15, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 
23.822 s <<< FAILURE! - in 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacitySchedulerAutoQueueCreation
   [ERROR] 
testDynamicAutoQueueCreationWithTags(org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacitySchedulerAutoQueueCreation)
  Time elapsed: 0.858 s  <<< ERROR!
   org.apache.hadoop.service.ServiceStateException: 
org.apache.hadoop.yarn.exceptions.YarnException: Failed to initialize queues
at 
org.apache.hadoop.service.ServiceStateException.convert(ServiceStateException.java:105)
at 
org.apache.hadoop.service.AbstractService.init(AbstractService.java:174)
at 
org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:110)
at 
org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMActiveServices.serviceInit(ResourceManager.java:884)
at 
org.apache.hadoop.service.AbstractService.init(AbstractService.java:165)
at 
org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.createAndInitActiveServices(ResourceManager.java:1296)
at 
org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceInit(ResourceManager.java:339)
at 
org.apache.hadoop.yarn.server.resourcemanager.MockRM.serviceInit(MockRM.java:1018)
at 
org.apache.hadoop.service.AbstractService.init(AbstractService.java:165)
at 
org.apache.hadoop.yarn.server.resourcemanager.MockRM.<init>(MockRM.java:158)
at 
org.apache.hadoop.yarn.server.resourcemanager.MockRM.<init>(MockRM.java:134)
at 
org.apache.hadoop.yarn.server.resourcemanager.MockRM.<init>(MockRM.java:130)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacitySchedulerAutoQueueCreation$5.<init>(TestCapacitySchedulerAutoQueueCreation.java:873)
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.TestCapacitySchedulerAutoQueueCreation.testDynamicAutoQueueCreationWithTags(TestCapacitySchedulerAutoQueueCreation.java:873)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
at 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
at 
org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:418)
   Caused by: org.apache.hadoop.yarn.exceptions.YarnException: Failed to 
initialize queues
at 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.initializeQueues(CapacityScheduler.java:798)
at 

[GitHub] [hadoop] PHILO-HE edited a comment on pull request #1726: [WIP] Syncservice rebased onto HDFS-12090

2020-12-07 Thread GitBox


PHILO-HE edited a comment on pull request #1726:
URL: https://github.com/apache/hadoop/pull/1726#issuecomment-740304229


   Based on this patch from @ehiggs, we developed the feature to support 
writing to provided storage. Please review our patch in 
https://issues.apache.org/jira/browse/HDFS-15714. Thanks so much, @ehiggs!



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] PHILO-HE commented on pull request #1726: [WIP] Syncservice rebased onto HDFS-12090

2020-12-07 Thread GitBox


PHILO-HE commented on pull request #1726:
URL: https://github.com/apache/hadoop/pull/1726#issuecomment-740304229


   Based on this patch from @ehiggs, we developed the feature to support 
writing to provided storage. Please review our patch in 
https://issues.apache.org/jira/browse/HDFS-15714.  



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17412) When `fs.s3a.connection.ssl.enabled=true`, Error when visit S3A with AKSK

2020-12-07 Thread angerszhu (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17245602#comment-17245602
 ] 

angerszhu commented on HADOOP-17412:


Sorry, AKSK means access key and secret key.

 

Yes, it's the same exception as in https://issues.apache.org/jira/browse/HADOOP-17017.

Our old test bucket names used the form `xxx-xxx-xxx`, but in the new Hadoop 
cluster we use `xxx.xxx.xxx` as the bucket name, and that is when we hit this 
exception. I have also checked the httpclient version, since some issues report 
that a particular httpclient version (4.5.9) has a similar problem, but the 
httpclient inside the aws-java-sdk-bundle jar is 4.5.6 for both Hadoop versions.

Thanks a lot for your reply. It helps a lot.

 

> When `fs.s3a.connection.ssl.enabled=true`,   Error when visit S3A with AKSK
> ---
>
> Key: HADOOP-17412
> URL: https://issues.apache.org/jira/browse/HADOOP-17412
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
> Environment: jdk 1.8
> hadoop-3.3.0
>Reporter: angerszhu
>Priority: Major
> Attachments: image-2020-12-07-10-25-51-908.png
>
>
> When we updated the Hadoop version from hadoop-3.2.1 to hadoop-3.3.0 and used 
> AKSK (access key/secret key) to access S3A with SSL enabled, this error happened:
> {code:java}
> <property>
>   <name>ipc.client.connection.maxidletime</name>
>   <value>2</value>
> </property>
> <property>
>   <name>fs.s3a.secret.key</name>
>   <value></value>
> </property>
> <property>
>   <name>fs.s3a.access.key</name>
>   <value></value>
> </property>
> <property>
>   <name>fs.s3a.aws.credentials.provider</name>
>   <value>org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider</value>
> </property>
> {code}
> !image-2020-12-07-10-25-51-908.png!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13571) ServerSocketUtil.getPort() should use loopback address, not 0.0.0.0

2020-12-07 Thread Eric Badger (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-13571?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17245599#comment-17245599
 ] 

Eric Badger commented on HADOOP-13571:
--

Hey, [~ahussein], I uploaded a new patch to fix the checkstyle warnings.

> ServerSocketUtil.getPort() should use loopback address, not 0.0.0.0
> ---
>
> Key: HADOOP-13571
> URL: https://issues.apache.org/jira/browse/HADOOP-13571
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Eric Badger
>Assignee: Eric Badger
>Priority: Major
> Attachments: HADOOP-13571.001.patch, HADOOP-13571.002.patch
>
>
> Using 0.0.0.0 to check for a free port will succeed even if there's something 
> bound to that same port on the loopback interface. Since this function is 
> used primarily in testing, it should be checking the loopback interface for 
> free ports.
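
To illustrate the difference the report describes, here is a minimal sketch (not the
ServerSocketUtil code itself) of probing a port on the wildcard address versus the
loopback interface; the port number is arbitrary:

{code:java}
import java.io.IOException;
import java.net.InetAddress;
import java.net.ServerSocket;

public class PortProbeSketch {
  // Probe on the wildcard address (0.0.0.0): per the report, this can look
  // free even when something on the loopback interface already has the port.
  static boolean freeOnWildcard(int port) {
    try (ServerSocket s = new ServerSocket(port)) {
      return true;
    } catch (IOException e) {
      return false;
    }
  }

  // Probe on the loopback interface, which is what the tests actually bind to.
  static boolean freeOnLoopback(int port) {
    try (ServerSocket s = new ServerSocket(port, 0, InetAddress.getLoopbackAddress())) {
      return true;
    } catch (IOException e) {
      return false;
    }
  }

  public static void main(String[] args) {
    int port = 12345;  // arbitrary example port
    System.out.println("wildcard free: " + freeOnWildcard(port)
        + ", loopback free: " + freeOnLoopback(port));
  }
}
{code}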



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13571) ServerSocketUtil.getPort() should use loopback address, not 0.0.0.0

2020-12-07 Thread Eric Badger (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-13571?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Badger updated HADOOP-13571:
-
Attachment: HADOOP-13571.002.patch

> ServerSocketUtil.getPort() should use loopback address, not 0.0.0.0
> ---
>
> Key: HADOOP-13571
> URL: https://issues.apache.org/jira/browse/HADOOP-13571
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Eric Badger
>Assignee: Eric Badger
>Priority: Major
> Attachments: HADOOP-13571.001.patch, HADOOP-13571.002.patch
>
>
> Using 0.0.0.0 to check for a free port will succeed even if there's something 
> bound to that same port on the loopback interface. Since this function is 
> used primarily in testing, it should be checking the loopback interface for 
> free ports.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] jojochuang merged pull request #2518: HDFS-15709. Socket file descriptor leak in StripedBlockChecksumRecons…

2020-12-07 Thread GitBox


jojochuang merged pull request #2518:
URL: https://github.com/apache/hadoop/pull/2518


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17414) Magic committer files don't have the count of bytes written collected by spark

2020-12-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17414?focusedWorklogId=521458&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-521458
 ]

ASF GitHub Bot logged work on HADOOP-17414:
---

Author: ASF GitHub Bot
Created on: 07/Dec/20 23:43
Start Date: 07/Dec/20 23:43
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2530:
URL: https://github.com/apache/hadoop/pull/2530#issuecomment-740249672


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m 14s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 4 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  13m 57s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  23m 39s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  21m 57s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  compile  |  18m 57s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   2m 53s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 16s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  22m 30s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 23s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   2m  6s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +0 :ok: |  spotbugs  |   1m 10s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m 26s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 23s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 26s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m 43s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javac  |  20m 43s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  18m 17s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  javac  |  18m 17s |  |  the patch passed  |
   | -0 :warning: |  checkstyle  |   2m 47s | 
[/diff-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2530/1/artifact/out/diff-checkstyle-root.txt)
 |  root: The patch generated 2 new + 15 unchanged - 0 fixed = 17 total (was 
15)  |
   | +1 :green_heart: |  mvnsite  |   2m 12s |  |  the patch passed  |
   | -1 :x: |  whitespace  |   0m  0s | 
[/whitespace-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2530/1/artifact/out/whitespace-eol.txt)
 |  The patch has 5 line(s) that end in whitespace. Use git apply 
--whitespace=fix <>. Refer https://git-scm.com/docs/git-apply  |
   | +1 :green_heart: |  xml  |   0m  1s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  17m 18s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 23s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   2m  3s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   3m 44s |  |  the patch passed  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  10m  1s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   1m 25s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 49s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 196m 32s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2530/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2530 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient xml findbugs checkstyle markdownlint |
   | uname | Linux c05b3b4b492a 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
 

[GitHub] [hadoop] hadoop-yetus commented on pull request #2530: HADOOP-17414. Magic committer files don't have the count of bytes written collected by spark

2020-12-07 Thread GitBox


hadoop-yetus commented on pull request #2530:
URL: https://github.com/apache/hadoop/pull/2530#issuecomment-740249672


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m 14s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +0 :ok: |  markdownlint  |   0m  0s |  |  markdownlint was not available.  
|
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 4 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +0 :ok: |  mvndep  |  13m 57s |  |  Maven dependency ordering for branch  |
   | +1 :green_heart: |  mvninstall  |  23m 39s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |  21m 57s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  compile  |  18m 57s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   2m 53s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   2m 16s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  22m 30s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 23s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   2m  6s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +0 :ok: |  spotbugs  |   1m 10s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m 26s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +0 :ok: |  mvndep  |   0m 23s |  |  Maven dependency ordering for patch  |
   | +1 :green_heart: |  mvninstall  |   1m 26s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  20m 43s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javac  |  20m 43s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |  18m 17s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  javac  |  18m 17s |  |  the patch passed  |
   | -0 :warning: |  checkstyle  |   2m 47s | 
[/diff-checkstyle-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2530/1/artifact/out/diff-checkstyle-root.txt)
 |  root: The patch generated 2 new + 15 unchanged - 0 fixed = 17 total (was 
15)  |
   | +1 :green_heart: |  mvnsite  |   2m 12s |  |  the patch passed  |
   | -1 :x: |  whitespace  |   0m  0s | 
[/whitespace-eol.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2530/1/artifact/out/whitespace-eol.txt)
 |  The patch has 5 line(s) that end in whitespace. Use git apply 
--whitespace=fix <>. Refer https://git-scm.com/docs/git-apply  |
   | +1 :green_heart: |  xml  |   0m  1s |  |  The patch has no ill-formed XML 
file.  |
   | +1 :green_heart: |  shadedclient  |  17m 18s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   1m 23s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   2m  3s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   3m 44s |  |  the patch passed  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  10m  1s |  |  hadoop-common in the patch 
passed.  |
   | +1 :green_heart: |  unit  |   1m 25s |  |  hadoop-aws in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 49s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 196m 32s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2530/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2530 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient xml findbugs checkstyle markdownlint |
   | uname | Linux c05b3b4b492a 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 32099e36dda |
   | Default Java | Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
   |  Test Results | 

[jira] [Work logged] (HADOOP-17414) Magic committer files don't have the count of bytes written collected by spark

2020-12-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17414?focusedWorklogId=521456=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-521456
 ]

ASF GitHub Bot logged work on HADOOP-17414:
---

Author: ASF GitHub Bot
Created on: 07/Dec/20 23:36
Start Date: 07/Dec/20 23:36
Worklog Time Spent: 10m 
  Work Description: dongjoon-hyun commented on pull request #2530:
URL: https://github.com/apache/hadoop/pull/2530#issuecomment-740247128


   Thank you for pinging me.  



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 521456)
Time Spent: 0.5h  (was: 20m)

> Magic committer files don't have the count of bytes written collected by spark
> --
>
> Key: HADOOP-17414
> URL: https://issues.apache.org/jira/browse/HADOOP-17414
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> The spark statistics tracking doesn't correctly assess the size of the 
> uploaded files as it only calls getFileStatus on the zero byte objects -not 
> the yet-to-manifest files.
> Everything works with the staging committer purely because it's measuring the 
> length of the files staged to the local FS, not the unmaterialized output.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] dongjoon-hyun commented on pull request #2530: HADOOP-17414. Magic committer files don't have the count of bytes written collected by spark

2020-12-07 Thread GitBox


dongjoon-hyun commented on pull request #2530:
URL: https://github.com/apache/hadoop/pull/2530#issuecomment-740247128


   Thank you for pinging me.  



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #2529: HDFS-15717. Improve fsck logging.

2020-12-07 Thread GitBox


hadoop-yetus commented on pull request #2529:
URL: https://github.com/apache/hadoop/pull/2529#issuecomment-740236102


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 30s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  33m  1s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 19s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  compile  |   1m 16s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 55s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 19s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  17m 54s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 55s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   1m 25s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +0 :ok: |  spotbugs  |   3m  6s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m  3s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 13s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 23s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javac  |   1m 23s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  5s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  javac  |   1m  5s |  |  the patch passed  |
   | -0 :warning: |  checkstyle  |   0m 43s | 
[/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2529/1/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs-project/hadoop-hdfs: The patch generated 1 new + 270 unchanged 
- 1 fixed = 271 total (was 271)  |
   | +1 :green_heart: |  mvnsite  |   1m  9s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  15m 17s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 54s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   1m 24s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   3m 50s |  |  the patch passed  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 126m 39s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2529/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 40s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 217m 57s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.TestBlockTokenWrappingQOP |
   |   | hadoop.hdfs.TestStateAlignmentContextWithHA |
   |   | hadoop.hdfs.TestBlockStoragePolicy |
   |   | hadoop.hdfs.TestHDFSFileSystemContract |
   |   | hadoop.hdfs.server.balancer.TestBalancer |
   |   | hadoop.hdfs.TestDFSStartupVersions |
   |   | hadoop.hdfs.TestSetrepIncreasing |
   |   | hadoop.hdfs.TestDFSInotifyEventInputStream |
   |   | hadoop.hdfs.TestDatanodeReport |
   |   | hadoop.hdfs.TestDistributedFileSystemWithECFileWithRandomECPolicy |
   |   | hadoop.hdfs.TestReadStripedFileWithDNFailure |
   |   | hadoop.hdfs.server.namenode.TestFsck |
   |   | hadoop.hdfs.TestWriteReadStripedFile |
   |   | hadoop.hdfs.TestClientProtocolForPipelineRecovery |
   |   | hadoop.hdfs.TestLeaseRecoveryStriped |
   |   | hadoop.hdfs.TestDFSStripedInputStreamWithRandomECPolicy |
   |   | hadoop.hdfs.TestDecommission |
   |   | hadoop.hdfs.web.TestWebHdfsWithRestCsrfPreventionFilter |
   |   | hadoop.hdfs.TestRollingUpgradeRollback |
   |   | hadoop.hdfs.TestErasureCodingPolicies |
   |   | hadoop.hdfs.web.TestWebHDFS |
   |   | hadoop.hdfs.TestReconstructStripedFileWithRandomECPolicy |
   |   | hadoop.hdfs.TestDFSStripedOutputStreamWithFailure |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker 

[GitHub] [hadoop] hadoop-yetus commented on pull request #2528: HDFS-15716. WaitforReplication in TestUpgradeDomainBlockPlacementPolicy

2020-12-07 Thread GitBox


hadoop-yetus commented on pull request #2528:
URL: https://github.com/apache/hadoop/pull/2528#issuecomment-740184212


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  30m 48s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  1s | [test4tests](test4tests) |  The patch 
appears to include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  32m 43s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 17s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  compile  |   1m 11s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 49s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 19s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  17m 33s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 54s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   1m 28s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +0 :ok: |  spotbugs  |   3m  5s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   3m  3s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m  8s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 10s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javac  |   1m 10s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  5s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  javac  |   1m  5s |  |  the patch passed  |
   | -0 :warning: |  checkstyle  |   0m 39s | 
[/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2528/1/artifact/out/diff-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs-project/hadoop-hdfs: The patch generated 2 new + 6 unchanged - 
0 fixed = 8 total (was 6)  |
   | +1 :green_heart: |  mvnsite  |   1m 12s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  14m 49s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 48s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   1m 21s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   3m  4s |  |  the patch passed  |
    _ Other Tests _ |
   | -1 :x: |  unit  | 119m 51s | 
[/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2528/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt)
 |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 43s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 239m  1s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.hdfs.server.namenode.ha.TestEditLogTailer |
   |   | hadoop.hdfs.TestDFSInotifyEventInputStreamKerberized |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2528/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2528 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 3abc7ecdf74d 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / da1ea2530fa |
   | Default Java | Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2528/1/testReport/ |
   | Max. process+thread count | 3784 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: 
hadoop-hdfs-project/hadoop-hdfs |
   | Console 

[jira] [Commented] (HADOOP-17412) When `fs.s3a.connection.ssl.enabled=true`, Error when visit S3A with AKSK

2020-12-07 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17245525#comment-17245525
 ] 

Steve Loughran commented on HADOOP-17412:
-

Are you using wildfly + openssl?

If so:
# which openssl version?
# change your settings to use JDK-only SSL (a sample config is sketched below)
# if you stay on the JDK path, are you using Oracle, OpenJDK or Amazon Corretto?
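
A minimal sketch of what "use JDK only" would look like in core-site.xml, assuming the 
fs.s3a.ssl.channel.mode option shipped with Hadoop 3.3.0; check the release documentation 
for the exact supported value names:

{code:xml}
<!-- Tell the S3A connector to use the JDK's own TLS stack
     instead of wildfly/openssl. -->
<property>
  <name>fs.s3a.ssl.channel.mode</name>
  <value>default_jsse</value>
</property>
{code}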

> When `fs.s3a.connection.ssl.enabled=true`,   Error when visit S3A with AKSK
> ---
>
> Key: HADOOP-17412
> URL: https://issues.apache.org/jira/browse/HADOOP-17412
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
> Environment: jdk 1.8
> hadoop-3.3.0
>Reporter: angerszhu
>Priority: Major
> Attachments: image-2020-12-07-10-25-51-908.png
>
>
> When we update hadoop version from hadoop-3.2.1 to hadoop-3.3.0, Use AKSK 
> access s3a with ssl enabled, then this error happen
> {code:java}
> <property>
>   <name>ipc.client.connection.maxidletime</name>
>   <value>2</value>
> </property>
> <property>
>   <name>fs.s3a.secret.key</name>
>   <value></value>
> </property>
> <property>
>   <name>fs.s3a.access.key</name>
>   <value></value>
> </property>
> <property>
>   <name>fs.s3a.aws.credentials.provider</name>
>   <value>org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider</value>
> </property>
> {code}
> !image-2020-12-07-10-25-51-908.png!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17414) Magic committer files don't have the count of bytes written collected by spark

2020-12-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17414?focusedWorklogId=521369=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-521369
 ]

ASF GitHub Bot logged work on HADOOP-17414:
---

Author: ASF GitHub Bot
Created on: 07/Dec/20 20:29
Start Date: 07/Dec/20 20:29
Worklog Time Spent: 10m 
  Work Description: steveloughran commented on pull request #2530:
URL: https://github.com/apache/hadoop/pull/2530#issuecomment-740161845


   + @sunchao @dongjoon-hyun 
   
   This is not for merging, just for run through yetus and discussion.
   
   Tested S3A London (consistent!) with/without S3guard
   
   ```
   mvit -Dparallel-tests -DtestsThreadCount=4 -Dmarkers=keep  
-Dfs.s3a.directory.marker.audit=true
   mvit -Dparallel-tests -DtestsThreadCount=4 -Dmarkers=delete -Ds3guard 
-Ddynamo  -Dfs.s3a.directory.marker.audit=true
   ```



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 521369)
Time Spent: 20m  (was: 10m)

> Magic committer files don't have the count of bytes written collected by spark
> --
>
> Key: HADOOP-17414
> URL: https://issues.apache.org/jira/browse/HADOOP-17414
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> The spark statistics tracking doesn't correctly assess the size of the 
> uploaded files as it only calls getFileStatus on the zero byte objects -not 
> the yet-to-manifest files.
> Everything works with the staging committer purely because it's measuring the 
> length of the files staged to the local FS, not the unmaterialized output.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on pull request #2530: HADOOP-17414. Magic committer files don't have the count of bytes written collected by spark

2020-12-07 Thread GitBox


steveloughran commented on pull request #2530:
URL: https://github.com/apache/hadoop/pull/2530#issuecomment-740161845


   + @sunchao @dongjoon-hyun 
   
   This is not for merging, just for run through yetus and discussion.
   
   Tested S3A London (consistent!) with/without S3guard
   
   ```
   mvit -Dparallel-tests -DtestsThreadCount=4 -Dmarkers=keep  
-Dfs.s3a.directory.marker.audit=true
   mvit -Dparallel-tests -DtestsThreadCount=4 -Dmarkers=delete -Ds3guard 
-Ddynamo  -Dfs.s3a.directory.marker.audit=true
   ```



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17414) Magic committer files don't have the count of bytes written collected by spark

2020-12-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17414?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HADOOP-17414:

Labels: pull-request-available  (was: )

> Magic committer files don't have the count of bytes written collected by spark
> --
>
> Key: HADOOP-17414
> URL: https://issues.apache.org/jira/browse/HADOOP-17414
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The spark statistics tracking doesn't correctly assess the size of the 
> uploaded files as it only calls getFileStatus on the zero byte objects -not 
> the yet-to-manifest files.
> Everything works with the staging committer purely because it's measuring the 
> length of the files staged to the local FS, not the unmaterialized output.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17414) Magic committer files don't have the count of bytes written collected by spark

2020-12-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17414?focusedWorklogId=521367=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-521367
 ]

ASF GitHub Bot logged work on HADOOP-17414:
---

Author: ASF GitHub Bot
Created on: 07/Dec/20 20:26
Start Date: 07/Dec/20 20:26
Worklog Time Spent: 10m 
  Work Description: steveloughran opened a new pull request #2530:
URL: https://github.com/apache/hadoop/pull/2530


   …tten collected by spark
   
   This is a PoC which, having implemented it, I don't think is viable.
   
   Yes, we can fix up getFileStatus so it reads the header. It even knows
   to always bypass S3Guard (no inconsistencies to worry about any more).
   
   But: the blast radius of the change is too big. I'm worried about
   distcp or any other code which goes
   len = getFileStatus(path).getLen()
   open(path).readFully(0, len, dest)
   
   You'll get an EOF here. Find the file through a listing and you'll be OK
   provided S3Guard isn't updated with that GetFileStatus result, which I
   have seen.
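   
   A minimal sketch (not from the PR itself) of the length-trusting caller this
   paragraph worries about, written against the public FileSystem API; the class
   and method names are illustrative:
   
   ```java
   import java.io.IOException;
   import org.apache.hadoop.conf.Configuration;
   import org.apache.hadoop.fs.FSDataInputStream;
   import org.apache.hadoop.fs.FileSystem;
   import org.apache.hadoop.fs.Path;
   
   public class LengthTrustingCopy {
     // Reads exactly getFileStatus().getLen() bytes. If getFileStatus() reports
     // the declared final length while the object under the __magic path is still
     // a zero-byte marker, readFully() fails with an EOFException.
     static byte[] readWholeFile(Path path, Configuration conf) throws IOException {
       FileSystem fs = path.getFileSystem(conf);
       long len = fs.getFileStatus(path).getLen();   // trusted length
       byte[] dest = new byte[(int) len];
       try (FSDataInputStream in = fs.open(path)) {
         in.readFully(0, dest);                      // fails if the object is shorter
       }
       return dest;
     }
   }
   ```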
   
   The ordering of probes in 
ITestMagicCommitProtocol.validateTaskAttemptPathAfterWrite
   needs to be list before getFileStatus, so that the S3Guard table is updated from
   the list.
   
   overall: danger. Even without S3Guard there's risk.
   
   Anyway, this shows it can be done. And I think there's merit in a leaner patch
   which attaches the marker but doesn't do any fixup. This would let us add
   an API call "getObjectHeaders(path) -> Future> and
   then use that to do the lookup. We can implement the probe for
   ABFS and S3, add a hasPathCapabilities for it as well as an interface
   the FS can implement (which passthrough filesystems would need to do).
   
   Change-Id: If56213c0c5d8ab696d2d89b48ad52874960b0920
   
   ## NOTICE
   
   Please create an issue in ASF JIRA before opening a pull request,
   and you need to set the title of the pull request which starts with
   the corresponding JIRA issue number. (e.g. HADOOP-X. Fix a typo in YYY.)
   For more details, please see 
https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 521367)
Remaining Estimate: 0h
Time Spent: 10m

> Magic committer files don't have the count of bytes written collected by spark
> --
>
> Key: HADOOP-17414
> URL: https://issues.apache.org/jira/browse/HADOOP-17414
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The spark statistics tracking doesn't correctly assess the size of the 
> uploaded files as it only calls getFileStatus on the zero byte objects -not 
> the yet-to-manifest files.
> Everything works with the staging committer purely because it's measuring the 
> length of the files staged to the local FS, not the unmaterialized output.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran opened a new pull request #2530: HADOOP-17414. Magic committer files don't have the count of bytes wri…

2020-12-07 Thread GitBox


steveloughran opened a new pull request #2530:
URL: https://github.com/apache/hadoop/pull/2530


   …tten collected by spark
   
   This is a PoC which, having implemented it, I don't think is viable.
   
   Yes, we can fix up getFileStatus so it reads the header. It even knows
   to always bypass S3Guard (no inconsistencies to worry about any more).
   
   But: the blast radius of the change is too big. I'm worried about
   distcp or any other code which goes
   len = getFileStatus(path).getLen()
   open(path).readFully(0, len, dest)
   
   You'll get an EOF here. Find the file through a listing and you'll be OK
   provided S3Guard isn't updated with that GetFileStatus result, which I
   have seen.
   
   The ordering of probes in 
ITestMagicCommitProtocol.validateTaskAttemptPathAfterWrite
   needs to be list before getFileStatus, so that the S3Guard table is updated from
   the list.
   
   overall: danger. Even without S3Guard there's risk.
   
   Anyway, this shows it can be done. And I think there's merit in a leaner patch
   which attaches the marker but doesn't do any fixup. This would let us add
   an API call "getObjectHeaders(path) -> Future> and
   then use that to do the lookup. We can implement the probe for
   ABFS and S3, add a hasPathCapabilities for it as well as an interface
   the FS can implement (which passthrough filesystems would need to do).
   
   Change-Id: If56213c0c5d8ab696d2d89b48ad52874960b0920
   
   ## NOTICE
   
   Please create an issue in ASF JIRA before opening a pull request,
   and you need to set the title of the pull request which starts with
   the corresponding JIRA issue number. (e.g. HADOOP-X. Fix a typo in YYY.)
   For more details, please see 
https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] jbrennan333 merged pull request #2516: HDFS-15707. NNTop counts don't add up as expected.

2020-12-07 Thread GitBox


jbrennan333 merged pull request #2516:
URL: https://github.com/apache/hadoop/pull/2516


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] amahussein opened a new pull request #2529: HDFS-15717. Improve fsck logging.

2020-12-07 Thread GitBox


amahussein opened a new pull request #2529:
URL: https://github.com/apache/hadoop/pull/2529


   Fsck always logs success and logs blockid checks as "/".
   Thanks @kihwal for providing the fix.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17389) KMS should log full UGI principal

2020-12-07 Thread Jim Brennan (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17245383#comment-17245383
 ] 

Jim Brennan commented on HADOOP-17389:
--

I cherry-picked this to branch-3.3, branch-3.2, and branch-3.1.

> KMS should log full UGI principal
> -
>
> Key: HADOOP-17389
> URL: https://issues.apache.org/jira/browse/HADOOP-17389
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.1, 3.4.0, 3.1.5, 3.2.3
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> [~daryn] reported that the kms-audit log only logs the short username:
> {{OK[op=GENERATE_EEK, key=key1, user=hdfs, accessCount=4206, 
> interval=10427ms]}}
> In this example, it's impossible to tell which NN(s) requested EDEKs when 
> they are all lumped together.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17389) KMS should log full UGI principal

2020-12-07 Thread Jim Brennan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17389?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Brennan updated HADOOP-17389:
-
Fix Version/s: 3.2.3
   3.1.5
   3.3.1

> KMS should log full UGI principal
> -
>
> Key: HADOOP-17389
> URL: https://issues.apache.org/jira/browse/HADOOP-17389
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Ahmed Hussein
>Assignee: Ahmed Hussein
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.1, 3.4.0, 3.1.5, 3.2.3
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> [~daryn] reported that the kms-audit log only logs the short username:
> {{OK[op=GENERATE_EEK, key=key1, user=hdfs, accessCount=4206, 
> interval=10427ms]}}
> In this example, it's impossible to tell which NN(s) requested EDEKs when 
> they are all lumped together.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] amahussein opened a new pull request #2528: HDFS-15716. WaitforReplication in TestUpgradeDomainBlockPlacementPolicy

2020-12-07 Thread GitBox


amahussein opened a new pull request #2528:
URL: https://github.com/apache/hadoop/pull/2528


   In some slow runs `TestUpgradeDomainBlockPlacementPolicy#testPlacement` and 
`TestUpgradeDomainBlockPlacementPolicy#testPlacementAfterDecommission` fail.
   On branch-2.10, this was fixed by waiting for the replication to be complete.
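   
   A rough sketch of that waiting pattern, assuming the GenericTestUtils.waitFor
   helper from hadoop-common's test utilities; the waitForReplication() method and
   its block-location check are illustrative, not necessarily the exact code in
   this PR:
   
   ```java
   import java.io.IOException;
   import org.apache.hadoop.fs.BlockLocation;
   import org.apache.hadoop.fs.Path;
   import org.apache.hadoop.hdfs.DistributedFileSystem;
   import org.apache.hadoop.test.GenericTestUtils;
   
   final class WaitForReplicationSketch {
     // Poll until every block of the file reports at least the expected number
     // of replica hosts, instead of asserting placement immediately after write.
     static void waitForReplication(DistributedFileSystem dfs, Path file,
         short expectedReplication) throws Exception {
       GenericTestUtils.waitFor(() -> {
         try {
           BlockLocation[] locations =
               dfs.getFileBlockLocations(file, 0, Long.MAX_VALUE);
           for (BlockLocation location : locations) {
             if (location.getHosts().length < expectedReplication) {
               return false;
             }
           }
           return true;
         } catch (IOException e) {
           return false;   // retry on transient lookup failures
         }
       }, 100, 60_000);    // check every 100 ms, time out after 60 s
     }
   }
   ```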
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #2521: HDFS-15711. Add Metrics to HttpFS Server.

2020-12-07 Thread GitBox


hadoop-yetus commented on pull request #2521:
URL: https://github.com/apache/hadoop/pull/2521#issuecomment-740032080


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |  31m 18s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  35m 14s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 27s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  compile  |   0m 24s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 22s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 35s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  18m  7s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   0m 22s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +0 :ok: |  spotbugs  |   0m 45s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   0m 42s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 22s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 21s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javac  |   0m 21s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 18s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  javac  |   0m 18s |  |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 13s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 26s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  16m 39s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 21s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   0m 20s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   0m 49s |  |  the patch passed  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   5m  6s |  |  hadoop-hdfs-httpfs in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 29s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 116m 18s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2521/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2521 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 2fdc2afda5a0 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / da1ea2530fa |
   | Default Java | Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2521/3/testReport/ |
   | Max. process+thread count | 708 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-httpfs U: 
hadoop-hdfs-project/hadoop-hdfs-httpfs |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2521/3/console |
   | versions | git=2.17.1 maven=3.6.0 findbugs=4.0.6 |
   | Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hadoop] hadoop-yetus commented on pull request #2527: YARN-10520. Deprecated the residual nested class for the LCEResourceHandler

2020-12-07 Thread GitBox


hadoop-yetus commented on pull request #2527:
URL: https://github.com/apache/hadoop/pull/2527#issuecomment-740024364


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 32s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 2 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  33m 11s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 15s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  compile  |   1m 10s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 32s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 46s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m 25s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 38s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   0m 33s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +0 :ok: |  spotbugs  |   1m 24s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   1m 22s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 39s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m  6s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javac  |   1m  6s |  |  
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04
 with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 generated 0 new + 55 
unchanged - 1 fixed = 55 total (was 56)  |
   | +1 :green_heart: |  compile  |   1m  1s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  javac  |   1m  1s |  |  
hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-nodemanager-jdkPrivateBuild-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01
 with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 generated 0 new 
+ 34 unchanged - 1 fixed = 34 total (was 35)  |
   | +1 :green_heart: |  checkstyle  |   0m 24s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 35s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  14m 34s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 33s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   0m 30s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   1m 21s |  |  the patch passed  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  22m 35s |  |  hadoop-yarn-server-nodemanager 
in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 33s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 102m 29s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2527/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2527 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux e2d215c6d117 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / da1ea2530fa |
   | Default Java | Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2527/1/testReport/ |
   | Max. process+thread count | 609 (vs. ulimit of 5500) |
   | modules | C: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 U: 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager
 |
   | Console output | 

[GitHub] [hadoop] hadoop-yetus commented on pull request #2519: YARN-10491: Fix deprecation warnings in SLSWebApp.java

2020-12-07 Thread GitBox


hadoop-yetus commented on pull request #2519:
URL: https://github.com/apache/hadoop/pull/2519#issuecomment-739968197


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   2m  5s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  41m 10s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 37s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  compile  |   0m 29s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 24s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 36s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  21m 48s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 34s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   0m 29s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +0 :ok: |  spotbugs  |   1m  1s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   0m 56s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 31s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 26s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javac  |   0m 26s |  |  
hadoop-tools_hadoop-sls-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 generated 0 new + 0 unchanged - 6 fixed 
= 0 total (was 6)  |
   | +1 :green_heart: |  compile  |   0m 23s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  javac  |   0m 23s |  |  
hadoop-tools_hadoop-sls-jdkPrivateBuild-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 
with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 generated 0 new + 
0 unchanged - 6 fixed = 0 total (was 6)  |
   | +1 :green_heart: |  checkstyle  |   0m 14s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 26s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  19m 32s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 30s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   1m 16s |  |  the patch passed  |
    _ Other Tests _ |
   | -1 :x: |  unit  |  12m 20s | 
[/patch-unit-hadoop-tools_hadoop-sls.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2519/4/artifact/out/patch-unit-hadoop-tools_hadoop-sls.txt)
 |  hadoop-sls in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 39s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   | 108m 48s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.yarn.sls.appmaster.TestAMSimulator |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2519/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2519 |
   | JIRA Issue | YARN-10491 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 7b11eb280ca9 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / da1ea2530fa |
   | Default Java | Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2519/4/testReport/ |
   | Max. process+thread count | 619 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-sls U: hadoop-tools/hadoop-sls |
   | Console 

[GitHub] [hadoop] hadoop-yetus commented on pull request #2519: YARN-10491: Fix deprecation warnings in SLSWebApp.java

2020-12-07 Thread GitBox


hadoop-yetus commented on pull request #2519:
URL: https://github.com/apache/hadoop/pull/2519#issuecomment-739965371


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m 41s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  36m  1s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 25s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  compile  |   0m 24s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 20s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 27s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  17m 47s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +0 :ok: |  spotbugs  |   0m 45s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   0m 44s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 24s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 21s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javac  |   0m 21s |  |  
hadoop-tools_hadoop-sls-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 generated 0 new + 0 unchanged - 6 fixed 
= 0 total (was 6)  |
   | +1 :green_heart: |  compile  |   0m 17s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  javac  |   0m 17s |  |  
hadoop-tools_hadoop-sls-jdkPrivateBuild-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 
with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 generated 0 new + 
0 unchanged - 6 fixed = 0 total (was 6)  |
   | -0 :warning: |  checkstyle  |   0m 12s | 
[/diff-checkstyle-hadoop-tools_hadoop-sls.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2519/5/artifact/out/diff-checkstyle-hadoop-tools_hadoop-sls.txt)
 |  hadoop-tools/hadoop-sls: The patch generated 6 new + 14 unchanged - 0 fixed 
= 20 total (was 14)  |
   | +1 :green_heart: |  mvnsite  |   0m 20s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  17m 13s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 20s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   0m 20s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   0m 46s |  |  the patch passed  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |  11m 52s |  |  hadoop-sls in the patch passed. 
 |
   | +1 :green_heart: |  asflicense  |   0m 28s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  93m 22s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2519/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2519 |
   | JIRA Issue | YARN-10491 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 79ba89100251 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / da1ea2530fa |
   | Default Java | Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2519/5/testReport/ |
   | Max. process+thread count | 511 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-sls U: hadoop-tools/hadoop-sls |
   | Console output | 

[GitHub] [hadoop] jiwq opened a new pull request #2527: YARN-10520. Deprecated the residual nested class for the LCEResourceHandler

2020-12-07 Thread GitBox


jiwq opened a new pull request #2527:
URL: https://github.com/apache/hadoop/pull/2527


   https://issues.apache.org/jira/browse/YARN-10520



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17414) Magic committer files don't have the count of bytes written collected by spark

2020-12-07 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17414?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17245237#comment-17245237
 ] 

Steve Loughran commented on HADOOP-17414:
-

Proposed:

When we create the 0-byte marker file, we know the length of the final file, so we 
declare it in a custom HTTP header "x-s3a-magic-marker", which gets returned on a 
HEAD request of the object. We then modify the S3A FS so that when it does a HEAD 
on an object under a __magic path which isn't a special (.pending/.pendingset) 
file, it looks for this header and returns its value instead of the actual length.

* This doesn't show in a LIST; it MUST be HEAD
* So the Spark tracker must be doing exactly that.
* And it mustn't go through S3Guard, as that will skip the HEAD request if 
there's a record in DDB which hasn't expired yet.

There's more complexity here than it seems, as we'd need to restrict this to 
getFileStatus() API calls and not other probes for objects, e.g. those for 
copy/rename/open, and on a copy we'd need to strip out the "x-s3a-magic-marker" 
field to stop it contaminating other things.
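
A minimal sketch of that round trip against the AWS SDK v1 which S3A wraps (not the 
actual patch): the header name follows the proposal above, the rest is illustrative, 
and the SDK surfaces user metadata with an x-amz-meta- prefix on the wire, so the 
final header naming is an implementation detail.

{code:java}
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.model.ObjectMetadata;
import com.amazonaws.services.s3.model.PutObjectRequest;
import java.io.ByteArrayInputStream;

class MagicMarkerSketch {
  static final String MARKER_HEADER = "x-s3a-magic-marker";

  // Write the 0-byte marker object, declaring the length of the yet-to-manifest file.
  static void putMarker(AmazonS3 s3, String bucket, String key, long finalLength) {
    ObjectMetadata md = new ObjectMetadata();
    md.setContentLength(0);
    md.addUserMetadata(MARKER_HEADER, Long.toString(finalLength));
    s3.putObject(new PutObjectRequest(bucket, key,
        new ByteArrayInputStream(new byte[0]), md));
  }

  // HEAD the marker and prefer the declared length over the real 0 bytes.
  static long declaredLength(AmazonS3 s3, String bucket, String key) {
    ObjectMetadata md = s3.getObjectMetadata(bucket, key);
    String declared = md.getUserMetadata().get(MARKER_HEADER);
    return declared != null ? Long.parseLong(declared) : md.getContentLength();
  }
}
{code}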

> Magic committer files don't have the count of bytes written collected by spark
> --
>
> Key: HADOOP-17414
> URL: https://issues.apache.org/jira/browse/HADOOP-17414
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>
> The spark statistics tracking doesn't correctly assess the size of the 
> uploaded files as it only calls getFileStatus on the zero byte objects -not 
> the yet-to-manifest files.
> Everything works with the staging committer purely because it's measuring the 
> length of the files staged to the local FS, not the unmaterialized output.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-17414) Magic committer files don't have the count of bytes written collected by spark

2020-12-07 Thread Steve Loughran (Jira)
Steve Loughran created HADOOP-17414:
---

 Summary: Magic committer files don't have the count of bytes 
written collected by spark
 Key: HADOOP-17414
 URL: https://issues.apache.org/jira/browse/HADOOP-17414
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 3.2.0
Reporter: Steve Loughran
Assignee: Steve Loughran


The spark statistics tracking doesn't correctly assess the size of the uploaded 
files as it only calls getFileStatus on the zero byte objects -not the 
yet-to-manifest files.

Everything works with the staging committer purely because it's measuring the 
length of the files staged to the local FS, not the unmaterialized output.




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17412) When `fs.s3a.connection.ssl.enabled=true`, Error when visit S3A with AKSK

2020-12-07 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17245220#comment-17245220
 ] 

Steve Loughran commented on HADOOP-17412:
-

also: what do you mean by "AKSK"?

> When `fs.s3a.connection.ssl.enabled=true`,   Error when visit S3A with AKSK
> ---
>
> Key: HADOOP-17412
> URL: https://issues.apache.org/jira/browse/HADOOP-17412
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
> Environment: jdk 1.8
> hadoop-3.3.0
>Reporter: angerszhu
>Priority: Major
> Attachments: image-2020-12-07-10-25-51-908.png
>
>
> When we update hadoop version from hadoop-3.2.1 to hadoop-3.3.0, Use AKSK 
> access s3a with ssl enabled, then this error happen
> {code:java}
> <property>
>   <name>ipc.client.connection.maxidletime</name>
>   <value>2</value>
> </property>
> <property>
>   <name>fs.s3a.secret.key</name>
>   <value></value>
> </property>
> <property>
>   <name>fs.s3a.access.key</name>
>   <value></value>
> </property>
> <property>
>   <name>fs.s3a.aws.credentials.provider</name>
>   <value>org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider</value>
> </property>
> {code}
> !image-2020-12-07-10-25-51-908.png!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17412) When `fs.s3a.connection.ssl.enabled=true`, Error when visit S3A with AKSK

2020-12-07 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17245216#comment-17245216
 ] 

Steve Loughran commented on HADOOP-17412:
-

see also HADOOP-17017

> When `fs.s3a.connection.ssl.enabled=true`,   Error when visit S3A with AKSK
> ---
>
> Key: HADOOP-17412
> URL: https://issues.apache.org/jira/browse/HADOOP-17412
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
> Environment: jdk 1.8
> hadoop-3.3.0
>Reporter: angerszhu
>Priority: Major
> Attachments: image-2020-12-07-10-25-51-908.png
>
>
> When we update hadoop version from hadoop-3.2.1 to hadoop-3.3.0, Use AKSK 
> access s3a with ssl enabled, then this error happen
> {code:java}
> <property>
>   <name>ipc.client.connection.maxidletime</name>
>   <value>2</value>
> </property>
> <property>
>   <name>fs.s3a.secret.key</name>
>   <value></value>
> </property>
> <property>
>   <name>fs.s3a.access.key</name>
>   <value></value>
> </property>
> <property>
>   <name>fs.s3a.aws.credentials.provider</name>
>   <value>org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider</value>
> </property>
> {code}
> !image-2020-12-07-10-25-51-908.png!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17413) ABFS: Release Elastic ByteBuffer pool memory at outputStream close

2020-12-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17413?focusedWorklogId=521160=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-521160
 ]

ASF GitHub Bot logged work on HADOOP-17413:
---

Author: ASF GitHub Bot
Created on: 07/Dec/20 13:38
Start Date: 07/Dec/20 13:38
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2526:
URL: https://github.com/apache/hadoop/pull/2526#issuecomment-739923344


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m 14s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  35m 20s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 35s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  compile  |   0m 29s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 22s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 35s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  18m 13s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +0 :ok: |  spotbugs  |   0m 56s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   0m 54s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 28s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javac  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 24s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  javac  |   0m 24s |  |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 16s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 26s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  17m 42s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   1m  1s |  |  the patch passed  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 28s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 32s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  84m 19s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2526/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2526 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux df51935fa806 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / da1ea2530fa |
   | Default Java | Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2526/1/testReport/ |
   | Max. process+thread count | 572 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2526/1/console |
   | versions | git=2.17.1 maven=3.6.0 findbugs=4.0.6 |
   | Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.

[GitHub] [hadoop] hadoop-yetus commented on pull request #2526: HADOOP-17413. ABFS: Release elastic byte buffer pool at close

2020-12-07 Thread GitBox


hadoop-yetus commented on pull request #2526:
URL: https://github.com/apache/hadoop/pull/2526#issuecomment-739923344


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m 14s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  35m 20s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 35s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  compile  |   0m 29s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 22s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 35s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  18m 13s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 26s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +0 :ok: |  spotbugs  |   0m 56s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   0m 54s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 28s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javac  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 24s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  javac  |   0m 24s |  |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 16s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 26s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  17m 42s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   1m  1s |  |  the patch passed  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 28s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 32s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  84m 19s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2526/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2526 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux df51935fa806 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / da1ea2530fa |
   | Default Java | Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2526/1/testReport/ |
   | Max. process+thread count | 572 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2526/1/console |
   | versions | git=2.17.1 maven=3.6.0 findbugs=4.0.6 |
   | Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this 

[jira] [Updated] (HADOOP-17413) ABFS: Release Elastic ByteBuffer pool memory at outputStream close

2020-12-07 Thread Sneha Vijayarajan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sneha Vijayarajan updated HADOOP-17413:
---
Status: Patch Available  (was: Open)

> ABFS: Release Elastic ByteBuffer pool memory at outputStream close
> --
>
> Key: HADOOP-17413
> URL: https://issues.apache.org/jira/browse/HADOOP-17413
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.3.0
>Reporter: Sneha Vijayarajan
>Assignee: Sneha Vijayarajan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Each AbfsOutputStream holds on to an instance of elastic bytebuffer pool. 
> This instance needs to be released so that the memory can be given back to 
> JVM's available memory pool. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17413) ABFS: Release Elastic ByteBuffer pool memory at outputStream close

2020-12-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17413?focusedWorklogId=521158=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-521158
 ]

ASF GitHub Bot logged work on HADOOP-17413:
---

Author: ASF GitHub Bot
Created on: 07/Dec/20 13:31
Start Date: 07/Dec/20 13:31
Worklog Time Spent: 10m 
  Work Description: snvijaya commented on pull request #2526:
URL: https://github.com/apache/hadoop/pull/2526#issuecomment-739919929


   HNS-OAuth
   
   [INFO] Results:
   [INFO] 
   [INFO] Tests run: 90, Failures: 0, Errors: 0, Skipped: 0
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 448, Failures: 0, Errors: 0, Skipped: 68
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 208, Failures: 0, Errors: 0, Skipped: 141
   
   NonHNS-OAuth
   
   [INFO] Results:
   [INFO] 
   [INFO] Tests run: 90, Failures: 0, Errors: 0, Skipped: 0
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 460, Failures: 0, Errors: 0, Skipped: 251
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 206, Failures: 0, Errors: 0, Skipped: 141
   
   HNS-SharedKey
   
   [INFO] Results:
   [INFO] 
   [INFO] Tests run: 90, Failures: 0, Errors: 0, Skipped: 0
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 460, Failures: 0, Errors: 0, Skipped: 24
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 208, Failures: 0, Errors: 0, Skipped: 24
   
   NonHNS-SharedKey
   
   [INFO] Results:
   [INFO] 
   [INFO] Tests run: 90, Failures: 0, Errors: 0, Skipped: 0
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 429, Failures: 0, Errors: 0, Skipped: 244
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 208, Failures: 0, Errors: 0, Skipped: 24
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 521158)
Time Spent: 20m  (was: 10m)

> ABFS: Release Elastic ByteBuffer pool memory at outputStream close
> --
>
> Key: HADOOP-17413
> URL: https://issues.apache.org/jira/browse/HADOOP-17413
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.3.0
>Reporter: Sneha Vijayarajan
>Assignee: Sneha Vijayarajan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.0
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Each AbfsOutputStream holds on to an instance of elastic bytebuffer pool. 
> This instance needs to be released so that the memory can be given back to 
> JVM's available memory pool. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] snvijaya commented on pull request #2526: HADOOP-17413. ABFS: Release elastic byte buffer pool at close

2020-12-07 Thread GitBox


snvijaya commented on pull request #2526:
URL: https://github.com/apache/hadoop/pull/2526#issuecomment-739919929


   HNS-OAuth
   
   [INFO] Results:
   [INFO] 
   [INFO] Tests run: 90, Failures: 0, Errors: 0, Skipped: 0
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 448, Failures: 0, Errors: 0, Skipped: 68
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 208, Failures: 0, Errors: 0, Skipped: 141
   
   NonHNS-OAuth
   
   [INFO] Results:
   [INFO] 
   [INFO] Tests run: 90, Failures: 0, Errors: 0, Skipped: 0
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 460, Failures: 0, Errors: 0, Skipped: 251
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 206, Failures: 0, Errors: 0, Skipped: 141
   
   HNS-SharedKey
   
   [INFO] Results:
   [INFO] 
   [INFO] Tests run: 90, Failures: 0, Errors: 0, Skipped: 0
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 460, Failures: 0, Errors: 0, Skipped: 24
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 208, Failures: 0, Errors: 0, Skipped: 24
   
   NonHNS-SharedKey
   
   [INFO] Results:
   [INFO] 
   [INFO] Tests run: 90, Failures: 0, Errors: 0, Skipped: 0
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 429, Failures: 0, Errors: 0, Skipped: 244
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 208, Failures: 0, Errors: 0, Skipped: 24
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ayushtkn commented on pull request #2502: Update RemoteException.java

2020-12-07 Thread GitBox


ayushtkn commented on pull request #2502:
URL: https://github.com/apache/hadoop/pull/2502#issuecomment-739914059


   Is there a jira for this?
   Please read - 
https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute
   
   You need to create a jira here- 
https://issues.apache.org/jira/projects/HADOOP
   Add details: what the issue is, how to reproduce it, and what problem you are 
facing.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] ankit-kumar-25 commented on pull request #2519: YARN-10491: Fix deprecation warnings in SLSWebApp.java

2020-12-07 Thread GitBox


ankit-kumar-25 commented on pull request #2519:
URL: https://github.com/apache/hadoop/pull/2519#issuecomment-739912005


   Hey @aajisaka, the suggested change has been done. Thank you for the review.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17407) ABFS: Delete Idempotency handling can lead to NPE

2020-12-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17407?focusedWorklogId=521145=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-521145
 ]

ASF GitHub Bot logged work on HADOOP-17407:
---

Author: ASF GitHub Bot
Created on: 07/Dec/20 13:06
Start Date: 07/Dec/20 13:06
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2525:
URL: https://github.com/apache/hadoop/pull/2525#issuecomment-739906835


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m  8s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 4 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  35m 26s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 35s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  compile  |   0m 30s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 22s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 34s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  18m  1s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +0 :ok: |  spotbugs  |   0m 56s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   0m 54s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 28s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javac  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 24s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  javac  |   0m 24s |  |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 15s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 27s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  16m 37s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   0m 21s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   0m 58s |  |  the patch passed  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 24s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 29s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  82m 43s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2525/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2525 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux bed5d8f2b33c 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / da1ea2530fa |
   | Default Java | Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2525/2/testReport/ |
   | Max. process+thread count | 517 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2525/2/console |
   | versions | git=2.17.1 maven=3.6.0 findbugs=4.0.6 |
   | Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org |
   

[GitHub] [hadoop] ankit-kumar-25 commented on a change in pull request #2519: YARN-10491: Fix deprecation warnings in SLSWebApp.java

2020-12-07 Thread GitBox


ankit-kumar-25 commented on a change in pull request #2519:
URL: https://github.com/apache/hadoop/pull/2519#discussion_r537492256



##
File path: 
hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/web/SLSWebApp.java
##
@@ -24,6 +24,7 @@
 import java.util.HashMap;
 import java.util.Map;
 import java.util.Set;
+import java.nio.charset.StandardCharsets;

Review comment:
   Change Done. Thanks!





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #2525: HADOOP-17407. ABFS: Fix NPE on delete idempotency flow

2020-12-07 Thread GitBox


hadoop-yetus commented on pull request #2525:
URL: https://github.com/apache/hadoop/pull/2525#issuecomment-739906835


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m  8s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 4 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  35m 26s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 35s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  compile  |   0m 30s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 22s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 34s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  18m  1s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 27s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +0 :ok: |  spotbugs  |   0m 56s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   0m 54s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 28s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javac  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 24s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  javac  |   0m 24s |  |  the patch passed  |
   | +1 :green_heart: |  checkstyle  |   0m 15s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 27s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  16m 37s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   0m 21s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   0m 58s |  |  the patch passed  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 24s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 29s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  82m 43s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2525/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2525 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux bed5d8f2b33c 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / da1ea2530fa |
   | Default Java | Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2525/2/testReport/ |
   | Max. process+thread count | 517 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2525/2/console |
   | versions | git=2.17.1 maven=3.6.0 findbugs=4.0.6 |
   | Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [hadoop] aajisaka commented on a change in pull request #2519: YARN-10491: Fix deprecation warnings in SLSWebApp.java

2020-12-07 Thread GitBox


aajisaka commented on a change in pull request #2519:
URL: https://github.com/apache/hadoop/pull/2519#discussion_r537481365



##
File path: 
hadoop-tools/hadoop-sls/src/main/java/org/apache/hadoop/yarn/sls/web/SLSWebApp.java
##
@@ -24,6 +24,7 @@
 import java.util.HashMap;
 import java.util.Map;
 import java.util.Set;
+import java.nio.charset.StandardCharsets;

Review comment:
   Would you add the import between `java.io.ObjectInputStream` and 
`java.text.MessageFormat` to keep the order?
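   For illustration only, the suggested placement would look roughly like this (the 
neighbouring imports are the two named above; the tiny class is just to make the 
snippet self-contained, not part of SLSWebApp):

   ```java
   import java.io.ObjectInputStream;         // existing import, shown for position only
   import java.nio.charset.StandardCharsets; // the new import, slotted in between
   import java.text.MessageFormat;           // existing import, shown for position only

   // Minimal usage so the snippet stands alone; the SLS change presumably swaps
   // deprecated charset-by-name calls for StandardCharsets.UTF_8 like this.
   class ImportOrderExample {
     static byte[] render(String pattern, Object... args) {
       return MessageFormat.format(pattern, args).getBytes(StandardCharsets.UTF_8);
     }
   }
   ```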





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-17412) When `fs.s3a.connection.ssl.enabled=true`, Error when visit S3A with AKSK

2020-12-07 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17245162#comment-17245162
 ] 

Steve Loughran edited comment on HADOOP-17412 at 12/7/20, 12:21 PM:


* please, no screenshots, paste a stack trace in
* grab the latest cloudstore release and run the storediag command against your 
bucket; attach it. This will strip out security info 
https://github.com/steveloughran/cloudstore
* do you have a proxy?
* is this a recent regression? I'm curious as we've hit some openssl TLS setup 
issues in the last week and I'm wondering if S3 has changed things

I see you want to hide the bucket name. Does it have any "." in it? if so: 
WONTFIX. switch to path 

This is your setup, afraid you get to debug it. sorry


was (Author: ste...@apache.org):
* please, no screenshots, paste a stack trace in
* grab the latest cloudstore release and run the storediag command against your 
bucket; this will strip out security info 
https://github.com/steveloughran/cloudstore

I see you want to hide the bucket name. Does it have any "." in it? if so: 
WONTFIX. switch to path 

This is your setup, afraid you get to debug it. sorry

> When `fs.s3a.connection.ssl.enabled=true`,   Error when visit S3A with AKSK
> ---
>
> Key: HADOOP-17412
> URL: https://issues.apache.org/jira/browse/HADOOP-17412
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
> Environment: jdk 1.8
> hadoop-3.3.0
>Reporter: angerszhu
>Priority: Major
> Attachments: image-2020-12-07-10-25-51-908.png
>
>
> When we updated the Hadoop version from hadoop-3.2.1 to hadoop-3.3.0 and used AKSK to 
> access S3A with SSL enabled, this error happened:
> {code:java}
> <property>
>   <name>ipc.client.connection.maxidletime</name>
>   <value>2</value>
> </property>
> <property>
>   <name>fs.s3a.secret.key</name>
>   <value></value>
> </property>
> <property>
>   <name>fs.s3a.access.key</name>
>   <value></value>
> </property>
> <property>
>   <name>fs.s3a.aws.credentials.provider</name>
>   <value>org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider</value>
> </property>
> {code}
> !image-2020-12-07-10-25-51-908.png!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17412) When `fs.s3a.connection.ssl.enabled=true`, Error when visit S3A with AKSK

2020-12-07 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17245162#comment-17245162
 ] 

Steve Loughran commented on HADOOP-17412:
-

* please, no screenshots, paste a stack trace in
* grab the latest cloudstore release and run the storediag command against your 
bucket; this will strip out security info 
https://github.com/steveloughran/cloudstore

I see you want to hide the bucket name. Does it have any "." in it? if so: 
WONTFIX. switch to path 

This is your setup, afraid you get to debug it. sorry

> When `fs.s3a.connection.ssl.enabled=true`,   Error when visit S3A with AKSK
> ---
>
> Key: HADOOP-17412
> URL: https://issues.apache.org/jira/browse/HADOOP-17412
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
> Environment: jdk 1.8
> hadoop-3.3.0
>Reporter: angerszhu
>Priority: Major
> Attachments: image-2020-12-07-10-25-51-908.png
>
>
> When we updated the Hadoop version from hadoop-3.2.1 to hadoop-3.3.0 and used AKSK to 
> access S3A with SSL enabled, this error happened:
> {code:java}
> <property>
>   <name>ipc.client.connection.maxidletime</name>
>   <value>2</value>
> </property>
> <property>
>   <name>fs.s3a.secret.key</name>
>   <value></value>
> </property>
> <property>
>   <name>fs.s3a.access.key</name>
>   <value></value>
> </property>
> <property>
>   <name>fs.s3a.aws.credentials.provider</name>
>   <value>org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider</value>
> </property>
> {code}
> !image-2020-12-07-10-25-51-908.png!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-16202) Stabilize openFile() and adopt internally

2020-12-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16202?focusedWorklogId=521118=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-521118
 ]

ASF GitHub Bot logged work on HADOOP-16202:
---

Author: ASF GitHub Bot
Created on: 07/Dec/20 12:12
Start Date: 07/Dec/20 12:12
Worklog Time Spent: 10m 
  Work Description: steveloughran commented on pull request #2168:
URL: https://github.com/apache/hadoop/pull/2168#issuecomment-739880426


   The latest patch rounds things off. This thing is ready to go in. 
   * We now have the option to specify the start and end of splits; the input 
formats in the MR client do this.
   * Everywhere in the code where we explicitly download sequential datasets, we now 
request sequential IO. (Actually, I've just realised `hadoop fs -head ` 
should request random IO as well as declare split lengths...we don't want a 
full GET.)
   
   It's important that FS implementations don't rely on split length to set max 
file len, because splits are allowed to overrun to ensure a whole record/block 
is read. Apps which pass split info down to worker processes (hive ) need to 
pass in file size too if they want to save the HEAD request. It could still be 
used by the input streams if they can think of a way:
   
   1. For sequential IO: end of content length = min(split-end, file-length) 
for that initial request.
   2. For random IO, assume it's the initial EOF. 
   
   because openFile() declares FNFEs can be delayed until reads, we could also 
see if we could do an async HEAD request while processing that first GET/HEAD, 
so have the final file length without blocking. That would make streams more 
complex —at least now we have the option.
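   A rough usage sketch of the builder flow described above, assuming a caller that 
already holds a FileStatus for the file (the option key is the one named in the JIRA 
description and may not be the final stabilized name):

   ```java
   import org.apache.hadoop.fs.FSDataInputStream;
   import org.apache.hadoop.fs.FileStatus;
   import org.apache.hadoop.fs.FileSystem;
   import org.apache.hadoop.fs.Path;

   public class OpenFileSketch {
     // Opens a file whose status is already known, declaring its length so the
     // filesystem can skip the existence check / HEAD request.
     static FSDataInputStream open(FileSystem fs, FileStatus status) throws Exception {
       Path path = status.getPath();
       return fs.openFile(path)
           .withFileStatus(status)
           .opt("fs.s3a.open.option.length", Long.toString(status.getLen()))
           .build()
           .get(); // the returned future completes with the stream
     }
   }
   ```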



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 521118)
Time Spent: 3h 40m  (was: 3.5h)

> Stabilize openFile() and adopt internally
> -
>
> Key: HADOOP-16202
> URL: https://issues.apache.org/jira/browse/HADOOP-16202
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs, fs/s3, tools/distcp
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 3h 40m
>  Remaining Estimate: 0h
>
> The {{openFile()}} builder API lets us add new options when reading a file
> Add an option {{"fs.s3a.open.option.length"}} which takes a long and allows 
> the length of the file to be declared. If set, *no check for the existence of 
> the file is issued when opening the file*
> Also: withFileStatus() to take any FileStatus implementation, rather than 
> only S3AFileStatus, and not check that the path matches the path being 
> opened. This is needed to support viewFS-style wrapping and mounting.
> Also adopt openFile() where appropriate to stop clusters with S3A reads switched to 
> random IO from killing download/localization:
> * fs shell copyToLocal
> * distcp
> * IOUtils.copy



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17413) ABFS: Release Elastic ByteBuffer pool memory at outputStream close

2020-12-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17413?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HADOOP-17413:

Labels: pull-request-available  (was: )

> ABFS: Release Elastic ByteBuffer pool memory at outputStream close
> --
>
> Key: HADOOP-17413
> URL: https://issues.apache.org/jira/browse/HADOOP-17413
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.3.0
>Reporter: Sneha Vijayarajan
>Assignee: Sneha Vijayarajan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Each AbfsOutputStream holds on to an instance of elastic bytebuffer pool. 
> This instance needs to be released so that the memory can be given back to 
> JVM's available memory pool. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17413) ABFS: Release Elastic ByteBuffer pool memory at outputStream close

2020-12-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17413?focusedWorklogId=521116=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-521116
 ]

ASF GitHub Bot logged work on HADOOP-17413:
---

Author: ASF GitHub Bot
Created on: 07/Dec/20 12:12
Start Date: 07/Dec/20 12:12
Worklog Time Spent: 10m 
  Work Description: snvijaya opened a new pull request #2526:
URL: https://github.com/apache/hadoop/pull/2526


   Each AbfsOutputStream holds on to an instance of elastic bytebuffer pool. 
This instance needs to be released so that the memory can be given back to 
JVM's available memory pool. 
   
   For testing, existing tests were re-run. 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 521116)
Remaining Estimate: 0h
Time Spent: 10m

> ABFS: Release Elastic ByteBuffer pool memory at outputStream close
> --
>
> Key: HADOOP-17413
> URL: https://issues.apache.org/jira/browse/HADOOP-17413
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.3.0
>Reporter: Sneha Vijayarajan
>Assignee: Sneha Vijayarajan
>Priority: Major
> Fix For: 3.3.0
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Each AbfsOutputStream holds on to an instance of elastic bytebuffer pool. 
> This instance needs to be released so that the memory can be given back to 
> JVM's available memory pool. 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on pull request #2168: HADOOP-16202. Enhance/Stabilize openFile()

2020-12-07 Thread GitBox


steveloughran commented on pull request #2168:
URL: https://github.com/apache/hadoop/pull/2168#issuecomment-739880426


   The latest patch rounds things off. This thing is ready to go in. 
   * We now have the option to specify the start and end of splits; the input 
formats in the MR client do this.
   * Everywhere in the code where we explicitly download sequential datasets, we now 
request sequential IO. (Actually, I've just realised `hadoop fs -head ` 
should request random IO as well as declare split lengths...we don't want a 
full GET.)
   
   It's important that FS implementations don't rely on split length to set max 
file len, because splits are allowed to overrun to ensure a whole record/block 
is read. Apps which pass split info down to worker processes (hive ) need to 
pass in file size too if they want to save the HEAD request. It could still be 
used by the input streams if they can think of a way:
   
   1. For sequential IO: end of content length = min(split-end, file-length) 
for that initial request.
   2. For random IO, assume it's the initial EOF. 
   
   because openFile() declares FNFEs can be delayed until reads, we could also 
see if we could do an async HEAD request while processing that first GET/HEAD, 
so have the final file length without blocking. That would make streams more 
complex —at least now we have the option.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] snvijaya opened a new pull request #2526: HADOOP-17413. ABFS: Release elastic byte buffer pool at close

2020-12-07 Thread GitBox


snvijaya opened a new pull request #2526:
URL: https://github.com/apache/hadoop/pull/2526


   Each AbfsOutputStream holds on to an instance of elastic bytebuffer pool. 
This instance needs to be released so that the memory can be given back to 
JVM's available memory pool. 
   
   For testing, existing tests were re-run. 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17407) ABFS: Delete Idempotency handling can lead to NPE

2020-12-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17407?focusedWorklogId=521106=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-521106
 ]

ASF GitHub Bot logged work on HADOOP-17407:
---

Author: ASF GitHub Bot
Created on: 07/Dec/20 11:44
Start Date: 07/Dec/20 11:44
Worklog Time Spent: 10m 
  Work Description: snvijaya commented on pull request #2525:
URL: https://github.com/apache/hadoop/pull/2525#issuecomment-739866803


   Test Results from accounts in East US2.
   
   HNS-OAuth
   
   [INFO] Results:
   [INFO] 
   [INFO] Tests run: 90, Failures: 0, Errors: 0, Skipped: 0
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 460, Failures: 0, Errors: 0, Skipped: 68
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 208, Failures: 0, Errors: 0, Skipped: 141
   
   HNS-AppendBlob
   
   [INFO] Results:
   [INFO] 
   [INFO] Tests run: 90, Failures: 0, Errors: 0, Skipped: 0
   [INFO] Results:
   [INFO] 
   [ERROR] Failures: 
   [ERROR]   
ITestAbfsNetworkStatistics.testAbfsHttpResponseStatistics:233->AbstractAbfsIntegrationTest.assertAbfsStatistics:453->Assert.assertEquals:645->Assert.failNotEquals:834->Assert.fail:88
 Mismatch in get_responses expected:<6> but was:<7>
   [ERROR]   ITestAbfsOutputStream.testMaxRequestsAndQueueCapacity:72 
[maxConcurrentRequests should be 6] expected:<[6]> but was:<[1]>
   [ERROR]   ITestAbfsOutputStream.testMaxRequestsAndQueueCapacityDefaults:50 
[maxConcurrentRequests should be 32] expected:<[32]> but was:<[1]>
   [INFO] 
   [ERROR] Tests run: 460, Failures: 3, Errors: 0, Skipped: 68
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 208, Failures: 0, Errors: 0, Skipped: 141
   
   HNS-SharedKey
   
   [INFO] Results:
   [INFO] 
   [INFO] Tests run: 90, Failures: 0, Errors: 0, Skipped: 0
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 460, Failures: 0, Errors: 0, Skipped: 24
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 208, Failures: 0, Errors: 0, Skipped: 24
   
   NonHNS-OAuth
   
   [INFO] Results:
   [INFO] 
   [INFO] Tests run: 90, Failures: 0, Errors: 0, Skipped: 0
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 460, Failures: 0, Errors: 0, Skipped: 251
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 208, Failures: 0, Errors: 0, Skipped: 141
   
   NonHNS-SharedKey
   
   [INFO] Results:
   [INFO] 
   [INFO] Tests run: 90, Failures: 0, Errors: 0, Skipped: 0
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 460, Failures: 0, Errors: 0, Skipped: 247
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 208, Failures: 0, Errors: 0, Skipped: 24
   
   Failures in AppendBlob test combination will be addressed in 
https://issues.apache.org/jira/browse/HADOOP-17404.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 521106)
Time Spent: 0.5h  (was: 20m)

> ABFS: Delete Idempotency handling can lead to NPE
> -
>
> Key: HADOOP-17407
> URL: https://issues.apache.org/jira/browse/HADOOP-17407
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.3.0
>Reporter: Sneha Vijayarajan
>Assignee: Sneha Vijayarajan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.1
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Delete idempotency code returns success with a dummy success HttpOperation. 
> The calling code that checks the continuation token throws an NPE because the dummy 
> success instance does not have any response headers.
> In the case of a non-HNS account, the server could return a continuation token. The dummy 
> success response is modified so that it does not fail while accessing response 
> headers.
>  
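To illustrate the failure mode and the fix idea with deliberately hypothetical types 
(these are not the real ABFS classes or method names): if the synthesized success 
result returns null for an absent response header, any caller that reads the 
continuation-token header NPEs; returning an empty string keeps that path safe.

{code:java}
// Hypothetical sketch only -- not the actual ABFS classes or method names.
class DummySuccessResult {
  int getStatusCode() {
    return 200; // treat the retried delete as already applied
  }

  String getResponseHeader(String name) {
    // Returning "" instead of null is the essence of the fix: callers that
    // look up the continuation header see "no continuation" rather than an NPE.
    return "";
  }
}

class DeletePager {
  boolean hasMorePages(DummySuccessResult result) {
    String token = result.getResponseHeader("x-ms-continuation");
    return !token.isEmpty(); // safe: never null
  }
}
{code}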



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17407) ABFS: Delete Idempotency handling can lead to NPE

2020-12-07 Thread Sneha Vijayarajan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sneha Vijayarajan updated HADOOP-17407:
---
Status: Patch Available  (was: Open)

> ABFS: Delete Idempotency handling can lead to NPE
> -
>
> Key: HADOOP-17407
> URL: https://issues.apache.org/jira/browse/HADOOP-17407
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.3.0
>Reporter: Sneha Vijayarajan
>Assignee: Sneha Vijayarajan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.1
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> Delete idempotency code returns success with a dummy success HttpOperation. 
> The calling code that checks the continuation token throws an NPE because the dummy 
> success instance does not have any response headers.
> In the case of a non-HNS account, the server could return a continuation token. The dummy 
> success response is modified so that it does not fail while accessing response 
> headers.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] snvijaya commented on pull request #2525: HADOOP-17407. ABFS: Fix NPE on delete idempotency flow

2020-12-07 Thread GitBox


snvijaya commented on pull request #2525:
URL: https://github.com/apache/hadoop/pull/2525#issuecomment-739866803


   Test Results from accounts in East US2.
   
   HNS-OAuth
   
   [INFO] Results:
   [INFO] 
   [INFO] Tests run: 90, Failures: 0, Errors: 0, Skipped: 0
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 460, Failures: 0, Errors: 0, Skipped: 68
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 208, Failures: 0, Errors: 0, Skipped: 141
   
   HNS-AppendBlob
   
   [INFO] Results:
   [INFO] 
   [INFO] Tests run: 90, Failures: 0, Errors: 0, Skipped: 0
   [INFO] Results:
   [INFO] 
   [ERROR] Failures: 
   [ERROR]   
ITestAbfsNetworkStatistics.testAbfsHttpResponseStatistics:233->AbstractAbfsIntegrationTest.assertAbfsStatistics:453->Assert.assertEquals:645->Assert.failNotEquals:834->Assert.fail:88
 Mismatch in get_responses expected:<6> but was:<7>
   [ERROR]   ITestAbfsOutputStream.testMaxRequestsAndQueueCapacity:72 
[maxConcurrentRequests should be 6] expected:<[6]> but was:<[1]>
   [ERROR]   ITestAbfsOutputStream.testMaxRequestsAndQueueCapacityDefaults:50 
[maxConcurrentRequests should be 32] expected:<[32]> but was:<[1]>
   [INFO] 
   [ERROR] Tests run: 460, Failures: 3, Errors: 0, Skipped: 68
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 208, Failures: 0, Errors: 0, Skipped: 141
   
   HNS-SharedKey
   
   [INFO] Results:
   [INFO] 
   [INFO] Tests run: 90, Failures: 0, Errors: 0, Skipped: 0
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 460, Failures: 0, Errors: 0, Skipped: 24
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 208, Failures: 0, Errors: 0, Skipped: 24
   
   NonHNS-OAuth
   
   [INFO] Results:
   [INFO] 
   [INFO] Tests run: 90, Failures: 0, Errors: 0, Skipped: 0
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 460, Failures: 0, Errors: 0, Skipped: 251
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 208, Failures: 0, Errors: 0, Skipped: 141
   
   NonHNS-SharedKey
   
   [INFO] Results:
   [INFO] 
   [INFO] Tests run: 90, Failures: 0, Errors: 0, Skipped: 0
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 460, Failures: 0, Errors: 0, Skipped: 247
   [INFO] Results:
   [INFO] 
   [WARNING] Tests run: 208, Failures: 0, Errors: 0, Skipped: 24
   
   Failures in AppendBlob test combination will be addressed in 
https://issues.apache.org/jira/browse/HADOOP-17404.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-17413) ABFS: Release Elastic ByteBuffer pool memory at outputStream close

2020-12-07 Thread Sneha Vijayarajan (Jira)
Sneha Vijayarajan created HADOOP-17413:
--

 Summary: ABFS: Release Elastic ByteBuffer pool memory at 
outputStream close
 Key: HADOOP-17413
 URL: https://issues.apache.org/jira/browse/HADOOP-17413
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/azure
Affects Versions: 3.3.0
Reporter: Sneha Vijayarajan
Assignee: Sneha Vijayarajan
 Fix For: 3.3.0


Each AbfsOutputStream holds on to an instance of elastic bytebuffer pool. This 
instance needs to be released so that the memory can be given back to JVM's 
available memory pool. 
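
A minimal sketch of the idea, assuming the fix simply drops the stream's reference to 
its ElasticByteBufferPool on close() so the pooled buffers become eligible for GC (the 
class and field below are illustrative, not the actual AbfsOutputStream members):

{code:java}
import java.io.IOException;
import java.io.OutputStream;

import org.apache.hadoop.io.ElasticByteBufferPool;

// Illustrative only: a stripped-down output stream that releases its pool on close.
class PooledOutputStreamSketch extends OutputStream {
  private ElasticByteBufferPool byteBufferPool = new ElasticByteBufferPool();

  @Override
  public void write(int b) throws IOException {
    // A real stream would borrow buffers via byteBufferPool.getBuffer(false, size)
    // and return them with byteBufferPool.putBuffer(buffer) after each upload.
  }

  @Override
  public synchronized void close() throws IOException {
    super.close();
    // Drop the reference so the pooled buffers become unreachable once the
    // stream is closed, rather than living as long as the stream object itself.
    byteBufferPool = null;
  }
}
{code}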



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17290) ABFS: Add Identifiers to Client Request Header

2020-12-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17290?focusedWorklogId=521101=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-521101
 ]

ASF GitHub Bot logged work on HADOOP-17290:
---

Author: ASF GitHub Bot
Created on: 07/Dec/20 11:30
Start Date: 07/Dec/20 11:30
Worklog Time Spent: 10m 
  Work Description: sumangala-patki commented on pull request #2520:
URL: https://github.com/apache/hadoop/pull/2520#issuecomment-739860111


   TEST RESULTS
   
   HNS Account Location: East US 2
   NonHNS Account Location: East US 2, Central US
   
   ```
   HNS OAuth
   
   [WARNING] Tests run: 93, Failures: 0, Errors: 0, Skipped: 1
   [WARNING] Tests run: 464, Failures: 0, Errors: 0, Skipped: 68
   [WARNING] Tests run: 212, Failures: 0, Errors: 0, Skipped: 24
   
   HNS SharedKey
   
   [WARNING] Tests run: 93, Failures: 0, Errors: 0, Skipped: 2
   [WARNING] Tests run: 464, Failures: 0, Errors: 0, Skipped: 24
   [WARNING] Tests run: 212, Failures: 0, Errors: 0, Skipped: 16
   
   Non-HNS SharedKey
   
   [WARNING] Tests run: 93, Failures: 0, Errors: 0, Skipped: 2
   [WARNING] Tests run: 464, Failures: 0, Errors: 0, Skipped: 249
   [WARNING] Tests run: 212, Failures: 0, Errors: 0, Skipped: 16
   ```



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 521101)
Time Spent: 40m  (was: 0.5h)

> ABFS: Add Identifiers to Client Request Header
> --
>
> Key: HADOOP-17290
> URL: https://issues.apache.org/jira/browse/HADOOP-17290
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.3.0
>Reporter: Sumangala Patki
>Priority: Major
>  Labels: abfsactive, pull-request-available
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Adding unique values to the client request header to assist in correlating 
> requests



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] sumangala-patki commented on pull request #2520: HADOOP-17290. ABFS: Add Identifiers to Client Request Header

2020-12-07 Thread GitBox


sumangala-patki commented on pull request #2520:
URL: https://github.com/apache/hadoop/pull/2520#issuecomment-739860111


   TEST RESULTS
   
   HNS Account Location: East US 2
   NonHNS Account Location: East US 2, Central US
   
   ```
   HNS OAuth
   
   [WARNING] Tests run: 93, Failures: 0, Errors: 0, Skipped: 1
   [WARNING] Tests run: 464, Failures: 0, Errors: 0, Skipped: 68
   [WARNING] Tests run: 212, Failures: 0, Errors: 0, Skipped: 24
   
   HNS SharedKey
   
   [WARNING] Tests run: 93, Failures: 0, Errors: 0, Skipped: 2
   [WARNING] Tests run: 464, Failures: 0, Errors: 0, Skipped: 24
   [WARNING] Tests run: 212, Failures: 0, Errors: 0, Skipped: 16
   
   Non-HNS SharedKey
   
   [WARNING] Tests run: 93, Failures: 0, Errors: 0, Skipped: 2
   [WARNING] Tests run: 464, Failures: 0, Errors: 0, Skipped: 249
   [WARNING] Tests run: 212, Failures: 0, Errors: 0, Skipped: 16
   ```



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #2519: YARN-10491: Fix deprecation warnings in SLSWebApp.java

2020-12-07 Thread GitBox


hadoop-yetus commented on pull request #2519:
URL: https://github.com/apache/hadoop/pull/2519#issuecomment-739851518


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m 12s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 1 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  35m 23s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 25s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  compile  |   0m 23s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 19s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 27s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  17m 55s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 25s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   0m 23s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +0 :ok: |  spotbugs  |   0m 47s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   0m 44s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 24s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 20s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javac  |   0m 20s |  |  
hadoop-tools_hadoop-sls-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 generated 0 new + 0 unchanged - 6 fixed 
= 0 total (was 6)  |
   | +1 :green_heart: |  compile  |   0m 18s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  javac  |   0m 18s |  |  
hadoop-tools_hadoop-sls-jdkPrivateBuild-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 
with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 generated 0 new + 
0 unchanged - 6 fixed = 0 total (was 6)  |
   | -0 :warning: |  checkstyle  |   0m 13s | 
[/diff-checkstyle-hadoop-tools_hadoop-sls.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2519/3/artifact/out/diff-checkstyle-hadoop-tools_hadoop-sls.txt)
 |  hadoop-tools/hadoop-sls: The patch generated 6 new + 14 unchanged - 0 fixed 
= 20 total (was 14)  |
   | +1 :green_heart: |  mvnsite  |   0m 20s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  16m 51s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 20s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   0m 19s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  findbugs  |   0m 47s |  |  the patch passed  |
    _ Other Tests _ |
   | -1 :x: |  unit  |  10m 31s | 
[/patch-unit-hadoop-tools_hadoop-sls.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2519/3/artifact/out/patch-unit-hadoop-tools_hadoop-sls.txt)
 |  hadoop-sls in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 28s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  90m 52s |  |  |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.yarn.sls.appmaster.TestAMSimulator |
   |   | hadoop.yarn.sls.TestReservationSystemInvariants |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2519/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2519 |
   | JIRA Issue | YARN-10491 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 0165b7a53559 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / ad40715690c |
   | Default Java | Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 

[GitHub] [hadoop] aajisaka commented on pull request #2523: HDFS-15712. Upgrade googletest to 1.10.0

2020-12-07 Thread GitBox


aajisaka commented on pull request #2523:
URL: https://github.com/apache/hadoop/pull/2523#issuecomment-739846299


   I'll commit this tomorrow if there are no objections.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-17407) ABFS: Delete Idempotency handling can lead to NPE

2020-12-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17407?focusedWorklogId=521061=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-521061
 ]

ASF GitHub Bot logged work on HADOOP-17407:
---

Author: ASF GitHub Bot
Created on: 07/Dec/20 10:09
Start Date: 07/Dec/20 10:09
Worklog Time Spent: 10m 
  Work Description: hadoop-yetus commented on pull request #2525:
URL: https://github.com/apache/hadoop/pull/2525#issuecomment-739816123


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 40s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 4 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  34m 28s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 38s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  compile  |   0m 33s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 27s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 40s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m 21s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 30s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   0m 30s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +0 :ok: |  spotbugs  |   0m 58s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   0m 55s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 30s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javac  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 25s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  javac  |   0m 25s |  |  the patch passed  |
   | -0 :warning: |  checkstyle  |   0m 16s | 
[/diff-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2525/1/artifact/out/diff-checkstyle-hadoop-tools_hadoop-azure.txt)
 |  hadoop-tools/hadoop-azure: The patch generated 1 new + 3 unchanged - 0 
fixed = 4 total (was 3)  |
   | +1 :green_heart: |  mvnsite  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  14m 47s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 25s | 
[/patch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2525/1/artifact/out/patch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04.txt)
 |  hadoop-azure in the patch failed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04.  |
   | -1 :x: |  javadoc  |   0m 24s | 
[/patch-javadoc-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2525/1/artifact/out/patch-javadoc-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01.txt)
 |  hadoop-azure in the patch failed with JDK Private 
Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01.  |
   | +1 :green_heart: |  findbugs  |   0m 57s |  |  the patch passed  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 31s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 33s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  78m 47s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2525/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2525 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 5d4b333937ce 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | 

[GitHub] [hadoop] hadoop-yetus commented on pull request #2525: HADOOP-17407. ABFS: Fix NPE on delete idempotency flow

2020-12-07 Thread GitBox


hadoop-yetus commented on pull request #2525:
URL: https://github.com/apache/hadoop/pull/2525#issuecomment-739816123


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 40s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | +1 :green_heart: |   |   0m  0s | [test4tests](test4tests) |  The patch 
appears to include 4 new or modified test files.  |
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  34m 28s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   0m 38s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  compile  |   0m 33s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  checkstyle  |   0m 27s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   0m 40s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  16m 21s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 30s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   0m 30s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +0 :ok: |  spotbugs  |   0m 58s |  |  Used deprecated FindBugs config; 
considering switching to SpotBugs.  |
   | +1 :green_heart: |  findbugs  |   0m 55s |  |  trunk passed  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 29s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 30s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javac  |   0m 30s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   0m 25s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  javac  |   0m 25s |  |  the patch passed  |
   | -0 :warning: |  checkstyle  |   0m 16s | 
[/diff-checkstyle-hadoop-tools_hadoop-azure.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2525/1/artifact/out/diff-checkstyle-hadoop-tools_hadoop-azure.txt)
 |  hadoop-tools/hadoop-azure: The patch generated 1 new + 3 unchanged - 0 
fixed = 4 total (was 3)  |
   | +1 :green_heart: |  mvnsite  |   0m 28s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  14m 47s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | -1 :x: |  javadoc  |   0m 25s | 
[/patch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2525/1/artifact/out/patch-javadoc-hadoop-tools_hadoop-azure-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04.txt)
 |  hadoop-azure in the patch failed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04.  |
   | -1 :x: |  javadoc  |   0m 24s | 
[/patch-javadoc-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2525/1/artifact/out/patch-javadoc-hadoop-tools_hadoop-azure-jdkPrivateBuild-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01.txt)
 |  hadoop-azure in the patch failed with JDK Private 
Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01.  |
   | +1 :green_heart: |  findbugs  |   0m 57s |  |  the patch passed  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   1m 31s |  |  hadoop-azure in the patch 
passed.  |
   | +1 :green_heart: |  asflicense  |   0m 33s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  78m 47s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2525/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2525 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 5d4b333937ce 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 7dda804a1a7 |
   | Default Java | Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
   |  Test Results | 

[GitHub] [hadoop] hadoop-yetus commented on pull request #2523: HDFS-15712. Upgrade googletest to 1.10.0

2020-12-07 Thread GitBox


hadoop-yetus commented on pull request #2523:
URL: https://github.com/apache/hadoop/pull/2523#issuecomment-739791493


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   1m  6s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  1s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  35m 12s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   2m  0s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  compile  |   2m  6s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  mvnsite  |   0m 21s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  57m  1s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 19s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   0m 18s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 15s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 59s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  cc  |   1m 59s |  |  the patch passed  |
   | +1 :green_heart: |  golang  |   1m 59s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m 59s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   2m  0s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | +1 :green_heart: |  cc  |   2m  0s |  |  the patch passed  |
   | +1 :green_heart: |  golang  |   2m  0s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   2m  0s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 15s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  16m 40s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 15s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   0m 15s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
    _ Other Tests _ |
   | +1 :green_heart: |  unit  |   7m 21s |  |  hadoop-hdfs-native-client in 
the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 30s |  |  The patch does not 
generate ASF License warnings.  |
   |  |   |  91m 28s |  |  |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2523/5/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/2523 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient cc golang |
   | uname | Linux 7c759b5fb45c 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 
23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 7dda804a1a7 |
   | Default Java | Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
   | Multi-JDK versions | 
/usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 
/usr/lib/jvm/java-8-openjdk-amd64:Private 
Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 |
   |  Test Results | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2523/5/testReport/ |
   | Max. process+thread count | 600 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs-native-client U: 
hadoop-hdfs-project/hadoop-hdfs-native-client |
   | Console output | 
https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2523/5/console |
   | versions | git=2.17.1 maven=3.6.0 |
   | Powered by | Apache Yetus 0.13.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[jira] [Work logged] (HADOOP-16080) hadoop-aws does not work with hadoop-client-api

2020-12-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16080?focusedWorklogId=521050=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-521050
 ]

ASF GitHub Bot logged work on HADOOP-16080:
---

Author: ASF GitHub Bot
Created on: 07/Dec/20 09:18
Start Date: 07/Dec/20 09:18
Worklog Time Spent: 10m 
  Work Description: sunchao commented on pull request #2522:
URL: https://github.com/apache/hadoop/pull/2522#issuecomment-739787641


   @aajisaka yes, whatever is done here is only a short-term fix to make hadoop-aws 
work with hadoop-client-api (it is more urgent for 3.2.2 since that release is 
already in progress). Eventually we should do something similar to #2134.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 521050)
Time Spent: 2.5h  (was: 2h 20m)

> hadoop-aws does not work with hadoop-client-api
> ---
>
> Key: HADOOP-16080
> URL: https://issues.apache.org/jira/browse/HADOOP-16080
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.2.0, 3.1.1
>Reporter: Keith Turner
>Assignee: Chao Sun
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.2.2
>
>  Time Spent: 2.5h
>  Remaining Estimate: 0h
>
> I attempted to use Accumulo and S3a with the following jars on the classpath.
>  * hadoop-client-api-3.1.1.jar
>  * hadoop-client-runtime-3.1.1.jar
>  * hadoop-aws-3.1.1.jar
> This failed with the following exception.
> {noformat}
> Exception in thread "init" java.lang.NoSuchMethodError: 
> org.apache.hadoop.util.SemaphoredDelegatingExecutor.(Lcom/google/common/util/concurrent/ListeningExecutorService;IZ)V
> at org.apache.hadoop.fs.s3a.S3AFileSystem.create(S3AFileSystem.java:769)
> at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1169)
> at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1149)
> at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1108)
> at org.apache.hadoop.fs.FileSystem.createNewFile(FileSystem.java:1413)
> at 
> org.apache.accumulo.server.fs.VolumeManagerImpl.createNewFile(VolumeManagerImpl.java:184)
> at 
> org.apache.accumulo.server.init.Initialize.initDirs(Initialize.java:479)
> at 
> org.apache.accumulo.server.init.Initialize.initFileSystem(Initialize.java:487)
> at 
> org.apache.accumulo.server.init.Initialize.initialize(Initialize.java:370)
> at org.apache.accumulo.server.init.Initialize.doInit(Initialize.java:348)
> at org.apache.accumulo.server.init.Initialize.execute(Initialize.java:967)
> at org.apache.accumulo.start.Main.lambda$execKeyword$0(Main.java:129)
> at java.lang.Thread.run(Thread.java:748)
> {noformat}
> The problem is that {{S3AFileSystem.create()}} looks for 
> {{SemaphoredDelegatingExecutor(com.google.common.util.concurrent.ListeningExecutorService)}}
>  which does not exist in hadoop-client-api-3.1.1.jar.  What does exist is 
> {{SemaphoredDelegatingExecutor(org.apache.hadoop.shaded.com.google.common.util.concurrent.ListeningExecutorService)}}.
> To work around this issue I created a version of hadoop-aws-3.1.1.jar that 
> relocated references to Guava.
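
As a quick way to see which constructor is actually on the classpath, the following
is a minimal diagnostic sketch (not part of any patch). It only assumes the Hadoop
client jars are on the classpath and prints the constructor signatures of
SemaphoredDelegatingExecutor, making the shaded vs. unshaded Guava parameter visible.

```java
import java.lang.reflect.Constructor;

// Diagnostic sketch: print the constructors of SemaphoredDelegatingExecutor as
// seen on the current classpath. With hadoop-client-api the ListeningExecutorService
// parameter appears under the relocated org.apache.hadoop.shaded.com.google prefix,
// which is why code compiled against unshaded Guava fails with NoSuchMethodError.
public class CheckSemaphoredExecutorSignature {
  public static void main(String[] args) throws Exception {
    Class<?> clazz =
        Class.forName("org.apache.hadoop.util.SemaphoredDelegatingExecutor");
    for (Constructor<?> ctor : clazz.getDeclaredConstructors()) {
      System.out.println(ctor);
    }
  }
}
```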



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work logged] (HADOOP-16080) hadoop-aws does not work with hadoop-client-api

2020-12-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16080?focusedWorklogId=521051=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-521051
 ]

ASF GitHub Bot logged work on HADOOP-16080:
---

Author: ASF GitHub Bot
Created on: 07/Dec/20 09:18
Start Date: 07/Dec/20 09:18
Worklog Time Spent: 10m 
  Work Description: sunchao commented on a change in pull request #2522:
URL: https://github.com/apache/hadoop/pull/2522#discussion_r537346576



##
File path: 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
##
@@ -41,10 +41,7 @@
 import java.util.Optional;
 import java.util.Set;
 import java.util.Objects;
-import java.util.concurrent.CompletableFuture;
-import java.util.concurrent.LinkedBlockingQueue;
-import java.util.concurrent.ThreadPoolExecutor;
-import java.util.concurrent.TimeUnit;
+import java.util.concurrent.*;

Review comment:
   oops will do - I forgot to set this up on my new laptop
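
   For reference, a minimal sketch of the explicit-import style that the project's
   checkstyle rules generally expect instead of a java.util.concurrent.* wildcard.
   The extra class shown here (ExecutorService) is an illustrative assumption, not
   the exact set the patch introduces.

   ```java
   // Each class is imported explicitly rather than via java.util.concurrent.*.
   import java.util.concurrent.CompletableFuture;
   import java.util.concurrent.ExecutorService;
   import java.util.concurrent.LinkedBlockingQueue;
   import java.util.concurrent.ThreadPoolExecutor;
   import java.util.concurrent.TimeUnit;

   public class ExplicitImportsExample {
     public static void main(String[] args) throws Exception {
       ExecutorService pool = new ThreadPoolExecutor(
           1, 2, 60L, TimeUnit.SECONDS, new LinkedBlockingQueue<>());
       CompletableFuture<String> result =
           CompletableFuture.supplyAsync(() -> "done", pool);
       System.out.println(result.get());
       pool.shutdown();
     }
   }
   ```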





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 521051)
Time Spent: 2h 40m  (was: 2.5h)

> hadoop-aws does not work with hadoop-client-api
> ---
>
> Key: HADOOP-16080
> URL: https://issues.apache.org/jira/browse/HADOOP-16080
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 3.2.0, 3.1.1
>Reporter: Keith Turner
>Assignee: Chao Sun
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.2.2
>
>  Time Spent: 2h 40m
>  Remaining Estimate: 0h
>
> I attempted to use Accumulo and S3a with the following jars on the classpath.
>  * hadoop-client-api-3.1.1.jar
>  * hadoop-client-runtime-3.1.1.jar
>  * hadoop-aws-3.1.1.jar
> This failed with the following exception.
> {noformat}
> Exception in thread "init" java.lang.NoSuchMethodError: 
> org.apache.hadoop.util.SemaphoredDelegatingExecutor.(Lcom/google/common/util/concurrent/ListeningExecutorService;IZ)V
> at org.apache.hadoop.fs.s3a.S3AFileSystem.create(S3AFileSystem.java:769)
> at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1169)
> at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1149)
> at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1108)
> at org.apache.hadoop.fs.FileSystem.createNewFile(FileSystem.java:1413)
> at 
> org.apache.accumulo.server.fs.VolumeManagerImpl.createNewFile(VolumeManagerImpl.java:184)
> at 
> org.apache.accumulo.server.init.Initialize.initDirs(Initialize.java:479)
> at 
> org.apache.accumulo.server.init.Initialize.initFileSystem(Initialize.java:487)
> at 
> org.apache.accumulo.server.init.Initialize.initialize(Initialize.java:370)
> at org.apache.accumulo.server.init.Initialize.doInit(Initialize.java:348)
> at org.apache.accumulo.server.init.Initialize.execute(Initialize.java:967)
> at org.apache.accumulo.start.Main.lambda$execKeyword$0(Main.java:129)
> at java.lang.Thread.run(Thread.java:748)
> {noformat}
> The problem is that {{S3AFileSystem.create()}} looks for 
> {{SemaphoredDelegatingExecutor(com.google.common.util.concurrent.ListeningExecutorService)}}
>  which does not exist in hadoop-client-api-3.1.1.jar.  What does exist is 
> {{SemaphoredDelegatingExecutor(org.apache.hadoop.shaded.com.google.common.util.concurrent.ListeningExecutorService)}}.
> To work around this issue I created a version of hadoop-aws-3.1.1.jar that 
> relocated references to Guava.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] sunchao commented on pull request #2522: HADOOP-16080. hadoop-aws does not work with hadoop-client-api

2020-12-07 Thread GitBox


sunchao commented on pull request #2522:
URL: https://github.com/apache/hadoop/pull/2522#issuecomment-739787641


   @aajisaka yes, whatever is done here is only a short-term fix to make hadoop-aws 
work with hadoop-client-api (it is more urgent for 3.2.2 since that release is 
already in progress). Eventually we should do something similar to #2134.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] sunchao commented on a change in pull request #2522: HADOOP-16080. hadoop-aws does not work with hadoop-client-api

2020-12-07 Thread GitBox


sunchao commented on a change in pull request #2522:
URL: https://github.com/apache/hadoop/pull/2522#discussion_r537346576



##
File path: 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java
##
@@ -41,10 +41,7 @@
 import java.util.Optional;
 import java.util.Set;
 import java.util.Objects;
-import java.util.concurrent.CompletableFuture;
-import java.util.concurrent.LinkedBlockingQueue;
-import java.util.concurrent.ThreadPoolExecutor;
-import java.util.concurrent.TimeUnit;
+import java.util.concurrent.*;

Review comment:
   oops will do - I forgot to set this up on my new laptop





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] GauthamBanasandra commented on pull request #2523: HDFS-15712. Upgrade googletest to 1.10.0

2020-12-07 Thread GitBox


GauthamBanasandra commented on pull request #2523:
URL: https://github.com/apache/hadoop/pull/2523#issuecomment-739783245


   My changes are only in the HDFS Native Client, and it seems to have compiled 
successfully in the above run:
   
   **mvn install**
   ```
   [INFO] Apache Hadoop HDFS Native Client ... SUCCESS [  0.463 
s]
   ```
   
   The other failures in the above run seem to be due to resource 
unavailability while compiling an unrelated component:
   **compile**
   ```
   [WARNING] c++: error: vfork: Resource temporarily unavailable
   [WARNING] make[2]: *** 
[main/native/libhdfspp/lib/proto/CMakeFiles/proto_obj.dir/ClientNamenodeProtocol.pb.cc.o]
 Error 1
   ```
   
   **mvnsite**
   ```
   Mon Dec  7 08:06:34 UTC 2020
   cd 
/home/jenkins/jenkins-agent/workspace/hadoop-multibranch_PR-2523/src/hadoop-hdfs-project/hadoop-hdfs-native-client
   /usr/bin/mvn --batch-mode 
-Dmaven.repo.local=/home/jenkins/jenkins-agent/workspace/hadoop-multibranch_PR-2523/yetus-m2/hadoop-trunk-patch-1
 -Ptest-patch clean site site:stage
   /usr/bin/mvn: 45: /usr/bin/mvn: Cannot fork
   ```



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on pull request #2523: HDFS-15712. Upgrade googletest to 1.10.0

2020-12-07 Thread GitBox


hadoop-yetus commented on pull request #2523:
URL: https://github.com/apache/hadoop/pull/2523#issuecomment-739774329


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |::|--:|:|::|:---:|
   | +0 :ok: |  reexec  |   0m 41s |  |  Docker mode activated.  |
    _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files 
found.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain 
any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include 
any new or modified tests. Please justify why no new tests are needed for this 
patch. Also please list what manual steps were performed to verify this patch.  
|
    _ trunk Compile Tests _ |
   | -1 :x: |  mvninstall  |  14m 44s | 
[/branch-mvninstall-root.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2523/4/artifact/out/branch-mvninstall-root.txt)
 |  root in trunk failed.  |
   | -1 :x: |  compile  |   1m 54s | 
[/branch-compile-hadoop-hdfs-project_hadoop-hdfs-native-client-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2523/4/artifact/out/branch-compile-hadoop-hdfs-project_hadoop-hdfs-native-client-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04.txt)
 |  hadoop-hdfs-native-client in trunk failed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04.  |
   | -1 :x: |  compile  |   0m 26s | 
[/branch-compile-hadoop-hdfs-project_hadoop-hdfs-native-client-jdkPrivateBuild-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2523/4/artifact/out/branch-compile-hadoop-hdfs-project_hadoop-hdfs-native-client-jdkPrivateBuild-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01.txt)
 |  hadoop-hdfs-native-client in trunk failed with JDK Private 
Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01.  |
   | -1 :x: |  mvnsite  |   0m 10s | 
[/branch-mvnsite-hadoop-hdfs-project_hadoop-hdfs-native-client.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2523/4/artifact/out/branch-mvnsite-hadoop-hdfs-project_hadoop-hdfs-native-client.txt)
 |  hadoop-hdfs-native-client in trunk failed.  |
   | +1 :green_heart: |  shadedclient  |  33m 39s |  |  branch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 24s |  |  trunk passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   0m 19s |  |  trunk passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
    _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   0m 17s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 55s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | -1 :x: |  cc  |   1m 55s | 
[/diff-compile-cc-hadoop-hdfs-project_hadoop-hdfs-native-client-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2523/4/artifact/out/diff-compile-cc-hadoop-hdfs-project_hadoop-hdfs-native-client-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04.txt)
 |  
hadoop-hdfs-project_hadoop-hdfs-native-client-jdkUbuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04
 with JDK Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04 generated 10 new + 8 
unchanged - 0 fixed = 18 total (was 8)  |
   | +1 :green_heart: |  golang  |   1m 55s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m 55s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 56s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
   | -1 :x: |  cc  |   1m 56s | 
[/diff-compile-cc-hadoop-hdfs-project_hadoop-hdfs-native-client-jdkPrivateBuild-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-2523/4/artifact/out/diff-compile-cc-hadoop-hdfs-project_hadoop-hdfs-native-client-jdkPrivateBuild-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01.txt)
 |  
hadoop-hdfs-project_hadoop-hdfs-native-client-jdkPrivateBuild-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01
 with JDK Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01 generated 18 new 
+ 0 unchanged - 0 fixed = 18 total (was 0)  |
   | +1 :green_heart: |  golang  |   1m 56s |  |  the patch passed  |
   | +1 :green_heart: |  javac  |   1m 56s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   0m 21s |  |  the patch passed  |
   | +1 :green_heart: |  whitespace  |   0m  0s |  |  The patch has no 
whitespace issues.  |
   | +1 :green_heart: |  shadedclient  |  14m 52s |  |  patch has no errors 
when building and testing our client artifacts.  |
   | +1 :green_heart: |  javadoc  |   0m 18s |  |  the patch passed with JDK 
Ubuntu-11.0.9.1+1-Ubuntu-0ubuntu1.18.04  |
   | +1 :green_heart: |  javadoc  |   0m 18s |  |  the patch passed with JDK 
Private Build-1.8.0_275-8u275-b01-0ubuntu1~18.04-b01  |
    _ Other Tests _ |
   | +1 

[jira] [Work logged] (HADOOP-17407) ABFS: Delete Idempotency handling can lead to NPE

2020-12-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17407?focusedWorklogId=521036=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-521036
 ]

ASF GitHub Bot logged work on HADOOP-17407:
---

Author: ASF GitHub Bot
Created on: 07/Dec/20 08:49
Start Date: 07/Dec/20 08:49
Worklog Time Spent: 10m 
  Work Description: snvijaya opened a new pull request #2525:
URL: https://github.com/apache/hadoop/pull/2525


   The delete idempotency code returns success with a dummy success HttpOperation. 
The calling code that checks the continuation token throws an NPE because the 
dummy success instance does not have any response headers.
   
   In the case of a non-HNS account, the server could return a continuation token. 
The dummy success response is modified so that accessing its response headers 
does not fail.
   
   The existing test for delete idempotency was modified to reproduce the NPE and 
verify the fix.
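   
   To illustrate the shape of the fix, here is a minimal, hypothetical sketch (it is
   not the actual AbfsHttpOperation API): a dummy success result backed by an empty,
   non-null header map, so that a lookup of the continuation-token header returns
   null instead of failing with an NPE.
   
   ```java
   import java.util.Collections;
   import java.util.Map;
   
   // Hypothetical illustration only -- not the real ABFS classes. A "dummy success"
   // HTTP result carries an empty, non-null header map, so callers probing for a
   // continuation-token header simply get null instead of triggering an NPE.
   class DummySuccessHttpResult {
     private final int statusCode;
     private final Map<String, String> headers = Collections.emptyMap();
   
     DummySuccessHttpResult(int statusCode) {
       this.statusCode = statusCode;
     }
   
     int getStatusCode() {
       return statusCode;
     }
   
     // Returns null when the header is absent, which the caller can treat as
     // "no continuation token, nothing left to delete".
     String getResponseHeader(String name) {
       return headers.get(name);
     }
   
     public static void main(String[] args) {
       DummySuccessHttpResult result = new DummySuccessHttpResult(200);
       String token = result.getResponseHeader("x-ms-continuation");
       System.out.println("status=" + result.getStatusCode()
           + ", continuation=" + token);
     }
   }
   ```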



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 521036)
Remaining Estimate: 0h
Time Spent: 10m

> ABFS: Delete Idempotency handling can lead to NPE
> -
>
> Key: HADOOP-17407
> URL: https://issues.apache.org/jira/browse/HADOOP-17407
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.3.0
>Reporter: Sneha Vijayarajan
>Assignee: Sneha Vijayarajan
>Priority: Major
> Fix For: 3.3.1
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The delete idempotency code returns success with a dummy success HttpOperation. 
> The calling code that checks the continuation token throws an NPE because the 
> dummy success instance does not have any response headers.
> In the case of a non-HNS account, the server could return a continuation token. 
> The dummy success response is modified so that accessing its response headers 
> does not fail.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17407) ABFS: Delete Idempotency handling can lead to NPE

2020-12-07 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ASF GitHub Bot updated HADOOP-17407:

Labels: pull-request-available  (was: )

> ABFS: Delete Idempotency handling can lead to NPE
> -
>
> Key: HADOOP-17407
> URL: https://issues.apache.org/jira/browse/HADOOP-17407
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.3.0
>Reporter: Sneha Vijayarajan
>Assignee: Sneha Vijayarajan
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.1
>
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> The delete idempotency code returns success with a dummy success HttpOperation. 
> The calling code that checks the continuation token throws an NPE because the 
> dummy success instance does not have any response headers.
> In the case of a non-HNS account, the server could return a continuation token. 
> The dummy success response is modified so that accessing its response headers 
> does not fail.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] snvijaya opened a new pull request #2525: HADOOP-17407. ABFS: Fix NPE on delete idempotency flow

2020-12-07 Thread GitBox


snvijaya opened a new pull request #2525:
URL: https://github.com/apache/hadoop/pull/2525


   The delete idempotency code returns success with a dummy success HttpOperation. 
The calling code that checks the continuation token throws an NPE because the 
dummy success instance does not have any response headers.
   
   In the case of a non-HNS account, the server could return a continuation token. 
The dummy success response is modified so that accessing its response headers 
does not fail.
   
   The existing test for delete idempotency was modified to reproduce the NPE and 
verify the fix.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] GauthamBanasandra commented on pull request #2523: HDFS-15712. Upgrade googletest to 1.10.0

2020-12-07 Thread GitBox


GauthamBanasandra commented on pull request #2523:
URL: https://github.com/apache/hadoop/pull/2523#issuecomment-739759827


   Thanks @aajisaka 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17407) ABFS: Delete Idempotency handling can lead to NPE

2020-12-07 Thread Sneha Vijayarajan (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17407?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sneha Vijayarajan updated HADOOP-17407:
---
Description: 
The delete idempotency code returns success with a dummy success HttpOperation. The 
calling code that checks the continuation token throws an NPE because the dummy 
success instance does not have any response headers.

In the case of a non-HNS account, the server could return a continuation token. The 
dummy success response is modified so that accessing its response headers does not fail.

 

  was:
Delete idempotency code returns success with a dummy success HttpOperation. The 
calling code that checks the continuation token throws an NPE as the dummy success 
instance does not have any response headers.

 

The ABFS server endpoint doesn't utilize the continuation token concept for delete, 
and hence that code needs to be removed.


> ABFS: Delete Idempotency handling can lead to NPE
> -
>
> Key: HADOOP-17407
> URL: https://issues.apache.org/jira/browse/HADOOP-17407
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.3.0
>Reporter: Sneha Vijayarajan
>Assignee: Sneha Vijayarajan
>Priority: Major
> Fix For: 3.3.1
>
>
> The delete idempotency code returns success with a dummy success HttpOperation. 
> The calling code that checks the continuation token throws an NPE because the 
> dummy success instance does not have any response headers.
> In the case of a non-HNS account, the server could return a continuation token. 
> The dummy success response is modified so that accessing its response headers 
> does not fail.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org