[jira] [Updated] (HADOOP-17056) shelldoc fails in hadoop-common

2020-06-03 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-17056:
---
Fix Version/s: 3.4.0
   3.3.1

> shelldoc fails in hadoop-common
> ---
>
> Key: HADOOP-17056
> URL: https://issues.apache.org/jira/browse/HADOOP-17056
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Fix For: 3.3.1, 3.4.0
>
> Attachments: 2040.02.patch, 2040.03.patch, 2040.patch, 
> HADOOP-17056-addendum.01.patch, HADOOP-17056-test-01.patch, 
> HADOOP-17056-test-02.patch, HADOOP-17056-test-03.patch, HADOOP-17056.01.patch
>
>
> {noformat}
> [INFO] --- exec-maven-plugin:1.3.1:exec (shelldocs) @ hadoop-common ---
> > ERROR: yetus-dl: gpg unable to import
> > /home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/sourcedir/patchprocess/KEYS_YETUS
> > [INFO]
> > 
> > [INFO] BUILD FAILURE
> > [INFO]
> > 
> > [INFO] Total time:  9.377 s
> > [INFO] Finished at: 2020-05-28T17:37:41Z
> > [INFO]
> > 
> > [ERROR] Failed to execute goal
> > org.codehaus.mojo:exec-maven-plugin:1.3.1:exec (shelldocs) on project
> > hadoop-common: Command execution failed. Process exited with an error: 1
> > (Exit value: 1) -> [Help 1]
> > [ERROR]
> > [ERROR] To see the full stack trace of the errors, re-run Maven with the
> > -e switch.
> > [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> > [ERROR]
> > [ERROR] For more information about the errors and possible solutions,
> > please read the following articles:
> > [ERROR] [Help 1]
> > http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
> {noformat}
> * 
> https://builds.apache.org/job/PreCommit-HADOOP-Build/16957/artifact/out/branch-mvnsite-hadoop-common-project_hadoop-common.txt
> * 
> https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/155/artifact/out/patch-mvnsite-root.txt
> * 
> https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/157/artifact/out/patch-mvnsite-root.txt



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17056) shelldoc fails in hadoop-common

2020-06-03 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-17056:
---
Attachment: HADOOP-17056.01.patch




[jira] [Commented] (HADOOP-17056) shelldoc fails in hadoop-common

2020-06-03 Thread Akira Ajisaka (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17124679#comment-17124679
 ] 

Akira Ajisaka commented on HADOOP-17056:


Removed the empty line change in hadoop-functions.sh in 
[https://github.com/apache/hadoop/pull/2045]. I'll upload the final patch here 
to help track this issue.




[jira] [Updated] (HADOOP-17062) Fix "shelldocs was not available" warning in the precommit job

2020-06-03 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-17062:
---
Status: Patch Available  (was: Open)

> Fix "shelldocs was not available" warning in the precommit job
> --
>
> Key: HADOOP-17062
> URL: https://issues.apache.org/jira/browse/HADOOP-17062
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
>
> Shelldocs check is not enabled in the precommit jobs.
> |{color:#FF}0{color}|{color:#FF}shelldocs{color}|{color:#FF}0m 
> 1s{color}|{color:#FF}Shelldocs was not available.{color}|
> Console log 
> https://builds.apache.org/job/hadoop-multibranch/job/PR-2045/1/console
> {noformat}
> WARNING: shellcheck needs UTF-8 locale support. Forcing C.UTF-8.
> executable '/testptch/hadoop/dev-support/bin/shelldocs' for 'shelldocs' does 
> not exist.
> {noformat}






[jira] [Assigned] (HADOOP-17062) Fix "shelldocs was not available" warning in the precommit job

2020-06-03 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka reassigned HADOOP-17062:
--

Assignee: Akira Ajisaka

> Fix "shelldocs was not available" warning in the precommit job
> --
>
> Key: HADOOP-17062
> URL: https://issues.apache.org/jira/browse/HADOOP-17062
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
>
> Shelldocs check is not enabled in the precommit jobs.
> |{color:#FF}0{color}|{color:#FF}shelldocs{color}|{color:#FF}0m 
> 1s{color}|{color:#FF}Shelldocs was not available.{color}|
> Console log 
> https://builds.apache.org/job/hadoop-multibranch/job/PR-2045/1/console
> {noformat}
> WARNING: shellcheck needs UTF-8 locale support. Forcing C.UTF-8.
> executable '/testptch/hadoop/dev-support/bin/shelldocs' for 'shelldocs' does 
> not exist.
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17062) Fix "shelldocs was not available" warning in the precommit job

2020-06-03 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-17062:
---
Description: 
Shelldocs check is not enabled in the precommit jobs.
|{color:#FF}0{color}|{color:#FF}shelldocs{color}|{color:#FF}0m 
1s{color}|{color:#FF}Shelldocs was not available.{color}|

Console log 
https://builds.apache.org/job/hadoop-multibranch/job/PR-2045/1/console
{noformat}
WARNING: shellcheck needs UTF-8 locale support. Forcing C.UTF-8.
executable '/testptch/hadoop/dev-support/bin/shelldocs' for 'shelldocs' does 
not exist.
{noformat}

  was:
Shelldocs check is not enabled in the precommit jobs.
|{color:#FF}0{color}|{color:#FF}shelldocs{color}|{color:#FF}0m 
1s{color}|{color:#FF}Shelldocs was not available.{color}|

Console log:
*17:33:05*  WARNING: shellcheck needs UTF-8 locale support. Forcing 
C.UTF-8.*17:33:05*  executable '/testptch/hadoop/dev-support/bin/shelldocs' for 
'shelldocs' does not exist.


> Fix "shelldocs was not available" warning in the precommit job
> --
>
> Key: HADOOP-17062
> URL: https://issues.apache.org/jira/browse/HADOOP-17062
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira Ajisaka
>Priority: Major
>
> Shelldocs check is not enabled in the precommit jobs.
> |{color:#FF}0{color}|{color:#FF}shelldocs{color}|{color:#FF}0m 
> 1s{color}|{color:#FF}Shelldocs was not available.{color}|
> Console log 
> https://builds.apache.org/job/hadoop-multibranch/job/PR-2045/1/console
> {noformat}
> WARNING: shellcheck needs UTF-8 locale support. Forcing C.UTF-8.
> executable '/testptch/hadoop/dev-support/bin/shelldocs' for 'shelldocs' does 
> not exist.
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-17062) Fix "shelldocs was not available" warning in the precommit job

2020-06-03 Thread Akira Ajisaka (Jira)
Akira Ajisaka created HADOOP-17062:
--

 Summary: Fix "shelldocs was not available" warning in the 
precommit job
 Key: HADOOP-17062
 URL: https://issues.apache.org/jira/browse/HADOOP-17062
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Reporter: Akira Ajisaka


Shelldocs check is not enabled in the precommit jobs.
|{color:#FF}0{color}|{color:#FF}shelldocs{color}|{color:#FF}0m 
1s{color}|{color:#FF}Shelldocs was not available.{color}|

Console log:
*17:33:05*  WARNING: shellcheck needs UTF-8 locale support. Forcing C.UTF-8.
*17:33:05*  executable '/testptch/hadoop/dev-support/bin/shelldocs' for 'shelldocs' does not exist.
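The warning comes down to a missing executable rather than a failed check. A minimal sketch of the kind of guard involved (the path is copied from the warning above; the `SHELLDOCS` variable name is illustrative, not a real Yetus setting):

```shell
#!/bin/sh
# Hypothetical guard: verify the shelldocs executable exists and is runnable
# before invoking it. When the binary is missing, Yetus records a neutral
# "0 / was not available" vote instead of running the check.
SHELLDOCS="${SHELLDOCS:-/testptch/hadoop/dev-support/bin/shelldocs}"

if [ -x "$SHELLDOCS" ]; then
  echo "shelldocs found: $SHELLDOCS"
else
  echo "shelldocs not available at $SHELLDOCS; skipping the check" >&2
fi
```

In the failing job the file simply was not present where the precommit environment expected it, so the check was silently skipped rather than failed.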






[jira] [Comment Edited] (HADOOP-17056) shelldoc fails in hadoop-common

2020-06-03 Thread Akira Ajisaka (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17124629#comment-17124629
 ] 

Akira Ajisaka edited comment on HADOOP-17056 at 6/3/20, 6:11 AM:
-

Deleted the output of the #16962 precommit job. The job first used the test-02 
patch to build the environment (i.e. the environment variable was not set in 
the Docker image) and then verified the test-03 patch, so mvn site failed.


was (Author: ajisakaa):
Deleted the output of #16962 precommit job.




[jira] [Commented] (HADOOP-17056) shelldoc fails in hadoop-common

2020-06-02 Thread Akira Ajisaka (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17124630#comment-17124630
 ] 

Akira Ajisaka commented on HADOOP-17056:


bq.  +1 mvnsite 17m 59s the patch passed
The precommit job looks good.




[jira] [Commented] (HADOOP-17056) shelldoc fails in hadoop-common

2020-06-02 Thread Akira Ajisaka (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17124629#comment-17124629
 ] 

Akira Ajisaka commented on HADOOP-17056:


Deleted the output of #16962 precommit job.




[jira] [Issue Comment Deleted] (HADOOP-17056) shelldoc fails in hadoop-common

2020-06-02 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-17056:
---
Comment: was deleted

(was: | (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 28m 
57s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} dupname {color} | {color:green}  0m  
0s{color} | {color:green} No case conflicting files found. {color} |
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue}  0m  
0s{color} | {color:blue} Shelldocs was not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  3m 
25s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 32m 
 7s{color} | {color:green} trunk passed {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red} 11m 
10s{color} | {color:red} root in trunk failed. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m 23s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
31s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 21m 
 2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} hadolint {color} | {color:green}  0m  
5s{color} | {color:green} There were no new hadolint issues. {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  6m 
21s{color} | {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 6s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
13m 54s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 19m 
39s{color} | {color:green} root in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  1m 
 3s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}156m 46s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | ClientAPI=1.40 ServerAPI=1.40 base: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16962/artifact/out/Dockerfile
 |
| JIRA Issue | HADOOP-17056 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/13004581/HADOOP-17056-test-03.patch
 |
| Optional Tests | dupname asflicense shellcheck shelldocs hadolint mvnsite 
unit |
| uname | Linux 76c888767720 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | personality/hadoop.sh |
| git revision | trunk / 9fe4c37c25b |
| mvnsite | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16962/artifact/out/branch-mvnsite-root.txt
 |
| mvnsite | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16962/artifact/out/patch-mvnsite-root.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16962/testReport/ |
| Max. process+thread count | 414 (vs. ulimit of 5500) |
| modules | C: hadoop-common-project/hadoop-common . U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16962/console |
| versions | git=2.17.1 maven=3.6.0 shellcheck=0.4.6 hadolint=1.11.1-0-g0e692dd 
|
| Powered by | Apache Yetus 0.12.0 https://yetus.apache.org |


This message was automatically generated.

)


[jira] [Assigned] (HADOOP-17056) shelldoc fails in hadoop-common

2020-06-02 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka reassigned HADOOP-17056:
--

Assignee: Akira Ajisaka




[jira] [Commented] (HADOOP-17056) shelldoc fails in hadoop-common

2020-06-02 Thread Akira Ajisaka (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17123504#comment-17123504
 ] 

Akira Ajisaka commented on HADOOP-17056:


test-02 patch: Create the gpg homedir under the project root.

However, the path length is still too long for the qbt jobs. I'd like to 
disable gpg verification in the docker build image.
PR: https://github.com/apache/hadoop/pull/2045

test-03 patch: PR #2045 plus a hadoop-functions.sh change to kick off the shelldocs check.
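For context on why the gpg homedir location matters (a sketch, not the actual Yetus code): gpg-agent binds a Unix domain socket inside the gpg homedir, and Unix socket paths are capped at roughly 108 bytes on Linux (the `sun_path` limit), so a deeply nested Jenkins workspace can push the socket path past the limit and make gpg fail. Pointing `GNUPGHOME` at a short directory is the usual workaround:

```shell
#!/bin/sh
# Sketch: measure the would-be socket path length in the Jenkins workspace
# (path taken from the yetus-dl error in the issue description), then fall
# back to a short GNUPGHOME.
WORKDIR=/home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/sourcedir/patchprocess
SOCKET="$WORKDIR/.gnupg/S.gpg-agent"
echo "socket path length: ${#SOCKET}"   # already close to the ~108-byte limit

# Workaround: keep the gpg homedir somewhere short-pathed.
GNUPGHOME="$(mktemp -d /tmp/gpg.XXXXXX)"
export GNUPGHOME
chmod 700 "$GNUPGHOME"                  # gpg warns about unsafe permissions otherwise
echo "short homedir: $GNUPGHOME (${#GNUPGHOME} bytes)"
```

The alternative taken here, disabling gpg verification inside the throwaway Docker build image, avoids the socket entirely.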




[jira] [Comment Edited] (HADOOP-17056) shelldoc fails in hadoop-common

2020-06-02 Thread Akira Ajisaka (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17123504#comment-17123504
 ] 

Akira Ajisaka edited comment on HADOOP-17056 at 6/2/20, 8:34 AM:
-

test-02 patch: Create the gpg homedir under the project root.

However, the path length is still too long for the qbt jobs. I'd like to 
disable gpg verification when the environment is the docker build image.
PR: https://github.com/apache/hadoop/pull/2045

test-03 patch: PR #2045 plus a hadoop-functions.sh change to kick off the shelldocs check.


was (Author: ajisakaa):
test-02 patch: Create gpg homedir under the project root

However, the path length is still too long for the qbt jobs. I'd like to 
disable gpg verification in the docker build image.
PR: https://github.com/apache/hadoop/pull/2045

test-03 patch: PR #2045 and modified hadoop-functions.sh to kick the shelldoc.




[jira] [Updated] (HADOOP-17056) shelldoc fails in hadoop-common

2020-06-02 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-17056:
---
Attachment: HADOOP-17056-test-03.patch




[jira] [Updated] (HADOOP-17056) shelldoc fails in hadoop-common

2020-06-02 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-17056:
---
Attachment: HADOOP-17056-test-02.patch




[jira] [Updated] (HADOOP-17056) shelldoc fails in hadoop-common

2020-06-02 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-17056:
---
Attachment: HADOOP-17056-test-01.patch




[jira] [Commented] (HADOOP-17056) shelldoc fails in hadoop-common

2020-06-02 Thread Akira Ajisaka (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17123445#comment-17123445
 ] 

Akira Ajisaka commented on HADOOP-17056:


Thanks [~iwasakims] for your comment.

Applied the 03 patch to run gpg-agent explicitly and got a more detailed error log:
{noformat}
[INFO] --- exec-maven-plugin:1.3.1:exec (shelldocs) @ hadoop-common ---
gpg-agent[3808]: directory 
'/home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/sourcedir/patchprocess/.gpg/private-keys-v1.d'
 created
gpg-agent[3808]: listening on socket 
'/home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/sourcedir/patchprocess/.gpg/S.gpg-agent'
gpg-agent[3808]: listening on socket 
'/home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/sourcedir/patchprocess/.gpg/S.gpg-agent.extra'
gpg-agent[3808]: socket name 
'/home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/sourcedir/patchprocess/.gpg/S.gpg-agent.browser'
 is too long
gpg: keybox 
'/home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/sourcedir/patchprocess/.gpg/pubring.kbx'
 created
{noformat}
https://builds.apache.org/job/PreCommit-HADOOP-Build/16961/artifact/out/patch-mvnsite-root.txt
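The limit being hit here is the Unix domain socket path length (sun_path, 108 bytes on Linux including the trailing NUL). Measuring the socket names from the log above shows why only S.gpg-agent.browser fails:

```shell
# Measure the three agent socket paths under the qbt workspace gpg homedir.
# On Linux, sun_path holds 108 bytes including the trailing NUL, so a
# 108-character path no longer fits.
gpg_home='/home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/sourcedir/patchprocess/.gpg'
for name in S.gpg-agent S.gpg-agent.extra S.gpg-agent.browser; do
  p="${gpg_home}/${name}"
  printf '%s %s\n' "${#p}" "${name}"
done
# Prints 100, 106 and 108: S.gpg-agent and S.gpg-agent.extra still fit,
# matching the log, where only S.gpg-agent.browser "is too long".
```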




[jira] [Updated] (HADOOP-17056) shelldoc fails in hadoop-common

2020-06-01 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-17056:
---
Attachment: 2040.03.patch




[jira] [Commented] (HADOOP-17056) shelldoc fails in hadoop-common

2020-06-01 Thread Akira Ajisaka (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17121469#comment-17121469
 ] 

Akira Ajisaka commented on HADOOP-17056:


The detailed error message:
{noformat}
[INFO] --- exec-maven-plugin:1.3.1:exec (shelldocs) @ hadoop-common ---
gpg: keybox 
'/home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/sourcedir/patchprocess/.gpg/pubring.kbx'
 created
gpg: armor header: Version: GnuPG v1
gpg: armor header: Version: GnuPG v1
gpg: pub  rsa4096/E65E11D40D80DB7C 2015-08-20  Sean Busbey (CODE SIGNING KEY) 

gpg: 
/home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/sourcedir/patchprocess/.gpg/trustdb.gpg:
 trustdb created
gpg: using pgp trust model
gpg: key E65E11D40D80DB7C: public key "Sean Busbey (CODE SIGNING KEY) 
" imported
gpg: no running gpg-agent - starting '/usr/bin/gpg-agent'
gpg: waiting for the agent to come up ... (5s)
gpg: waiting for the agent to come up ... (4s)
gpg: waiting for the agent to come up ... (3s)
gpg: waiting for the agent to come up ... (2s)
gpg: waiting for the agent to come up ... (1s)
gpg: waiting for the agent to come up ... (0s)
gpg: can't connect to the agent: IPC connect call failed
{noformat}
https://builds.apache.org/job/PreCommit-HADOOP-Build/16960/artifact/out/patch-mvnsite-root.txt
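When gpg auto-starts the agent and it never comes up, as in the log above, one debugging step is to launch gpg-agent explicitly. A hedged sketch of that step (assumed commands, not necessarily the exact 03 patch):

```shell
# Assumed sketch, not the exact 03 patch: start gpg-agent up front with a
# dedicated short homedir, so the key import does not depend on gpg
# auto-starting an agent over an over-long socket path.
GNUPGHOME="$(mktemp -d /tmp/gpg.XXXXXX)"
export GNUPGHOME
chmod 700 "${GNUPGHOME}"
if command -v gpg-agent >/dev/null 2>&1; then
  # --daemon forks the agent into the background and returns immediately.
  gpg-agent --homedir "${GNUPGHOME}" --daemon >/dev/null 2>&1 || true
fi
echo "agent setup done"
```

An explicit start surfaces the underlying failure directly instead of the generic "IPC connect call failed" from the auto-start timeout.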




[jira] [Updated] (HADOOP-17056) shelldoc fails in hadoop-common

2020-05-31 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-17056:
---
Attachment: 2040.02.patch




[jira] [Commented] (HADOOP-17056) shelldoc fails in hadoop-common

2020-05-30 Thread Akira Ajisaka (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17120230#comment-17120230
 ] 

Akira Ajisaka commented on HADOOP-17056:


Reproduced the error in the precommit job.
https://builds.apache.org/job/PreCommit-HADOOP-Build/16959/artifact/out/patch-mvnsite-root.txt
{noformat}
[INFO] --- exec-maven-plugin:1.3.1:exec (shelldocs) @ hadoop-common ---
gpg: keybox 
'/home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/sourcedir/patchprocess/.gpg/pubring.kbx'
 created
gpg: 
/home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/sourcedir/patchprocess/.gpg/trustdb.gpg:
 trustdb created
gpg: key E65E11D40D80DB7C: public key "Sean Busbey (CODE SIGNING KEY) 
" imported
gpg: can't connect to the agent: IPC connect call failed
gpg: key E01B34FBE846DF38: public key "Allen Wittenauer (CODE SIGNING KEY) 
" imported
gpg: can't connect to the agent: IPC connect call failed
gpg: key 2922A48261524827: public key "Kengo Seki (CODE SIGNING KEY) 
" imported
gpg: can't connect to the agent: IPC connect call failed
gpg: key 0411512D61CD1D5F: 4 signatures not checked due to missing keys
gpg: key 0411512D61CD1D5F: public key "Ajay Yadava (CODE SIGNING KEY) 
" imported
gpg: can't connect to the agent: IPC connect call failed
gpg: Total number processed: 4
gpg:   imported: 4
gpg: no ultimately trusted keys found
{noformat}




[jira] [Updated] (HADOOP-17056) shelldoc fails in hadoop-common

2020-05-30 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-17056:
---
Attachment: 2040.patch




[jira] [Updated] (HADOOP-17056) shelldoc fails in hadoop-common

2020-05-30 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-17056:
---
Status: Patch Available  (was: Open)

Attaching a patch for debugging in the Hadoop-Precommit-Build job.




[jira] [Comment Edited] (HADOOP-17056) shelldoc fails in hadoop-common

2020-05-30 Thread Akira Ajisaka (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17120147#comment-17120147
 ] 

Akira Ajisaka edited comment on HADOOP-17056 at 5/30/20, 8:18 AM:
--

I could not reproduce this error even in the docker build image via 
start-build-env.sh.
Created a PR for debugging: https://github.com/apache/hadoop/pull/2040


was (Author: ajisakaa):
Created a PR for debug: https://github.com/apache/hadoop/pull/2040




[jira] [Commented] (HADOOP-17056) shelldoc fails in hadoop-common

2020-05-30 Thread Akira Ajisaka (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17056?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17120147#comment-17120147
 ] 

Akira Ajisaka commented on HADOOP-17056:


Created a PR for debugging: https://github.com/apache/hadoop/pull/2040




[jira] [Updated] (HADOOP-17056) shelldoc fails in hadoop-common

2020-05-30 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-17056:
---
Description: 
{noformat}
[INFO] --- exec-maven-plugin:1.3.1:exec (shelldocs) @ hadoop-common ---
> ERROR: yetus-dl: gpg unable to import
> /home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/sourcedir/patchprocess/KEYS_YETUS
> [INFO]
> 
> [INFO] BUILD FAILURE
> [INFO]
> 
> [INFO] Total time:  9.377 s
> [INFO] Finished at: 2020-05-28T17:37:41Z
> [INFO]
> 
> [ERROR] Failed to execute goal
> org.codehaus.mojo:exec-maven-plugin:1.3.1:exec (shelldocs) on project
> hadoop-common: Command execution failed. Process exited with an error: 1
> (Exit value: 1) -> [Help 1]
> [ERROR]
> [ERROR] To see the full stack trace of the errors, re-run Maven with the
> -e switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR]
> [ERROR] For more information about the errors and possible solutions,
> please read the following articles:
> [ERROR] [Help 1]
> http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
{noformat}
* 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16957/artifact/out/branch-mvnsite-hadoop-common-project_hadoop-common.txt
* 
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/155/artifact/out/patch-mvnsite-root.txt
* 
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/157/artifact/out/patch-mvnsite-root.txt

  was:
{noformat}
[INFO] --- exec-maven-plugin:1.3.1:exec (shelldocs) @ hadoop-common ---
> ERROR: yetus-dl: gpg unable to import
> /home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/sourcedir/patchprocess/KEYS_YETUS
> [INFO]
> 
> [INFO] BUILD FAILURE
> [INFO]
> 
> [INFO] Total time:  9.377 s
> [INFO] Finished at: 2020-05-28T17:37:41Z
> [INFO]
> 
> [ERROR] Failed to execute goal
> org.codehaus.mojo:exec-maven-plugin:1.3.1:exec (shelldocs) on project
> hadoop-common: Command execution failed. Process exited with an error: 1
> (Exit value: 1) -> [Help 1]
> [ERROR]
> [ERROR] To see the full stack trace of the errors, re-run Maven with the
> -e switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR]
> [ERROR] For more information about the errors and possible solutions,
> please read the following articles:
> [ERROR] [Help 1]
> http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
{noformat}


> shelldoc fails in hadoop-common
> ---
>
> Key: HADOOP-17056
> URL: https://issues.apache.org/jira/browse/HADOOP-17056
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira Ajisaka
>Priority: Major
>
> {noformat}
> [INFO] --- exec-maven-plugin:1.3.1:exec (shelldocs) @ hadoop-common ---
> > ERROR: yetus-dl: gpg unable to import
> > /home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/sourcedir/patchprocess/KEYS_YETUS
> > [INFO]
> > 
> > [INFO] BUILD FAILURE
> > [INFO]
> > 
> > [INFO] Total time:  9.377 s
> > [INFO] Finished at: 2020-05-28T17:37:41Z
> > [INFO]
> > 
> > [ERROR] Failed to execute goal
> > org.codehaus.mojo:exec-maven-plugin:1.3.1:exec (shelldocs) on project
> > hadoop-common: Command execution failed. Process exited with an error: 1
> > (Exit value: 1) -> [Help 1]
> > [ERROR]
> > [ERROR] To see the full stack trace of the errors, re-run Maven with the
> > -e switch.
> > [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> > [ERROR]
> > [ERROR] For more information about the errors and possible solutions,
> > please read the following articles:
> > [ERROR] [Help 1]
> > http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
> {noformat}
> * 
> https://builds.apache.org/job/PreCommit-HADOOP-Build/16957/artifact/out/branch-mvnsite-hadoop-common-project_hadoop-common.txt
> * 
> https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/155/artifact/out/patch-mvnsite-root.txt
> * 
> https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/157/artifact/out/patch-mvnsite-root.txt




[jira] [Updated] (HADOOP-17056) shelldoc fails in hadoop-common

2020-05-30 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-17056:
---
External issue ID:   (was: Automatic)
  Description: 
{noformat}
[INFO] --- exec-maven-plugin:1.3.1:exec (shelldocs) @ hadoop-common ---
> ERROR: yetus-dl: gpg unable to import
> /home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/sourcedir/patchprocess/KEYS_YETUS
> [INFO]
> 
> [INFO] BUILD FAILURE
> [INFO]
> 
> [INFO] Total time:  9.377 s
> [INFO] Finished at: 2020-05-28T17:37:41Z
> [INFO]
> 
> [ERROR] Failed to execute goal
> org.codehaus.mojo:exec-maven-plugin:1.3.1:exec (shelldocs) on project
> hadoop-common: Command execution failed. Process exited with an error: 1
> (Exit value: 1) -> [Help 1]
> [ERROR]
> [ERROR] To see the full stack trace of the errors, re-run Maven with the
> -e switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR]
> [ERROR] For more information about the errors and possible solutions,
> please read the following articles:
> [ERROR] [Help 1]
> http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
{noformat}

  was:

{noformat}
[INFO] --- exec-maven-plugin:1.3.1:exec (shelldocs) @ hadoop-common ---
> ERROR: yetus-dl: gpg unable to import
> /home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/sourcedir/patchprocess/KEYS_YETUS
> [INFO]
> 
> [INFO] BUILD FAILURE
> [INFO]
> 
> [INFO] Total time:  9.377 s
> [INFO] Finished at: 2020-05-28T17:37:41Z
> [INFO]
> 
> [ERROR] Failed to execute goal
> org.codehaus.mojo:exec-maven-plugin:1.3.1:exec (shelldocs) on project
> hadoop-common: Command execution failed. Process exited with an error: 1
> (Exit value: 1) -> [Help 1]
> [ERROR]
> [ERROR] To see the full stack trace of the errors, re-run Maven with the
> -e switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR]
> [ERROR] For more information about the errors and possible solutions,
> please read the following articles:
> [ERROR] [Help 1]
> http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
{noformat}


> shelldoc fails in hadoop-common
> ---
>
> Key: HADOOP-17056
> URL: https://issues.apache.org/jira/browse/HADOOP-17056
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira Ajisaka
>Priority: Major
>
> {noformat}
> [INFO] --- exec-maven-plugin:1.3.1:exec (shelldocs) @ hadoop-common ---
> > ERROR: yetus-dl: gpg unable to import
> > /home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/sourcedir/patchprocess/KEYS_YETUS
> > [INFO]
> > 
> > [INFO] BUILD FAILURE
> > [INFO]
> > 
> > [INFO] Total time:  9.377 s
> > [INFO] Finished at: 2020-05-28T17:37:41Z
> > [INFO]
> > 
> > [ERROR] Failed to execute goal
> > org.codehaus.mojo:exec-maven-plugin:1.3.1:exec (shelldocs) on project
> > hadoop-common: Command execution failed. Process exited with an error: 1
> > (Exit value: 1) -> [Help 1]
> > [ERROR]
> > [ERROR] To see the full stack trace of the errors, re-run Maven with the
> > -e switch.
> > [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> > [ERROR]
> > [ERROR] For more information about the errors and possible solutions,
> > please read the following articles:
> > [ERROR] [Help 1]
> > http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
> {noformat}






[jira] [Created] (HADOOP-17056) shelldoc fails in hadoop-common

2020-05-30 Thread Akira Ajisaka (Jira)
Akira Ajisaka created HADOOP-17056:
--

 Summary: shelldoc fails in hadoop-common
 Key: HADOOP-17056
 URL: https://issues.apache.org/jira/browse/HADOOP-17056
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Reporter: Akira Ajisaka



{noformat}
[INFO] --- exec-maven-plugin:1.3.1:exec (shelldocs) @ hadoop-common ---
> ERROR: yetus-dl: gpg unable to import
> /home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/sourcedir/patchprocess/KEYS_YETUS
> [INFO]
> 
> [INFO] BUILD FAILURE
> [INFO]
> 
> [INFO] Total time:  9.377 s
> [INFO] Finished at: 2020-05-28T17:37:41Z
> [INFO]
> 
> [ERROR] Failed to execute goal
> org.codehaus.mojo:exec-maven-plugin:1.3.1:exec (shelldocs) on project
> hadoop-common: Command execution failed. Process exited with an error: 1
> (Exit value: 1) -> [Help 1]
> [ERROR]
> [ERROR] To see the full stack trace of the errors, re-run Maven with the
> -e switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR]
> [ERROR] For more information about the errors and possible solutions,
> please read the following articles:
> [ERROR] [Help 1]
> http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
{noformat}






[jira] [Updated] (HADOOP-17055) Remove residual code of Ozone

2020-05-29 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-17055:
---
Fix Version/s: 3.4.0
   3.3.1
 Hadoop Flags: Reviewed
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Merged the PR into trunk and branch-3.3. Thank you [~jiwq] for your 
contribution.

> Remove residual code of Ozone
> -
>
> Key: HADOOP-17055
> URL: https://issues.apache.org/jira/browse/HADOOP-17055
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Wanqiang Ji
>Assignee: Wanqiang Ji
>Priority: Major
>  Labels: pull-request-available
> Fix For: 3.3.1, 3.4.0
>
>







[jira] [Updated] (HADOOP-17046) Support downstreams' existing Hadoop-rpc implementations using non-shaded protobuf classes.

2020-05-29 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17046?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-17046:
---
Status: Patch Available  (was: Open)

> Support downstreams' existing Hadoop-rpc implementations using non-shaded 
> protobuf classes.
> ---
>
> Key: HADOOP-17046
> URL: https://issues.apache.org/jira/browse/HADOOP-17046
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: rpc-server
>Affects Versions: 3.3.0
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
>Priority: Major
>
> After upgrade/shade of protobuf to 3.7 version, existing Hadoop-Rpc 
> client-server implementations using ProtobufRpcEngine will not work.
> So, this Jira proposes to keep existing ProtobuRpcEngine as-is (without 
> shading and with protobuf-2.5.0 implementation) to support downstream 
> implementations.
> Use new ProtobufRpcEngine2 to use shaded protobuf classes within Hadoop and 
> later projects who wish to upgrade their protobufs to 3.x.






[jira] [Commented] (HADOOP-15743) Jetty and SSL tunings to stabilize KMS performance

2020-05-28 Thread Akira Ajisaka (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17118578#comment-17118578
 ] 

Akira Ajisaka commented on HADOOP-15743:


Thank you Daryn for the great summary. I think the tuning options are effective 
for HttpFS as well.

{quote}where did you find the config 
{{javax.net.ssl.sessionCacheTimeout}}?{quote}

I found the config in [https://bugs.openjdk.java.net/browse/JDK-8210985]

{quote}The session cache size can be set via 
SSLSessionContext.setSessionCacheSize() or via the 
javax.net.ssl.sessionCacheSize{quote}


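For illustration, the cache limits that the {{-D}} flags above set at JVM startup can also be applied programmatically through {{javax.net.ssl.SSLSessionContext}}, as the quoted JDK note mentions. A minimal sketch (the cache size of 1000 and timeout of 600 seconds are illustrative values, not tuning advice):

```java
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLSessionContext;

public class SslCacheTuning {
    public static void main(String[] args) throws Exception {
        // Programmatic equivalent of -Djavax.net.ssl.sessionCacheSize and
        // -Djavax.net.ssl.sessionCacheTimeout, applied to the default
        // client-side SSL session context.
        SSLSessionContext ctx =
            SSLContext.getDefault().getClientSessionContext();
        ctx.setSessionCacheSize(1000); // bound the cache instead of "unlimited"
        ctx.setSessionTimeout(600);    // seconds; the JDK default is 86400 (24h)
        System.out.println(ctx.getSessionCacheSize() + " "
            + ctx.getSessionTimeout());
    }
}
```

Note that the system properties are read once when the default context is initialized, whereas the setters take effect immediately on that context.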

> Jetty and SSL tunings to stabilize KMS performance 
> ---
>
> Key: HADOOP-15743
> URL: https://issues.apache.org/jira/browse/HADOOP-15743
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Affects Versions: 2.8.0
>Reporter: Daryn Sharp
>Priority: Major
>
> The KMS has very low throughput with high client failure rates.  The 
> following config options will "stabilize" the KMS under load:
>  # Disable ECDH algos because java's SSL engine is inexplicably HORRIBLE.
>  # Reduce SSL session cache size (unlimited) and ttl (24h).  The memory cache 
> has very poor performance and causes extreme GC collection pressure. Load 
> balancing diminishes the effectiveness of the cache to 1/N-hosts anyway.
>  ** -Djavax.net.ssl.sessionCacheSize=1000
>  ** -Djavax.net.ssl.sessionCacheTimeout=6
>  # Completely disable thread LowResourceMonitor to stop jetty from 
> immediately closing incoming connections during connection bursts.  Client 
> retries cause jetty to remain in a low resource state until many clients fail 
> and cause thousands of sockets to linger in various close related states.
>  # Set min/max threads to 4x processors.   Jetty recommends only 50 to 500 
> threads.  Java's SSL engine has excessive synchronization that limits 
> performance anyway.
>  # Set https idle timeout to 6s.
>  # Significantly increase max fds to at least 128k.  Recommend using a VIP 
> load balancer with a lower limit.






[jira] [Updated] (HADOOP-17049) javax.activation-api and jakarta.activation-api define overlapping classes

2020-05-21 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-17049:
---
Fix Version/s: 3.4.0
   3.3.1
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Merged the PR into trunk and branch-3.3. Thanks [~weichiu] for your review.

> javax.activation-api and jakarta.activation-api define overlapping classes
> --
>
> Key: HADOOP-17049
> URL: https://issues.apache.org/jira/browse/HADOOP-17049
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Fix For: 3.3.1, 3.4.0
>
>
> There are some warnings in hadoop-client-runtime module.
> {noformat}
> [WARNING] javax.activation-api-1.2.0.jar, jakarta.activation-api-1.2.1.jar 
> define 31 overlapping classes: 
> [WARNING]   - javax.activation.CommandInfo$Beans$1
> [WARNING]   - javax.activation.ObjectDataContentHandler
> [WARNING]   - javax.activation.DataContentHandlerFactory
> [WARNING]   - javax.activation.DataContentHandler
> [WARNING]   - javax.activation.CommandObject
> [WARNING]   - javax.activation.SecuritySupport$2
> [WARNING]   - javax.activation.FileTypeMap
> [WARNING]   - javax.activation.CommandInfo
> [WARNING]   - javax.activation.MailcapCommandMap
> [WARNING]   - javax.activation.DataHandler$1
> [WARNING]   - 21 more...
> {noformat}






[jira] [Commented] (HADOOP-17047) TODO comments exist in trunk while the related issues are already fixed.

2020-05-20 Thread Akira Ajisaka (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17111843#comment-17111843
 ] 

Akira Ajisaka commented on HADOOP-17047:


{quote}So for fixing it now, what is the proposal? By "so we need to update the 
document in a separate jira" you mean a Jira for next major release (Hadoop 4)?
{quote}
I think it needs discussion. There are three choices:
 * Drop MRv1 binary compatibility in 3.4.0 or 3.5.0
 * Drop MRv1 binary compatibility in 4.0.0
 * Keep MRv1 binary compatibility forever

> TODO comments exist in trunk while the related issues are already fixed.
> 
>
> Key: HADOOP-17047
> URL: https://issues.apache.org/jira/browse/HADOOP-17047
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Rungroj Maipradit
>Assignee: Rungroj Maipradit
>Priority: Trivial
> Attachments: HADOOP-17047.001.patch
>
>
> In a research project, we analyzed the source code of Hadoop looking for 
> comments with on-hold SATDs (self-admitted technical debt) that could be 
> fixed already. An on-hold SATD is a TODO/FIXME comment blocked by an issue. 
> If this blocking issue is already resolved, the related todo can be 
> implemented (or sometimes it is already implemented, but the comment is left 
> in the code causing confusion). As we found a few instances of these in 
> Hadoop, we decided to collect them in a ticket, so they are documented and 
> can be addressed sooner or later.
> A list of code comments that mention already closed issues.
>  * A code comment suggests making the setJobConf method deprecated along with 
> a mapred package HADOOP-1230. HADOOP-1230 has been closed a long time ago, 
> but the method is still not annotated as deprecated.
> {code:java}
>  /**
>* This code is to support backward compatibility and break the compile  
>* time dependency of core on mapred.
>* This should be made deprecated along with the mapred package 
> HADOOP-1230. 
>* Should be removed when mapred package is removed.
>*/ {code}
> Comment location: 
> [https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ReflectionUtils.java#L88]
>  * A comment mentions that the return type of the getDefaultFileSystem method 
> should be changed to AFS when HADOOP-6223 is completed.
>  Indeed, this change was done in the related commit of HADOOP-6223: 
> ([https://github.com/apache/hadoop/commit/3f371a0a644181b204111ee4e12c995fc7b5e5f5#diff-cd86a2b9ce3efd2232c2ace0e9084508L395)]
>  Thus, the comment could be removed.
> {code:java}
> @InterfaceStability.Unstable /* return type will change to AFS once
> HADOOP-6223 is completed */
> {code}
> Comment location: 
> [https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileContext.java#L512]






[jira] [Comment Edited] (HADOOP-17047) TODO comments exist in trunk while the related issues are already fixed.

2020-05-19 Thread Akira Ajisaka (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17111702#comment-17111702
 ] 

Akira Ajisaka edited comment on HADOOP-17047 at 5/20/20, 2:49 AM:
--

Thanks [~rungroj] for reporting this and providing a patch, and thanks 
[~liuml07] for pinging me.

For the first case, now 'compile time dependency' is not correct. It is 
'runtime dependency' and this method is still required if we ensure binary 
compatibility with MRv1. If we are going to drop binary compatibility with 
MRv1, the method itself can be removed.

[http://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduce_Compatibility_Hadoop1_Hadoop2.html#Binary_Compatibility]
{quote}First, we ensure binary compatibility to the applications that use old 
mapred APIs. This means that applications which were built against MRv1 mapred 
APIs can run directly on YARN without recompilation, merely by pointing them to 
an Apache Hadoop 2.x cluster via configuration.
{quote}

Now we are on Hadoop 3, so we need to update the document in a separate jira.

For the second one, I think {{@VisibleForTesting}} can be used instead of {{/* 
This method is needed for tests. */}}.
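The two suggestions above can be sketched together (class and method bodies here are hypothetical stand-ins; the real {{setJobConf}} lives in {{org.apache.hadoop.util.ReflectionUtils}}, and {{@VisibleForTesting}} is normally Guava's annotation, declared locally so the sketch compiles standalone):

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

public class DeprecationSketch {

    // Stand-in for Guava's com.google.common.annotations.VisibleForTesting,
    // declared locally so this sketch has no external dependencies.
    @Retention(RetentionPolicy.CLASS)
    @Target({ElementType.METHOD, ElementType.TYPE, ElementType.FIELD})
    @interface VisibleForTesting {}

    /**
     * Kept only for binary compatibility with MRv1 applications.
     * @deprecated to be removed together with the mapred package.
     */
    @Deprecated
    public static void setJobConf(Object theObject, Object conf) {
        // no-op here; the real method wires up a JobConf via reflection
    }

    // Preferred over a bare "/* This method is needed for tests. */" comment:
    @VisibleForTesting
    static int internalStateForTests() {
        return 42;
    }

    public static void main(String[] args) throws Exception {
        // @Deprecated has RUNTIME retention, so callers and tools can see it.
        boolean deprecated = DeprecationSketch.class
            .getMethod("setJobConf", Object.class, Object.class)
            .isAnnotationPresent(Deprecated.class);
        System.out.println(deprecated);
        System.out.println(internalStateForTests());
    }
}
```

Annotating rather than commenting lets the compiler emit deprecation warnings and lets static-analysis tools flag production calls to test-only methods.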


was (Author: ajisakaa):
For the first case, now 'compile time dependency' is not correct. It is 
'runtime dependency' and this method is still required if we ensure binary 
compatibility with MRv1. If we are going to drop binary compatibility with 
MRv1, the method itself can be removed.

[http://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduce_Compatibility_Hadoop1_Hadoop2.html#Binary_Compatibility]
{quote}First, we ensure binary compatibility to the applications that use old 
mapred APIs. This means that applications which were built against MRv1 mapred 
APIs can run directly on YARN without recompilation, merely by pointing them to 
an Apache Hadoop 2.x cluster via configuration.
{quote}

Now we are on Hadoop 3, so we need to update the document in a separate jira.

For the second one, I think {{@VisibleForTesting}} can be used instead of {{/* 
This method is needed for tests. */}}.

> TODO comments exist in trunk while the related issues are already fixed.
> 
>
> Key: HADOOP-17047
> URL: https://issues.apache.org/jira/browse/HADOOP-17047
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Rungroj Maipradit
>Assignee: Rungroj Maipradit
>Priority: Trivial
> Attachments: HADOOP-17047.001.patch
>
>
> In a research project, we analyzed the source code of Hadoop looking for 
> comments with on-hold SATDs (self-admitted technical debt) that could be 
> fixed already. An on-hold SATD is a TODO/FIXME comment blocked by an issue. 
> If this blocking issue is already resolved, the related todo can be 
> implemented (or sometimes it is already implemented, but the comment is left 
> in the code causing confusion). As we found a few instances of these in 
> Hadoop, we decided to collect them in a ticket, so they are documented and 
> can be addressed sooner or later.
> A list of code comments that mention already closed issues.
>  * A code comment suggests making the setJobConf method deprecated along with 
> a mapred package HADOOP-1230. HADOOP-1230 has been closed a long time ago, 
> but the method is still not annotated as deprecated.
> {code:java}
>  /**
>* This code is to support backward compatibility and break the compile  
>* time dependency of core on mapred.
>* This should be made deprecated along with the mapred package 
> HADOOP-1230. 
>* Should be removed when mapred package is removed.
>*/ {code}
> Comment location: 
> [https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ReflectionUtils.java#L88]
>  * A comment mentions that the return type of the getDefaultFileSystem method 
> should be changed to AFS when HADOOP-6223 is completed.
>  Indeed, this change was done in the related commit of HADOOP-6223: 
> ([https://github.com/apache/hadoop/commit/3f371a0a644181b204111ee4e12c995fc7b5e5f5#diff-cd86a2b9ce3efd2232c2ace0e9084508L395)]
>  Thus, the comment could be removed.
> {code:java}
> @InterfaceStability.Unstable /* return type will change to AFS once
> HADOOP-6223 is completed */
> {code}
> Comment location: 
> [https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileContext.java#L512]




[jira] [Commented] (HADOOP-17047) TODO comments exist in trunk while the related issues are already fixed.

2020-05-19 Thread Akira Ajisaka (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17047?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17111702#comment-17111702
 ] 

Akira Ajisaka commented on HADOOP-17047:


For the first case, now 'compile time dependency' is not correct. It is 
'runtime dependency' and this method is still required if we ensure binary 
compatibility with MRv1. If we are going to drop binary compatibility with 
MRv1, the method itself can be removed.

[http://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduce_Compatibility_Hadoop1_Hadoop2.html#Binary_Compatibility]
{quote}First, we ensure binary compatibility to the applications that use old 
mapred APIs. This means that applications which were built against MRv1 mapred 
APIs can run directly on YARN without recompilation, merely by pointing them to 
an Apache Hadoop 2.x cluster via configuration.
{quote}

Now we are on Hadoop 3, so we need to update the document in a separate jira.

For the second one, I think {{@VisibleForTesting}} can be used instead of {{/* 
This method is needed for tests. */}}.

> TODO comments exist in trunk while the related issues are already fixed.
> 
>
> Key: HADOOP-17047
> URL: https://issues.apache.org/jira/browse/HADOOP-17047
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Rungroj Maipradit
>Assignee: Rungroj Maipradit
>Priority: Trivial
> Attachments: HADOOP-17047.001.patch
>
>
> In a research project, we analyzed the source code of Hadoop looking for 
> comments with on-hold SATDs (self-admitted technical debt) that could be 
> fixed already. An on-hold SATD is a TODO/FIXME comment blocked by an issue. 
> If this blocking issue is already resolved, the related todo can be 
> implemented (or sometimes it is already implemented, but the comment is left 
> in the code causing confusion). As we found a few instances of these in 
> Hadoop, we decided to collect them in a ticket, so they are documented and 
> can be addressed sooner or later.
> A list of code comments that mention already closed issues.
>  * A code comment suggests making the setJobConf method deprecated along with 
> a mapred package HADOOP-1230. HADOOP-1230 has been closed a long time ago, 
> but the method is still not annotated as deprecated.
> {code:java}
>  /**
>* This code is to support backward compatibility and break the compile  
>* time dependency of core on mapred.
>* This should be made deprecated along with the mapred package 
> HADOOP-1230. 
>* Should be removed when mapred package is removed.
>*/ {code}
> Comment location: 
> [https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ReflectionUtils.java#L88]
>  * A comment mentions that the return type of the getDefaultFileSystem method 
> should be changed to AFS when HADOOP-6223 is completed.
>  Indeed, this change was done in the related commit of HADOOP-6223: 
> ([https://github.com/apache/hadoop/commit/3f371a0a644181b204111ee4e12c995fc7b5e5f5#diff-cd86a2b9ce3efd2232c2ace0e9084508L395)]
>  Thus, the comment could be removed.
> {code:java}
> @InterfaceStability.Unstable /* return type will change to AFS once
> HADOOP-6223 is completed */
> {code}
> Comment location: 
> [https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileContext.java#L512]






[jira] [Commented] (HADOOP-17049) javax.activation-api and jakarta.activation-api define overlapping classes

2020-05-19 Thread Akira Ajisaka (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17111041#comment-17111041
 ] 

Akira Ajisaka commented on HADOOP-17049:


bq. it was required to support JDK11

Yes, I think it's true. Since JDK 9, the javax.activation.* classes have been 
removed from the JDK by default. Therefore Jackson needs to include some library 
that provides javax.activation.* as a dependency. We hit the same issue in 
Apache Hadoop and we added javax.activation-api (HADOOP-15775).

The javax.activation-api project has moved to jakarta.activation-api, so I 
created a PR to remove javax.activation-api.

> javax.activation-api and jakarta.activation-api define overlapping classes
> --
>
> Key: HADOOP-17049
> URL: https://issues.apache.org/jira/browse/HADOOP-17049
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
>
> There are some warnings in hadoop-client-runtime module.
> {noformat}
> [WARNING] javax.activation-api-1.2.0.jar, jakarta.activation-api-1.2.1.jar 
> define 31 overlapping classes: 
> [WARNING]   - javax.activation.CommandInfo$Beans$1
> [WARNING]   - javax.activation.ObjectDataContentHandler
> [WARNING]   - javax.activation.DataContentHandlerFactory
> [WARNING]   - javax.activation.DataContentHandler
> [WARNING]   - javax.activation.CommandObject
> [WARNING]   - javax.activation.SecuritySupport$2
> [WARNING]   - javax.activation.FileTypeMap
> [WARNING]   - javax.activation.CommandInfo
> [WARNING]   - javax.activation.MailcapCommandMap
> [WARNING]   - javax.activation.DataHandler$1
> [WARNING]   - 21 more...
> {noformat}






[jira] [Updated] (HADOOP-17049) javax.activation-api and jakarta.activation-api define overlapping classes

2020-05-19 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-17049:
---
Status: Patch Available  (was: Open)

> javax.activation-api and jakarta.activation-api define overlapping classes
> --
>
> Key: HADOOP-17049
> URL: https://issues.apache.org/jira/browse/HADOOP-17049
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
>
> There are some warnings in hadoop-client-runtime module.
> {noformat}
> [WARNING] javax.activation-api-1.2.0.jar, jakarta.activation-api-1.2.1.jar 
> define 31 overlapping classes: 
> [WARNING]   - javax.activation.CommandInfo$Beans$1
> [WARNING]   - javax.activation.ObjectDataContentHandler
> [WARNING]   - javax.activation.DataContentHandlerFactory
> [WARNING]   - javax.activation.DataContentHandler
> [WARNING]   - javax.activation.CommandObject
> [WARNING]   - javax.activation.SecuritySupport$2
> [WARNING]   - javax.activation.FileTypeMap
> [WARNING]   - javax.activation.CommandInfo
> [WARNING]   - javax.activation.MailcapCommandMap
> [WARNING]   - javax.activation.DataHandler$1
> [WARNING]   - 21 more...
> {noformat}






[jira] [Assigned] (HADOOP-17049) javax.activation-api and jakarta.activation-api define overlapping classes

2020-05-19 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka reassigned HADOOP-17049:
--

Assignee: Akira Ajisaka

> javax.activation-api and jakarta.activation-api define overlapping classes
> --
>
> Key: HADOOP-17049
> URL: https://issues.apache.org/jira/browse/HADOOP-17049
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
>
> There are some warnings in hadoop-client-runtime module.
> {noformat}
> [WARNING] javax.activation-api-1.2.0.jar, jakarta.activation-api-1.2.1.jar 
> define 31 overlapping classes: 
> [WARNING]   - javax.activation.CommandInfo$Beans$1
> [WARNING]   - javax.activation.ObjectDataContentHandler
> [WARNING]   - javax.activation.DataContentHandlerFactory
> [WARNING]   - javax.activation.DataContentHandler
> [WARNING]   - javax.activation.CommandObject
> [WARNING]   - javax.activation.SecuritySupport$2
> [WARNING]   - javax.activation.FileTypeMap
> [WARNING]   - javax.activation.CommandInfo
> [WARNING]   - javax.activation.MailcapCommandMap
> [WARNING]   - javax.activation.DataHandler$1
> [WARNING]   - 21 more...
> {noformat}






[jira] [Commented] (HADOOP-17049) javax.activation-api and jakarta.activation-api define overlapping classes

2020-05-19 Thread Akira Ajisaka (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17110943#comment-17110943
 ] 

Akira Ajisaka commented on HADOOP-17049:


Thank you for your information, [~weichiu].
It seems that javax.activation-api was moved to jakarta.activation-api.
https://eclipse-ee4j.github.io/jaf/

> javax.activation-api and jakarta.activation-api define overlapping classes
> --
>
> Key: HADOOP-17049
> URL: https://issues.apache.org/jira/browse/HADOOP-17049
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira Ajisaka
>Priority: Major
>
> There are some warnings in the hadoop-client-runtime module.
> {noformat}
> [WARNING] javax.activation-api-1.2.0.jar, jakarta.activation-api-1.2.1.jar 
> define 31 overlapping classes: 
> [WARNING]   - javax.activation.CommandInfo$Beans$1
> [WARNING]   - javax.activation.ObjectDataContentHandler
> [WARNING]   - javax.activation.DataContentHandlerFactory
> [WARNING]   - javax.activation.DataContentHandler
> [WARNING]   - javax.activation.CommandObject
> [WARNING]   - javax.activation.SecuritySupport$2
> [WARNING]   - javax.activation.FileTypeMap
> [WARNING]   - javax.activation.CommandInfo
> [WARNING]   - javax.activation.MailcapCommandMap
> [WARNING]   - javax.activation.DataHandler$1
> [WARNING]   - 21 more...
> {noformat}






[jira] [Created] (HADOOP-17049) javax.activation-api and jakarta.activation-api define overlapping classes

2020-05-18 Thread Akira Ajisaka (Jira)
Akira Ajisaka created HADOOP-17049:
--

 Summary: javax.activation-api and jakarta.activation-api define 
overlapping classes
 Key: HADOOP-17049
 URL: https://issues.apache.org/jira/browse/HADOOP-17049
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Reporter: Akira Ajisaka


There are some warnings in the hadoop-client-runtime module.
{noformat}
[WARNING] javax.activation-api-1.2.0.jar, jakarta.activation-api-1.2.1.jar 
define 31 overlapping classes: 
[WARNING]   - javax.activation.CommandInfo$Beans$1
[WARNING]   - javax.activation.ObjectDataContentHandler
[WARNING]   - javax.activation.DataContentHandlerFactory
[WARNING]   - javax.activation.DataContentHandler
[WARNING]   - javax.activation.CommandObject
[WARNING]   - javax.activation.SecuritySupport$2
[WARNING]   - javax.activation.FileTypeMap
[WARNING]   - javax.activation.CommandInfo
[WARNING]   - javax.activation.MailcapCommandMap
[WARNING]   - javax.activation.DataHandler$1
[WARNING]   - 21 more...
{noformat}






[jira] [Updated] (HADOOP-16322) FileNotFoundException for checksum file from hadoop-maven-plugins

2020-05-18 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-16322:
---
Target Version/s: 2.8.6  (was: 2.8.5)

> FileNotFoundException for checksum file from hadoop-maven-plugins
> -
>
> Key: HADOOP-16322
> URL: https://issues.apache.org/jira/browse/HADOOP-16322
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.8.5
>Reporter: Dongwook Kwon
>Assignee: Dongwook Kwon
>Priority: Minor
>  Labels: easyfix
> Attachments: HADOOP-16322-branch-2.8.5.001.patch, 
> HADOOP-16322.001.patch, HADOOP-16322.patch
>
>
> I found that hadoop-maven-plugins has an issue with checksum file creation, 
> introduced by the change in https://issues.apache.org/jira/browse/HADOOP-12194.
> Since 
> [checksumFile|https://github.com/apache/hadoop/blob/branch-2.8.5/hadoop-maven-plugins/src/main/java/org/apache/hadoop/maven/plugin/protoc/ProtocMojo.java#L63]
>  is expected to be 
> "${project.build.directory}/hadoop-maven-plugins-protoc-checksums.json", when 
> ${project.build.directory} doesn't exist yet, writing the [checksum 
> file|https://github.com/apache/hadoop/blob/branch-2.8.5/hadoop-maven-plugins/src/main/java/org/apache/hadoop/maven/plugin/protoc/ProtocMojo.java#L167]
>  throws an exception,
> as in the following failure from HBase, which relies on hadoop-maven-plugins 
> to run protoc:
>  
> {{[ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-site-plugin:3.4:site (default-site) on project 
> hbase: failed to get report for 
> org.apache.maven.plugins:maven-javadoc-plugin: Failed to execute goal 
> org.apache.hadoop:hadoop-maven-plugins:2.8.5:protoc (compile-protoc) on 
> project hbase-examples: java.io.FileNotFoundException: 
> /Users/dongwook/devrepo/apache-git/hbase/hbase-examples/target/hadoop-maven-plugins-protoc-checksums.json
>  (No such file or directory) -> [Help 1]}}
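The failure is easy to reproduce outside Maven: writing to a path whose parent directory does not exist fails, and creating the directory first (the shell equivalent of calling File#mkdirs() before the checksum write) avoids it. A minimal sketch of the failure mode and the fix:

```shell
workdir=$(mktemp -d)

# Same failure mode as the plugin: the parent directory (target/) is missing.
if echo '{}' 2>/dev/null > "$workdir/target/checksums.json"; then
  echo "unexpected: write succeeded without parent dir"
else
  echo "write failed: parent dir missing"
fi

# The fix: create the parent directory before writing the checksum file.
mkdir -p "$workdir/target"
echo '{}' > "$workdir/target/checksums.json"
echo "write succeeded after mkdir -p"

rm -rf "$workdir"
```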






[jira] [Updated] (HADOOP-16322) FileNotFoundException for checksum file from hadoop-maven-plugins

2020-05-18 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16322?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-16322:
---
Fix Version/s: (was: 2.8.5)

> FileNotFoundException for checksum file from hadoop-maven-plugins
> -
>
> Key: HADOOP-16322
> URL: https://issues.apache.org/jira/browse/HADOOP-16322
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.8.5
>Reporter: Dongwook Kwon
>Assignee: Dongwook Kwon
>Priority: Minor
>  Labels: easyfix
> Attachments: HADOOP-16322-branch-2.8.5.001.patch, 
> HADOOP-16322.001.patch, HADOOP-16322.patch
>
>
> I found that hadoop-maven-plugins has an issue with checksum file creation, 
> introduced by the change in https://issues.apache.org/jira/browse/HADOOP-12194.
> Since 
> [checksumFile|https://github.com/apache/hadoop/blob/branch-2.8.5/hadoop-maven-plugins/src/main/java/org/apache/hadoop/maven/plugin/protoc/ProtocMojo.java#L63]
>  is expected to be 
> "${project.build.directory}/hadoop-maven-plugins-protoc-checksums.json", when 
> ${project.build.directory} doesn't exist yet, writing the [checksum 
> file|https://github.com/apache/hadoop/blob/branch-2.8.5/hadoop-maven-plugins/src/main/java/org/apache/hadoop/maven/plugin/protoc/ProtocMojo.java#L167]
>  throws an exception,
> as in the following failure from HBase, which relies on hadoop-maven-plugins 
> to run protoc:
>  
> {{[ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-site-plugin:3.4:site (default-site) on project 
> hbase: failed to get report for 
> org.apache.maven.plugins:maven-javadoc-plugin: Failed to execute goal 
> org.apache.hadoop:hadoop-maven-plugins:2.8.5:protoc (compile-protoc) on 
> project hbase-examples: java.io.FileNotFoundException: 
> /Users/dongwook/devrepo/apache-git/hbase/hbase-examples/target/hadoop-maven-plugins-protoc-checksums.json
>  (No such file or directory) -> [Help 1]}}






[jira] [Commented] (HADOOP-16750) Backport HADOOP-16548 - ABFS: Config to enable/disable flush operation issue to branch-3.2

2020-05-18 Thread Akira Ajisaka (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17109946#comment-17109946
 ] 

Akira Ajisaka commented on HADOOP-16750:


Cleanup: removed the fix version from this issue and added fix version 3.2.2 to 
HADOOP-16548.

> Backport HADOOP-16548 - ABFS: Config to enable/disable flush operation issue 
> to branch-3.2
> --
>
> Key: HADOOP-16750
> URL: https://issues.apache.org/jira/browse/HADOOP-16750
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2
>Reporter: Mandar Inamdar
>Assignee: Sneha Vijayarajan
>Priority: Minor
>
> Allow the flush operation to be enabled/disabled through configuration. This 
> is part of the performance improvements for the ABFS driver.






[jira] [Resolved] (HADOOP-16750) Backport HADOOP-16548 - ABFS: Config to enable/disable flush operation issue to branch-3.2

2020-05-18 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16750?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HADOOP-16750.

Resolution: Duplicate

Reopened and closed this issue to change the resolution.

> Backport HADOOP-16548 - ABFS: Config to enable/disable flush operation issue 
> to branch-3.2
> --
>
> Key: HADOOP-16750
> URL: https://issues.apache.org/jira/browse/HADOOP-16750
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2
>Reporter: Mandar Inamdar
>Assignee: Sneha Vijayarajan
>Priority: Minor
>
> Allow the flush operation to be enabled/disabled through configuration. This 
> is part of the performance improvements for the ABFS driver.






[jira] [Updated] (HADOOP-16750) Backport HADOOP-16548 - ABFS: Config to enable/disable flush operation issue to branch-3.2

2020-05-18 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16750?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-16750:
---
Fix Version/s: (was: 3.2)
   (was: 3.2.2)

> Backport HADOOP-16548 - ABFS: Config to enable/disable flush operation issue 
> to branch-3.2
> --
>
> Key: HADOOP-16750
> URL: https://issues.apache.org/jira/browse/HADOOP-16750
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2
>Reporter: Mandar Inamdar
>Assignee: Sneha Vijayarajan
>Priority: Minor
>
> Allow the flush operation to be enabled/disabled through configuration. This 
> is part of the performance improvements for the ABFS driver.






[jira] [Updated] (HADOOP-16548) ABFS: Config to enable/disable flush operation

2020-05-18 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16548?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-16548:
---
Fix Version/s: 3.2.2

> ABFS: Config to enable/disable flush operation
> --
>
> Key: HADOOP-16548
> URL: https://issues.apache.org/jira/browse/HADOOP-16548
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Reporter: Bilahari T H
>Assignee: Sneha Vijayarajan
>Priority: Minor
> Fix For: 3.3.0, 3.2.2
>
>
> Allow the flush operation to be enabled/disabled through configuration. This 
> is part of the performance improvements for the ABFS driver.
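For reference, the switch this fix version makes available can be set in core-site.xml. A sketch, assuming the property name matches the trunk implementation (fs.azure.enable.flush, default true):

```xml
<!-- Assumed property name (matches trunk); disables flush for workloads
     that do not need durability guarantees on every flush call. -->
<property>
  <name>fs.azure.enable.flush</name>
  <value>false</value>
</property>
```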






[jira] [Reopened] (HADOOP-16750) Backport HADOOP-16548 - ABFS: Config to enable/disable flush operation issue to branch-3.2

2020-05-18 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16750?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka reopened HADOOP-16750:


> Backport HADOOP-16548 - ABFS: Config to enable/disable flush operation issue 
> to branch-3.2
> --
>
> Key: HADOOP-16750
> URL: https://issues.apache.org/jira/browse/HADOOP-16750
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.2
>Reporter: Mandar Inamdar
>Assignee: Sneha Vijayarajan
>Priority: Minor
>
> Allow the flush operation to be enabled/disabled through configuration. This 
> is part of the performance improvements for the ABFS driver.






[jira] [Updated] (HADOOP-16612) Track Azure Blob File System client-perceived latency

2020-05-18 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-16612:
---
Fix Version/s: (was: 3.2)
   3.3.0

> Track Azure Blob File System client-perceived latency
> -
>
> Key: HADOOP-16612
> URL: https://issues.apache.org/jira/browse/HADOOP-16612
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure, hdfs-client
>Affects Versions: 3.2.2, 3.2
>Reporter: Jeetesh Mangwani
>Assignee: Jeetesh Mangwani
>Priority: Major
> Fix For: 3.3.0, 3.2.2
>
> Attachments: HADOOP-16612.001.patch, HADOOP-16612.002.patch, 
> HADOOP-16612.003.patch, HADOOP-16612.004.patch
>
>
> Track the end-to-end performance of ADLS Gen 2 REST APIs by measuring latency 
> in the Hadoop ABFS driver.
> The latency information is sent back to the ADLS Gen 2 REST API endpoints in 
> subsequent requests.
> Here's the PR: https://github.com/apache/hadoop/pull/1611






[jira] [Updated] (HADOOP-17042) Hadoop distcp throws "ERROR: Tools helper ///usr/lib/hadoop/libexec/tools/hadoop-distcp.sh was not found"

2020-05-18 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-17042:
---
Component/s: tools/distcp

> Hadoop distcp throws "ERROR: Tools helper 
> ///usr/lib/hadoop/libexec/tools/hadoop-distcp.sh was not found"
> -
>
> Key: HADOOP-17042
> URL: https://issues.apache.org/jira/browse/HADOOP-17042
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: tools/distcp
>Affects Versions: 3.2.1, 3.1.3
>Reporter: Aki Tanaka
>Assignee: Aki Tanaka
>Priority: Minor
> Fix For: 3.2.2, 3.3.1, 3.4.0, 3.1.5
>
> Attachments: HADOOP-17042.patch
>
>
> On Hadoop 3.x, we see the following "ERROR: Tools helper 
> ///usr/lib/hadoop/libexec/tools/hadoop-distcp.sh was not found." message on 
> the first line of the command output when running Hadoop DistCp.
> {code:java}
> $ hadoop distcp /path/to/src /user/hadoop/
> ERROR: Tools helper ///usr/lib/hadoop/libexec/tools/hadoop-distcp.sh was not 
> found.
> 2020-05-14 17:11:53,173 INFO tools.DistCp: Input Options: 
> DistCpOptions{atomicCommit=false, syncFolder=false, deleteMissing=false, 
> ignoreFailures=false, overwrite=false, append=false, useDiff=false, 
> useRdiff=false, fromSnapshot=null, toSnapshot=null, skipCRC=false, 
> blocking=true
> ..
> {code}
> This message was added by HADOOP-12857, and the behavior itself is expected:
>  DistCp calls 'hadoop_add_to_classpath_tools hadoop-distcp' when [it 
> starts|https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-distcp/src/main/shellprofile.d/hadoop-distcp.sh],
>  and the error is printed because hadoop-distcp.sh does not exist in the 
> tools directory.
> However, that error message is confusing. Since this is not a user-side 
> configuration issue, I think it's better to change the log level to 
> debug (hadoop_debug).
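The proposed change can be sketched as follows; hadoop_debug below is a simplified stand-in for the real helper in hadoop-functions.sh, which prints only when shell-script debugging is enabled:

```shell
# Simplified stand-in for the hadoop_debug helper in hadoop-functions.sh:
# messages appear only when HADOOP_SHELL_SCRIPT_DEBUG is set to true.
hadoop_debug() {
  if [ "${HADOOP_SHELL_SCRIPT_DEBUG}" = "true" ]; then
    echo "DEBUG: $*" 1>&2
  fi
}

toolpath="/usr/lib/hadoop/libexec/tools/hadoop-distcp.sh"
if [ ! -f "${toolpath}" ]; then
  # Before the patch this was an unconditional ERROR on stderr; demoting
  # it to debug keeps normal distcp output clean.
  hadoop_debug "Tools helper ${toolpath} was not found."
fi
echo "distcp startup output stays clean"
```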






[jira] [Updated] (HADOOP-17042) Hadoop distcp throws "ERROR: Tools helper ///usr/lib/hadoop/libexec/tools/hadoop-distcp.sh was not found"

2020-05-18 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-17042:
---
Fix Version/s: 3.1.5
   3.4.0
   3.3.1
   3.2.2
 Hadoop Flags: Reviewed
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Committed to trunk, branch-3.3, branch-3.2, and branch-3.1. Thanks [~tanakahda] 
for your contribution.

> Hadoop distcp throws "ERROR: Tools helper 
> ///usr/lib/hadoop/libexec/tools/hadoop-distcp.sh was not found"
> -
>
> Key: HADOOP-17042
> URL: https://issues.apache.org/jira/browse/HADOOP-17042
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.2.1, 3.1.3
>Reporter: Aki Tanaka
>Assignee: Aki Tanaka
>Priority: Minor
> Fix For: 3.2.2, 3.3.1, 3.4.0, 3.1.5
>
> Attachments: HADOOP-17042.patch
>
>
> On Hadoop 3.x, we see the following "ERROR: Tools helper 
> ///usr/lib/hadoop/libexec/tools/hadoop-distcp.sh was not found." message on 
> the first line of the command output when running Hadoop DistCp.
> {code:java}
> $ hadoop distcp /path/to/src /user/hadoop/
> ERROR: Tools helper ///usr/lib/hadoop/libexec/tools/hadoop-distcp.sh was not 
> found.
> 2020-05-14 17:11:53,173 INFO tools.DistCp: Input Options: 
> DistCpOptions{atomicCommit=false, syncFolder=false, deleteMissing=false, 
> ignoreFailures=false, overwrite=false, append=false, useDiff=false, 
> useRdiff=false, fromSnapshot=null, toSnapshot=null, skipCRC=false, 
> blocking=true
> ..
> {code}
> This message was added by HADOOP-12857, and the behavior itself is expected:
>  DistCp calls 'hadoop_add_to_classpath_tools hadoop-distcp' when [it 
> starts|https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-distcp/src/main/shellprofile.d/hadoop-distcp.sh],
>  and the error is printed because hadoop-distcp.sh does not exist in the 
> tools directory.
> However, that error message is confusing. Since this is not a user-side 
> configuration issue, I think it's better to change the log level to 
> debug (hadoop_debug).






[jira] [Commented] (HADOOP-17042) Hadoop distcp throws "ERROR: Tools helper ///usr/lib/hadoop/libexec/tools/hadoop-distcp.sh was not found"

2020-05-17 Thread Akira Ajisaka (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17109887#comment-17109887
 ] 

Akira Ajisaka commented on HADOOP-17042:


I'm not sure we can remove the hadoop_add_to_classpath_tools function, because 
it might be used by someone.
Anyway, I'm +1 for your patch. Committing this.

> Hadoop distcp throws "ERROR: Tools helper 
> ///usr/lib/hadoop/libexec/tools/hadoop-distcp.sh was not found"
> -
>
> Key: HADOOP-17042
> URL: https://issues.apache.org/jira/browse/HADOOP-17042
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.2.1, 3.1.3
>Reporter: Aki Tanaka
>Assignee: Aki Tanaka
>Priority: Minor
> Attachments: HADOOP-17042.patch
>
>
> On Hadoop 3.x, we see the following "ERROR: Tools helper 
> ///usr/lib/hadoop/libexec/tools/hadoop-distcp.sh was not found." message on 
> the first line of the command output when running Hadoop DistCp.
> {code:java}
> $ hadoop distcp /path/to/src /user/hadoop/
> ERROR: Tools helper ///usr/lib/hadoop/libexec/tools/hadoop-distcp.sh was not 
> found.
> 2020-05-14 17:11:53,173 INFO tools.DistCp: Input Options: 
> DistCpOptions{atomicCommit=false, syncFolder=false, deleteMissing=false, 
> ignoreFailures=false, overwrite=false, append=false, useDiff=false, 
> useRdiff=false, fromSnapshot=null, toSnapshot=null, skipCRC=false, 
> blocking=true
> ..
> {code}
> This message was added by HADOOP-12857, and the behavior itself is expected:
>  DistCp calls 'hadoop_add_to_classpath_tools hadoop-distcp' when [it 
> starts|https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-distcp/src/main/shellprofile.d/hadoop-distcp.sh],
>  and the error is printed because hadoop-distcp.sh does not exist in the 
> tools directory.
> However, that error message is confusing. Since this is not a user-side 
> configuration issue, I think it's better to change the log level to 
> debug (hadoop_debug).






[jira] [Commented] (HADOOP-17042) Hadoop distcp throws "ERROR: Tools helper ///usr/lib/hadoop/libexec/tools/hadoop-distcp.sh was not found"

2020-05-17 Thread Akira Ajisaka (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17109780#comment-17109780
 ] 

Akira Ajisaka commented on HADOOP-17042:


bq.  I think we cannot move libexec/shellprofile.d/hadoop-distcp.sh to 
libexec/tools/hadoop-distcp.sh. 

You are right. If we do this, we cannot execute the "hadoop distcp" command.

> Hadoop distcp throws "ERROR: Tools helper 
> ///usr/lib/hadoop/libexec/tools/hadoop-distcp.sh was not found"
> -
>
> Key: HADOOP-17042
> URL: https://issues.apache.org/jira/browse/HADOOP-17042
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.2.1, 3.1.3
>Reporter: Aki Tanaka
>Assignee: Aki Tanaka
>Priority: Minor
> Attachments: HADOOP-17042.patch
>
>
> On Hadoop 3.x, we see the following "ERROR: Tools helper 
> ///usr/lib/hadoop/libexec/tools/hadoop-distcp.sh was not found." message on 
> the first line of the command output when running Hadoop DistCp.
> {code:java}
> $ hadoop distcp /path/to/src /user/hadoop/
> ERROR: Tools helper ///usr/lib/hadoop/libexec/tools/hadoop-distcp.sh was not 
> found.
> 2020-05-14 17:11:53,173 INFO tools.DistCp: Input Options: 
> DistCpOptions{atomicCommit=false, syncFolder=false, deleteMissing=false, 
> ignoreFailures=false, overwrite=false, append=false, useDiff=false, 
> useRdiff=false, fromSnapshot=null, toSnapshot=null, skipCRC=false, 
> blocking=true
> ..
> {code}
> This message was added by HADOOP-12857, and the behavior itself is expected:
>  DistCp calls 'hadoop_add_to_classpath_tools hadoop-distcp' when [it 
> starts|https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-distcp/src/main/shellprofile.d/hadoop-distcp.sh],
>  and the error is printed because hadoop-distcp.sh does not exist in the 
> tools directory.
> However, that error message is confusing. Since this is not a user-side 
> configuration issue, I think it's better to change the log level to 
> debug (hadoop_debug).






[jira] [Commented] (HADOOP-17045) `Configuration` javadoc describes supporting environment variables, but the feature is not available

2020-05-17 Thread Akira Ajisaka (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17109765#comment-17109765
 ] 

Akira Ajisaka commented on HADOOP-17045:


Hi [~iwasakims], would you backport this to branch-2.9?

> `Configuration` javadoc describes supporting environment variables, but the 
> feature is not available
> 
>
> Key: HADOOP-17045
> URL: https://issues.apache.org/jira/browse/HADOOP-17045
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.10.0
>Reporter: Nick Dimiduk
>Assignee: Masatake Iwasaki
>Priority: Minor
> Fix For: 2.10.1
>
> Attachments: HADOOP-17045-branch-2.10.001.patch
>
>
> In Hadoop 2.10.0, the javadoc on the `Configuration` class describes the 
> ability to read values from environment variables. However, this feature 
> wasn't implemented until HADOOP-9642, which shipped in 3.0.0.
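For context, this is the 3.0.0+ behavior the 2.10 javadoc wrongly advertises: Configuration expands ${env.VAR} references in property values, with an optional fallback after a dash. A sketch (example.data.dir is a placeholder property name):

```xml
<!-- Expands the DATA_DIR environment variable; falls back to /tmp/data
     when it is unset. Works on 3.0.0+, not on 2.10.x. -->
<property>
  <name>example.data.dir</name>
  <value>${env.DATA_DIR-/tmp/data}</value>
</property>
```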






[jira] [Commented] (HADOOP-17045) `Configuration` javadoc describes supporting environment variables, but the feature is not available

2020-05-17 Thread Akira Ajisaka (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17045?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17109726#comment-17109726
 ] 

Akira Ajisaka commented on HADOOP-17045:


+1, thanks [~ndimiduk] and [~iwasakims].

> `Configuration` javadoc describes supporting environment variables, but the 
> feature is not available
> 
>
> Key: HADOOP-17045
> URL: https://issues.apache.org/jira/browse/HADOOP-17045
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.10.0
>Reporter: Nick Dimiduk
>Assignee: Masatake Iwasaki
>Priority: Minor
> Attachments: HADOOP-17045-branch-2.10.001.patch
>
>
> In Hadoop 2.10.0, the javadoc on the `Configuration` class describes the 
> ability to read values from environment variables. However, this feature 
> wasn't implemented until HADOOP-9642, which shipped in 3.0.0.






[jira] [Commented] (HADOOP-17042) Hadoop distcp throws "ERROR: Tools helper ///usr/lib/hadoop/libexec/tools/hadoop-distcp.sh was not found"

2020-05-15 Thread Akira Ajisaka (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17108062#comment-17108062
 ] 

Akira Ajisaka commented on HADOOP-17042:


+1, the build failure is not related to the patch.

> Hadoop distcp throws "ERROR: Tools helper 
> ///usr/lib/hadoop/libexec/tools/hadoop-distcp.sh was not found"
> -
>
> Key: HADOOP-17042
> URL: https://issues.apache.org/jira/browse/HADOOP-17042
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.2.1, 3.1.3
>Reporter: Aki Tanaka
>Assignee: Aki Tanaka
>Priority: Minor
> Attachments: HADOOP-17042.patch
>
>
> On Hadoop 3.x, we see the following "ERROR: Tools helper 
> ///usr/lib/hadoop/libexec/tools/hadoop-distcp.sh was not found." message on 
> the first line of the command output when running Hadoop DistCp.
> {code:java}
> $ hadoop distcp /path/to/src /user/hadoop/
> ERROR: Tools helper ///usr/lib/hadoop/libexec/tools/hadoop-distcp.sh was not 
> found.
> 2020-05-14 17:11:53,173 INFO tools.DistCp: Input Options: 
> DistCpOptions{atomicCommit=false, syncFolder=false, deleteMissing=false, 
> ignoreFailures=false, overwrite=false, append=false, useDiff=false, 
> useRdiff=false, fromSnapshot=null, toSnapshot=null, skipCRC=false, 
> blocking=true
> ..
> {code}
> This message was added by HADOOP-12857, and the behavior itself is expected:
>  DistCp calls 'hadoop_add_to_classpath_tools hadoop-distcp' when [it 
> starts|https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-distcp/src/main/shellprofile.d/hadoop-distcp.sh],
>  and the error is printed because hadoop-distcp.sh does not exist in the 
> tools directory.
> However, that error message is confusing. Since this is not a user-side 
> configuration issue, I think it's better to change the log level to 
> debug (hadoop_debug).






[jira] [Commented] (HADOOP-17042) Hadoop distcp throws "ERROR: Tools helper ///usr/lib/hadoop/libexec/tools/hadoop-distcp.sh was not found"

2020-05-15 Thread Akira Ajisaka (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17108000#comment-17108000
 ] 

Akira Ajisaka commented on HADOOP-17042:


Thanks [~tanakahda] for your report and the patch. I'm +1 for changing the log 
level to debug.

In addition, can we move the script from libexec/hadoop-distcp.sh to 
libexec/tools/hadoop-distcp.sh? This change can be done in a separate Jira.

> Hadoop distcp throws "ERROR: Tools helper 
> ///usr/lib/hadoop/libexec/tools/hadoop-distcp.sh was not found"
> -
>
> Key: HADOOP-17042
> URL: https://issues.apache.org/jira/browse/HADOOP-17042
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.2.1, 3.1.3
>Reporter: Aki Tanaka
>Assignee: Aki Tanaka
>Priority: Minor
> Attachments: HADOOP-17042.patch
>
>
> On Hadoop 3.x, we see the following "ERROR: Tools helper 
> ///usr/lib/hadoop/libexec/tools/hadoop-distcp.sh was not found." message on 
> the first line of the command output when running Hadoop DistCp.
> {code:java}
> $ hadoop distcp /path/to/src /user/hadoop/
> ERROR: Tools helper ///usr/lib/hadoop/libexec/tools/hadoop-distcp.sh was not 
> found.
> 2020-05-14 17:11:53,173 INFO tools.DistCp: Input Options: 
> DistCpOptions{atomicCommit=false, syncFolder=false, deleteMissing=false, 
> ignoreFailures=false, overwrite=false, append=false, useDiff=false, 
> useRdiff=false, fromSnapshot=null, toSnapshot=null, skipCRC=false, 
> blocking=true
> ..
> {code}
> This message was added by HADOOP-12857, and the behavior itself is expected:
>  DistCp calls 'hadoop_add_to_classpath_tools hadoop-distcp' when [it 
> starts|https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-distcp/src/main/shellprofile.d/hadoop-distcp.sh],
>  and the error is printed because hadoop-distcp.sh does not exist in the 
> tools directory.
> However, that error message is confusing. Since this is not a user-side 
> configuration issue, I think it's better to change the log level to 
> debug (hadoop_debug).






[jira] [Updated] (HADOOP-17042) Hadoop distcp throws "ERROR: Tools helper ///usr/lib/hadoop/libexec/tools/hadoop-distcp.sh was not found"

2020-05-15 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-17042:
---
Assignee: Aki Tanaka
  Status: Patch Available  (was: Open)

> Hadoop distcp throws "ERROR: Tools helper 
> ///usr/lib/hadoop/libexec/tools/hadoop-distcp.sh was not found"
> -
>
> Key: HADOOP-17042
> URL: https://issues.apache.org/jira/browse/HADOOP-17042
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.1.3, 3.2.1
>Reporter: Aki Tanaka
>Assignee: Aki Tanaka
>Priority: Minor
> Attachments: HADOOP-17042.patch
>
>
> On Hadoop 3.x, we see the following "ERROR: Tools helper 
> ///usr/lib/hadoop/libexec/tools/hadoop-distcp.sh was not found." message on 
> the first line of the command output when running Hadoop DistCp.
> {code:java}
> $ hadoop distcp /path/to/src /user/hadoop/
> ERROR: Tools helper ///usr/lib/hadoop/libexec/tools/hadoop-distcp.sh was not 
> found.
> 2020-05-14 17:11:53,173 INFO tools.DistCp: Input Options: 
> DistCpOptions{atomicCommit=false, syncFolder=false, deleteMissing=false, 
> ignoreFailures=false, overwrite=false, append=false, useDiff=false, 
> useRdiff=false, fromSnapshot=null, toSnapshot=null, skipCRC=false, 
> blocking=true
> ..
> {code}
> This message was added by HADOOP-12857, and the behavior itself is expected:
>  DistCp calls 'hadoop_add_to_classpath_tools hadoop-distcp' when [it 
> starts|https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-distcp/src/main/shellprofile.d/hadoop-distcp.sh],
>  and the error is printed because hadoop-distcp.sh does not exist in the 
> tools directory.
> However, that error message is confusing. Since this is not a user-side 
> configuration issue, I think it's better to change the log level to 
> debug (hadoop_debug).






[jira] [Resolved] (HADOOP-16690) Update dependency com.nimbusds:nimbus-jose-jwt due to security vulnerability

2020-05-14 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16690?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka resolved HADOOP-16690.

Fix Version/s: (was: 3.2.2)
   Resolution: Done

> Update dependency com.nimbusds:nimbus-jose-jwt due to security vulnerability
> 
>
> Key: HADOOP-16690
> URL: https://issues.apache.org/jira/browse/HADOOP-16690
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: auth
>Affects Versions: 3.2.1
>Reporter: DW
>Priority: Major
>
> Apache Hadoop Auth org.apache.hadoop:hadoop-auth:3.2.1 defines a dependency 
> on com.nimbusds:nimbus-jose-jwt:4.41.1. There is a known security 
> vulnerability for nimbus-jose-jwt: CVE-2019-17195. Can you upgrade to v7.9 
> or higher?
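Until the upgrade lands upstream, one common workaround for a consuming project is to pin the transitive dependency itself. This is a hypothetical pom.xml fragment, not part of the Hadoop patch; the 7.9 floor comes from the report above, so verify the current advisory before relying on it.

```xml
<!-- Hypothetical override in a consuming project's pom.xml: force a
     nimbus-jose-jwt release not affected by CVE-2019-17195.
     hadoop-auth 3.2.1 pulls in 4.41.1 transitively. -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>com.nimbusds</groupId>
      <artifactId>nimbus-jose-jwt</artifactId>
      <version>7.9</version>
    </dependency>
  </dependencies>
</dependencyManagement>
```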






[jira] [Reopened] (HADOOP-16690) Update dependency com.nimbusds:nimbus-jose-jwt due to security vulnerability

2020-05-14 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16690?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka reopened HADOOP-16690:


> Update dependency com.nimbusds:nimbus-jose-jwt due to security vulnerability
> 
>
> Key: HADOOP-16690
> URL: https://issues.apache.org/jira/browse/HADOOP-16690
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: auth
>Affects Versions: 3.2.1
>Reporter: DW
>Priority: Major
> Fix For: 3.2.2
>
>
> Apache Hadoop Auth org.apache.hadoop:hadoop-auth:3.2.1 defines a dependency 
> on com.nimbusds:nimbus-jose-jwt:4.41.1. There is a known security 
> vulnerability for nimbus-jose-jwt: CVE-2019-17195. Can you upgrade to v7.9 
> or higher?






[jira] [Commented] (HADOOP-16888) [JDK11] Support JDK11 in the precommit job

2020-05-12 Thread Akira Ajisaka (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17105637#comment-17105637
 ] 

Akira Ajisaka commented on HADOOP-16888:


PR: https://github.com/apache/hadoop/pull/2012

> [JDK11] Support JDK11 in the precommit job
> --
>
> Key: HADOOP-16888
> URL: https://issues.apache.org/jira/browse/HADOOP-16888
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
>
> Install openjdk-11 in the Dockerfile and use the Yetus multijdk plugin to 
> run the precommit job on both JDK 8 and JDK 11.
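The approach described above could look roughly like the following. This is a sketch, not the actual patch: the Ubuntu package name and JVM paths are assumptions, and the Yetus flag should be checked against the Yetus version in use.

```dockerfile
# Hypothetical Dockerfile addition: install OpenJDK 11 alongside JDK 8.
RUN apt-get update && apt-get install -y openjdk-11-jdk-headless

# Hypothetical Yetus precommit invocation using the multijdk plugin
# (verify flag names against the Yetus release in use):
# test-patch.sh --plugins=all \
#   --multijdkdirs=/usr/lib/jvm/java-8-openjdk-amd64,/usr/lib/jvm/java-11-openjdk-amd64
```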






[jira] [Updated] (HADOOP-16888) [JDK11] Support JDK11 in the precommit job

2020-05-12 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-16888:
---
Status: Patch Available  (was: Open)

> [JDK11] Support JDK11 in the precommit job
> --
>
> Key: HADOOP-16888
> URL: https://issues.apache.org/jira/browse/HADOOP-16888
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
>
> Install openjdk-11 in the Dockerfile and use the Yetus multijdk plugin to 
> run the precommit job on both JDK 8 and JDK 11.






[jira] [Updated] (HADOOP-16768) SnappyCompressor test cases wrongly assume that the compressed data is always smaller than the input data

2020-05-11 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-16768:
---
Environment: 
X86/Aarch64
OS: Ubuntu 18.04, CentOS 8
Snappy 1.1.7

  was:
X86/Aarch64

OS: ubuntu 1804

JAVA 8


> SnappyCompressor test cases wrongly assume that the compressed data is always 
> smaller than the input data
> -
>
> Key: HADOOP-16768
> URL: https://issues.apache.org/jira/browse/HADOOP-16768
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io, test
> Environment: X86/Aarch64
> OS: Ubuntu 18.04, CentOS 8
> Snappy 1.1.7
>Reporter: zhao bo
>Assignee: Akira Ajisaka
>Priority: Major
> Fix For: 3.3.1, 3.4.0
>
>
> * 
> org.apache.hadoop.io.compress.TestCompressorDecompressor.testCompressorDecompressor
>  * 
> org.apache.hadoop.io.compress.TestCompressorDecompressor.testCompressorDecompressorWithExeedBufferLimit
>  * 
> org.apache.hadoop.io.compress.snappy.TestSnappyCompressorDecompressor.testSnappyCompressDecompressInMultiThreads
>  * 
> org.apache.hadoop.io.compress.snappy.TestSnappyCompressorDecompressor.testSnappyCompressDecompress
> These tests fail on both the x86 and ARM platforms.
> Traceback:
>  * 
> org.apache.hadoop.io.compress.TestCompressorDecompressor.testCompressorDecompressor
>  * 
> org.apache.hadoop.io.compress.TestCompressorDecompressor.testCompressorDecompressorWithExeedBufferLimit
> 12:00:33 [ERROR]   
> TestCompressorDecompressor.testCompressorDecompressorWithExeedBufferLimit:92  
> Expected to find 'testCompressorDecompressorWithExeedBufferLimit error !!!' 
> but got unexpected exception: java.lang.NullPointerException
>     at 
> com.google.common.base.Preconditions.checkNotNull(Preconditions.java:877)
>     at com.google.common.base.Joiner.toString(Joiner.java:452)
>     at com.google.common.base.Joiner.appendTo(Joiner.java:109)
>     at com.google.common.base.Joiner.appendTo(Joiner.java:152)
>     at com.google.common.base.Joiner.join(Joiner.java:195)
>     at com.google.common.base.Joiner.join(Joiner.java:185)
>     at com.google.common.base.Joiner.join(Joiner.java:211)
>     at 
> org.apache.hadoop.io.compress.CompressDecompressTester$CompressionTestStrategy$2.assertCompression(CompressDecompressTester.java:329)
>     at 
> org.apache.hadoop.io.compress.CompressDecompressTester.test(CompressDecompressTester.java:135)
>     at 
> org.apache.hadoop.io.compress.TestCompressorDecompressor.testCompressorDecompressorWithExeedBufferLimit(TestCompressorDecompressor.java:89)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>     at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:498)
>     at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>     at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>     at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>     at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>     at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>     at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>     at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>     at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>     at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>     at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>     at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>     at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>     at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>     at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>     at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>     at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>     at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>     at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>     at 
> 

[jira] [Updated] (HADOOP-16768) SnappyCompressor test cases wrongly assume that the compressed data is always smaller than the input data

2020-05-11 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-16768:
---
Fix Version/s: 3.4.0
   3.3.1
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Merged the PR into trunk and branch-3.3.

> SnappyCompressor test cases wrongly assume that the compressed data is always 
> smaller than the input data
> -
>
> Key: HADOOP-16768
> URL: https://issues.apache.org/jira/browse/HADOOP-16768
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io, test
> Environment: X86/Aarch64
> OS: ubuntu 1804
> JAVA 8
>Reporter: zhao bo
>Assignee: Akira Ajisaka
>Priority: Major
> Fix For: 3.3.1, 3.4.0
>
>
> * 
> org.apache.hadoop.io.compress.TestCompressorDecompressor.testCompressorDecompressor
>  * 
> org.apache.hadoop.io.compress.TestCompressorDecompressor.testCompressorDecompressorWithExeedBufferLimit
>  * 
> org.apache.hadoop.io.compress.snappy.TestSnappyCompressorDecompressor.testSnappyCompressDecompressInMultiThreads
>  * 
> org.apache.hadoop.io.compress.snappy.TestSnappyCompressorDecompressor.testSnappyCompressDecompress
> These tests fail on both the x86 and ARM platforms.
> Traceback:
>  * 
> org.apache.hadoop.io.compress.TestCompressorDecompressor.testCompressorDecompressor
>  * 
> org.apache.hadoop.io.compress.TestCompressorDecompressor.testCompressorDecompressorWithExeedBufferLimit
> 12:00:33 [ERROR]   
> TestCompressorDecompressor.testCompressorDecompressorWithExeedBufferLimit:92  
> Expected to find 'testCompressorDecompressorWithExeedBufferLimit error !!!' 
> but got unexpected exception: java.lang.NullPointerException
>     at 
> com.google.common.base.Preconditions.checkNotNull(Preconditions.java:877)
>     at com.google.common.base.Joiner.toString(Joiner.java:452)
>     at com.google.common.base.Joiner.appendTo(Joiner.java:109)
>     at com.google.common.base.Joiner.appendTo(Joiner.java:152)
>     at com.google.common.base.Joiner.join(Joiner.java:195)
>     at com.google.common.base.Joiner.join(Joiner.java:185)
>     at com.google.common.base.Joiner.join(Joiner.java:211)
>     at 
> org.apache.hadoop.io.compress.CompressDecompressTester$CompressionTestStrategy$2.assertCompression(CompressDecompressTester.java:329)
>     at 
> org.apache.hadoop.io.compress.CompressDecompressTester.test(CompressDecompressTester.java:135)
>     at 
> org.apache.hadoop.io.compress.TestCompressorDecompressor.testCompressorDecompressorWithExeedBufferLimit(TestCompressorDecompressor.java:89)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>     at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:498)
>     at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>     at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>     at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>     at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>     at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>     at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>     at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>     at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>     at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>     at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>     at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>     at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>     at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>     at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>     at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>     at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>     at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>     at 
> 

[jira] [Updated] (HADOOP-16768) SnappyCompressor test cases wrongly assume that the compressed data is always smaller than the input data

2020-05-10 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-16768:
---
Summary: SnappyCompressor test cases wrongly assume that the compressed 
data is always smaller than the input data  (was: Compress tests failed on X86 
and ARM platform)

> SnappyCompressor test cases wrongly assume that the compressed data is always 
> smaller than the input data
> -
>
> Key: HADOOP-16768
> URL: https://issues.apache.org/jira/browse/HADOOP-16768
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io, test
> Environment: X86/Aarch64
> OS: ubuntu 1804
> JAVA 8
>Reporter: zhao bo
>Assignee: Akira Ajisaka
>Priority: Major
>
> * 
> org.apache.hadoop.io.compress.TestCompressorDecompressor.testCompressorDecompressor
>  * 
> org.apache.hadoop.io.compress.TestCompressorDecompressor.testCompressorDecompressorWithExeedBufferLimit
>  * 
> org.apache.hadoop.io.compress.snappy.TestSnappyCompressorDecompressor.testSnappyCompressDecompressInMultiThreads
>  * 
> org.apache.hadoop.io.compress.snappy.TestSnappyCompressorDecompressor.testSnappyCompressDecompress
> These tests fail on both the x86 and ARM platforms.
> Traceback:
>  * 
> org.apache.hadoop.io.compress.TestCompressorDecompressor.testCompressorDecompressor
>  * 
> org.apache.hadoop.io.compress.TestCompressorDecompressor.testCompressorDecompressorWithExeedBufferLimit
> 12:00:33 [ERROR]   
> TestCompressorDecompressor.testCompressorDecompressorWithExeedBufferLimit:92  
> Expected to find 'testCompressorDecompressorWithExeedBufferLimit error !!!' 
> but got unexpected exception: java.lang.NullPointerException
>     at 
> com.google.common.base.Preconditions.checkNotNull(Preconditions.java:877)
>     at com.google.common.base.Joiner.toString(Joiner.java:452)
>     at com.google.common.base.Joiner.appendTo(Joiner.java:109)
>     at com.google.common.base.Joiner.appendTo(Joiner.java:152)
>     at com.google.common.base.Joiner.join(Joiner.java:195)
>     at com.google.common.base.Joiner.join(Joiner.java:185)
>     at com.google.common.base.Joiner.join(Joiner.java:211)
>     at 
> org.apache.hadoop.io.compress.CompressDecompressTester$CompressionTestStrategy$2.assertCompression(CompressDecompressTester.java:329)
>     at 
> org.apache.hadoop.io.compress.CompressDecompressTester.test(CompressDecompressTester.java:135)
>     at 
> org.apache.hadoop.io.compress.TestCompressorDecompressor.testCompressorDecompressorWithExeedBufferLimit(TestCompressorDecompressor.java:89)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>     at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:498)
>     at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>     at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>     at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>     at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>     at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>     at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>     at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>     at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>     at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>     at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>     at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>     at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>     at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>     at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>     at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>     at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>     at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>     at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>     at 

[jira] [Updated] (HADOOP-16768) Compress tests failed on X86 and ARM platform

2020-05-10 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-16768:
---
 Component/s: test
Target Version/s: 3.2.2, 3.3.1, 3.4.0, 3.1.5  (was: 3.3.0)
Priority: Major  (was: Critical)

This problem is in the test code, not in the production code. Lowering the 
priority.

> Compress tests failed on X86 and ARM platform
> -
>
> Key: HADOOP-16768
> URL: https://issues.apache.org/jira/browse/HADOOP-16768
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io, test
> Environment: X86/Aarch64
> OS: ubuntu 1804
> JAVA 8
>Reporter: zhao bo
>Assignee: Akira Ajisaka
>Priority: Major
>
> * 
> org.apache.hadoop.io.compress.TestCompressorDecompressor.testCompressorDecompressor
>  * 
> org.apache.hadoop.io.compress.TestCompressorDecompressor.testCompressorDecompressorWithExeedBufferLimit
>  * 
> org.apache.hadoop.io.compress.snappy.TestSnappyCompressorDecompressor.testSnappyCompressDecompressInMultiThreads
>  * 
> org.apache.hadoop.io.compress.snappy.TestSnappyCompressorDecompressor.testSnappyCompressDecompress
> These tests fail on both the x86 and ARM platforms.
> Traceback:
>  * 
> org.apache.hadoop.io.compress.TestCompressorDecompressor.testCompressorDecompressor
>  * 
> org.apache.hadoop.io.compress.TestCompressorDecompressor.testCompressorDecompressorWithExeedBufferLimit
> 12:00:33 [ERROR]   
> TestCompressorDecompressor.testCompressorDecompressorWithExeedBufferLimit:92  
> Expected to find 'testCompressorDecompressorWithExeedBufferLimit error !!!' 
> but got unexpected exception: java.lang.NullPointerException
>     at 
> com.google.common.base.Preconditions.checkNotNull(Preconditions.java:877)
>     at com.google.common.base.Joiner.toString(Joiner.java:452)
>     at com.google.common.base.Joiner.appendTo(Joiner.java:109)
>     at com.google.common.base.Joiner.appendTo(Joiner.java:152)
>     at com.google.common.base.Joiner.join(Joiner.java:195)
>     at com.google.common.base.Joiner.join(Joiner.java:185)
>     at com.google.common.base.Joiner.join(Joiner.java:211)
>     at 
> org.apache.hadoop.io.compress.CompressDecompressTester$CompressionTestStrategy$2.assertCompression(CompressDecompressTester.java:329)
>     at 
> org.apache.hadoop.io.compress.CompressDecompressTester.test(CompressDecompressTester.java:135)
>     at 
> org.apache.hadoop.io.compress.TestCompressorDecompressor.testCompressorDecompressorWithExeedBufferLimit(TestCompressorDecompressor.java:89)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>     at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:498)
>     at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>     at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>     at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>     at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>     at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>     at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>     at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>     at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>     at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>     at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>     at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>     at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>     at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>     at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>     at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>     at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>     at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>     at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>     at 
> 

[jira] [Commented] (HADOOP-16768) Compress tests failed on X86 and ARM platform

2020-05-08 Thread Akira Ajisaka (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17102502#comment-17102502
 ] 

Akira Ajisaka commented on HADOOP-16768:


PR submitted: https://github.com/apache/hadoop/pull/2003

> Compress tests failed on X86 and ARM platform
> -
>
> Key: HADOOP-16768
> URL: https://issues.apache.org/jira/browse/HADOOP-16768
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
> Environment: X86/Aarch64
> OS: ubuntu 1804
> JAVA 8
>Reporter: zhao bo
>Assignee: Akira Ajisaka
>Priority: Critical
>
> * 
> org.apache.hadoop.io.compress.TestCompressorDecompressor.testCompressorDecompressor
>  * 
> org.apache.hadoop.io.compress.TestCompressorDecompressor.testCompressorDecompressorWithExeedBufferLimit
>  * 
> org.apache.hadoop.io.compress.snappy.TestSnappyCompressorDecompressor.testSnappyCompressDecompressInMultiThreads
>  * 
> org.apache.hadoop.io.compress.snappy.TestSnappyCompressorDecompressor.testSnappyCompressDecompress
> These tests fail on both the x86 and ARM platforms.
> Traceback:
>  * 
> org.apache.hadoop.io.compress.TestCompressorDecompressor.testCompressorDecompressor
>  * 
> org.apache.hadoop.io.compress.TestCompressorDecompressor.testCompressorDecompressorWithExeedBufferLimit
> 12:00:33 [ERROR]   
> TestCompressorDecompressor.testCompressorDecompressorWithExeedBufferLimit:92  
> Expected to find 'testCompressorDecompressorWithExeedBufferLimit error !!!' 
> but got unexpected exception: java.lang.NullPointerException
>     at 
> com.google.common.base.Preconditions.checkNotNull(Preconditions.java:877)
>     at com.google.common.base.Joiner.toString(Joiner.java:452)
>     at com.google.common.base.Joiner.appendTo(Joiner.java:109)
>     at com.google.common.base.Joiner.appendTo(Joiner.java:152)
>     at com.google.common.base.Joiner.join(Joiner.java:195)
>     at com.google.common.base.Joiner.join(Joiner.java:185)
>     at com.google.common.base.Joiner.join(Joiner.java:211)
>     at 
> org.apache.hadoop.io.compress.CompressDecompressTester$CompressionTestStrategy$2.assertCompression(CompressDecompressTester.java:329)
>     at 
> org.apache.hadoop.io.compress.CompressDecompressTester.test(CompressDecompressTester.java:135)
>     at 
> org.apache.hadoop.io.compress.TestCompressorDecompressor.testCompressorDecompressorWithExeedBufferLimit(TestCompressorDecompressor.java:89)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>     at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:498)
>     at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>     at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>     at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>     at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>     at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>     at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>     at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>     at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>     at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>     at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>     at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>     at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>     at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>     at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>     at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>     at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>     at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>     at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>     at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>     at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
>     at 
> 

[jira] [Updated] (HADOOP-16768) Compress tests failed on X86 and ARM platform

2020-05-08 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-16768:
---
Target Version/s: 3.3.0
  Status: Patch Available  (was: Open)

> Compress tests failed on X86 and ARM platform
> -
>
> Key: HADOOP-16768
> URL: https://issues.apache.org/jira/browse/HADOOP-16768
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
> Environment: X86/Aarch64
> OS: ubuntu 1804
> JAVA 8
>Reporter: zhao bo
>Assignee: Akira Ajisaka
>Priority: Critical
>
> * 
> org.apache.hadoop.io.compress.TestCompressorDecompressor.testCompressorDecompressor
>  * 
> org.apache.hadoop.io.compress.TestCompressorDecompressor.testCompressorDecompressorWithExeedBufferLimit
>  * 
> org.apache.hadoop.io.compress.snappy.TestSnappyCompressorDecompressor.testSnappyCompressDecompressInMultiThreads
>  * 
> org.apache.hadoop.io.compress.snappy.TestSnappyCompressorDecompressor.testSnappyCompressDecompress
> These tests fail on both the x86 and ARM platforms.
> Traceback:
>  * 
> org.apache.hadoop.io.compress.TestCompressorDecompressor.testCompressorDecompressor
>  * 
> org.apache.hadoop.io.compress.TestCompressorDecompressor.testCompressorDecompressorWithExeedBufferLimit
> 12:00:33 [ERROR]   
> TestCompressorDecompressor.testCompressorDecompressorWithExeedBufferLimit:92  
> Expected to find 'testCompressorDecompressorWithExeedBufferLimit error !!!' 
> but got unexpected exception: java.lang.NullPointerException
>     at 
> com.google.common.base.Preconditions.checkNotNull(Preconditions.java:877)
>     at com.google.common.base.Joiner.toString(Joiner.java:452)
>     at com.google.common.base.Joiner.appendTo(Joiner.java:109)
>     at com.google.common.base.Joiner.appendTo(Joiner.java:152)
>     at com.google.common.base.Joiner.join(Joiner.java:195)
>     at com.google.common.base.Joiner.join(Joiner.java:185)
>     at com.google.common.base.Joiner.join(Joiner.java:211)
>     at 
> org.apache.hadoop.io.compress.CompressDecompressTester$CompressionTestStrategy$2.assertCompression(CompressDecompressTester.java:329)
>     at 
> org.apache.hadoop.io.compress.CompressDecompressTester.test(CompressDecompressTester.java:135)
>     at 
> org.apache.hadoop.io.compress.TestCompressorDecompressor.testCompressorDecompressorWithExeedBufferLimit(TestCompressorDecompressor.java:89)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>     at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:498)
>     at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>     at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>     at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>     at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>     at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>     at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>     at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>     at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>     at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>     at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>     at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>     at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>     at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>     at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>     at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>     at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>     at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>     at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>     at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>     at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
>     at 
> 

[jira] [Commented] (HADOOP-16768) Compress tests failed on X86 and ARM platform

2020-05-08 Thread Akira Ajisaka (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17102493#comment-17102493
 ] 

Akira Ajisaka commented on HADOOP-16768:


The problem is that Hadoop's SnappyCompressor assumes (compressed size) <= (input 
size) and truncates the compressed byte buffer to the input size. However, 
Snappy compression can increase the data size for incompressible input.
https://github.com/google/snappy/blob/c98344f6260d24d921e5e04006d4bedb528f404a/snappy.cc#L120

I'll create a PR shortly.
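For context, the size mismatch can be illustrated with snappy's documented worst-case bound from the linked snappy.cc (MaxCompressedLength). The class and method names below are hypothetical; this is a minimal sketch of the arithmetic, not Hadoop's actual fix:

```java
class SnappyBound {
    // Snappy's worst-case output size for n input bytes, per
    // MaxCompressedLength in the linked snappy.cc: 32 + n + n/6.
    static long maxCompressedLength(long n) {
        return 32 + n + n / 6;
    }

    public static void main(String[] args) {
        // The bound is always larger than the input size, so truncating
        // the output buffer to the input size can drop valid compressed
        // bytes when the data is incompressible.
        for (long n : new long[] {0, 100, 1 << 20}) {
            System.out.println(n + " bytes -> worst case "
                + maxCompressedLength(n));
        }
    }
}
```

Since the bound strictly exceeds n, a correct output buffer must be sized by this formula rather than by the input length.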

> Compress tests failed on X86 and ARM platform
> -
>
> Key: HADOOP-16768
> URL: https://issues.apache.org/jira/browse/HADOOP-16768
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
> Environment: X86/Aarch64
> OS: ubuntu 1804
> JAVA 8
>Reporter: zhao bo
>Assignee: Akira Ajisaka
>Priority: Critical
>
> * 
> org.apache.hadoop.io.compress.TestCompressorDecompressor.testCompressorDecompressor
>  * 
> org.apache.hadoop.io.compress.TestCompressorDecompressor.testCompressorDecompressorWithExeedBufferLimit
>  * 
> org.apache.hadoop.io.compress.snappy.TestSnappyCompressorDecompressor.testSnappyCompressDecompressInMultiThreads
>  * 
> org.apache.hadoop.io.compress.snappy.TestSnappyCompressorDecompressor.testSnappyCompressDecompress
> These tests will fail on X86 and ARM platforms.
> Traceback:
>  * 
> org.apache.hadoop.io.compress.TestCompressorDecompressor.testCompressorDecompressor
>  * 
> org.apache.hadoop.io.compress.TestCompressorDecompressor.testCompressorDecompressorWithExeedBufferLimit
> 12:00:33 [ERROR]   
> TestCompressorDecompressor.testCompressorDecompressorWithExeedBufferLimit:92  
> Expected to find 'testCompressorDecompressorWithExeedBufferLimit error !!!' 
> but got unexpected exception: java.lang.NullPointerException
>     at 
> com.google.common.base.Preconditions.checkNotNull(Preconditions.java:877)
>     at com.google.common.base.Joiner.toString(Joiner.java:452)
>     at com.google.common.base.Joiner.appendTo(Joiner.java:109)
>     at com.google.common.base.Joiner.appendTo(Joiner.java:152)
>     at com.google.common.base.Joiner.join(Joiner.java:195)
>     at com.google.common.base.Joiner.join(Joiner.java:185)
>     at com.google.common.base.Joiner.join(Joiner.java:211)
>     at 
> org.apache.hadoop.io.compress.CompressDecompressTester$CompressionTestStrategy$2.assertCompression(CompressDecompressTester.java:329)
>     at 
> org.apache.hadoop.io.compress.CompressDecompressTester.test(CompressDecompressTester.java:135)
>     at 
> org.apache.hadoop.io.compress.TestCompressorDecompressor.testCompressorDecompressorWithExeedBufferLimit(TestCompressorDecompressor.java:89)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>     at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:498)
>     at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>     at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>     at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>     at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>     at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>     at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>     at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>     at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>     at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>     at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>     at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>     at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>     at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>     at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>     at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>     at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>     at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>     at 
> 

[jira] [Assigned] (HADOOP-16768) Compress tests failed on X86 and ARM platform

2020-05-08 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka reassigned HADOOP-16768:
--

Assignee: Akira Ajisaka

> Compress tests failed on X86 and ARM platform
> -
>
> Key: HADOOP-16768
> URL: https://issues.apache.org/jira/browse/HADOOP-16768
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
> Environment: X86/Aarch64
> OS: ubuntu 1804
> JAVA 8
>Reporter: zhao bo
>Assignee: Akira Ajisaka
>Priority: Major
>
> * 
> org.apache.hadoop.io.compress.TestCompressorDecompressor.testCompressorDecompressor
>  * 
> org.apache.hadoop.io.compress.TestCompressorDecompressor.testCompressorDecompressorWithExeedBufferLimit
>  * 
> org.apache.hadoop.io.compress.snappy.TestSnappyCompressorDecompressor.testSnappyCompressDecompressInMultiThreads
>  * 
> org.apache.hadoop.io.compress.snappy.TestSnappyCompressorDecompressor.testSnappyCompressDecompress
> These tests will fail on X86 and ARM platforms.
> Traceback:
>  * 
> org.apache.hadoop.io.compress.TestCompressorDecompressor.testCompressorDecompressor
>  * 
> org.apache.hadoop.io.compress.TestCompressorDecompressor.testCompressorDecompressorWithExeedBufferLimit
> 12:00:33 [ERROR]   
> TestCompressorDecompressor.testCompressorDecompressorWithExeedBufferLimit:92  
> Expected to find 'testCompressorDecompressorWithExeedBufferLimit error !!!' 
> but got unexpected exception: java.lang.NullPointerException
>     at 
> com.google.common.base.Preconditions.checkNotNull(Preconditions.java:877)
>     at com.google.common.base.Joiner.toString(Joiner.java:452)
>     at com.google.common.base.Joiner.appendTo(Joiner.java:109)
>     at com.google.common.base.Joiner.appendTo(Joiner.java:152)
>     at com.google.common.base.Joiner.join(Joiner.java:195)
>     at com.google.common.base.Joiner.join(Joiner.java:185)
>     at com.google.common.base.Joiner.join(Joiner.java:211)
>     at 
> org.apache.hadoop.io.compress.CompressDecompressTester$CompressionTestStrategy$2.assertCompression(CompressDecompressTester.java:329)
>     at 
> org.apache.hadoop.io.compress.CompressDecompressTester.test(CompressDecompressTester.java:135)
>     at 
> org.apache.hadoop.io.compress.TestCompressorDecompressor.testCompressorDecompressorWithExeedBufferLimit(TestCompressorDecompressor.java:89)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>     at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:498)
>     at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>     at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>     at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>     at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>     at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>     at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>     at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>     at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>     at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>     at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>     at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>     at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>     at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>     at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>     at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>     at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>     at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>     at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>     at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>     at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
>     at 
> 

[jira] [Updated] (HADOOP-16768) Compress tests failed on X86 and ARM platform

2020-05-08 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16768?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-16768:
---
Priority: Critical  (was: Major)

> Compress tests failed on X86 and ARM platform
> -
>
> Key: HADOOP-16768
> URL: https://issues.apache.org/jira/browse/HADOOP-16768
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
> Environment: X86/Aarch64
> OS: ubuntu 1804
> JAVA 8
>Reporter: zhao bo
>Assignee: Akira Ajisaka
>Priority: Critical
>
> * 
> org.apache.hadoop.io.compress.TestCompressorDecompressor.testCompressorDecompressor
>  * 
> org.apache.hadoop.io.compress.TestCompressorDecompressor.testCompressorDecompressorWithExeedBufferLimit
>  * 
> org.apache.hadoop.io.compress.snappy.TestSnappyCompressorDecompressor.testSnappyCompressDecompressInMultiThreads
>  * 
> org.apache.hadoop.io.compress.snappy.TestSnappyCompressorDecompressor.testSnappyCompressDecompress
> These tests will fail on X86 and ARM platforms.
> Traceback:
>  * 
> org.apache.hadoop.io.compress.TestCompressorDecompressor.testCompressorDecompressor
>  * 
> org.apache.hadoop.io.compress.TestCompressorDecompressor.testCompressorDecompressorWithExeedBufferLimit
> 12:00:33 [ERROR]   
> TestCompressorDecompressor.testCompressorDecompressorWithExeedBufferLimit:92  
> Expected to find 'testCompressorDecompressorWithExeedBufferLimit error !!!' 
> but got unexpected exception: java.lang.NullPointerException
>     at 
> com.google.common.base.Preconditions.checkNotNull(Preconditions.java:877)
>     at com.google.common.base.Joiner.toString(Joiner.java:452)
>     at com.google.common.base.Joiner.appendTo(Joiner.java:109)
>     at com.google.common.base.Joiner.appendTo(Joiner.java:152)
>     at com.google.common.base.Joiner.join(Joiner.java:195)
>     at com.google.common.base.Joiner.join(Joiner.java:185)
>     at com.google.common.base.Joiner.join(Joiner.java:211)
>     at 
> org.apache.hadoop.io.compress.CompressDecompressTester$CompressionTestStrategy$2.assertCompression(CompressDecompressTester.java:329)
>     at 
> org.apache.hadoop.io.compress.CompressDecompressTester.test(CompressDecompressTester.java:135)
>     at 
> org.apache.hadoop.io.compress.TestCompressorDecompressor.testCompressorDecompressorWithExeedBufferLimit(TestCompressorDecompressor.java:89)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>     at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:498)
>     at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>     at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>     at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>     at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>     at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>     at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>     at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>     at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>     at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>     at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>     at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>     at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>     at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>     at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>     at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>     at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>     at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>     at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>     at 
> org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:345)
>     at 
> org.apache.maven.surefire.booter.ForkedBooter.execute(ForkedBooter.java:126)
>     at 
> 

[jira] [Commented] (HADOOP-16768) Compress tests failed on X86 and ARM platform

2020-05-08 Thread Akira Ajisaka (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17102295#comment-17102295
 ] 

Akira Ajisaka commented on HADOOP-16768:


After HADOOP-16054, we can see this error in Jenkins.
https://ci-hadoop.apache.org/job/hadoop-qbt-trunk-java8-linux-x86_64/133/testReport/org.apache.hadoop.io.compress/TestCompressorDecompressor/testCompressorDecompressor/
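The NullPointerException in the trace quoted below originates in Guava's Joiner, which rejects null elements via Preconditions.checkNotNull, suggesting a null entry reached Joiner.join in the test helper. A stdlib-only sketch of that behavior (JoinerSketch and its join method are hypothetical names, not Guava code):

```java
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;
import java.util.Objects;

class JoinerSketch {
    // Simplified model of Guava's Joiner: like the real class, it
    // throws NullPointerException on a null element unless the caller
    // opts into explicit null handling (Guava's useForNull/skipNulls).
    static String join(String sep, List<String> parts) {
        StringBuilder sb = new StringBuilder();
        for (Iterator<String> it = parts.iterator(); it.hasNext();) {
            sb.append(Objects.requireNonNull(it.next(), "null element"));
            if (it.hasNext()) {
                sb.append(sep);
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(join(", ", Arrays.asList("a", "b")));
        try {
            join(", ", Arrays.asList("a", null));
        } catch (NullPointerException e) {
            System.out.println("NPE on null element, as in the quoted trace");
        }
    }
}
```

This is why the test reports the NPE instead of the expected assertion message: the join of error details fails before the assertion text is ever built.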

> Compress tests failed on X86 and ARM platform
> -
>
> Key: HADOOP-16768
> URL: https://issues.apache.org/jira/browse/HADOOP-16768
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
> Environment: X86/Aarch64
> OS: ubuntu 1804
> JAVA 8
>Reporter: zhao bo
>Priority: Major
>
> * 
> org.apache.hadoop.io.compress.TestCompressorDecompressor.testCompressorDecompressor
>  * 
> org.apache.hadoop.io.compress.TestCompressorDecompressor.testCompressorDecompressorWithExeedBufferLimit
>  * 
> org.apache.hadoop.io.compress.snappy.TestSnappyCompressorDecompressor.testSnappyCompressDecompressInMultiThreads
>  * 
> org.apache.hadoop.io.compress.snappy.TestSnappyCompressorDecompressor.testSnappyCompressDecompress
> These tests will fail on X86 and ARM platforms.
> Traceback:
>  * 
> org.apache.hadoop.io.compress.TestCompressorDecompressor.testCompressorDecompressor
>  * 
> org.apache.hadoop.io.compress.TestCompressorDecompressor.testCompressorDecompressorWithExeedBufferLimit
> 12:00:33 [ERROR]   
> TestCompressorDecompressor.testCompressorDecompressorWithExeedBufferLimit:92  
> Expected to find 'testCompressorDecompressorWithExeedBufferLimit error !!!' 
> but got unexpected exception: java.lang.NullPointerException
>     at 
> com.google.common.base.Preconditions.checkNotNull(Preconditions.java:877)
>     at com.google.common.base.Joiner.toString(Joiner.java:452)
>     at com.google.common.base.Joiner.appendTo(Joiner.java:109)
>     at com.google.common.base.Joiner.appendTo(Joiner.java:152)
>     at com.google.common.base.Joiner.join(Joiner.java:195)
>     at com.google.common.base.Joiner.join(Joiner.java:185)
>     at com.google.common.base.Joiner.join(Joiner.java:211)
>     at 
> org.apache.hadoop.io.compress.CompressDecompressTester$CompressionTestStrategy$2.assertCompression(CompressDecompressTester.java:329)
>     at 
> org.apache.hadoop.io.compress.CompressDecompressTester.test(CompressDecompressTester.java:135)
>     at 
> org.apache.hadoop.io.compress.TestCompressorDecompressor.testCompressorDecompressorWithExeedBufferLimit(TestCompressorDecompressor.java:89)
>     at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>     at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>     at java.lang.reflect.Method.invoke(Method.java:498)
>     at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>     at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>     at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>     at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>     at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>     at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>     at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>     at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>     at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>     at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>     at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>     at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>     at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>     at 
> org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:365)
>     at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:273)
>     at 
> org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
>     at 
> org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:159)
>     at 
> org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:384)
>     at 
> 

[jira] [Updated] (HADOOP-17007) hadoop-cos fails to build

2020-04-29 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-17007:
---
Fix Version/s: (was: 3.4.0)
   (was: 3.3.1)
   3.3.0
   Labels:   (was: release-blocker)

Cherry-picked to branch-3.3.0. Thanks.

> hadoop-cos fails to build
> -
>
> Key: HADOOP-17007
> URL: https://issues.apache.org/jira/browse/HADOOP-17007
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/cos
>Affects Versions: 3.3.0, 3.4.0
>Reporter: Wei-Chiu Chuang
>Assignee: Yang Yu
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-17007.001.patch
>
>
> Found the following compilation error in a PR precommit. The failure doesn't 
> seem related to the PR itself. Can't reproduce it locally, though.
> https://builds.apache.org/job/hadoop-multibranch/job/PR-1972/1/artifact/out/patch-compile-root.txt
> {noformat}
> [INFO] Apache Hadoop Tencent COS Support .. FAILURE [  0.074 
> s]
> [INFO] Apache Hadoop Cloud Storage  SKIPPED
> [INFO] Apache Hadoop Cloud Storage Project  SKIPPED
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time: 17:31 min
> [INFO] Finished at: 2020-04-22T07:37:51+00:00
> [INFO] Final Memory: 192M/1714M
> [INFO] 
> 
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-dependency-plugin:3.0.2:copy-dependencies 
> (package) on project hadoop-cos: Artifact has not been packaged yet. When 
> used on reactor artifact, copy should be executed after packaging: see 
> MDEP-187. -> [Help 1]
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16054) Update Dockerfile to use Bionic

2020-04-25 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16054?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-16054:
---
Fix Version/s: 3.4.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Merged. Thank you [~ayushtkn].

> Update Dockerfile to use Bionic
> ---
>
> Key: HADOOP-16054
> URL: https://issues.apache.org/jira/browse/HADOOP-16054
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, test
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Fix For: 3.4.0
>
>
> Ubuntu Xenial goes EoL in April 2021. Let's upgrade before that date.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16054) Update Dockerfile to use Bionic

2020-04-25 Thread Akira Ajisaka (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16054?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17092287#comment-17092287
 ] 

Akira Ajisaka commented on HADOOP-16054:


Hi [~ayushtkn]

bq. Can this be taken forward now?
Yes. I'll merge https://github.com/apache/hadoop/pull/1966 if you are +1.

> Update Dockerfile to use Bionic
> ---
>
> Key: HADOOP-16054
> URL: https://issues.apache.org/jira/browse/HADOOP-16054
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, test
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
>
> Ubuntu Xenial goes EoL in April 2021. Let's upgrade before that date.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-17007) hadoop-cos fails to build

2020-04-22 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-17007:
---
Target Version/s: 3.3.0
  Labels: release-blocker  (was: )

> hadoop-cos fails to build
> -
>
> Key: HADOOP-17007
> URL: https://issues.apache.org/jira/browse/HADOOP-17007
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/cos
>Reporter: Wei-Chiu Chuang
>Priority: Major
>  Labels: release-blocker
>
> Found the following compilation error in a PR precommit. The failure doesn't 
> seem related to the PR itself. Can't reproduce it locally, though.
> https://builds.apache.org/job/hadoop-multibranch/job/PR-1972/1/artifact/out/patch-compile-root.txt
> {noformat}
> [INFO] Apache Hadoop Tencent COS Support .. FAILURE [  0.074 
> s]
> [INFO] Apache Hadoop Cloud Storage  SKIPPED
> [INFO] Apache Hadoop Cloud Storage Project  SKIPPED
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time: 17:31 min
> [INFO] Finished at: 2020-04-22T07:37:51+00:00
> [INFO] Final Memory: 192M/1714M
> [INFO] 
> 
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-dependency-plugin:3.0.2:copy-dependencies 
> (package) on project hadoop-cos: Artifact has not been packaged yet. When 
> used on reactor artifact, copy should be executed after packaging: see 
> MDEP-187. -> [Help 1]
> {noformat}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16959) Resolve hadoop-cos dependency conflict

2020-04-22 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-16959:
---
Fix Version/s: (was: 3.4.0)
   (was: 3.3.1)
   3.3.0

> Resolve hadoop-cos dependency conflict
> --
>
> Key: HADOOP-16959
> URL: https://issues.apache.org/jira/browse/HADOOP-16959
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build, fs/cos
>Reporter: YangY
>Assignee: YangY
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-16959-branch-3.3.001.patch, 
> HADOOP-16959-branch-3.3.002.patch, HADOOP-16959-branch-3.3.003.patch, 
> HADOOP-16959-branch-3.3.004.patch, HADOOP-16959-branch-3.3.005.patch
>
>
> There are some dependency conflicts between hadoop-common and hadoop-cos, 
> for example the Joda-Time and HTTP client libraries.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16959) Resolve hadoop-cos dependency conflict

2020-04-22 Thread Akira Ajisaka (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17090282#comment-17090282
 ] 

Akira Ajisaka commented on HADOOP-16959:


This probably broke HADOOP-17007.
Hi [~yuyang733] and [~Sammi], would you check this?

> Resolve hadoop-cos dependency conflict
> --
>
> Key: HADOOP-16959
> URL: https://issues.apache.org/jira/browse/HADOOP-16959
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build, fs/cos
>Reporter: YangY
>Assignee: YangY
>Priority: Major
> Fix For: 3.3.1, 3.4.0
>
> Attachments: HADOOP-16959-branch-3.3.001.patch, 
> HADOOP-16959-branch-3.3.002.patch, HADOOP-16959-branch-3.3.003.patch, 
> HADOOP-16959-branch-3.3.004.patch, HADOOP-16959-branch-3.3.005.patch
>
>
> There are some dependency conflicts between hadoop-common and hadoop-cos, 
> for example the Joda-Time and HTTP client libraries.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-17007) hadoop-cos fails to build

2020-04-22 Thread Akira Ajisaka (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17090279#comment-17090279
 ] 

Akira Ajisaka commented on HADOOP-17007:


I could reproduce the error.
{noformat}
$  mvn --batch-mode -Ptest-patch -DskipTests clean test-compile 
-DskipTests=true -X
(snip)
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-dependency-plugin:3.0.2:copy-dependencies 
(package) on project hadoop-cos: Artifact has not been packaged yet. When used 
on reactor artifact, copy should be executed after packaging: see MDEP-187. -> 
[Help 1]
org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute goal 
org.apache.maven.plugins:maven-dependency-plugin:3.0.2:copy-dependencies 
(package) on project hadoop-cos: Artifact has not been packaged yet. When used 
on reactor artifact, copy should be executed after packaging: see MDEP-187.
at org.apache.maven.lifecycle.internal.MojoExecutor.execute 
(MojoExecutor.java:215)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute 
(MojoExecutor.java:156)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute 
(MojoExecutor.java:148)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject 
(LifecycleModuleBuilder.java:117)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject 
(LifecycleModuleBuilder.java:81)
at 
org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build
 (SingleThreadedBuilder.java:56)
at org.apache.maven.lifecycle.internal.LifecycleStarter.execute 
(LifecycleStarter.java:128)
at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:305)
at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:192)
at org.apache.maven.DefaultMaven.execute (DefaultMaven.java:105)
at org.apache.maven.cli.MavenCli.execute (MavenCli.java:957)
at org.apache.maven.cli.MavenCli.doMain (MavenCli.java:289)
at org.apache.maven.cli.MavenCli.main (MavenCli.java:193)
at sun.reflect.NativeMethodAccessorImpl.invoke0 (Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke 
(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke 
(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke (Method.java:498)
at org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced 
(Launcher.java:282)
at org.codehaus.plexus.classworlds.launcher.Launcher.launch 
(Launcher.java:225)
at org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode 
(Launcher.java:406)
at org.codehaus.plexus.classworlds.launcher.Launcher.main 
(Launcher.java:347)
Caused by: org.apache.maven.plugin.MojoExecutionException: Artifact has not 
been packaged yet. When used on reactor artifact, copy should be executed after 
packaging: see MDEP-187.
at org.apache.maven.plugins.dependency.AbstractDependencyMojo.copyFile 
(AbstractDependencyMojo.java:177)
at 
org.apache.maven.plugins.dependency.fromDependencies.CopyDependenciesMojo.copyArtifact
 (CopyDependenciesMojo.java:250)
at 
org.apache.maven.plugins.dependency.fromDependencies.CopyDependenciesMojo.doExecute
 (CopyDependenciesMojo.java:125)
at org.apache.maven.plugins.dependency.AbstractDependencyMojo.execute 
(AbstractDependencyMojo.java:144)
at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo 
(DefaultBuildPluginManager.java:137)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute 
(MojoExecutor.java:210)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute 
(MojoExecutor.java:156)
at org.apache.maven.lifecycle.internal.MojoExecutor.execute 
(MojoExecutor.java:148)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject 
(LifecycleModuleBuilder.java:117)
at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject 
(LifecycleModuleBuilder.java:81)
at 
org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build
 (SingleThreadedBuilder.java:56)
at org.apache.maven.lifecycle.internal.LifecycleStarter.execute 
(LifecycleStarter.java:128)
at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:305)
at org.apache.maven.DefaultMaven.doExecute (DefaultMaven.java:192)
at org.apache.maven.DefaultMaven.execute (DefaultMaven.java:105)
at org.apache.maven.cli.MavenCli.execute (MavenCli.java:957)
at org.apache.maven.cli.MavenCli.doMain (MavenCli.java:289)
at org.apache.maven.cli.MavenCli.main (MavenCli.java:193)
at sun.reflect.NativeMethodAccessorImpl.invoke0 (Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke 
(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke 
(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke (Method.java:498)
at 

[jira] [Commented] (HADOOP-15338) Java 11 runtime support

2020-04-22 Thread Akira Ajisaka (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17090166#comment-17090166
 ] 

Akira Ajisaka commented on HADOOP-15338:


We used Dynamometer to simulate real workloads. The performance of the Java 11 
NameNode is almost the same as that of the Java 8 NameNode. We used G1GC in the 
benchmark.
Cc: [~tasanuma]

> Java 11 runtime support
> ---
>
> Key: HADOOP-15338
> URL: https://issues.apache.org/jira/browse/HADOOP-15338
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Akira Ajisaka
>Priority: Major
> Fix For: 3.3.0
>
>
> Oracle JDK 8 will be EoL during January 2019, and RedHat will end support for 
> OpenJDK 8 in June 2023 ([https://access.redhat.com/articles/1299013]), so we 
> need to support Java 11 LTS at least before June 2023.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16944) Use Yetus 0.12.0 in GitHub PR

2020-04-21 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16944?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-16944:
---
Fix Version/s: 2.10.1
   3.2.2
   3.1.4
   2.9.3
   3.3.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Cherry-picked to branch-3.3, branch-3.2, branch-3.1, branch-2.10, and 
branch-2.9.
This change does not have any effect on the create-release script for 3.3.0, so 
it's safe.

> Use Yetus 0.12.0 in GitHub PR
> -
>
> Key: HADOOP-16944
> URL: https://issues.apache.org/jira/browse/HADOOP-16944
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Fix For: 3.3.0, 2.9.3, 3.1.4, 3.2.2, 2.10.1, 3.4.0
>
>
> HADOOP-16054 aims to upgrade the Ubuntu version of the Docker image from 
> 16.04 to 18.04. However, Ubuntu 18.04 ships Maven 3.6.0 by default, and the 
> pre-commit jobs fail to add comments to GitHub and JIRA. The issue was fixed 
> by YETUS-957, so upgrading the Yetus version to 0.12.0-SNAPSHOT (or 0.12.0, 
> once released) will fix the problem.
> How to upgrade the Yetus version in the pre-commit jobs:
> * GitHub PR (hadoop-multibranch): upgrade the Jenkinsfile
> * JIRA (PreCommit--Build): manually update the config in builds.apache.org
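In practice, the Jenkinsfile side of the upgrade amounts to pointing the precommit run at the 0.12.0 release. A hedged shell sketch (the download URL layout and install path are assumptions for illustration, not taken from Hadoop's actual Jenkinsfile):

```shell
# Sketch: fetch and use a pinned Yetus release in a precommit wrapper.
YETUS_VERSION=0.12.0
curl -fSL "https://archive.apache.org/dist/yetus/${YETUS_VERSION}/apache-yetus-${YETUS_VERSION}-bin.tar.gz" \
  | tar xz -C /tmp
"/tmp/apache-yetus-${YETUS_VERSION}/bin/test-patch" --version
```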






[jira] [Updated] (HADOOP-16944) Use Yetus 0.12.0 in GitHub PR

2020-04-21 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16944?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-16944:
---
Fix Version/s: (was: 3.4.0)

> Use Yetus 0.12.0 in GitHub PR
> -
>
> Key: HADOOP-16944
> URL: https://issues.apache.org/jira/browse/HADOOP-16944
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Fix For: 3.3.0, 2.9.3, 3.1.4, 3.2.2, 2.10.1
>
>
> HADOOP-16054 aims to upgrade the Ubuntu version of the Docker image from 
> 16.04 to 18.04. However, Ubuntu 18.04 ships Maven 3.6.0 by default, and the 
> pre-commit jobs fail to add comments to GitHub and JIRA. The issue was fixed 
> by YETUS-957, so upgrading the Yetus version to 0.12.0-SNAPSHOT (or 0.12.0, 
> once released) will fix the problem.
> How to upgrade the Yetus version in the pre-commit jobs:
> * GitHub PR (hadoop-multibranch): upgrade the Jenkinsfile
> * JIRA (PreCommit--Build): manually update the config in builds.apache.org






[jira] [Updated] (HADOOP-16054) Update Dockerfile to use Bionic

2020-04-19 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16054?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-16054:
---
Status: Patch Available  (was: Reopened)

> Update Dockerfile to use Bionic
> ---
>
> Key: HADOOP-16054
> URL: https://issues.apache.org/jira/browse/HADOOP-16054
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build, test
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
>
> Ubuntu Xenial goes EoL in April 2021. Let's upgrade before that date.
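The core of such a change is the base-image bump in the build Dockerfile; a sketch (the `maven` install line is illustrative of why the Yetus-side fix was needed first, not a copy of the real Dockerfile):

```dockerfile
# Sketch: move the build image from Xenial (16.04) to Bionic (18.04).
FROM ubuntu:18.04

# Bionic pulls in Maven 3.6.0 via apt; that Maven bump is what required the
# Yetus-side fix (YETUS-957) before this upgrade could land.
RUN apt-get update \
    && apt-get install -y --no-install-recommends maven \
    && rm -rf /var/lib/apt/lists/*
```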






[jira] [Updated] (HADOOP-17000) [Test] Use Yetus 0.12.0 in precommit job

2020-04-19 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17000?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-17000:
---
Resolution: Done
Status: Resolved  (was: Patch Available)

> [Test] Use Yetus 0.12.0 in precommit job
> 
>
> Key: HADOOP-17000
> URL: https://issues.apache.org/jira/browse/HADOOP-17000
> Project: Hadoop Common
>  Issue Type: Task
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Attachments: HADOOP-17000.01.patch
>
>







[jira] [Commented] (HADOOP-17000) [Test] Use Yetus 0.12.0 in precommit job

2020-04-19 Thread Akira Ajisaka (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17086833#comment-17086833
 ] 

Akira Ajisaka commented on HADOOP-17000:


Updated the configs.

> [Test] Use Yetus 0.12.0 in precommit job
> 
>
> Key: HADOOP-17000
> URL: https://issues.apache.org/jira/browse/HADOOP-17000
> Project: Hadoop Common
>  Issue Type: Task
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Attachments: HADOOP-17000.01.patch
>
>







[jira] [Commented] (HADOOP-17000) [Test] Use Yetus 0.12.0 in precommit job

2020-04-19 Thread Akira Ajisaka (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-17000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17086832#comment-17086832
 ] 

Akira Ajisaka commented on HADOOP-17000:


LGTM. Here is the updated config: 
https://gist.github.com/aajisaka/4b33b66e1832a6a8225d2d6caf7f6b8e

I'll update the HDFS, MAPREDUCE, and YARN jobs.

> [Test] Use Yetus 0.12.0 in precommit job
> 
>
> Key: HADOOP-17000
> URL: https://issues.apache.org/jira/browse/HADOOP-17000
> Project: Hadoop Common
>  Issue Type: Task
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Attachments: HADOOP-17000.01.patch
>
>







[jira] [Updated] (HADOOP-17000) [Test] Use Yetus 0.12.0 in precommit job

2020-04-18 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17000?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-17000:
---
Status: Patch Available  (was: Open)

Test patch to kick the Hadoop precommit job.

> [Test] Use Yetus 0.12.0 in precommit job
> 
>
> Key: HADOOP-17000
> URL: https://issues.apache.org/jira/browse/HADOOP-17000
> Project: Hadoop Common
>  Issue Type: Task
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Attachments: HADOOP-17000.01.patch
>
>







[jira] [Updated] (HADOOP-17000) [Test] Use Yetus 0.12.0 in precommit job

2020-04-18 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-17000?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-17000:
---
Attachment: HADOOP-17000.01.patch

> [Test] Use Yetus 0.12.0 in precommit job
> 
>
> Key: HADOOP-17000
> URL: https://issues.apache.org/jira/browse/HADOOP-17000
> Project: Hadoop Common
>  Issue Type: Task
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Attachments: HADOOP-17000.01.patch
>
>







[jira] [Created] (HADOOP-17000) [Test] Use Yetus 0.12.0 in precommit job

2020-04-18 Thread Akira Ajisaka (Jira)
Akira Ajisaka created HADOOP-17000:
--

 Summary: [Test] Use Yetus 0.12.0 in precommit job
 Key: HADOOP-17000
 URL: https://issues.apache.org/jira/browse/HADOOP-17000
 Project: Hadoop Common
  Issue Type: Task
  Components: build
Reporter: Akira Ajisaka
Assignee: Akira Ajisaka









[jira] [Updated] (HADOOP-16944) Use Yetus 0.12.0 in GitHub PR

2020-04-18 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16944?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-16944:
---
Summary: Use Yetus 0.12.0 in GitHub PR  (was: Use Yetus 0.12.0-SNAPSHOT for 
precommit jobs)

> Use Yetus 0.12.0 in GitHub PR
> -
>
> Key: HADOOP-16944
> URL: https://issues.apache.org/jira/browse/HADOOP-16944
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Fix For: 3.4.0
>
>
> HADOOP-16054 aims to upgrade the Ubuntu version of the Docker image from 
> 16.04 to 18.04. However, Ubuntu 18.04 ships Maven 3.6.0 by default, and the 
> pre-commit jobs fail to add comments to GitHub and JIRA. The issue was fixed 
> by YETUS-957, so upgrading the Yetus version to 0.12.0-SNAPSHOT (or 0.12.0, 
> once released) will fix the problem.
> How to upgrade the Yetus version in the pre-commit jobs:
> * GitHub PR (hadoop-multibranch): upgrade the Jenkinsfile
> * JIRA (PreCommit--Build): manually update the config in builds.apache.org






[jira] [Commented] (HADOOP-16944) Use Yetus 0.12.0-SNAPSHOT for precommit jobs

2020-04-18 Thread Akira Ajisaka (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17086573#comment-17086573
 ] 

Akira Ajisaka commented on HADOOP-16944:


I'll cherry-pick this to the lower branches after branch-3.3.0 is cut.

> Use Yetus 0.12.0-SNAPSHOT for precommit jobs
> 
>
> Key: HADOOP-16944
> URL: https://issues.apache.org/jira/browse/HADOOP-16944
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Fix For: 3.4.0
>
>
> HADOOP-16054 aims to upgrade the Ubuntu version of the Docker image from 
> 16.04 to 18.04. However, Ubuntu 18.04 ships Maven 3.6.0 by default, and the 
> pre-commit jobs fail to add comments to GitHub and JIRA. The issue was fixed 
> by YETUS-957, so upgrading the Yetus version to 0.12.0-SNAPSHOT (or 0.12.0, 
> once released) will fix the problem.
> How to upgrade the Yetus version in the pre-commit jobs:
> * GitHub PR (hadoop-multibranch): upgrade the Jenkinsfile
> * JIRA (PreCommit--Build): manually update the config in builds.apache.org






[jira] [Commented] (HADOOP-16944) Use Yetus 0.12.0-SNAPSHOT for precommit jobs

2020-04-18 Thread Akira Ajisaka (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17086552#comment-17086552
 ] 

Akira Ajisaka commented on HADOOP-16944:


Merged the PR into trunk. Thanks [~ayushtkn] for the review.

> Use Yetus 0.12.0-SNAPSHOT for precommit jobs
> 
>
> Key: HADOOP-16944
> URL: https://issues.apache.org/jira/browse/HADOOP-16944
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
>
> HADOOP-16054 aims to upgrade the Ubuntu version of the Docker image from 
> 16.04 to 18.04. However, Ubuntu 18.04 ships Maven 3.6.0 by default, and the 
> pre-commit jobs fail to add comments to GitHub and JIRA. The issue was fixed 
> by YETUS-957, so upgrading the Yetus version to 0.12.0-SNAPSHOT (or 0.12.0, 
> once released) will fix the problem.
> How to upgrade the Yetus version in the pre-commit jobs:
> * GitHub PR (hadoop-multibranch): upgrade the Jenkinsfile
> * JIRA (PreCommit--Build): manually update the config in builds.apache.org






[jira] [Updated] (HADOOP-16944) Use Yetus 0.12.0-SNAPSHOT for precommit jobs

2020-04-18 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16944?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-16944:
---
Fix Version/s: 3.4.0

> Use Yetus 0.12.0-SNAPSHOT for precommit jobs
> 
>
> Key: HADOOP-16944
> URL: https://issues.apache.org/jira/browse/HADOOP-16944
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
> Fix For: 3.4.0
>
>
> HADOOP-16054 aims to upgrade the Ubuntu version of the Docker image from 
> 16.04 to 18.04. However, Ubuntu 18.04 ships Maven 3.6.0 by default, and the 
> pre-commit jobs fail to add comments to GitHub and JIRA. The issue was fixed 
> by YETUS-957, so upgrading the Yetus version to 0.12.0-SNAPSHOT (or 0.12.0, 
> once released) will fix the problem.
> How to upgrade the Yetus version in the pre-commit jobs:
> * GitHub PR (hadoop-multibranch): upgrade the Jenkinsfile
> * JIRA (PreCommit--Build): manually update the config in builds.apache.org






[jira] [Commented] (HADOOP-16944) Use Yetus 0.12.0-SNAPSHOT for precommit jobs

2020-04-17 Thread Akira Ajisaka (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17085594#comment-17085594
 ] 

Akira Ajisaka commented on HADOOP-16944:


Yetus 0.12.0 has been released.
Hi [~ayushtkn], would you check the PR?

> Use Yetus 0.12.0-SNAPSHOT for precommit jobs
> 
>
> Key: HADOOP-16944
> URL: https://issues.apache.org/jira/browse/HADOOP-16944
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
>
> HADOOP-16054 aims to upgrade the Ubuntu version of the Docker image from 
> 16.04 to 18.04. However, Ubuntu 18.04 ships Maven 3.6.0 by default, and the 
> pre-commit jobs fail to add comments to GitHub and JIRA. The issue was fixed 
> by YETUS-957, so upgrading the Yetus version to 0.12.0-SNAPSHOT (or 0.12.0, 
> once released) will fix the problem.
> How to upgrade the Yetus version in the pre-commit jobs:
> * GitHub PR (hadoop-multibranch): upgrade the Jenkinsfile
> * JIRA (PreCommit--Build): manually update the config in builds.apache.org






[jira] [Updated] (HADOOP-16993) Hadoop 3.1.2 download link is broken

2020-04-16 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-16993:
---
Status: Patch Available  (was: Open)

> Hadoop 3.1.2 download link is broken
> 
>
> Key: HADOOP-16993
> URL: https://issues.apache.org/jira/browse/HADOOP-16993
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: website
>Reporter: Arpit Agarwal
>Assignee: Akira Ajisaka
>Priority: Major
>
> Remove broken Hadoop 3.1.2 download links from the website.
> https://hadoop.apache.org/releases.html






[jira] [Updated] (HADOOP-16993) Hadoop 3.1.2 download link is broken

2020-04-16 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-16993:
---
Summary: Hadoop 3.1.2 download link is broken  (was: Hadoop 3.2.1 download 
link is broken)

> Hadoop 3.1.2 download link is broken
> 
>
> Key: HADOOP-16993
> URL: https://issues.apache.org/jira/browse/HADOOP-16993
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: website
>Reporter: Arpit Agarwal
>Assignee: Akira Ajisaka
>Priority: Major
>
> Remove broken Hadoop 3.1.2 download links from the website.
> https://hadoop.apache.org/releases.html






[jira] [Created] (HADOOP-16993) Hadoop 3.2.1 download link is broken

2020-04-16 Thread Akira Ajisaka (Jira)
Akira Ajisaka created HADOOP-16993:
--

 Summary: Hadoop 3.2.1 download link is broken
 Key: HADOOP-16993
 URL: https://issues.apache.org/jira/browse/HADOOP-16993
 Project: Hadoop Common
  Issue Type: Bug
  Components: website
Reporter: Arpit Agarwal
Assignee: Akira Ajisaka


Remove broken Hadoop 3.1.2 download links from the website.
https://hadoop.apache.org/releases.html






[jira] [Updated] (HADOOP-16988) Remove source code from branch-2

2020-04-15 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-16988:
---
Status: Patch Available  (was: Open)

> Remove source code from branch-2
> 
>
> Key: HADOOP-16988
> URL: https://issues.apache.org/jira/browse/HADOOP-16988
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
>
> branch-2 is now dead and unused. I think we can delete the entire source 
> code from branch-2 to avoid accidentally committing or cherry-picking to the 
> unused branch.
> Chen Liang asked ASF INFRA for help, but that didn't help us: INFRA-19581
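The mechanics of emptying the branch are not spelled out in the issue; a hedged sketch of one way to do it (the commands and README wording are illustrative, demonstrated on a throwaway repository standing in for hadoop.git):

```shell
# Sketch: strip a retired branch down to a tombstone README so accidental
# commits or cherry-picks fail loudly. Uses a temporary repo for illustration.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo
git checkout -qb branch-2
echo 'class Main {}' > Main.java
git add . && git commit -qm "existing branch-2 source"

# The actual cleanup step: remove everything, leave only a pointer.
git rm -rq .
echo 'branch-2 is retired; use a 2.10.x release branch instead.' > README.md
git add README.md
git commit -qm "HADOOP-16988. Remove source code from branch-2."
git ls-files   # the branch now tracks only README.md
```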






[jira] [Assigned] (HADOOP-16988) Remove source code from branch-2

2020-04-15 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka reassigned HADOOP-16988:
--

Assignee: Akira Ajisaka

> Remove source code from branch-2
> 
>
> Key: HADOOP-16988
> URL: https://issues.apache.org/jira/browse/HADOOP-16988
> Project: Hadoop Common
>  Issue Type: Task
>Reporter: Akira Ajisaka
>Assignee: Akira Ajisaka
>Priority: Major
>
> branch-2 is now dead and unused. I think we can delete the entire source 
> code from branch-2 to avoid accidentally committing or cherry-picking to the 
> unused branch.
> Chen Liang asked ASF INFRA for help, but that didn't help us: INFRA-19581






[jira] [Updated] (HADOOP-16734) Backport HADOOP-16455- "ABFS: Implement FileSystem.access() method" to branch-2

2020-04-15 Thread Akira Ajisaka (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16734?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-16734:
---
Fix Version/s: (was: 2.10.0)

> Backport HADOOP-16455- "ABFS: Implement FileSystem.access() method" to 
> branch-2
> ---
>
> Key: HADOOP-16734
> URL: https://issues.apache.org/jira/browse/HADOOP-16734
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.3.0
>Reporter: Bilahari T H
>Assignee: Bilahari T H
>Priority: Minor
>
> Backport https://issues.apache.org/jira/browse/HADOOP-16455 to branch-2





