[jira] [Commented] (HADOOP-16655) Change cipher suite when fetching tomcat tarball for branch-2

2019-10-15 Thread Akira Ajisaka (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16952458#comment-16952458
 ] 

Akira Ajisaka commented on HADOOP-16655:


The original motivation behind HADOOP-16323 was to prevent possible MITM attacks: 
[https://medium.com/bugbountywriteup/want-to-take-over-the-java-ecosystem-all-you-need-is-a-mitm-1fc329d898fb]

> Change cipher suite when fetching tomcat tarball for branch-2
> -
>
> Key: HADOOP-16655
> URL: https://issues.apache.org/jira/browse/HADOOP-16655
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
>Priority: Major
> Fix For: 2.10.0, 2.11.0
>
> Attachments: HADOOP-16655-branch-2.001.patch, 
> HADOOP-16655-branch-2.002.patch
>
>
> Hit this error when building via docker:
> {noformat}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-antrun-plugin:1.7:run (dist) on project 
> hadoop-kms: An Ant BuildException has occured: 
> javax.net.ssl.SSLHandshakeException: Received fatal alert: handshake_failure
> [ERROR] around Ant part ... skipexisting="true" verbose="true" 
> src="https://archive.apache.org/dist/tomcat/tomcat-8/v8.5.43/bin/apache-tomcat-8.5.43.tar.gz"/>...
>  @ 5:183 in 
> /build/source/hadoop-common-project/hadoop-kms/target/antrun/build-main.xml
> [ERROR] -> [Help 1] {noformat}
> Seems this is caused by HADOOP-16323 which fetches via https.
> This should only be an issue in branch-2 since this was removed for KMS in 
> HADOOP-13597, and httpfs in HDFS-10860
>  






[jira] [Commented] (HADOOP-16655) Change cipher suite when fetching tomcat tarball for branch-2

2019-10-15 Thread Akira Ajisaka (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16952452#comment-16952452
 ] 

Akira Ajisaka commented on HADOOP-16655:


Thank you for the fix, [~jhung] and [~weichiu].

> Change cipher suite when fetching tomcat tarball for branch-2
> -
>
> Key: HADOOP-16655
> URL: https://issues.apache.org/jira/browse/HADOOP-16655
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
>Priority: Major
> Fix For: 2.10.0, 2.11.0
>
> Attachments: HADOOP-16655-branch-2.001.patch, 
> HADOOP-16655-branch-2.002.patch
>
>
> Hit this error when building via docker:
> {noformat}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-antrun-plugin:1.7:run (dist) on project 
> hadoop-kms: An Ant BuildException has occured: 
> javax.net.ssl.SSLHandshakeException: Received fatal alert: handshake_failure
> [ERROR] around Ant part ... skipexisting="true" verbose="true" 
> src="https://archive.apache.org/dist/tomcat/tomcat-8/v8.5.43/bin/apache-tomcat-8.5.43.tar.gz"/>...
>  @ 5:183 in 
> /build/source/hadoop-common-project/hadoop-kms/target/antrun/build-main.xml
> [ERROR] -> [Help 1] {noformat}
> Seems this is caused by HADOOP-16323 which fetches via https.
> This should only be an issue in branch-2 since this was removed for KMS in 
> HADOOP-13597, and httpfs in HDFS-10860
>  






[jira] [Commented] (HADOOP-15870) S3AInputStream.remainingInFile should use nextReadPos

2019-10-15 Thread lqjacklee (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15870?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16952432#comment-16952432
 ] 

lqjacklee commented on HADOOP-15870:


HADOOP-15870-006.patch provides the option `support-available-is-zero` to check 
whether an `available` value of zero is supported, the option 
`support-available-is-positive` to check whether a positive `available` value is 
supported, and the option `support-available-at-eof` to check whether `available` 
at EOF is supported.
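
For context, the behaviour the issue itself asks for can be sketched as follows 
(a hypothetical method of mine, not the actual S3AInputStream code): the 
remaining-bytes value should be derived from the read position that seek() sets, 
so it changes immediately after a seek.
{code}
// Hedged illustration only; not S3AInputStream itself.
static long remainingInFile(long contentLength, long nextReadPos) {
  // Use the position the next read() will start from, not a stale offset.
  return Math.max(0L, contentLength - nextReadPos);
}
{code}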

> S3AInputStream.remainingInFile should use nextReadPos
> -
>
> Key: HADOOP-15870
> URL: https://issues.apache.org/jira/browse/HADOOP-15870
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.4, 3.1.1
>Reporter: Shixiong Zhu
>Assignee: lqjacklee
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-15870-002.patch, HADOOP-15870-003.patch, 
> HADOOP-15870-004.patch, HADOOP-15870-005.patch, HADOOP-15870-006.patch
>
>
> Otherwise `remainingInFile` will not change after `seek`.






[jira] [Updated] (HADOOP-15870) S3AInputStream.remainingInFile should use nextReadPos

2019-10-15 Thread lqjacklee (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15870?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

lqjacklee updated HADOOP-15870:
---
Attachment: HADOOP-15870-006.patch

> S3AInputStream.remainingInFile should use nextReadPos
> -
>
> Key: HADOOP-15870
> URL: https://issues.apache.org/jira/browse/HADOOP-15870
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.4, 3.1.1
>Reporter: Shixiong Zhu
>Assignee: lqjacklee
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-15870-002.patch, HADOOP-15870-003.patch, 
> HADOOP-15870-004.patch, HADOOP-15870-005.patch, HADOOP-15870-006.patch
>
>
> Otherwise `remainingInFile` will not change after `seek`.






[GitHub] [hadoop] hddong commented on a change in pull request #1614: HADOOP-16615. Add password check for credential provider

2019-10-15 Thread GitBox
hddong commented on a change in pull request #1614: HADOOP-16615. Add password 
check for credential provider
URL: https://github.com/apache/hadoop/pull/1614#discussion_r334237483
 
 

 ##
 File path: 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/alias/CredentialShell.java
 ##
 @@ -293,6 +300,91 @@ public String getUsage() {
 }
   }
 
+  private class CheckCommand extends Command {
+public static final String USAGE = "check  [-value alias-value] " +
+"[-provider provider-path] [-strict]";
+public static final String DESC =
+"The check subcommand check a password for the name\n" +
+"specified as the  argument within the provider indicated\n" +
+"through the -provider argument. If -strict is supplied, fail\n" +
+"immediately if the provider requires a password and none is given.\n" 
+
+"If -value is provided, use that for the value of the credential\n" +
+"instead of prompting the user.";
+
+private String alias = null;
+
+public CheckCommand(String alias) {
 
 Review comment:
  @steveloughran there is a checkstyle error here: `public CheckCommand(String alias) 
{:5: Redundant 'public' modifier. [RedundantModifier]`, but I think `public` 
is needed here.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services




[jira] [Updated] (HADOOP-16655) Change cipher suite when fetching tomcat tarball for branch-2

2019-10-15 Thread Jonathan Hung (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hung updated HADOOP-16655:
---
Fix Version/s: 2.11.0
   2.10.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Thanks! Committed.

> Change cipher suite when fetching tomcat tarball for branch-2
> -
>
> Key: HADOOP-16655
> URL: https://issues.apache.org/jira/browse/HADOOP-16655
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
>Priority: Major
> Fix For: 2.10.0, 2.11.0
>
> Attachments: HADOOP-16655-branch-2.001.patch, 
> HADOOP-16655-branch-2.002.patch
>
>
> Hit this error when building via docker:
> {noformat}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-antrun-plugin:1.7:run (dist) on project 
> hadoop-kms: An Ant BuildException has occured: 
> javax.net.ssl.SSLHandshakeException: Received fatal alert: handshake_failure
> [ERROR] around Ant part ... skipexisting="true" verbose="true" 
> src="https://archive.apache.org/dist/tomcat/tomcat-8/v8.5.43/bin/apache-tomcat-8.5.43.tar.gz"/>...
>  @ 5:183 in 
> /build/source/hadoop-common-project/hadoop-kms/target/antrun/build-main.xml
> [ERROR] -> [Help 1] {noformat}
> Seems this is caused by HADOOP-16323 which fetches via https.
> This should only be an issue in branch-2 since this was removed for KMS in 
> HADOOP-13597, and httpfs in HDFS-10860
>  






[jira] [Updated] (HADOOP-16655) Change cipher suite when fetching tomcat tarball for branch-2

2019-10-15 Thread Jonathan Hung (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hung updated HADOOP-16655:
---
Summary: Change cipher suite when fetching tomcat tarball for branch-2  
(was: Use http when fetching tomcat tarball for branch-2)

> Change cipher suite when fetching tomcat tarball for branch-2
> -
>
> Key: HADOOP-16655
> URL: https://issues.apache.org/jira/browse/HADOOP-16655
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
>Priority: Major
> Attachments: HADOOP-16655-branch-2.001.patch, 
> HADOOP-16655-branch-2.002.patch
>
>
> Hit this error when building via docker:
> {noformat}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-antrun-plugin:1.7:run (dist) on project 
> hadoop-kms: An Ant BuildException has occured: 
> javax.net.ssl.SSLHandshakeException: Received fatal alert: handshake_failure
> [ERROR] around Ant part ... skipexisting="true" verbose="true" 
> src="https://archive.apache.org/dist/tomcat/tomcat-8/v8.5.43/bin/apache-tomcat-8.5.43.tar.gz"/>...
>  @ 5:183 in 
> /build/source/hadoop-common-project/hadoop-kms/target/antrun/build-main.xml
> [ERROR] -> [Help 1] {noformat}
> Seems this is caused by HADOOP-16323 which fetches via https.
> This should only be an issue in branch-2 since this was removed for KMS in 
> HADOOP-13597, and httpfs in HDFS-10860
>  






[jira] [Commented] (HADOOP-16655) Use http when fetching tomcat tarball for branch-2

2019-10-15 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16952403#comment-16952403
 ] 

Wei-Chiu Chuang commented on HADOOP-16655:
--

LGTM +1

http://www.apache.org/dev/release-download-pages.html states that checksums, 
detached signatures and public keys should use HTTPS, but it doesn't say the 
artifacts themselves must be fetched over HTTPS. So not the best, but that's 
probably OK.

> Use http when fetching tomcat tarball for branch-2
> --
>
> Key: HADOOP-16655
> URL: https://issues.apache.org/jira/browse/HADOOP-16655
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
>Priority: Major
> Attachments: HADOOP-16655-branch-2.001.patch, 
> HADOOP-16655-branch-2.002.patch
>
>
> Hit this error when building via docker:
> {noformat}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-antrun-plugin:1.7:run (dist) on project 
> hadoop-kms: An Ant BuildException has occured: 
> javax.net.ssl.SSLHandshakeException: Received fatal alert: handshake_failure
> [ERROR] around Ant part ... skipexisting="true" verbose="true" 
> src="https://archive.apache.org/dist/tomcat/tomcat-8/v8.5.43/bin/apache-tomcat-8.5.43.tar.gz"/>...
>  @ 5:183 in 
> /build/source/hadoop-common-project/hadoop-kms/target/antrun/build-main.xml
> [ERROR] -> [Help 1] {noformat}
> Seems this is caused by HADOOP-16323 which fetches via https.
> This should only be an issue in branch-2 since this was removed for KMS in 
> HADOOP-13597, and httpfs in HDFS-10860
>  






[jira] [Commented] (HADOOP-16655) Use http when fetching tomcat tarball for branch-2

2019-10-15 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16952398#comment-16952398
 ] 

Hadoop QA commented on HADOOP-16655:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 20m 
30s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
55s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
40s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
54s{color} | {color:green} branch-2 passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
38s{color} | {color:green} branch-2 passed with JDK v1.8.0_222 {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
26s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} branch-2 passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} branch-2 passed with JDK v1.8.0_222 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
47s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
18s{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m  
1s{color} | {color:green} the patch passed with JDK v1.8.0_222 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed with JDK v1.8.0_222 {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
47s{color} | {color:green} hadoop-kms in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m  
1s{color} | {color:green} hadoop-hdfs-httpfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
39s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 99m 36s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.3 Server=19.03.3 Image:yetus/hadoop:da675796017 |
| JIRA Issue | HADOOP-16655 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12983100/HADOOP-16655-branch-2.001.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  xml  |
| uname | Linux 85126ff81c39 4.15.0-65-generic #74-Ubuntu SMP Tue Sep 17 
17:06:04 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | branch-2 / 1081272 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 

[jira] [Commented] (HADOOP-16655) Use http when fetching tomcat tarball for branch-2

2019-10-15 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16952390#comment-16952390
 ] 

Hadoop QA commented on HADOOP-16655:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m  
1s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} hadolint {color} | {color:green}  0m  
2s{color} | {color:green} There were no new hadolint issues. {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 0s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m  
9s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}  2m 10s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.3 Server=19.03.3 Image:yetus/hadoop:da675796017 |
| JIRA Issue | HADOOP-16655 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12983105/HADOOP-16655-branch-2.002.patch
 |
| Optional Tests |  dupname  asflicense  hadolint  shellcheck  shelldocs  |
| uname | Linux 41fa54fe6586 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 
10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | branch-2 / 1081272 |
| maven | version: Apache Maven 3.3.9 |
| shellcheck | v0.4.6 |
| Max. process+thread count | 43 (vs. ulimit of 5500) |
| modules | C: . U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16600/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Use http when fetching tomcat tarball for branch-2
> --
>
> Key: HADOOP-16655
> URL: https://issues.apache.org/jira/browse/HADOOP-16655
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
>Priority: Major
> Attachments: HADOOP-16655-branch-2.001.patch, 
> HADOOP-16655-branch-2.002.patch
>
>
> Hit this error when building via docker:
> {noformat}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-antrun-plugin:1.7:run (dist) on project 
> hadoop-kms: An Ant BuildException has occured: 
> javax.net.ssl.SSLHandshakeException: Received fatal alert: handshake_failure
> [ERROR] around Ant part ... skipexisting="true" verbose="true" 
> src="https://archive.apache.org/dist/tomcat/tomcat-8/v8.5.43/bin/apache-tomcat-8.5.43.tar.gz"/>...
>  @ 5:183 in 
> /build/source/hadoop-common-project/hadoop-kms/target/antrun/build-main.xml
> [ERROR] -> [Help 1] {noformat}
> Seems this is caused by HADOOP-16323 which fetches via https.
> This should only be an issue in branch-2 since this was removed for KMS in 
> HADOOP-13597, and httpfs in HDFS-10860
>  






[jira] [Commented] (HADOOP-16655) Use http when fetching tomcat tarball for branch-2

2019-10-15 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16952387#comment-16952387
 ] 

Hadoop QA commented on HADOOP-16655:


(!) A patch to the testing environment has been detected. 
Re-executing against the patched versions to perform further tests. 
The console is at 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16600/console in case of 
problems.


> Use http when fetching tomcat tarball for branch-2
> --
>
> Key: HADOOP-16655
> URL: https://issues.apache.org/jira/browse/HADOOP-16655
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
>Priority: Major
> Attachments: HADOOP-16655-branch-2.001.patch, 
> HADOOP-16655-branch-2.002.patch
>
>
> Hit this error when building via docker:
> {noformat}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-antrun-plugin:1.7:run (dist) on project 
> hadoop-kms: An Ant BuildException has occured: 
> javax.net.ssl.SSLHandshakeException: Received fatal alert: handshake_failure
> [ERROR] around Ant part ... skipexisting="true" verbose="true" 
> src="https://archive.apache.org/dist/tomcat/tomcat-8/v8.5.43/bin/apache-tomcat-8.5.43.tar.gz"/>...
>  @ 5:183 in 
> /build/source/hadoop-common-project/hadoop-kms/target/antrun/build-main.xml
> [ERROR] -> [Help 1] {noformat}
> Seems this is caused by HADOOP-16323 which fetches via https.
> This should only be an issue in branch-2 since this was removed for KMS in 
> HADOOP-13597, and httpfs in HDFS-10860
>  






[jira] [Commented] (HADOOP-16655) Use http when fetching tomcat tarball for branch-2

2019-10-15 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16952381#comment-16952381
 ] 

Hadoop QA commented on HADOOP-16655:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 21m 
25s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
57s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m  
4s{color} | {color:green} branch-2 passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
12s{color} | {color:green} branch-2 passed with JDK v1.8.0_222 {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
20s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} branch-2 passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} branch-2 passed with JDK v1.8.0_222 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
15s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
22s{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m  
8s{color} | {color:green} the patch passed with JDK v1.8.0_222 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
3s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed with JDK v1.8.0_222 {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
46s{color} | {color:green} hadoop-kms in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m  
5s{color} | {color:green} hadoop-hdfs-httpfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
37s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 91m 43s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=19.03.3 Server=19.03.3 Image:yetus/hadoop:da675796017 |
| JIRA Issue | HADOOP-16655 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12983100/HADOOP-16655-branch-2.001.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  xml  |
| uname | Linux c65ef6729770 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 
10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | branch-2 / 1081272 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 

[jira] [Updated] (HADOOP-16655) Use http when fetching tomcat tarball for branch-2

2019-10-15 Thread Jonathan Hung (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hung updated HADOOP-16655:
---
Attachment: HADOOP-16655-branch-2.002.patch

> Use http when fetching tomcat tarball for branch-2
> --
>
> Key: HADOOP-16655
> URL: https://issues.apache.org/jira/browse/HADOOP-16655
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
>Priority: Major
> Attachments: HADOOP-16655-branch-2.001.patch, 
> HADOOP-16655-branch-2.002.patch
>
>
> Hit this error when building via docker:
> {noformat}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-antrun-plugin:1.7:run (dist) on project 
> hadoop-kms: An Ant BuildException has occured: 
> javax.net.ssl.SSLHandshakeException: Received fatal alert: handshake_failure
> [ERROR] around Ant part ... skipexisting="true" verbose="true" 
> src="https://archive.apache.org/dist/tomcat/tomcat-8/v8.5.43/bin/apache-tomcat-8.5.43.tar.gz"/>...
>  @ 5:183 in 
> /build/source/hadoop-common-project/hadoop-kms/target/antrun/build-main.xml
> [ERROR] -> [Help 1] {noformat}
> Seems this is caused by HADOOP-16323 which fetches via https.
> This should only be an issue in branch-2 since this was removed for KMS in 
> HADOOP-13597, and httpfs in HDFS-10860
>  






[jira] [Commented] (HADOOP-16655) Use http when fetching tomcat tarball for branch-2

2019-10-15 Thread Jonathan Hung (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16952380#comment-16952380
 ] 

Jonathan Hung commented on HADOOP-16655:


Oh, interesting. That seems to work too. Attached 002 patch for this. Mind 
taking a look [~weichiu]? Thanks!

> Use http when fetching tomcat tarball for branch-2
> --
>
> Key: HADOOP-16655
> URL: https://issues.apache.org/jira/browse/HADOOP-16655
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
>Priority: Major
> Attachments: HADOOP-16655-branch-2.001.patch, 
> HADOOP-16655-branch-2.002.patch
>
>
> Hit this error when building via docker:
> {noformat}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-antrun-plugin:1.7:run (dist) on project 
> hadoop-kms: An Ant BuildException has occured: 
> javax.net.ssl.SSLHandshakeException: Received fatal alert: handshake_failure
> [ERROR] around Ant part ... skipexisting="true" verbose="true" 
> src="https://archive.apache.org/dist/tomcat/tomcat-8/v8.5.43/bin/apache-tomcat-8.5.43.tar.gz"/>...
>  @ 5:183 in 
> /build/source/hadoop-common-project/hadoop-kms/target/antrun/build-main.xml
> [ERROR] -> [Help 1] {noformat}
> Seems this is caused by HADOOP-16323 which fetches via https.
> This should only be an issue in branch-2 since this was removed for KMS in 
> HADOOP-13597, and httpfs in HDFS-10860
>  






[jira] [Commented] (HADOOP-16655) Use http when fetching tomcat tarball for branch-2

2019-10-15 Thread Wei-Chiu Chuang (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16952354#comment-16952354
 ] 

Wei-Chiu Chuang commented on HADOOP-16655:
--

I think it's a problem with JDK7?

This is what I have when I use JDK7 to build:
{noformat}
JAVA_HOME=`/usr/libexec/java_home -v 1.7` mvn -Dhttps.protocols=TLSv1.2  
-Dhttps.cipherSuites=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256 clean install 
-DskipTests -Pdist -Dmaven.javadoc.skip=true
{noformat}
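
As a hedged illustration (my own snippet, not part of the patch), the following 
prints the TLS protocols and cipher suites the JDK's default SSLContext enables; 
running it with the same JDK 7 used for the build helps show why the handshake 
fails unless protocols and cipher suites are pinned as in the command above.
{code}
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLParameters;
import java.util.Arrays;

public class ShowTlsDefaults {
  public static void main(String[] args) throws Exception {
    // Defaults of the JVM running this program.
    SSLParameters params = SSLContext.getDefault().getDefaultSSLParameters();
    System.out.println("Protocols:     " + Arrays.toString(params.getProtocols()));
    System.out.println("Cipher suites: " + Arrays.toString(params.getCipherSuites()));
  }
}
{code}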


> Use http when fetching tomcat tarball for branch-2
> --
>
> Key: HADOOP-16655
> URL: https://issues.apache.org/jira/browse/HADOOP-16655
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
>Priority: Major
> Attachments: HADOOP-16655-branch-2.001.patch
>
>
> Hit this error when building via docker:
> {noformat}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-antrun-plugin:1.7:run (dist) on project 
> hadoop-kms: An Ant BuildException has occured: 
> javax.net.ssl.SSLHandshakeException: Received fatal alert: handshake_failure
> [ERROR] around Ant part ... skipexisting="true" verbose="true" 
> src="https://archive.apache.org/dist/tomcat/tomcat-8/v8.5.43/bin/apache-tomcat-8.5.43.tar.gz"/>...
>  @ 5:183 in 
> /build/source/hadoop-common-project/hadoop-kms/target/antrun/build-main.xml
> [ERROR] -> [Help 1] {noformat}
> Seems this is caused by HADOOP-16323 which fetches via https.
> This should only be an issue in branch-2 since this was removed for KMS in 
> HADOOP-13597, and httpfs in HDFS-10860
>  






[jira] [Created] (HADOOP-16656) Document FairCallQueue configs in core-default.xml

2019-10-15 Thread Siyao Meng (Jira)
Siyao Meng created HADOOP-16656:
---

 Summary: Document FairCallQueue configs in core-default.xml
 Key: HADOOP-16656
 URL: https://issues.apache.org/jira/browse/HADOOP-16656
 Project: Hadoop Common
  Issue Type: Task
Reporter: Siyao Meng


So far the callqueue / scheduler / FairCallQueue-related configurations are 
only documented in FairCallQueue.md in 3.3.0:
https://aajisaka.github.io/hadoop-document/hadoop-project/hadoop-project-dist/hadoop-common/FairCallQueue.html#Full_List_of_Configurations
(Thanks Akira for uploading this.)

Goal: document those configs in core-default.xml as well, to make it easier for 
users (admins) to find and use them.
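
For reference, a small illustrative snippet (my assumption: the key names follow 
the per-port pattern in the FairCallQueue documentation linked above, and port 
8020 is only an example) showing how these settings are looked up per RPC port, 
which is part of why they are easy to miss without core-default.xml entries.
{code}
import org.apache.hadoop.conf.Configuration;

public class ShowCallQueueConfig {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    String prefix = "ipc.8020.";  // example NameNode RPC port
    // Unset unless the admin configured FairCallQueue for this port.
    System.out.println("callqueue.impl = " + conf.get(prefix + "callqueue.impl"));
    System.out.println("scheduler.impl = " + conf.get(prefix + "scheduler.impl"));
  }
}
{code}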






[jira] [Commented] (HADOOP-16655) Use http when fetching tomcat tarball for branch-2

2019-10-15 Thread Jonathan Hung (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16655?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16952322#comment-16952322
 ] 

Jonathan Hung commented on HADOOP-16655:


[~aajisaka] mind taking a look at this? Not sure about the original motivation 
behind HADOOP-16323. Thanks!

> Use http when fetching tomcat tarball for branch-2
> --
>
> Key: HADOOP-16655
> URL: https://issues.apache.org/jira/browse/HADOOP-16655
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
>Priority: Major
> Attachments: HADOOP-16655-branch-2.001.patch
>
>
> Hit this error when building via docker:
> {noformat}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-antrun-plugin:1.7:run (dist) on project 
> hadoop-kms: An Ant BuildException has occured: 
> javax.net.ssl.SSLHandshakeException: Received fatal alert: handshake_failure
> [ERROR] around Ant part ... skipexisting="true" verbose="true" 
> src="https://archive.apache.org/dist/tomcat/tomcat-8/v8.5.43/bin/apache-tomcat-8.5.43.tar.gz"/>...
>  @ 5:183 in 
> /build/source/hadoop-common-project/hadoop-kms/target/antrun/build-main.xml
> [ERROR] -> [Help 1] {noformat}
> Seems this is caused by HADOOP-16323 which fetches via https.
> This should only be an issue in branch-2 since this was removed for KMS in 
> HADOOP-13597, and httpfs in HDFS-10860
>  






[jira] [Updated] (HADOOP-16655) Use http when fetching tomcat tarball for branch-2

2019-10-15 Thread Jonathan Hung (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hung updated HADOOP-16655:
---
Status: Patch Available  (was: Open)

> Use http when fetching tomcat tarball for branch-2
> --
>
> Key: HADOOP-16655
> URL: https://issues.apache.org/jira/browse/HADOOP-16655
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
>Priority: Major
> Attachments: HADOOP-16655-branch-2.001.patch
>
>
> Hit this error when building via docker:
> {noformat}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-antrun-plugin:1.7:run (dist) on project 
> hadoop-kms: An Ant BuildException has occured: 
> javax.net.ssl.SSLHandshakeException: Received fatal alert: handshake_failure
> [ERROR] around Ant part ... skipexisting="true" verbose="true" 
> src="https://archive.apache.org/dist/tomcat/tomcat-8/v8.5.43/bin/apache-tomcat-8.5.43.tar.gz"/>...
>  @ 5:183 in 
> /build/source/hadoop-common-project/hadoop-kms/target/antrun/build-main.xml
> [ERROR] -> [Help 1] {noformat}
> Seems this is caused by HADOOP-16323 which fetches via https.
> This should only be an issue in branch-2 since this was removed for KMS in 
> HADOOP-13597, and httpfs in HDFS-10860
>  






[jira] [Updated] (HADOOP-16655) Use http when fetching tomcat tarball for branch-2

2019-10-15 Thread Jonathan Hung (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16655?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hung updated HADOOP-16655:
---
Attachment: HADOOP-16655-branch-2.001.patch

> Use http when fetching tomcat tarball for branch-2
> --
>
> Key: HADOOP-16655
> URL: https://issues.apache.org/jira/browse/HADOOP-16655
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Jonathan Hung
>Assignee: Jonathan Hung
>Priority: Major
> Attachments: HADOOP-16655-branch-2.001.patch
>
>
> Hit this error when building via docker:
> {noformat}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-antrun-plugin:1.7:run (dist) on project 
> hadoop-kms: An Ant BuildException has occured: 
> javax.net.ssl.SSLHandshakeException: Received fatal alert: handshake_failure
> [ERROR] around Ant part ... skipexisting="true" verbose="true" 
> src="https://archive.apache.org/dist/tomcat/tomcat-8/v8.5.43/bin/apache-tomcat-8.5.43.tar.gz"/>...
>  @ 5:183 in 
> /build/source/hadoop-common-project/hadoop-kms/target/antrun/build-main.xml
> [ERROR] -> [Help 1] {noformat}
> Seems this is caused by HADOOP-16323 which fetches via https.
> This should only be an issue in branch-2 since this was removed for KMS in 
> HADOOP-13597, and httpfs in HDFS-10860
>  






[jira] [Created] (HADOOP-16655) Use http when fetching tomcat tarball for branch-2

2019-10-15 Thread Jonathan Hung (Jira)
Jonathan Hung created HADOOP-16655:
--

 Summary: Use http when fetching tomcat tarball for branch-2
 Key: HADOOP-16655
 URL: https://issues.apache.org/jira/browse/HADOOP-16655
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Jonathan Hung
Assignee: Jonathan Hung


Hit this error when building via docker:
{noformat}
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-antrun-plugin:1.7:run (dist) on project 
hadoop-kms: An Ant BuildException has occured: 
javax.net.ssl.SSLHandshakeException: Received fatal alert: handshake_failure
[ERROR] around Ant part ... skipexisting="true" verbose="true" 
src="https://archive.apache.org/dist/tomcat/tomcat-8/v8.5.43/bin/apache-tomcat-8.5.43.tar.gz"/>...
 @ 5:183 in 
/build/source/hadoop-common-project/hadoop-kms/target/antrun/build-main.xml
[ERROR] -> [Help 1] {noformat}
Seems this is caused by HADOOP-16323 which fetches via https.

This should only be an issue in branch-2 since this was removed for KMS in 
HADOOP-13597, and httpfs in HDFS-10860

 






[jira] [Commented] (HADOOP-15169) "hadoop.ssl.enabled.protocols" should be considered in httpserver2

2019-10-15 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16952293#comment-16952293
 ] 

Hudson commented on HADOOP-15169:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17537 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17537/])
HADOOP-15169. "hadoop.ssl.enabled.protocols" should be considered in (weichiu: 
rev c39e9fc9a3ce7bf6f627c003526fa903a69c2646)
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/http/TestSSLHttpServer.java
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/http/HttpServer2.java
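
For context, a rough sketch of the idea behind the HttpServer2 change 
(illustrative only, not the committed code; the Jetty SslContextFactory call 
and the default protocol list are my assumptions):
{code}
import org.apache.hadoop.conf.Configuration;
import org.eclipse.jetty.util.ssl.SslContextFactory;

final class SslProtocolSketch {
  // Restrict the server-side SSL connector to the protocols named in
  // hadoop.ssl.enabled.protocols (the default value here is illustrative).
  static void applyEnabledProtocols(Configuration conf, SslContextFactory factory) {
    String enabled = conf.get("hadoop.ssl.enabled.protocols", "TLSv1.2");
    factory.setIncludeProtocols(enabled.split(","));
  }
}
{code}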


> "hadoop.ssl.enabled.protocols" should be considered in httpserver2
> --
>
> Key: HADOOP-15169
> URL: https://issues.apache.org/jira/browse/HADOOP-15169
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-15169-branch-2.patch, HADOOP-15169.002.patch, 
> HADOOP-15169.003.patch, HADOOP-15169.patch
>
>
> As of now *hadoop.ssl.enabled.protocols"* will not take effect for all the 
> http servers( only Datanodehttp server will use this config).






[jira] [Updated] (HADOOP-15169) "hadoop.ssl.enabled.protocols" should be considered in httpserver2

2019-10-15 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-15169:
-
Fix Version/s: 3.3.0
   Resolution: Fixed
   Status: Resolved  (was: Patch Available)

Pushed to trunk, thanks [~brahmareddy] for the initial patch, and [~aajisaka] 
and [~xyao] for the review and comments!

> "hadoop.ssl.enabled.protocols" should be considered in httpserver2
> --
>
> Key: HADOOP-15169
> URL: https://issues.apache.org/jira/browse/HADOOP-15169
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-15169-branch-2.patch, HADOOP-15169.002.patch, 
> HADOOP-15169.003.patch, HADOOP-15169.patch
>
>
> As of now *hadoop.ssl.enabled.protocols"* will not take effect for all the 
> http servers( only Datanodehttp server will use this config).






[GitHub] [hadoop] avijayanhwx closed pull request #1604: HDDS-2254. Fix flaky unit test TestContainerStateMachine#testRatisSnapshotRetention

2019-10-15 Thread GitBox
avijayanhwx closed pull request #1604: HDDS-2254. Fix flaky unit test 
TestContainerStateMachine#testRatisSnapshotRetention
URL: https://github.com/apache/hadoop/pull/1604
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services




[jira] [Commented] (HADOOP-15169) "hadoop.ssl.enabled.protocols" should be considered in httpserver2

2019-10-15 Thread Xiaoyu Yao (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-15169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16952214#comment-16952214
 ] 

Xiaoyu Yao commented on HADOOP-15169:
-

Agree, thanks [~weichiu]. +1.

There is one checkstyle issue that you can fix at commit.

> "hadoop.ssl.enabled.protocols" should be considered in httpserver2
> --
>
> Key: HADOOP-15169
> URL: https://issues.apache.org/jira/browse/HADOOP-15169
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Major
> Attachments: HADOOP-15169-branch-2.patch, HADOOP-15169.002.patch, 
> HADOOP-15169.003.patch, HADOOP-15169.patch
>
>
> As of now *hadoop.ssl.enabled.protocols"* will not take effect for all the 
> http servers( only Datanodehttp server will use this config).






[jira] [Commented] (HADOOP-16613) s3a to set fake directory marker contentType to application/x-directory

2019-10-15 Thread Jose Torres (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16952213#comment-16952213
 ] 

Jose Torres commented on HADOOP-16613:
--

Confirmed that (2) is fine; the S3 web console can't create directory objects 
that don't end in a slash, and doesn't recognize them as directories if 
manually created even when they have length 0 and content-type == 
application/x-directory.

> s3a to set fake directory marker contentType to application/x-directory
> ---
>
> Key: HADOOP-16613
> URL: https://issues.apache.org/jira/browse/HADOOP-16613
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.1, 3.1.3
>Reporter: Jose Torres
>Priority: Minor
>
> S3AFileSystem doesn't set a contentType for fake directory files, causing it 
> to be inferred as "application/octet-stream". But fake directory files 
> created through the S3 web console have content type 
> "application/x-directory". We may want to adopt the web console behavior as a 
> standard, since some systems will rely on content type and not size + 
> trailing slash to determine if an object represents a directory.






[jira] [Commented] (HADOOP-16613) s3a to set fake directory marker contentType to application/x-directory

2019-10-15 Thread Jose Torres (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16952194#comment-16952194
 ] 

Jose Torres commented on HADOOP-16613:
--

Working on preparing and testing a PR, but I'd propose:
 # Yes, but we should still consider length 0 things which don't have 
content-type == application/x-directory to be directories. Otherwise we 
wouldn't be able to read directory structures from old Hadoop versions.
 # We should still reject objects that don't end in a / as directories. 
Changing this would surely introduce weird edge cases in path parsing logic 
elsewhere. (But I'm gonna check that the web console can't create such objects.)
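
For illustration, a small sketch (a hypothetical helper of mine, not 
S3AFileSystem code) of the check that falls out of the two points above: the 
trailing slash plus zero length decide directory-ness, while the 
application/x-directory content type is only what newly written markers would 
carry, so markers from older Hadoop versions are still recognised.
{code}
// Hypothetical helper; not the actual S3AFileSystem logic.
static boolean isDirectoryMarker(String key, long length) {
  // Only keys ending in "/" are candidates, and zero length is sufficient:
  // the content type set on newly created markers is not required here.
  return key.endsWith("/") && length == 0;
}
{code}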

> s3a to set fake directory marker contentType to application/x-directory
> ---
>
> Key: HADOOP-16613
> URL: https://issues.apache.org/jira/browse/HADOOP-16613
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.2.1, 3.1.3
>Reporter: Jose Torres
>Priority: Minor
>
> S3AFileSystem doesn't set a contentType for fake directory files, causing it 
> to be inferred as "application/octet-stream". But fake directory files 
> created through the S3 web console have content type 
> "application/x-directory". We may want to adopt the web console behavior as a 
> standard, since some systems will rely on content type and not size + 
> trailing slash to determine if an object represents a directory.






[jira] [Commented] (HADOOP-14176) distcp reports beyond physical memory limits on 2.X

2019-10-15 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-14176?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16952127#comment-16952127
 ] 

Hadoop QA commented on HADOOP-14176:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} docker {color} | {color:red}  0m  
1s{color} | {color:red} Docker failed to build yetus/hadoop:da675796017. 
{color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-14176 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12859412/HADOOP-14176-branch-2.004.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16596/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> distcp reports beyond physical memory limits on 2.X
> ---
>
> Key: HADOOP-14176
> URL: https://issues.apache.org/jira/browse/HADOOP-14176
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 2.9.0
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Major
> Attachments: HADOOP-14176-branch-2.001.patch, 
> HADOOP-14176-branch-2.002.patch, HADOOP-14176-branch-2.003.patch, 
> HADOOP-14176-branch-2.004.patch
>
>
> When I run distcp, I get some errors as follows:
> {quote}
> 17/02/21 15:31:18 INFO mapreduce.Job: Task Id : 
> attempt_1487645941615_0037_m_03_0, Status : FAILED
> Container [pid=24661,containerID=container_1487645941615_0037_01_05] is 
> running beyond physical memory limits. Current usage: 1.1 GB of 1 GB physical 
> memory used; 4.0 GB of 5 GB virtual memory used. Killing container.
> Dump of the process-tree for container_1487645941615_0037_01_05 :
> |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) 
> SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
> |- 24661 24659 24661 24661 (bash) 0 0 108650496 301 /bin/bash -c 
> /usr/lib/jvm/java/bin/java -Djava.net.preferIPv4Stack=true 
> -Dhadoop.metrics.log.level=WARN  -Xmx2120m 
> -Djava.io.tmpdir=/mnt/disk4/yarn/usercache/hadoop/appcache/application_1487645941615_0037/container_1487645941615_0037_01_05/tmp
>  -Dlog4j.configuration=container-log4j.properties 
> -Dyarn.app.container.log.dir=/mnt/disk2/log/hadoop-yarn/containers/application_1487645941615_0037/container_1487645941615_0037_01_05
>  -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA 
> -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 192.168.1.208 
> 44048 attempt_1487645941615_0037_m_03_0 5 
> 1>/mnt/disk2/log/hadoop-yarn/containers/application_1487645941615_0037/container_1487645941615_0037_01_05/stdout
>  
> 2>/mnt/disk2/log/hadoop-yarn/containers/application_1487645941615_0037/container_1487645941615_0037_01_05/stderr
> |- 24665 24661 24661 24661 (java) 1766 336 4235558912 280699 
> /usr/lib/jvm/java/bin/java -Djava.net.preferIPv4Stack=true 
> -Dhadoop.metrics.log.level=WARN -Xmx2120m 
> -Djava.io.tmpdir=/mnt/disk4/yarn/usercache/hadoop/appcache/application_1487645941615_0037/container_1487645941615_0037_01_05/tmp
>  -Dlog4j.configuration=container-log4j.properties 
> -Dyarn.app.container.log.dir=/mnt/disk2/log/hadoop-yarn/containers/application_1487645941615_0037/container_1487645941615_0037_01_05
>  -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA 
> -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 192.168.1.208 
> 44048 attempt_1487645941615_0037_m_03_0 5
> Container killed on request. Exit code is 143
> Container exited with a non-zero exit code 143
> {quote}
> Digging into the code, I find that this is because the distcp configuration 
> overrides mapred-site.xml:
> {code}
> <property>
>   <name>mapred.job.map.memory.mb</name>
>   <value>1024</value>
> </property>
> <property>
>   <name>mapred.job.reduce.memory.mb</name>
>   <value>1024</value>
> </property>
> {code}
> When mapreduce.map.java.opts and mapreduce.map.memory.mb are set in 
> mapred-default.xml and their values are larger than those set in 
> distcp-default.xml, this error may occur.
> We should remove those two configurations from distcp-default.xml.
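> As an illustration of the precedence involved (a hedged sketch, not the DistCp 
> driver code; assumes org.apache.hadoop.conf.Configuration is imported): with 
> Hadoop's Configuration, resources added later override values from resources 
> added earlier, which is how a value shipped inside distcp-default.xml can 
> replace the site-level memory setting.
> {code}
> Configuration conf = new Configuration();   // loads core-default.xml / core-site.xml
> conf.addResource("distcp-default.xml");     // resources added later override earlier ones
> System.out.println(conf.getInt("mapred.job.map.memory.mb", -1));
> {code}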






[jira] [Commented] (HADOOP-10738) Dynamically adjust distcp configuration by adding distcp-site.xml into code base

2019-10-15 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-10738?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16952126#comment-16952126
 ] 

Hadoop QA commented on HADOOP-10738:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} docker {color} | {color:red}  0m  
0s{color} | {color:red} Docker failed to build yetus/hadoop:da675796017. 
{color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-10738 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12859700/HADOOP-10738-branch-2.001.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16597/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Dynamically adjust distcp configuration by adding distcp-site.xml into code 
> base
> 
>
> Key: HADOOP-10738
> URL: https://issues.apache.org/jira/browse/HADOOP-10738
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.4.0
>Reporter: Siqi Li
>Priority: Major
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-10738-branch-2.001.patch, HADOOP-10738.v1.patch, 
> HADOOP-10738.v2.patch
>
>
> For now, the configuration of distcp resides in hadoop-distcp.jar. This makes 
> it difficult to adjust the configuration dynamically.






[jira] [Commented] (HADOOP-13091) DistCp masks potential CRC check failures

2019-10-15 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-13091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16952123#comment-16952123
 ] 

Hadoop QA commented on HADOOP-13091:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  7s{color} 
| {color:red} HADOOP-13091 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-13091 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12803138/HADOOP-13091.004.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16595/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> DistCp masks potential CRC check failures
> -
>
> Key: HADOOP-13091
> URL: https://issues.apache.org/jira/browse/HADOOP-13091
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.1
>Reporter: Elliot West
>Assignee: Yiqun Lin
>Priority: Major
> Attachments: HADOOP-13091.003.patch, HADOOP-13091.004.patch, 
> HDFS-10338.001.patch, HDFS-10338.002.patch
>
>
> There appear to be edge cases whereby CRC checks may be circumvented when 
> requests for checksums from the source or target file system fail. In this 
> event CRCs could differ between the source and target and yet the DistCp copy 
> would succeed, even when the 'skip CRC check' option is not being used.
> The code in question is contained in the method 
> [{{org.apache.hadoop.tools.util.DistCpUtils#checksumsAreEqual(...)}}|https://github.com/apache/hadoop/blob/release-2.7.1/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/util/DistCpUtils.java#L457]
> Specifically this code block suggests that if there is a failure when trying 
> to read the source or target checksum then the method will return {{true}} 
> (i.e.  the checksums are equal), implying that the check succeeded. In actual 
> fact we just failed to obtain the checksum and could not perform the check.
> {code}
> try {
>   sourceChecksum = sourceChecksum != null ? sourceChecksum : 
> sourceFS.getFileChecksum(source);
>   targetChecksum = targetFS.getFileChecksum(target);
> } catch (IOException e) {
>   LOG.error("Unable to retrieve checksum for " + source + " or "
> + target, e);
> }
> return (sourceChecksum == null || targetChecksum == null ||
>   sourceChecksum.equals(targetChecksum));
> {code}
> I believe that at the very least the caught {{IOException}} should be 
> re-thrown. If this is not deemed desirable then I believe an option 
> ({{--strictCrc}}?) should be added to enforce a strict check where we require 
> that both the source and target CRCs are retrieved, are not null, and are 
> then compared for equality. If for any reason either of the CRC retrievals 
> fails, then an exception is thrown.
> Clearly some {{FileSystems}} do not support CRCs and invocations to 
> {{FileSystem.getFileChecksum(...)}} return {{null}} in these instances. I 
> would suggest that these should fail a strict CRC check to prevent users 
> developing a false sense of security in their copy pipeline.
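A rough illustration of the strict behaviour suggested above, as a minimal sketch (this is not the attached patch; the class and method names are made up for the example):

{code:java}
import java.io.IOException;
import org.apache.hadoop.fs.FileChecksum;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public final class StrictChecksumCheck {
  private StrictChecksumCheck() {}

  // Returns true only when both checksums could be retrieved, are non-null and equal.
  // Any retrieval failure propagates as an IOException instead of counting as "equal".
  public static boolean checksumsAreEqualStrict(FileSystem sourceFS, Path source,
      FileSystem targetFS, Path target) throws IOException {
    FileChecksum sourceChecksum = sourceFS.getFileChecksum(source);
    FileChecksum targetChecksum = targetFS.getFileChecksum(target);
    if (sourceChecksum == null || targetChecksum == null) {
      // e.g. a FileSystem that does not support checksums: fail the strict check.
      return false;
    }
    return sourceChecksum.equals(targetChecksum);
  }
}
{code}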



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-10738) Dynamically adjust distcp configuration by adding distcp-site.xml into code base

2019-10-15 Thread Jonathan Hung (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-10738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hung updated HADOOP-10738:
---
Target Version/s: 2.10.1  (was: 2.10.0)

> Dynamically adjust distcp configuration by adding distcp-site.xml into code 
> base
> 
>
> Key: HADOOP-10738
> URL: https://issues.apache.org/jira/browse/HADOOP-10738
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.4.0
>Reporter: Siqi Li
>Priority: Major
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-10738-branch-2.001.patch, HADOOP-10738.v1.patch, 
> HADOOP-10738.v2.patch
>
>
> For now, the configuration of distcp resides in hadoop-distcp.jar. This makes 
> it difficult to adjust the configuration dynamically.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16039) backport HADOOP-15965, "Upgrade ADLS SDK to 2.3.3" to branch-2

2019-10-15 Thread Hadoop QA (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16039?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16952114#comment-16952114
 ] 

Hadoop QA commented on HADOOP-16039:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} docker {color} | {color:red}  0m  
0s{color} | {color:red} Docker failed to build yetus/hadoop:da675796017. 
{color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-16039 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12955833/HADOOP-16039-branch-2-001.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/16594/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> backport HADOOP-15965, "Upgrade ADLS SDK to 2.3.3" to branch-2
> --
>
> Key: HADOOP-16039
> URL: https://issues.apache.org/jira/browse/HADOOP-16039
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Affects Versions: 2.9.2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-16039-001.patch, HADOOP-16039-branch-2-001.patch
>
>
> Backport the ADLS SDK 2.3.3 update to branch-2, retest. etc



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13091) DistCp masks potential CRC check failures

2019-10-15 Thread Jonathan Hung (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-13091?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hung updated HADOOP-13091:
---
Target Version/s: 2.10.1  (was: 2.10.0)

> DistCp masks potential CRC check failures
> -
>
> Key: HADOOP-13091
> URL: https://issues.apache.org/jira/browse/HADOOP-13091
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.1
>Reporter: Elliot West
>Assignee: Yiqun Lin
>Priority: Major
> Attachments: HADOOP-13091.003.patch, HADOOP-13091.004.patch, 
> HDFS-10338.001.patch, HDFS-10338.002.patch
>
>
> There appear to be edge cases whereby CRC checks may be circumvented when 
> requests for checksums from the source or target file system fail. In this 
> event CRCs could differ between the source and target and yet the DistCp copy 
> would succeed, even when the 'skip CRC check' option is not being used.
> The code in question is contained in the method 
> [{{org.apache.hadoop.tools.util.DistCpUtils#checksumsAreEqual(...)}}|https://github.com/apache/hadoop/blob/release-2.7.1/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/util/DistCpUtils.java#L457]
> Specifically this code block suggests that if there is a failure when trying 
> to read the source or target checksum then the method will return {{true}} 
> (i.e.  the checksums are equal), implying that the check succeeded. In actual 
> fact we just failed to obtain the checksum and could not perform the check.
> {code}
> try {
>   sourceChecksum = sourceChecksum != null ? sourceChecksum : 
> sourceFS.getFileChecksum(source);
>   targetChecksum = targetFS.getFileChecksum(target);
> } catch (IOException e) {
>   LOG.error("Unable to retrieve checksum for " + source + " or "
> + target, e);
> }
> return (sourceChecksum == null || targetChecksum == null ||
>   sourceChecksum.equals(targetChecksum));
> {code}
> I believe that at the very least the caught {{IOException}} should be 
> re-thrown. If this is not deemed desirable then I believe an option 
> ({{--strictCrc}}?) should be added to enforce a strict check where we require 
> that both the source and target CRCs are retrieved, are not null, and are 
> then compared for equality. If for any reason either of the CRC retrievals 
> fails, then an exception is thrown.
> Clearly some {{FileSystems}} do not support CRCs and invocations to 
> {{FileSystem.getFileChecksum(...)}} return {{null}} in these instances. I 
> would suggest that these should fail a strict CRC check to prevent users 
> developing a false sense of security in their copy pipeline.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16039) backport HADOOP-15965, "Upgrade ADLS SDK to 2.3.3" to branch-2

2019-10-15 Thread Jonathan Hung (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16039?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hung updated HADOOP-16039:
---
Target Version/s: 2.10.1  (was: 2.10.0)

> backport HADOOP-15965, "Upgrade ADLS SDK to 2.3.3" to branch-2
> --
>
> Key: HADOOP-16039
> URL: https://issues.apache.org/jira/browse/HADOOP-16039
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Affects Versions: 2.9.2
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Attachments: HADOOP-16039-001.patch, HADOOP-16039-branch-2-001.patch
>
>
> Backport the ADLS SDK 2.3.3 update to branch-2, retest. etc



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16647) Support OpenSSL 1.1.1 LTS

2019-10-15 Thread Jonathan Hung (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hung updated HADOOP-16647:
---
Target Version/s: 3.3.0, 2.10.1  (was: 2.10.0, 3.3.0)

> Support OpenSSL 1.1.1 LTS
> -
>
> Key: HADOOP-16647
> URL: https://issues.apache.org/jira/browse/HADOOP-16647
> Project: Hadoop Common
>  Issue Type: Task
>  Components: security
>Reporter: Wei-Chiu Chuang
>Priority: Major
>
> See Hadoop user mailing list 
> http://mail-archives.apache.org/mod_mbox/hadoop-user/201910.mbox/%3CCADiq6%3DweDFxHTL_7eGwDNnxVCza39y2QYQTSggfLn7mXhMLOdg%40mail.gmail.com%3E
> Hadoop 2 supports OpenSSL 1.0.2.
> Hadoop 3 supports OpenSSL 1.1.0 (HADOOP-14597) and I believe 1.0.2 too.
> Per OpenSSL blog https://www.openssl.org/policies/releasestrat.html
> * 1.1.0 is EOL 2019/09/11
> * 1.0.2 EOL 2019/12/31
> * 1.1.1 is EOL 2023/09/11 (LTS)
> Many Hadoop installations rely on the OpenSSL package provided by Linux 
> distros, but it's not clear to me whether Linux distros are going to support 
> 1.1.0/1.0.2 beyond these dates.
> We should make sure Hadoop works with OpenSSL 1.1.1, as well as document the 
> supported OpenSSL versions. Filing this jira to test/document/fix bugs.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14176) distcp reports beyond physical memory limits on 2.X

2019-10-15 Thread Jonathan Hung (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-14176?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hung updated HADOOP-14176:
---
Target Version/s: 2.10.1  (was: 2.10.0)

> distcp reports beyond physical memory limits on 2.X
> ---
>
> Key: HADOOP-14176
> URL: https://issues.apache.org/jira/browse/HADOOP-14176
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 2.9.0
>Reporter: Fei Hui
>Assignee: Fei Hui
>Priority: Major
> Attachments: HADOOP-14176-branch-2.001.patch, 
> HADOOP-14176-branch-2.002.patch, HADOOP-14176-branch-2.003.patch, 
> HADOOP-14176-branch-2.004.patch
>
>
> When i run distcp,  i get some errors as follow
> {quote}
> 17/02/21 15:31:18 INFO mapreduce.Job: Task Id : 
> attempt_1487645941615_0037_m_03_0, Status : FAILED
> Container [pid=24661,containerID=container_1487645941615_0037_01_05] is 
> running beyond physical memory limits. Current usage: 1.1 GB of 1 GB physical 
> memory used; 4.0 GB of 5 GB virtual memory used. Killing container.
> Dump of the process-tree for container_1487645941615_0037_01_05 :
> |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) 
> SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
> |- 24661 24659 24661 24661 (bash) 0 0 108650496 301 /bin/bash -c 
> /usr/lib/jvm/java/bin/java -Djava.net.preferIPv4Stack=true 
> -Dhadoop.metrics.log.level=WARN  -Xmx2120m 
> -Djava.io.tmpdir=/mnt/disk4/yarn/usercache/hadoop/appcache/application_1487645941615_0037/container_1487645941615_0037_01_05/tmp
>  -Dlog4j.configuration=container-log4j.properties 
> -Dyarn.app.container.log.dir=/mnt/disk2/log/hadoop-yarn/containers/application_1487645941615_0037/container_1487645941615_0037_01_05
>  -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA 
> -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 192.168.1.208 
> 44048 attempt_1487645941615_0037_m_03_0 5 
> 1>/mnt/disk2/log/hadoop-yarn/containers/application_1487645941615_0037/container_1487645941615_0037_01_05/stdout
>  
> 2>/mnt/disk2/log/hadoop-yarn/containers/application_1487645941615_0037/container_1487645941615_0037_01_05/stderr
> |- 24665 24661 24661 24661 (java) 1766 336 4235558912 280699 
> /usr/lib/jvm/java/bin/java -Djava.net.preferIPv4Stack=true 
> -Dhadoop.metrics.log.level=WARN -Xmx2120m 
> -Djava.io.tmpdir=/mnt/disk4/yarn/usercache/hadoop/appcache/application_1487645941615_0037/container_1487645941615_0037_01_05/tmp
>  -Dlog4j.configuration=container-log4j.properties 
> -Dyarn.app.container.log.dir=/mnt/disk2/log/hadoop-yarn/containers/application_1487645941615_0037/container_1487645941615_0037_01_05
>  -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA 
> -Dhadoop.root.logfile=syslog org.apache.hadoop.mapred.YarnChild 192.168.1.208 
> 44048 attempt_1487645941615_0037_m_03_0 5
> Container killed on request. Exit code is 143
> Container exited with a non-zero exit code 143
> {quote}
> Digging into the code, I found that this happens because the distcp configuration 
> overrides mapred-site.xml:
> {code}
> <property>
>   <name>mapred.job.map.memory.mb</name>
>   <value>1024</value>
> </property>
> <property>
>   <name>mapred.job.reduce.memory.mb</name>
>   <value>1024</value>
> </property>
> {code}
> When mapreduce.map.java.opts and mapreduce.map.memory.mb are set in 
> mapred-default.xml and their values are larger than the ones set in 
> distcp-default.xml, this error may occur.
> We should remove those two configurations from distcp-default.xml.
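A minimal sketch of the resource layering described above (purely illustrative, not DistCp's actual bootstrap code; it assumes distcp-default.xml is on the classpath and relies on the deprecated-key mapping from mapred.job.map.memory.mb to mapreduce.map.memory.mb):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapred.JobConf;

public final class DistcpMemoryCheck {
  private DistcpMemoryCheck() {}

  public static void main(String[] args) {
    // JobConf pulls in mapred-default.xml and mapred-site.xml.
    Configuration conf = new JobConf();
    // distcp-default.xml is then layered on top, so its 1024 MB values
    // replace whatever the cluster's mapred-site.xml configured.
    conf.addResource("distcp-default.xml");
    System.out.println("mapreduce.map.memory.mb = "
        + conf.get("mapreduce.map.memory.mb"));
    // The JVM heap option is not touched by distcp-default.xml, so an -Xmx larger
    // than the 1 GB container limit survives and the container gets killed.
    System.out.println("mapreduce.map.java.opts = "
        + conf.get("mapreduce.map.java.opts"));
  }
}
{code}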



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16643) Update netty4 to the latest 4.1.42

2019-10-15 Thread Hudson (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16643?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16952084#comment-16952084
 ] 

Hudson commented on HADOOP-16643:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #17536 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/17536/])
HADOOP-16643. Update netty4 to the latest 4.1.42. Contributed by Lisheng 
(weichiu: rev 85af77c75768416db24ca506fd1704ce664ca92f)
* (edit) hadoop-project/pom.xml


> Update netty4 to the latest 4.1.42
> --
>
> Key: HADOOP-16643
> URL: https://issues.apache.org/jira/browse/HADOOP-16643
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Lisheng Sun
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-16643.001.patch
>
>
> The latest netty is out. Let's update it.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #1619: HADOOP-16478. S3Guard bucket-info fails if the caller lacks s3:GetBucketLocation

2019-10-15 Thread GitBox
hadoop-yetus commented on issue #1619: HADOOP-16478. S3Guard bucket-info fails 
if the caller lacks s3:GetBucketLocation
URL: https://github.com/apache/hadoop/pull/1619#issuecomment-542287508
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 2134 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 3 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1080 | trunk passed |
   | +1 | compile | 36 | trunk passed |
   | +1 | checkstyle | 28 | trunk passed |
   | +1 | mvnsite | 41 | trunk passed |
   | +1 | shadedclient | 794 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 29 | trunk passed |
   | 0 | spotbugs | 60 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | +1 | findbugs | 57 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 34 | the patch passed |
   | +1 | compile | 28 | the patch passed |
   | +1 | javac | 28 | the patch passed |
   | +1 | checkstyle | 21 | the patch passed |
   | +1 | mvnsite | 33 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 787 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 27 | hadoop-tools_hadoop-aws generated 0 new + 4 unchanged 
- 1 fixed = 4 total (was 5) |
   | +1 | findbugs | 62 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 89 | hadoop-aws in the patch passed. |
   | +1 | asflicense | 34 | The patch does not generate ASF License warnings. |
   | | | 5407 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.2 Server=19.03.2 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1619/4/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1619 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux ff3661cb80c0 4.15.0-65-generic #74-Ubuntu SMP Tue Sep 17 
17:06:04 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 336abbd |
   | Default Java | 1.8.0_222 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1619/4/testReport/ |
   | Max. process+thread count | 433 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1619/4/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16643) Update netty4 to the latest 4.1.42

2019-10-15 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16643?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-16643:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Thanks [~leosun08]!

> Update netty4 to the latest 4.1.42
> --
>
> Key: HADOOP-16643
> URL: https://issues.apache.org/jira/browse/HADOOP-16643
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Lisheng Sun
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-16643.001.patch
>
>
> The latest netty is out. Let's update it.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-16643) Update netty4 to the latest 4.1.42

2019-10-15 Thread Wei-Chiu Chuang (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16643?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-16643:
-
Fix Version/s: 3.3.0

> Update netty4 to the latest 4.1.42
> --
>
> Key: HADOOP-16643
> URL: https://issues.apache.org/jira/browse/HADOOP-16643
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Lisheng Sun
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-16643.001.patch
>
>
> The latest netty is out. Let's update it.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #1656: HADOOP-16579. Upgrade to Curator 4.2.0 and ZooKeeper 3.5.5

2019-10-15 Thread GitBox
hadoop-yetus commented on issue #1656: HADOOP-16579. Upgrade to Curator 4.2.0 
and ZooKeeper 3.5.5
URL: https://github.com/apache/hadoop/pull/1656#issuecomment-542271513
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 39 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 0 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 2 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 68 | Maven dependency ordering for branch |
   | +1 | mvninstall | 1065 | trunk passed |
   | +1 | compile | 1013 | trunk passed |
   | +1 | checkstyle | 167 | trunk passed |
   | +1 | mvnsite | 266 | trunk passed |
   | +1 | shadedclient | 1219 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 252 | trunk passed |
   | 0 | spotbugs | 111 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | 0 | findbugs | 34 | branch/hadoop-project no findbugs output file 
(findbugsXml.xml) |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 30 | Maven dependency ordering for patch |
   | +1 | mvninstall | 157 | the patch passed |
   | +1 | compile | 969 | the patch passed |
   | +1 | javac | 969 | the patch passed |
   | -0 | checkstyle | 171 | root: The patch generated 10 new + 531 unchanged - 
3 fixed = 541 total (was 534) |
   | +1 | mvnsite | 264 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 1 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 748 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 251 | the patch passed |
   | 0 | findbugs | 34 | hadoop-project has no data from findbugs |
   ||| _ Other Tests _ |
   | +1 | unit | 33 | hadoop-project in the patch passed. |
   | +1 | unit | 200 | hadoop-auth in the patch passed. |
   | +1 | unit | 555 | hadoop-common in the patch passed. |
   | -1 | unit | 523 | hadoop-registry in the patch failed. |
   | -1 | unit | 5008 | hadoop-yarn-server-resourcemanager in the patch failed. 
|
   | +1 | asflicense | 55 | The patch does not generate ASF License warnings. |
   | | | 13711 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | Failed junit tests | hadoop.registry.cli.TestRegistryCli |
   |   | hadoop.registry.secure.TestSecureRegistry |
   |   | hadoop.registry.client.impl.TestCuratorService |
   |   | hadoop.registry.operations.TestRegistryOperations |
   |   | 
hadoop.yarn.server.resourcemanager.scheduler.fair.TestFairSchedulerPreemption |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.3 Server=19.03.3 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1656/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1656 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml |
   | uname | Linux e75a8751066a 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 336abbd |
   | Default Java | 1.8.0_222 |
   | checkstyle | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1656/2/artifact/out/diff-checkstyle-root.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1656/2/artifact/out/patch-unit-hadoop-common-project_hadoop-registry.txt
 |
   | unit | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1656/2/artifact/out/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-server_hadoop-yarn-server-resourcemanager.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1656/2/testReport/ |
   | Max. process+thread count | 5413 (vs. ulimit of 5500) |
   | modules | C: hadoop-project hadoop-common-project/hadoop-auth 
hadoop-common-project/hadoop-common hadoop-common-project/hadoop-registry 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager
 U: . |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1656/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org

[GitHub] [hadoop] steveloughran commented on issue #1619: HADOOP-16478. S3Guard bucket-info fails if the caller lacks s3:GetBucketLocation

2019-10-15 Thread GitBox
steveloughran commented on issue #1619: HADOOP-16478. S3Guard bucket-info fails 
if the caller lacks s3:GetBucketLocation
URL: https://github.com/apache/hadoop/pull/1619#issuecomment-542244630
 
 
   Reviewed myself; minor tuning.
   
   tested: s3 ireland w/ s3guard


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1619: HADOOP-16478. S3Guard bucket-info fails if the caller lacks s3:GetBucketLocation

2019-10-15 Thread GitBox
hadoop-yetus removed a comment on issue #1619: HADOOP-16478. S3Guard 
bucket-info fails if the caller lacks s3:GetBucketLocation
URL: https://github.com/apache/hadoop/pull/1619#issuecomment-541135916
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 43 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 3 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1104 | trunk passed |
   | +1 | compile | 35 | trunk passed |
   | +1 | checkstyle | 29 | trunk passed |
   | +1 | mvnsite | 39 | trunk passed |
   | +1 | shadedclient | 813 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 28 | trunk passed |
   | 0 | spotbugs | 61 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | +1 | findbugs | 58 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 34 | the patch passed |
   | +1 | compile | 29 | the patch passed |
   | +1 | javac | 29 | the patch passed |
   | +1 | checkstyle | 21 | the patch passed |
   | +1 | mvnsite | 34 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 809 | patch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 29 | hadoop-tools_hadoop-aws generated 0 new + 4 unchanged 
- 1 fixed = 4 total (was 5) |
   | +1 | findbugs | 62 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 75 | hadoop-aws in the patch passed. |
   | +1 | asflicense | 33 | The patch does not generate ASF License warnings. |
   | | | 3381 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.3 Server=19.03.3 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1619/3/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1619 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 529d7def58f5 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / ec86f42 |
   | Default Java | 1.8.0_222 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1619/3/testReport/ |
   | Max. process+thread count | 450 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1619/3/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus removed a comment on issue #1619: HADOOP-16478. S3Guard bucket-info fails if the caller lacks s3:GetBucketLocation

2019-10-15 Thread GitBox
hadoop-yetus removed a comment on issue #1619: HADOOP-16478. S3Guard 
bucket-info fails if the caller lacks s3:GetBucketLocation
URL: https://github.com/apache/hadoop/pull/1619#issuecomment-541130201
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 92 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 1 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 3 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | +1 | mvninstall | 1333 | trunk passed |
   | +1 | compile | 32 | trunk passed |
   | +1 | checkstyle | 23 | trunk passed |
   | +1 | mvnsite | 36 | trunk passed |
   | +1 | shadedclient | 854 | branch has no errors when building and testing 
our client artifacts. |
   | +1 | javadoc | 26 | trunk passed |
   | 0 | spotbugs | 59 | Used deprecated FindBugs config; considering switching 
to SpotBugs. |
   | +1 | findbugs | 58 | trunk passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 32 | the patch passed |
   | +1 | compile | 26 | the patch passed |
   | +1 | javac | 26 | the patch passed |
   | +1 | checkstyle | 19 | the patch passed |
   | +1 | mvnsite | 30 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedclient | 926 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 30 | hadoop-tools_hadoop-aws generated 1 new + 5 unchanged 
- 0 fixed = 6 total (was 5) |
   | +1 | findbugs | 71 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 102 | hadoop-aws in the patch passed. |
   | +1 | asflicense | 37 | The patch does not generate ASF License warnings. |
   | | | 3837 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.3 Server=19.03.3 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1619/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1619 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle |
   | uname | Linux 88e94afb263b 4.15.0-54-generic #58-Ubuntu SMP Mon Jun 24 
10:55:24 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / ec86f42 |
   | Default Java | 1.8.0_222 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1619/2/artifact/out/diff-javadoc-javadoc-hadoop-tools_hadoop-aws.txt
 |
   |  Test Results | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1619/2/testReport/ |
   | Max. process+thread count | 337 (vs. ulimit of 5500) |
   | modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
   | Console output | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1619/2/console |
   | versions | git=2.7.4 maven=3.3.9 findbugs=3.1.0-RC1 |
   | Powered by | Apache Yetus 0.10.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16510) [hadoop-common] Fix order of actual and expected expression in assert statements

2019-10-15 Thread Zoltan Siegl (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16951901#comment-16951901
 ] 

Zoltan Siegl commented on HADOOP-16510:
---

LGTM +1, non-binding

> [hadoop-common] Fix order of actual and expected expression in assert 
> statements
> 
>
> Key: HADOOP-16510
> URL: https://issues.apache.org/jira/browse/HADOOP-16510
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.2.0
>Reporter: Adam Antal
>Assignee: Adam Antal
>Priority: Major
> Attachments: HADOOP-16510.001.patch, HADOOP-16510.002.patch, 
> HADOOP-16510.003.patch
>
>
> Fix the order of actual and expected expressions in assert statements, which gives 
> a misleading message when a test case fails. The attached file has some of the places 
> where the order is wrong.
> {code:java}
> [ERROR] 
> testNodeRemovalGracefully(org.apache.hadoop.yarn.server.resourcemanager.TestResourceTrackerService)
>   Time elapsed: 3.385 s  <<< FAILURE!
> java.lang.AssertionError: Shutdown nodes should be 0 now expected:<1> but 
> was:<0>
> {code}
> In the long term, [AssertJ|http://joel-costigliola.github.io/assertj/] can be 
> used for new test cases, which avoids such mistakes.
> This is a follow-up jira for the hadoop-common project.
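A minimal sketch of the difference (illustrative only; the local variable below stands in for whatever value the real test reads from the cluster):

{code:java}
import static org.assertj.core.api.Assertions.assertThat;
import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class AssertOrderExample {
  @Test
  public void shutdownNodeCount() {
    int shutdownNodes = 0;  // stands in for the value read from the cluster metrics
    // JUnit: the expected value must come first, the actual value second;
    // swapping them produces the misleading "expected:<1> but was:<0>" style message.
    assertEquals("Shutdown nodes should be 0 now", 0, shutdownNodes);
    // AssertJ: the actual value is the subject, so the order cannot be mixed up.
    assertThat(shutdownNodes).as("Shutdown nodes should be 0 now").isEqualTo(0);
  }
}
{code}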



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-16634) S3A ITest failures without S3Guard

2019-10-15 Thread Steve Loughran (Jira)


 [ 
https://issues.apache.org/jira/browse/HADOOP-16634?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-16634.
-
Fix Version/s: 3.3.0
   Resolution: Fixed

fixed while debugging HADOOP-16635

> S3A ITest failures without S3Guard
> --
>
> Key: HADOOP-16634
> URL: https://issues.apache.org/jira/browse/HADOOP-16634
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 3.3.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Major
> Fix For: 3.3.0
>
>
> This has probably been lurking for a while but we hadn't noticed because if 
> your auth-keys xml settings mark a specific store as guarded, then the maven 
> CLI settings aren't picked up. Remove those bindings and things fail.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16510) [hadoop-common] Fix order of actual and expected expression in assert statements

2019-10-15 Thread Adam Antal (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16510?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16951864#comment-16951864
 ] 

Adam Antal commented on HADOOP-16510:
-

I cannot reproduce the TestFixKerberosTicketOrder.test failure locally - I 
suppose this is an intermittent issue.
The javac error is still not relevant, since the mentioned deprecated method was 
present before the patch and I simply didn't remove it.

> [hadoop-common] Fix order of actual and expected expression in assert 
> statements
> 
>
> Key: HADOOP-16510
> URL: https://issues.apache.org/jira/browse/HADOOP-16510
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 3.2.0
>Reporter: Adam Antal
>Assignee: Adam Antal
>Priority: Major
> Attachments: HADOOP-16510.001.patch, HADOOP-16510.002.patch, 
> HADOOP-16510.003.patch
>
>
> Fix the order of actual and expected expressions in assert statements, which gives 
> a misleading message when a test case fails. The attached file has some of the places 
> where the order is wrong.
> {code:java}
> [ERROR] 
> testNodeRemovalGracefully(org.apache.hadoop.yarn.server.resourcemanager.TestResourceTrackerService)
>   Time elapsed: 3.385 s  <<< FAILURE!
> java.lang.AssertionError: Shutdown nodes should be 0 now expected:<1> but 
> was:<0>
> {code}
> In the long term, [AssertJ|http://joel-costigliola.github.io/assertj/] can be 
> used for new test cases, which avoids such mistakes.
> This is a follow-up jira for the hadoop-common project.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16580) Disable retry of FailoverOnNetworkExceptionRetry in case of AccessControlException

2019-10-15 Thread Adam Antal (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16951859#comment-16951859
 ] 

Adam Antal commented on HADOOP-16580:
-

One last checkstyle issue should be ignored because the surrounding code is also 
mis-indented.

If you agree with the javadocs, could you please commit this, [~snemeth]?

> Disable retry of FailoverOnNetworkExceptionRetry in case of 
> AccessControlException
> --
>
> Key: HADOOP-16580
> URL: https://issues.apache.org/jira/browse/HADOOP-16580
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 3.3.0
>Reporter: Adam Antal
>Assignee: Adam Antal
>Priority: Major
> Attachments: HADOOP-16580.001.patch, HADOOP-16580.002.patch, 
> HADOOP-16580.003.patch
>
>
> HADOOP-14982 handled the case where a SaslException is thrown. The issue 
> still persists, since the exception that is thrown is an 
> *AccessControlException* because the user has no Kerberos credentials.
> My suggestion is that we should add this case as well to 
> {{FailoverOnNetworkExceptionRetry}}.
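A minimal sketch of the proposed check (not the actual FailoverOnNetworkExceptionRetry code; the class and method names below are made up for illustration): walk the cause chain and treat an AccessControlException as a terminal failure, because missing Kerberos credentials will not be fixed by retrying or failing over.

{code:java}
import java.io.IOException;
import org.apache.hadoop.security.AccessControlException;

public final class RetryDecision {
  private RetryDecision() {}

  // True when the failure (or any of its causes) is an AccessControlException,
  // in which case the retry policy should FAIL instead of FAILOVER_AND_RETRY.
  public static boolean shouldFailImmediately(Exception e) {
    for (Throwable t = e; t != null; t = t.getCause()) {
      if (t instanceof AccessControlException) {
        return true;
      }
    }
    return false;
  }

  public static void main(String[] args) {
    IOException wrapped = new IOException(new AccessControlException("no TGT"));
    System.out.println(shouldFailImmediately(wrapped));  // prints: true
  }
}
{code}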



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on issue #1619: HADOOP-16478. S3Guard bucket-info fails if the caller lacks s3:GetBucketLocation

2019-10-15 Thread GitBox
steveloughran commented on issue #1619: HADOOP-16478. S3Guard bucket-info fails 
if the caller lacks s3:GetBucketLocation
URL: https://github.com/apache/hadoop/pull/1619#issuecomment-542140848
 
 
   @sidseth @bgaborg can you look at this. It fixes two real issues


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on issue #1621: HADOOP-16640. WASB: Override getCanonicalServiceName() to return URI

2019-10-15 Thread GitBox
steveloughran commented on issue #1621: HADOOP-16640. WASB: Override 
getCanonicalServiceName() to return URI
URL: https://github.com/apache/hadoop/pull/1621#issuecomment-542140083
 
 
   
   small change to the tests proposed, but LGTM.
   
   I do worry that a lot of the abfs config is hidden in the javadocs. Someone 
should fix that with more user docs. I know it's not that relevant for managed 
cluster deployments (HD/I etc), but for people talking to abfs externally, it 
really matters.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] steveloughran commented on a change in pull request #1621: HADOOP-16640. WASB: Override getCanonicalServiceName() to return URI

2019-10-15 Thread GitBox
steveloughran commented on a change in pull request #1621: HADOOP-16640. WASB: 
Override getCanonicalServiceName() to return URI
URL: https://github.com/apache/hadoop/pull/1621#discussion_r334862383
 
 

 ##
 File path: 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/ITestWasbUriAndConfiguration.java
 ##
 @@ -640,4 +640,28 @@ public void testUserAgentConfig() throws Exception {
   FileSystem.closeAll();
 }
   }
+
+  @Test
+  public void testCanonicalServiceName() throws Exception {
+AzureBlobStorageTestAccount testAccount = 
AzureBlobStorageTestAccount.createMock();
+Configuration conf = testAccount.getFileSystem().getConf();
+String authority = testAccount.getFileSystem().getUri().getAuthority();
+URI defaultUri = new URI("wasbs", authority, null, null, null);
+conf.set(FS_DEFAULT_NAME_KEY, defaultUri.toString());
+
+final FileSystem fs0 =  FileSystem.get(conf);
+// Default getCanonicalServiceName() will try to resolve the host to IP,
+// because the mock container does not exist, this call is expected to 
fail.
+intercept(IllegalArgumentException.class,
+"java.net.UnknownHostException",
+()-> {
+  fs0.getCanonicalServiceName();
+});
+
+// clear fs cache
+FileSystem.closeAll();
 
 Review comment:
   you don't need to do that , just call `FileSystem.newInstance(defaultURI, 
conf)`, remembering to close the new FS afterwards
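A minimal sketch of that suggestion (the URI is a placeholder; actually running it would need the azure connector and real credentials on the classpath):

{code:java}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public final class NewInstanceExample {
  private NewInstanceExample() {}

  public static void main(String[] args) throws Exception {
    // Placeholder account/container; in the test, defaultUri and conf come from
    // the mock AzureBlobStorageTestAccount.
    URI defaultUri = URI.create("wasbs://container@account.blob.core.windows.net");
    Configuration conf = new Configuration();
    // newInstance() bypasses the shared FileSystem cache, so there is no need to
    // call FileSystem.closeAll(); just close this instance when done.
    try (FileSystem fs = FileSystem.newInstance(defaultUri, conf)) {
      System.out.println(fs.getCanonicalServiceName());
    }
  }
}
{code}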


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] supratimdeka commented on a change in pull request #1637: HDDS-2206. Separate handling for OMException and IOException in the Ozone Manager. Contributed by Supratim Deka

2019-10-15 Thread GitBox
supratimdeka commented on a change in pull request #1637: HDDS-2206. Separate 
handling for OMException and IOException in the Ozone Manager. Contributed by 
Supratim Deka
URL: https://github.com/apache/hadoop/pull/1637#discussion_r334861662
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/om/OzoneManager.java
 ##
 @@ -200,15 +200,7 @@
 import static org.apache.hadoop.ozone.OzoneConsts.OM_METRICS_FILE;
 import static org.apache.hadoop.ozone.OzoneConsts.OM_METRICS_TEMP_FILE;
 import static org.apache.hadoop.ozone.OzoneConsts.RPC_PORT;
-import static org.apache.hadoop.ozone.om.OMConfigKeys.OZONE_OM_ADDRESS_KEY;
-import static 
org.apache.hadoop.ozone.om.OMConfigKeys.OZONE_OM_HANDLER_COUNT_DEFAULT;
-import static 
org.apache.hadoop.ozone.om.OMConfigKeys.OZONE_OM_HANDLER_COUNT_KEY;
-import static 
org.apache.hadoop.ozone.om.OMConfigKeys.OZONE_OM_KERBEROS_KEYTAB_FILE_KEY;
-import static 
org.apache.hadoop.ozone.om.OMConfigKeys.OZONE_OM_KERBEROS_PRINCIPAL_KEY;
-import static 
org.apache.hadoop.ozone.om.OMConfigKeys.OZONE_OM_METRICS_SAVE_INTERVAL;
-import static 
org.apache.hadoop.ozone.om.OMConfigKeys.OZONE_OM_METRICS_SAVE_INTERVAL_DEFAULT;
-import static org.apache.hadoop.ozone.om.OMConfigKeys.OZONE_OM_USER_MAX_VOLUME;
-import static 
org.apache.hadoop.ozone.om.OMConfigKeys.OZONE_OM_USER_MAX_VOLUME_DEFAULT;
+import static org.apache.hadoop.ozone.om.OMConfigKeys.*;
 
 Review comment:
   done


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] supratimdeka commented on a change in pull request #1637: HDDS-2206. Separate handling for OMException and IOException in the Ozone Manager. Contributed by Supratim Deka

2019-10-15 Thread GitBox
supratimdeka commented on a change in pull request #1637: HDDS-2206. Separate 
handling for OMException and IOException in the Ozone Manager. Contributed by 
Supratim Deka
URL: https://github.com/apache/hadoop/pull/1637#discussion_r334861818
 
 

 ##
 File path: hadoop-hdds/common/src/main/resources/ozone-default.xml
 ##
 @@ -1641,6 +1641,20 @@
 
   
 
+  
+ozone.om.exception.stacktrace.propagate
 
 Review comment:
   done the change in new PR
   https://github.com/apache/hadoop-ozone/pull/12


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] supratimdeka commented on a change in pull request #1637: HDDS-2206. Separate handling for OMException and IOException in the Ozone Manager. Contributed by Supratim Deka

2019-10-15 Thread GitBox
supratimdeka commented on a change in pull request #1637: HDDS-2206. Separate 
handling for OMException and IOException in the Ozone Manager. Contributed by 
Supratim Deka
URL: https://github.com/apache/hadoop/pull/1637#discussion_r334861542
 
 

 ##
 File path: 
hadoop-ozone/ozone-manager/src/main/java/org/apache/hadoop/ozone/protocolPB/OzoneManagerProtocolServerSideTranslatorPB.java
 ##
 @@ -219,7 +244,7 @@ private OMResponse submitRequestDirectlyToOM(OMRequest 
request) {
 omClientResponse = omClientRequest.validateAndUpdateCache(
 ozoneManager, index, ozoneManagerDoubleBuffer::add);
   }
-} catch(IOException ex) {
+} catch(OMException ex) {
 
 Review comment:
   Addressed comments in
   https://github.com/apache/hadoop-ozone/pull/12


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16579) Upgrade to Apache Curator 4.2.0 in Hadoop

2019-10-15 Thread Mate Szalay-Beko (Jira)


[ 
https://issues.apache.org/jira/browse/HADOOP-16579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16951708#comment-16951708
 ] 

Mate Szalay-Beko commented on HADOOP-16579:
---

I fixed some checkstyle errors. We also ran into some deprecated function calls 
in the Curator API; I hope I was able to find and fix all of these as well.

> Upgrade to Apache Curator 4.2.0 in Hadoop
> -
>
> Key: HADOOP-16579
> URL: https://issues.apache.org/jira/browse/HADOOP-16579
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Mate Szalay-Beko
>Assignee: Norbert Kalmár
>Priority: Major
>
> Currently in Hadoop we are using [ZooKeeper version 
> 3.4.13|https://github.com/apache/hadoop/blob/7f9073132dcc9db157a6792635d2ed099f2ef0d2/hadoop-project/pom.xml#L90].
>  ZooKeeper 3.5.5 is the latest stable Apache ZooKeeper release. It contains 
> many new features (including SSL related improvements which can be very 
> important for production use; see [the release 
> notes|https://zookeeper.apache.org/doc/r3.5.5/releasenotes.html]).
> Apache Curator is a high level ZooKeeper client library, that makes it easier 
> to use the low level ZooKeeper API. Currently [in Hadoop we are using Curator 
> 2.13.0|https://github.com/apache/hadoop/blob/7f9073132dcc9db157a6792635d2ed099f2ef0d2/hadoop-project/pom.xml#L91]
>  and [in Ozone we use Curator 
> 2.12.0|https://github.com/apache/hadoop/blob/7f9073132dcc9db157a6792635d2ed099f2ef0d2/pom.ozone.xml#L146].
> Curator 2.x supports only the ZooKeeper 3.4.x releases, while Curator 
> 3.x is compatible only with the new ZooKeeper 3.5.x releases. Fortunately, 
> the latest Curator 4.x versions are compatible with both ZooKeeper 3.4.x and 
> 3.5.x. (see [the relevant Curator 
> page|https://curator.apache.org/zk-compatibility.html]). Many Apache projects 
> have already migrated to Curator 4 (like HBase, Phoenix, Druid, etc.), other 
> components are doing it right now (e.g. Hive).
> *The aims of this task are* to:
>  - change Curator version in Hadoop to the latest stable 4.x version 
> (currently 4.2.0)
>  - also make sure we don't have multiple ZooKeeper versions in the classpath 
> to avoid runtime problems (it is 
> [recommended|https://curator.apache.org/zk-compatibility.html] to exclude the 
> ZooKeeper which come with Curator, so that there will be only a single 
> ZooKeeper version used runtime in Hadoop)
> In this ticket we still don't want to change the default ZooKeeper version in 
> Hadoop; we only want to make it possible for the community to build / use 
> Hadoop with the new ZooKeeper (e.g. if they need to secure the ZooKeeper 
> communication with SSL, which is only supported in the new ZooKeeper 
> version). Upgrading to Curator 4.x should keep Hadoop compatible with 
> both ZooKeeper 3.4 and 3.5.
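A minimal smoke-test sketch under the assumptions above (a ZooKeeper ensemble at localhost:2181; the zk34CompatibilityMode builder flag is taken from the Curator compatibility page linked above, so treat it as an assumption rather than verified against this patch):

{code:java}
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.retry.ExponentialBackoffRetry;

public final class CuratorSmokeTest {
  private CuratorSmokeTest() {}

  public static void main(String[] args) throws Exception {
    CuratorFramework client = CuratorFrameworkFactory.builder()
        .connectString("localhost:2181")
        .retryPolicy(new ExponentialBackoffRetry(1000, 3))
        // Only needed while the server side is still ZooKeeper 3.4.x.
        .zk34CompatibilityMode(true)
        .build();
    client.start();
    client.blockUntilConnected();
    // List the root znodes as a trivial end-to-end check.
    System.out.println(client.getChildren().forPath("/"));
    client.close();
  }
}
{code}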



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] xiaoxiaopan118 opened a new pull request #1658: Merge pull request #1 from apache/trunk

2019-10-15 Thread GitBox
xiaoxiaopan118 opened a new pull request #1658: Merge pull request #1 from 
apache/trunk
URL: https://github.com/apache/hadoop/pull/1658
 
 
   new pull
   
   ## NOTICE
   
   Please create an issue in ASF JIRA before opening a pull request,
   and you need to set the title of the pull request which starts with
   the corresponding JIRA issue number. (e.g. HADOOP-X. Fix a typo in YYY.)
   For more details, please see 
https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] [hadoop] hadoop-yetus commented on issue #1650: HDDS-2034. Async RATIS pipeline creation and destroy through datanode…

2019-10-15 Thread GitBox
hadoop-yetus commented on issue #1650: HDDS-2034. Async RATIS pipeline creation 
and destroy through datanode…
URL: https://github.com/apache/hadoop/pull/1650#issuecomment-542052548
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 41 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | dupname | 2 | No case conflicting files found. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 16 new or modified test 
files. |
   ||| _ trunk Compile Tests _ |
   | 0 | mvndep | 67 | Maven dependency ordering for branch |
   | -1 | mvninstall | 37 | hadoop-hdds in trunk failed. |
   | -1 | mvninstall | 40 | hadoop-ozone in trunk failed. |
   | -1 | compile | 20 | hadoop-hdds in trunk failed. |
   | -1 | compile | 16 | hadoop-ozone in trunk failed. |
   | +1 | checkstyle | 60 | trunk passed |
   | +1 | mvnsite | 0 | trunk passed |
   | +1 | shadedclient | 861 | branch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 23 | hadoop-hdds in trunk failed. |
   | -1 | javadoc | 20 | hadoop-ozone in trunk failed. |
   | 0 | spotbugs | 964 | Used deprecated FindBugs config; considering 
switching to SpotBugs. |
   | -1 | findbugs | 34 | hadoop-hdds in trunk failed. |
   | -1 | findbugs | 21 | hadoop-ozone in trunk failed. |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 29 | Maven dependency ordering for patch |
   | -1 | mvninstall | 34 | hadoop-hdds in the patch failed. |
   | -1 | mvninstall | 36 | hadoop-ozone in the patch failed. |
   | -1 | compile | 24 | hadoop-hdds in the patch failed. |
   | -1 | compile | 19 | hadoop-ozone in the patch failed. |
   | -1 | cc | 24 | hadoop-hdds in the patch failed. |
   | -1 | cc | 19 | hadoop-ozone in the patch failed. |
   | -1 | javac | 24 | hadoop-hdds in the patch failed. |
   | -1 | javac | 19 | hadoop-ozone in the patch failed. |
   | +1 | checkstyle | 57 | the patch passed |
   | +1 | mvnsite | 0 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 2 | The patch has no ill-formed XML file. |
   | +1 | shadedclient | 717 | patch has no errors when building and testing 
our client artifacts. |
   | -1 | javadoc | 23 | hadoop-hdds in the patch failed. |
   | -1 | javadoc | 20 | hadoop-ozone in the patch failed. |
   | -1 | findbugs | 31 | hadoop-hdds in the patch failed. |
   | -1 | findbugs | 21 | hadoop-ozone in the patch failed. |
   ||| _ Other Tests _ |
   | -1 | unit | 28 | hadoop-hdds in the patch failed. |
   | -1 | unit | 27 | hadoop-ozone in the patch failed. |
   | +1 | asflicense | 34 | The patch does not generate ASF License warnings. |
   | | | 2480 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=19.03.3 Server=19.03.3 base: 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1650/2/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hadoop/pull/1650 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall 
mvnsite unit shadedclient findbugs checkstyle xml cc |
   | uname | Linux 9ab173466796 4.15.0-58-generic #64-Ubuntu SMP Tue Aug 6 
11:12:41 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | personality/hadoop.sh |
   | git revision | trunk / 336abbd |
   | Default Java | 1.8.0_222 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1650/2/artifact/out/branch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1650/2/artifact/out/branch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1650/2/artifact/out/branch-compile-hadoop-hdds.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1650/2/artifact/out/branch-compile-hadoop-ozone.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1650/2/artifact/out/branch-javadoc-hadoop-hdds.txt
 |
   | javadoc | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1650/2/artifact/out/branch-javadoc-hadoop-ozone.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1650/2/artifact/out/branch-findbugs-hadoop-hdds.txt
 |
   | findbugs | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1650/2/artifact/out/branch-findbugs-hadoop-ozone.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1650/2/artifact/out/patch-mvninstall-hadoop-hdds.txt
 |
   | mvninstall | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1650/2/artifact/out/patch-mvninstall-hadoop-ozone.txt
 |
   | compile | 
https://builds.apache.org/job/hadoop-multibranch/job/PR-1650/2/artifact/out/patch-compile-hadoop-hdds.txt
 |
   | compile |