[jira] [Commented] (HADOOP-15992) JSON License is included in the transitive dependency of aliyun-sdk-oss 3.0.0

2018-12-12 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16719772#comment-16719772
 ] 

Akira Ajisaka commented on HADOOP-15992:


Note: Raised the issue to upgrade the dependency in aliyun-sdk-oss: 
https://github.com/aliyun/aliyun-oss-java-sdk/issues/178
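
For anyone reproducing the finding locally, a minimal check (run from a Hadoop 
source checkout; the -Dincludes filter is standard maven-dependency-plugin 
syntax):

{code}
# List every module that pulls in the JSON-licensed artifact transitively.
mvn dependency:tree -Dincludes=org.json:json
{code}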

> JSON License is included in the transitive dependency of aliyun-sdk-oss 3.0.0
> --------------------------------------------------------------------
>
> Key: HADOOP-15992
> URL: https://issues.apache.org/jira/browse/HADOOP-15992
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.9.2
>Reporter: Akira Ajisaka
>Priority: Blocker
>
> This is the output of {{mvn dependency:tree}}
> {noformat}
> [INFO] +- com.aliyun.oss:aliyun-sdk-oss:jar:3.0.0:compile
> [INFO] |  +- org.jdom:jdom:jar:1.1:compile
> [INFO] |  +- com.sun.jersey:jersey-json:jar:1.19:compile
> [INFO] |  |  +- org.codehaus.jettison:jettison:jar:1.1:compile
> [INFO] |  |  +- com.sun.xml.bind:jaxb-impl:jar:2.2.3-1:compile
> [INFO] |  |  +- org.codehaus.jackson:jackson-core-asl:jar:1.9.13:compile
> [INFO] |  |  +- org.codehaus.jackson:jackson-mapper-asl:jar:1.9.13:compile
> [INFO] |  |  +- org.codehaus.jackson:jackson-jaxrs:jar:1.9.13:compile
> [INFO] |  |  \- org.codehaus.jackson:jackson-xc:jar:1.9.13:compile
> [INFO] |  +- com.aliyun:aliyun-java-sdk-core:jar:3.4.0:compile
> [INFO] |  |  \- org.json:json:jar:20170516:compile
> {noformat}
> The license of org.json:json:jar:20170516:compile is JSON License, which 
> cannot be included.
> https://www.apache.org/legal/resolved.html#json






[jira] [Commented] (HADOOP-15992) JSON License is included in the transitive dependency of aliyun-sdk-oss 3.0.0

2018-12-12 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15992?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16719759#comment-16719759
 ] 

Akira Ajisaka commented on HADOOP-15992:


We need to upgrade the aliyun-java-sdk-core version to 4.0.0 or higher.
https://github.com/aliyun/aliyun-openapi-java-sdk/commit/8240a4c89229e62db173bd8b32789de78667b2d4

The latest version of aliyun-sdk-oss is 3.3.0 and it uses aliyun-java-sdk-core 
3.4.0, so we need to upgrade the dependency manually.
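
Once the version override is in place, a quick sanity check could look like 
this (a sketch; the hadoop-tools/hadoop-aliyun module path and the 4.0.0+ pin 
are assumptions):

{code}
# After pinning aliyun-java-sdk-core to 4.0.0+ in the hadoop-aliyun POM,
# confirm that org.json:json no longer resolves in that module's tree.
mvn -pl hadoop-tools/hadoop-aliyun dependency:tree -Dincludes='com.aliyun:*,org.json:*'
{code}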




[jira] [Commented] (HADOOP-15711) Fix branch-2 builds

2018-12-12 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15711?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16719756#comment-16719756
 ] 

Hadoop QA commented on HADOOP-15711:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  0s{color} | {color:red} The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 44s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 10m 36s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 52s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 12m  0s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  7m 27s{color} | {color:green} branch-2 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 20s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 11m 45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m  0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  6m 41s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}207m 31s{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  2m 33s{color} | {color:red} The patch generated 352 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}296m 51s{color} | {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Unreaped Processes | root:46 |
| Failed junit tests | hadoop.hdfs.server.datanode.TestDataNodeUUID |
|   | hadoop.hdfs.server.namenode.ha.TestBootstrapStandbyWithQJM |
|   | hadoop.hdfs.server.datanode.TestDataNodeVolumeFailureReporting |
|   | hadoop.hdfs.server.namenode.TestNamenodeCapacityReport |
| Timed out junit tests | org.apache.hadoop.hdfs.server.datanode.TestHSync |
|   | org.apache.hadoop.hdfs.server.datanode.TestDataNodeVolumeMetrics |
|   | org.apache.hadoop.hdfs.TestWriteRead |
|   | org.apache.hadoop.hdfs.server.datanode.TestDeleteBlockPool |
|   | org.apache.hadoop.hdfs.server.datanode.TestNNHandlesBlockReportPerStorage |
|   | org.apache.hadoop.hdfs.qjournal.client.TestQuorumJournalManager |
|   | org.apache.hadoop.hdfs.server.datanode.TestDataNodeHotSwapVolumes |
|   | org.apache.hadoop.hdfs.server.datanode.TestNNHandlesCombinedBlockReport |
|   | org.apache.hadoop.hdfs.server.datanode.TestDataNodeMetrics |
|   | org.apache.hadoop.hdfs.server.datanode.TestDataNodeRollingUpgrade |
|   | org.apache.hadoop.hdfs.server.datanode.TestDataNodeExit |
|   | org.apache.hadoop.hdfs.TestDFSClientFailover |
|   | org.apache.hadoop.fs.TestEnhancedByteBufferAccess |
|   | org.apache.hadoop.hdfs.server.datanode.TestBlockReplacement |
|   | org.apache.hadoop.hdfs.server.datanode.TestRefreshNamenodes |
|   | org.apache.hadoop.hdfs.qjournal.server.TestJournalNode |
|   | org.apache.hadoop.hdfs.server.datanode.TestBlockRecovery |
|   | org.apache.hadoop.hdfs.server.datanode.TestTriggerBlockReport |
|   | org.apache.hadoop.hdfs.server.datanode.TestDataNodeFaultInjector |
|   | org.apache.hadoop.hdfs.TestFileAppend4 |
|   | 

[jira] [Commented] (HADOOP-15984) Update jersey from 1.19 to 2.x

2018-12-12 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15984?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16719710#comment-16719710
 ] 

Akira Ajisaka commented on HADOOP-15984:


Apache Hadoop (GitHub mirror): https://github.com/apache/hadoop
How to contribute: 
https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute
 

> Update jersey from 1.19 to 2.x
> --------------------------------------------------------------------
>
> Key: HADOOP-15984
> URL: https://issues.apache.org/jira/browse/HADOOP-15984
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Akira Ajisaka
>Priority: Major
>
> jersey-json 1.19 depends on Jackson 1.9.2. Let's upgrade.






[jira] [Commented] (HADOOP-15994) Upgrade Jackson2 to the latest version

2018-12-12 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15994?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16719708#comment-16719708
 ] 

Akira Ajisaka commented on HADOOP-15994:


Given Jackson is widely used in Apache Hadoop, I will run full unit tests.
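
For reference, one way to drive such a run (a sketch, not the project's 
official test procedure; -fae is Maven's fail-at-end flag):

{code}
# Run the full unit test suite, deferring failures to the end so a single
# failing module doesn't abort the rest of the reactor.
mvn clean test -fae
{code}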

> Upgrade Jackson2 to the latest version
> --------------------------------------------------------------------
>
> Key: HADOOP-15994
> URL: https://issues.apache.org/jira/browse/HADOOP-15994
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Akira Ajisaka
>Assignee: lqjacklee
>Priority: Major
> Attachments: HADOOP-15994-001.patch, HADOOP-15994-002.patch, 
> HADOOP-15994-003.patch
>
>
> Jackson 2.9.5 is currently used and it is vulnerable (CVE-2018-11307). Let's 
> upgrade to 2.9.6 or higher.






[jira] [Commented] (HADOOP-15998) Jar validation bash scripts don't work on Windows due to platform differences (colons in paths, \r\n)

2018-12-12 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16719660#comment-16719660
 ] 

Sean Busbey commented on HADOOP-15998:
--

On Windows the classpath separator is {{;}}, which means we should fail 
similarly there once this patch is applied.

> Jar validation bash scripts don't work on Windows due to platform differences 
> (colons in paths, \r\n)
> --------------------------------------------------------------------
>
> Key: HADOOP-15998
> URL: https://issues.apache.org/jira/browse/HADOOP-15998
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 3.2.0, 3.3.0
> Environment: Windows 10
> Visual Studio 2017
>Reporter: Brian Grunkemeyer
>Priority: Blocker
>  Labels: build, windows
> Fix For: 3.3.0
>
> Attachments: HADOOP-15998.v2.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> Building Hadoop fails on Windows due to a few shell scripts that make invalid 
> assumptions:
> 1) Colon shouldn't be used to separate multiple paths in command line 
> parameters. Colons occur in Windows paths.
> 2) Shell scripts that rely on running external tools need to deal with 
> carriage return - line feed differences (lines ending in \r\n, not just \n)






[jira] [Updated] (HADOOP-15998) Jar validation bash scripts don't work on Windows due to platform differences (colons in paths, \r\n)

2018-12-12 Thread Sean Busbey (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HADOOP-15998:
-
Labels: build windows  (was: build newbie windows)




[jira] [Commented] (HADOOP-15998) Jar validation bash scripts don't work on Windows due to platform differences (colons in paths, \r\n)

2018-12-12 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16719649#comment-16719649
 ] 

Sean Busbey commented on HADOOP-15998:
--

Okay, the integration tests do show issues, but we aren't properly recognizing 
them.

Here's the branch version in precommit above:
https://builds.apache.org/job/PreCommit-HADOOP-Build/15643/artifact/out/branch-shadedclient.txt/*view*/
{code}
[INFO] --- maven-dependency-plugin:3.0.2:build-classpath 
(put-client-artifacts-in-a-property) @ hadoop-client-check-invariants ---
[INFO] Dependencies classpath:
/testptch/hadoop/hadoop-client-modules/hadoop-client-api/target/hadoop-client-api-3.3.0-SNAPSHOT.jar:/testptch/hadoop/hadoop-client-modules/hadoop-client-runtime/target/hadoop-client-runtime-3.3.0-SNAPSHOT.jar
[INFO] 
[INFO] --- exec-maven-plugin:1.3.1:exec (check-jar-contents) @ 
hadoop-client-check-invariants ---
[INFO] Artifact looks correct: 'hadoop-client-api-3.3.0-SNAPSHOT.jar'
[INFO] Artifact looks correct: 'hadoop-client-runtime-3.3.0-SNAPSHOT.jar'
[INFO] 
{code}

Here's after the patch has been applied:
https://builds.apache.org/job/PreCommit-HADOOP-Build/15643/artifact/out/patch-shadedclient.txt/*view*/
{code}
[INFO] --- maven-dependency-plugin:3.0.2:build-classpath 
(put-client-artifacts-in-a-property) @ hadoop-client-check-invariants ---
[INFO] Dependencies classpath:
/testptch/hadoop/hadoop-client-modules/hadoop-client-api/target/hadoop-client-api-3.3.0-SNAPSHOT.jar:/testptch/hadoop/hadoop-client-modules/hadoop-client-runtime/target/hadoop-client-runtime-3.3.0-SNAPSHOT.jar
[INFO] 
[INFO] --- exec-maven-plugin:1.3.1:exec (check-jar-contents) @ 
hadoop-client-check-invariants ---
java.io.FileNotFoundException: 
/testptch/hadoop/hadoop-client-modules/hadoop-client-api/target/hadoop-client-api-3.3.0-SNAPSHOT.jar:/testptch/hadoop/hadoop-client-modules/hadoop-client-runtime/target/hadoop-client-runtime-3.3.0-SNAPSHOT.jar
 (No such file or directory)
at java.util.zip.ZipFile.open(Native Method)
at java.util.zip.ZipFile.<init>(ZipFile.java:225)
at java.util.zip.ZipFile.<init>(ZipFile.java:155)
at java.util.zip.ZipFile.<init>(ZipFile.java:126)
at sun.tools.jar.Main.list(Main.java:1115)
at sun.tools.jar.Main.run(Main.java:293)
at sun.tools.jar.Main.main(Main.java:1288)
[INFO] Artifact looks correct: 'hadoop-client-runtime-3.3.0-SNAPSHOT.jar'
[INFO] 
{code}

Please fix this before commit. Ideally also figure out why the build didn't 
actually fail and fix that.
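
For clarity, a minimal shell illustration of the failure mode above (jar names 
assumed): the exec step handed the whole colon-joined classpath to the script 
as one token, so the jar tool looked for a single file with a colon in its 
name.

{code}
CP='hadoop-client-api.jar:hadoop-client-runtime.jar'
jar tf "$CP" > /dev/null            # fails: treated as one (nonexistent) file name
IFS=':' read -r -a jars <<< "$CP"   # splitting on ':' recovers the jar list
for j in "${jars[@]}"; do
  jar tf "$j" > /dev/null && echo "ok: $j"
done
{code}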




[jira] [Commented] (HADOOP-15998) Jar validation bash scripts don't work on Windows due to platform differences (colons in paths, \r\n)

2018-12-12 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16719643#comment-16719643
 ] 

Sean Busbey commented on HADOOP-15998:
--

It looks like this only alters the scripts. How do the integration tests still 
pass? I'm presuming they pass multiple jars? Has it coincidentally just been 
sending a single jar?




[jira] [Commented] (HADOOP-16000) Remove TLSv1 and SSLv2Hello from the default value of hadoop.ssl.enabled.protocols

2018-12-12 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16719628#comment-16719628
 ] 

Akira Ajisaka commented on HADOOP-16000:


Hi [~gabor.bota], would you document that the parameter is only used from 
DatanodeHttpServer in core-default.xml?

> Remove TLSv1 and SSLv2Hello from the default value of 
> hadoop.ssl.enabled.protocols
> --------------------------------------------------------------------
>
> Key: HADOOP-16000
> URL: https://issues.apache.org/jira/browse/HADOOP-16000
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Akira Ajisaka
>Assignee: Gabor Bota
>Priority: Major
> Attachments: HADOOP-16000.001.patch
>
>
> {code:title=core-default.xml}
>   public static final String SSL_ENABLED_PROTOCOLS_DEFAULT =
>   "TLSv1,SSLv2Hello,TLSv1.1,TLSv1.2";
> {code}
> TLSv1 and SSLv2Hello are considered to be vulnerable. Let's remove these by 
> default.






[jira] [Commented] (HADOOP-16000) Remove TLSv1 and SSLv2Hello from the default value of hadoop.ssl.enabled.protocols

2018-12-12 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16719611#comment-16719611
 ] 

Akira Ajisaka commented on HADOOP-16000:


I prefer documenting that the parameter has limited impact rather than fixing 
HADOOP-15169 before this patch. As I commented in HADOOP-15169, I don't want to 
add a setting that accepts TLS 1.1 or older protocols for now, as that would 
create a security hole.
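
Once the default changes, a spot check along these lines could confirm the 
behavior (hostname is a placeholder; 9865 is assumed as the DataNode HTTPS 
port):

{code}
# The first handshake should now be refused; the second should still succeed.
openssl s_client -connect datanode.example.com:9865 -tls1   < /dev/null
openssl s_client -connect datanode.example.com:9865 -tls1_2 < /dev/null
{code}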




[GitHub] hadoop pull request #446: HDFS-14147: Back port of HDFS-13056 to the 2.9 branch

2018-12-12 Thread yzhou2001
GitHub user yzhou2001 opened a pull request:

https://github.com/apache/hadoop/pull/446

HDFS-14147: Back port of HDFS-13056 to the 2.9 branch



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/yzhou2001/hadoop branch-2.9

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/hadoop/pull/446.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #446


commit e5c3dda5cdfa376ea2ad3d9284f2f9b224ccd1df
Author: yzhou2001 
Date:   2018-12-13T00:09:46Z

Back port of HDFS-13056 to the 2.9 branch







[jira] [Commented] (HADOOP-15860) ABFS: Throw IllegalArgumentException when Directory/File name ends with a period(.)

2018-12-12 Thread Shweta (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16719566#comment-16719566
 ] 

Shweta commented on HADOOP-15860:
-

Thank you [~mackrorysd] for filing this JIRA and [~mackrorysd], [~DanielZhou], 
[~joemcdonnell] for the insightful discussions. 
I have posted a patch that throws an IllegalArgumentException when a 
file/directory name has a trailing period, covering the mkdirs(), create() and 
rename() functions.
Please review the patch and provide suggestions.
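
For reviewers, a reproduction sketch of the current behavior (container and 
account names are placeholders):

{code}
# Today the trailing period is silently dropped; with the patch, creating or
# renaming onto such a name should fail fast with IllegalArgumentException.
hadoop fs -mkdir 'abfs://container@account.dfs.core.windows.net/test.'
hadoop fs -ls 'abfs://container@account.dfs.core.windows.net/'   # shows /test
{code}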

> ABFS: Throw IllegalArgumentException when Directory/File name ends with a 
> period(.)
> --------------------------------------------------------------------
>
> Key: HADOOP-15860
> URL: https://issues.apache.org/jira/browse/HADOOP-15860
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/adl
>Affects Versions: 3.2.0
>Reporter: Sean Mackrory
>Assignee: Shweta
>Priority: Major
> Attachments: HADOOP-15860.001.patch, trailing-periods.patch
>
>
> If you create a directory with a trailing period (e.g. '/test.') the period 
> is silently dropped, and will be listed as simply '/test'. '/test.test' 
> appears to work just fine.






[jira] [Updated] (HADOOP-15860) ABFS: Throw IllegalArgumentException when Directory/File name ends with a period(.)

2018-12-12 Thread Shweta (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shweta updated HADOOP-15860:

Attachment: HADOOP-15860.001.patch




[jira] [Updated] (HADOOP-15860) ABFS: Throw IllegalArgumentException when Directory/File name ends with a period(.)

2018-12-12 Thread Shweta (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shweta updated HADOOP-15860:

Summary: ABFS: Throw IllegalArgumentException when Directory/File name ends 
with a period(.)  (was: ABFS: Trailing period in file names gets ignored for 
some operations.)




[jira] [Assigned] (HADOOP-15860) ABFS: Throw IllegalArgumentException when Directory/File name ends with a period(.)

2018-12-12 Thread Shweta (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shweta reassigned HADOOP-15860:
---

Assignee: Shweta




[jira] [Commented] (HADOOP-15998) Jar validation bash scripts don't work on Windows due to platform differences (colons in paths, \r\n)

2018-12-12 Thread Brian Grunkemeyer (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16719546#comment-16719546
 ] 

Brian Grunkemeyer commented on HADOOP-15998:


The script was written assuming you could separate paths with a colon. That 
just doesn't work on Windows, where paths always start with something like C:\, 
and where : is also used to access NTFS streams (i.e., 
c:\tmp\foo.txt:SeparateDataStream creates a different part of foo.txt that is 
not visible to most tools, similar to Apple's resource fork and data fork in 
files). Fortunately, no one was using the feature of supporting multiple files 
as input.

I don't see a good way of maintaining compatibility that isn't overly 
complicated. Simply changing the input to the script is a lot simpler.
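
A sketch of that simpler input format (hypothetical, not the committed 
script): jars arrive one per line on stdin, so colons in Windows paths are 
harmless, and a trailing \r from Windows tooling is stripped per line.

{code}
while IFS= read -r jar; do
  jar="${jar%$'\r'}"                          # tolerate \r\n line endings
  jar tf "$jar" > /dev/null || echo "bad or missing jar: $jar"
done
{code}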




[jira] [Commented] (HADOOP-15998) Jar validation bash scripts don't work on Windows due to platform differences (colons in paths, \r\n)

2018-12-12 Thread Giovanni Matteo Fumarola (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16719541#comment-16719541
 ] 

Giovanni Matteo Fumarola commented on HADOOP-15998:
---

Thanks [~briangru].
My only concern is that the change to IFS (the internal field separator) is 
backward incompatible.




[jira] [Updated] (HADOOP-15998) Jar validation bash scripts don't work on Windows due to platform differences (colons in paths, \r\n)

2018-12-12 Thread Giovanni Matteo Fumarola (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giovanni Matteo Fumarola updated HADOOP-15998:
--
Fix Version/s: 3.3.0




[jira] [Comment Edited] (HADOOP-15999) [s3a] Better support for out-of-band operations

2018-12-12 Thread Sean Mackrory (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16719468#comment-16719468
 ] 

Sean Mackrory edited comment on HADOOP-15999 at 12/12/18 9:35 PM:
--

{quote}We might just want to make that configurable (separate config knob 
probably). If we are in "check both MS and S3" mode, we probably want a 
configurable or pluggable conflict policy.{quote}

Yeah - I also considered addressing the out-of-band deletes problem with a 
config (or 2) that governs whether we create and / or honor tombstones. But 
that's adding exposed complexity and isn't very elegant. If we can relatively 
easily just start comparing modification times, then we can fix all these use 
cases and offer 2 basic modes:

- S3Guard with authoritative mode, in which the MetadataStore is the source of 
truth and we can assume All The Things.
- S3Guard without authoritative mode, in which S3 is the source of truth. We 
will always be at least as up to date as S3 appears, and will fix list 
consistency as long as S3 doesn't give us evidence to the contrary (i.e. older 
modification times or the lack of an update entirely).

I feel very uncomfortable with the idea of some middle ground where S3Guard 
can't be the source of truth but we're still treating it as one in some cases. 
It either has all the context or it doesn't, and if it doesn't, we're trading 
away correctness for some performance, which I think is the wrong trade-off.


was (Author: mackrorysd):
{quote}We might just want to make that configurable (separate config knob 
probably). If we are in "check both MS and S3" mode, we probably want a 
configurable or pluggable conflict policy.{quote}

Yeah - I also considered addressing the out-of-band deletes problem with a 
config (or 2) that governs whether we create and / or honor tombstones. But 
that's adding exposed complexity and isn't very elegant. If we can relative 
easily just start comparing modification times, then we can offer 2 basic modes:

- S3Guard with authoritative mode, in which the MetadataStore is the source of 
truth and we can assume All The Things.
- S3Guard without authoritative mode, in which S3 is the source of truth. We 
will always be at least as up to date as S3 appears, and will fix list 
consistency as long as S3 doesn't give us evidence to the contrary (i.e. older 
modification times or the lack of an update entirely).

> [s3a] Better support for out-of-band operations
> --------------------------------------------------------------------
>
> Key: HADOOP-15999
> URL: https://issues.apache.org/jira/browse/HADOOP-15999
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Sean Mackrory
>Assignee: Gabor Bota
>Priority: Major
> Attachments: out-of-band-operations.patch
>
>
> S3Guard was initially done on the premise that a new MetadataStore would be 
> the source of truth, and that it wouldn't provide guarantees if updates were 
> done without using S3Guard.
> I've been seeing increased demand for better support for scenarios where 
> operations are done on the data that can't reasonably be done with S3Guard 
> involved. For example:
> * A file is deleted using S3Guard, and replaced by some other tool. S3Guard 
> can't tell the difference between the new file and delete / list 
> inconsistency and continues to treat the file as deleted.
> * An S3Guard-ed file is overwritten by a longer file by some other tool. When 
> reading the file, only the length of the original file is read.
> We could possibly have smarter behavior here by querying both S3 and the 
> MetadataStore (even in cases where we may currently only query the 
> MetadataStore in getFileStatus) and use whichever one has the higher modified 
> time.
> This kills the performance boost we currently get in some workloads with the 
> short-circuited getFileStatus, but we could keep it with authoritative mode 
> which should give a larger performance boost. At least we'd get more 
> correctness without authoritative mode and a clear declaration of when we can 
> make the assumptions required to short-circuit the process. If we can't 
> consider S3Guard the source of truth, we need to defer to S3 more.
> We'd need to be extra sure of any locality / time zone issues if we start 
> relying on mod_time more directly, but currently we're tracking the 
> modification time as returned by S3 anyway.
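
To make the first scenario above concrete, a reproduction sketch (bucket and 
paths assumed; the AWS CLI stands in for "some other tool"):

{code}
hadoop fs -rm s3a://example-bucket/data.txt          # S3Guard records a tombstone
aws s3 cp ./data.txt s3://example-bucket/data.txt    # out-of-band replacement
hadoop fs -cat s3a://example-bucket/data.txt         # still treated as deleted
{code}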






[jira] [Commented] (HADOOP-15999) [s3a] Better support for out-of-band operations

2018-12-12 Thread Aaron Fabbri (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16719456#comment-16719456
 ] 

Aaron Fabbri commented on HADOOP-15999:
---

Was going to link HADOOP-15780 here, but [~gabor.bota] beat me to it.

Currently getFileStatus is always short-circuited, as you mentioned. We might just 
want to make that configurable (separate config knob probably). If we are in 
"check both MS and S3" mode, we probably want a configurable or pluggable 
conflict policy. The default would probably be to go into a retry loop waiting 
for both systems (MetadataStore and S3) to agree. After retry policy is 
exhausted, throw error or continue depending on the conflict policy.

Feel free to ping me for reviews etc.




[jira] [Commented] (HADOOP-15999) [s3a] Better support for out-of-band operations

2018-12-12 Thread Sean Mackrory (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16719468#comment-16719468
 ] 

Sean Mackrory commented on HADOOP-15999:


{quote}We might just want to make that configurable (separate config knob 
probably). If we are in "check both MS and S3" mode, we probably want a 
configurable or pluggable conflict policy.{quote}

Yeah - I also considered addressing the out-of-band deletes problem with a 
config (or 2) that governs whether we create and / or honor tombstones. But 
that's adding exposed complexity and isn't very elegant. If we can relative 
easily just start comparing modification times, then we can offer 2 basic modes:

- S3Guard with authoritative mode, in which the MetadataStore is the source of 
truth and we can assume All The Things.
- S3Guard without authoritative mode, in which S3 is the source of truth. We 
will always be at least as up to date as S3 appears, and will fix list 
consistency as long as S3 doesn't give us evidence to the contrary (i.e. older 
modification times or the lack of an update entirely).




[jira] [Commented] (HADOOP-15995) Add ldap.bind.password.alias in LdapGroupsMapping to distinguish aliases when using multiple providers through CompositeGroupsMapping

2018-12-12 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16719338#comment-16719338
 ] 

Hudson commented on HADOOP-15995:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15598 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15598/])
HADOOP-15995. Add ldap.bind.password.alias in LdapGroupsMapping to (gifuma: rev 
76efeacd5f8563bd02b5b2f09c59cee3acdad8c7)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/LdapGroupsMapping.java
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestLdapGroupsMapping.java
* (edit) hadoop-common-project/hadoop-common/src/main/resources/core-default.xml


> Add ldap.bind.password.alias in LdapGroupsMapping to distinguish aliases when 
> using multiple providers through CompositeGroupsMapping
> --------------------------------------------------------------------
>
> Key: HADOOP-15995
> URL: https://issues.apache.org/jira/browse/HADOOP-15995
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Reporter: Lukas Majercak
>Assignee: Lukas Majercak
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-15995.001.patch, HADOOP-15995.002.patch, 
> HADOOP-15995.003.patch, HADOOP-15995.004.patch, HADOOP-15995.005.patch, 
> HADOOP-15995.006.patch, HADOOP-15995.007.patch
>
>
> Currently, the property name hadoop.security.group.mapping.ldap.bind.password 
> is used as an alias to get password from CredentialProviders. This has a big 
> issue, which is that when we configure multiple LdapGroupsMapping providers 
> through CompositeGroupsMapping, they will all have the same alias and cannot 
> be distinguished.
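
To make the fix concrete, a sketch of per-provider aliases (the alias names 
and credential provider path are assumptions):

{code}
# Each LdapGroupsMapping instance behind CompositeGroupsMapping can now point
# at its own ldap.bind.password.alias instead of sharing one implicit alias.
hadoop credential create ldap-ad.bind.password   -provider jceks://file/etc/hadoop/ldap.jceks
hadoop credential create ldap-corp.bind.password -provider jceks://file/etc/hadoop/ldap.jceks
{code}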






[jira] [Commented] (HADOOP-16000) Remove TLSv1 and SSLv2Hello from the default value of hadoop.ssl.enabled.protocols

2018-12-12 Thread Giovanni Matteo Fumarola (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16719328#comment-16719328
 ] 

Giovanni Matteo Fumarola commented on HADOOP-16000:
---

Should we fix HADOOP-15169 before this patch?




[jira] [Commented] (HADOOP-15995) Add ldap.bind.password.alias in LdapGroupsMapping to distinguish aliases when using multiple providers through CompositeGroupsMapping

2018-12-12 Thread Giovanni Matteo Fumarola (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16719320#comment-16719320
 ] 

Giovanni Matteo Fumarola commented on HADOOP-15995:
---

Thanks [~lukmajercak] for working on this and [~lmccay] for the review.

Committed to trunk.




[jira] [Updated] (HADOOP-15995) Add ldap.bind.password.alias in LdapGroupsMapping to distinguish aliases when using multiple providers through CompositeGroupsMapping

2018-12-12 Thread Giovanni Matteo Fumarola (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giovanni Matteo Fumarola updated HADOOP-15995:
--
Fix Version/s: 3.3.0




[jira] [Updated] (HADOOP-15995) Add ldap.bind.password.alias in LdapGroupsMapping to distinguish aliases when using multiple providers through CompositeGroupsMapping

2018-12-12 Thread Giovanni Matteo Fumarola (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Giovanni Matteo Fumarola updated HADOOP-15995:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)




[jira] [Updated] (HADOOP-15988) Should be able to set empty directory flag to TRUE in DynamoDBMetadataStore#innerGet when using authoritative directory listings

2018-12-12 Thread Sean Mackrory (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Mackrory updated HADOOP-15988:
---
   Resolution: Fixed
Fix Version/s: 3.3.0
   Status: Resolved  (was: Patch Available)

Looks good to me. I did remove the iff -> if change you made between .001 and 
.002, as I believe that was intentional (iff = if-and-only-if).

> Should be able to set empty directory flag to TRUE in 
> DynamoDBMetadataStore#innerGet when using authoritative directory listings
> --------------------------------------------------------------------
>
> Key: HADOOP-15988
> URL: https://issues.apache.org/jira/browse/HADOOP-15988
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Gabor Bota
>Assignee: Gabor Bota
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-15988.001.patch, HADOOP-15988.002.patch
>
>
> We have the following comment and implementation in DynamoDBMetadataStore:
> {noformat}
> // When this class has support for authoritative
> // (fully-cached) directory listings, we may also be able to answer
> // TRUE here.  Until then, we don't know if we have full listing or
> // not, thus the UNKNOWN here:
> meta.setIsEmptyDirectory(
> hasChildren ? Tristate.FALSE : Tristate.UNKNOWN);
> {noformat}
> We have had authoritative listings in dynamo since HADOOP-15621, so we should 
> resolve this comment, implement the solution, and test it.






[jira] [Commented] (HADOOP-15988) Should be able to set empty directory flag to TRUE in DynamoDBMetadataStore#innerGet when using authoritative directory listings

2018-12-12 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15988?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16719227#comment-16719227
 ] 

Hudson commented on HADOOP-15988:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #15597 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/15597/])
HADOOP-15988. DynamoDBMetadataStore#innerGet should support empty (mackrorysd: 
rev 82b798581d12a5cbc9ae17fa290aa81e8ebf6a45)
* (edit) 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/s3guard/DynamoDBMetadataStore.java
* (edit) 
hadoop-tools/hadoop-aws/src/test/java/org/apache/hadoop/fs/s3a/s3guard/ITestDynamoDBMetadataStore.java





[jira] [Commented] (HADOOP-15428) s3guard bucket-info will create s3guard table if FS is set to do this automatically

2018-12-12 Thread Sean Mackrory (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15428?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16719117#comment-16719117
 ] 

Sean Mackrory commented on HADOOP-15428:


Yeah that's more direct. Tweaked it a bit and updated. I'm happy with it, so 
I'll resolve. Feel free to discuss if there's more feedback...

> s3guard bucket-info will create s3guard table if FS is set to do this 
> automatically
> ---
>
> Key: HADOOP-15428
> URL: https://issues.apache.org/jira/browse/HADOOP-15428
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Steve Loughran
>Assignee: Gabor Bota
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-15428.001.patch
>
>
> If you call hadoop s3guard bucket-info on a bucket where the fs is set to 
> create a s3guard table on demand, then the DDB table is automatically 
> created. As a result
> the {{bucket-info -unguarded}} option cannot be used, and the call has 
> significant side effects (i.e. it can run up bills)
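For reference, the invocation in question looks like this (bucket name is
hypothetical); with the fix, {{-unguarded}} proceeds with S3Guard disabled
rather than creating the DDB table as a side effect:

{noformat}
hadoop s3guard bucket-info -unguarded s3a://example-bucket
{noformat}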



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15999) [s3a] Better support for out-of-band operations

2018-12-12 Thread Gabor Bota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-15999:

Issue Type: Sub-task  (was: New Feature)
Parent: HADOOP-15619

> [s3a] Better support for out-of-band operations
> ---
>
> Key: HADOOP-15999
> URL: https://issues.apache.org/jira/browse/HADOOP-15999
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Sean Mackrory
>Assignee: Gabor Bota
>Priority: Major
> Attachments: out-of-band-operations.patch
>
>
> S3Guard was initially done on the premise that a new MetadataStore would be 
> the source of truth, and that it wouldn't provide guarantees if updates were 
> done without using S3Guard.
> I've been seeing increased demand for better support for scenarios where 
> operations are performed on the data in ways that can't reasonably involve 
> S3Guard. For example:
> * A file is deleted using S3Guard, and replaced by some other tool. S3Guard 
> can't tell the difference between the new file and delete / list 
> inconsistency and continues to treat the file as deleted.
> * An S3Guard-ed file is overwritten by a longer file by some other tool. When 
> reading the file, only the length of the original file is read.
> We could possibly have smarter behavior here by querying both S3 and the 
> MetadataStore (even in cases where we may currently only query the 
> MetadataStore in getFileStatus) and use whichever one has the higher modified 
> time.
> This kills the performance boost we currently get in some workloads with the 
> short-circuited getFileStatus, but we could keep it with authoritative mode 
> which should give a larger performance boost. At least we'd get more 
> correctness without authoritative mode and a clear declaration of when we can 
> make the assumptions required to short-circuit the process. If we can't 
> consider S3Guard the source of truth, we need to defer to S3 more.
> We'd need to be extra sure of any locality / time zone issues if we start 
> relying on mod_time more directly, but currently we're tracking the 
> modification time as returned by S3 anyway.
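A rough sketch of that "prefer the newer entry" reconciliation (the method
name is hypothetical; this is not the eventual patch):

{code:java}
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.s3a.s3guard.PathMetadata;

// Sketch: given the raw S3 probe and the MetadataStore entry for the
// same path, trust whichever side reports the later modification time.
static FileStatus reconcile(FileStatus s3Status, PathMetadata msEntry) {
  if (msEntry == null) {
    return s3Status;                          // only S3 knows the path
  }
  FileStatus msStatus = msEntry.getFileStatus();
  if (s3Status == null) {
    return msStatus;                          // only the store knows it
  }
  // An out-of-band overwrite shows up as a newer S3 mod time.
  return s3Status.getModificationTime() >= msStatus.getModificationTime()
      ? s3Status : msStatus;
}
{code}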



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15428) s3guard bucket-info will create s3guard table if FS is set to do this automatically

2018-12-12 Thread Sean Mackrory (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15428?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Mackrory updated HADOOP-15428:
---
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> s3guard bucket-info will create s3guard table if FS is set to do this 
> automatically
> ---
>
> Key: HADOOP-15428
> URL: https://issues.apache.org/jira/browse/HADOOP-15428
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Steve Loughran
>Assignee: Gabor Bota
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-15428.001.patch
>
>
> If you call hadoop s3guard bucket-info on a bucket where the fs is set to 
> create a s3guard table on demand, then the DDB table is automatically 
> created. As a result
> the {{bucket-info -unguarded}} option cannot be used, and the call has 
> significant side effects (i.e. it can run up bills)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15428) s3guard bucket-info will create s3guard table if FS is set to do this automatically

2018-12-12 Thread Sean Mackrory (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15428?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Mackrory updated HADOOP-15428:
---
Release Note: If the -unguarded flag is passed to `hadoop s3guard bucket-info`, 
it will now proceed with S3Guard disabled instead of failing if S3Guard is not 
already disabled.  (was: The -unguarded flag, passed to `hadoop s3guard 
bucket-info`, will now proceed with S3Guard disabled instead of failing if 
S3Guard is not already disabled.)

> s3guard bucket-info will create s3guard table if FS is set to do this 
> automatically
> ---
>
> Key: HADOOP-15428
> URL: https://issues.apache.org/jira/browse/HADOOP-15428
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Steve Loughran
>Assignee: Gabor Bota
>Priority: Major
> Fix For: 3.3.0
>
> Attachments: HADOOP-15428.001.patch
>
>
> If you call hadoop s3guard bucket-info on a bucket where the fs is set to 
> create a s3guard table on demand, then the DDB table is automatically 
> created. As a result
> the {{bucket-info -unguarded}} option cannot be used, and the call has 
> significant side effects (i.e. it can run up bills)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-15999) [s3a] Better support for out-of-band operations

2018-12-12 Thread Gabor Bota (JIRA)


 [ 
https://issues.apache.org/jira/browse/HADOOP-15999?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota reassigned HADOOP-15999:
---

Assignee: Gabor Bota

> [s3a] Better support for out-of-band operations
> ---
>
> Key: HADOOP-15999
> URL: https://issues.apache.org/jira/browse/HADOOP-15999
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/s3
>Affects Versions: 3.1.0
>Reporter: Sean Mackrory
>Assignee: Gabor Bota
>Priority: Major
> Attachments: out-of-band-operations.patch
>
>
> S3Guard was initially done on the premise that a new MetadataStore would be 
> the source of truth, and that it wouldn't provide guarantees if updates were 
> done without using S3Guard.
> I've been seeing increased demand for better support for scenarios where 
> operations are performed on the data in ways that can't reasonably involve 
> S3Guard. For example:
> * A file is deleted using S3Guard, and replaced by some other tool. S3Guard 
> can't tell the difference between the new file and delete / list 
> inconsistency and continues to treat the file as deleted.
> * An S3Guard-ed file is overwritten by a longer file by some other tool. When 
> reading the file, only the length of the original file is read.
> We could possibly have smarter behavior here by querying both S3 and the 
> MetadataStore (even in cases where we may currently only query the 
> MetadataStore in getFileStatus) and use whichever one has the higher modified 
> time.
> This kills the performance boost we currently get in some workloads with the 
> short-circuited getFileStatus, but we could keep it with authoritative mode 
> which should give a larger performance boost. At least we'd get more 
> correctness without authoritative mode and a clear declaration of when we can 
> make the assumptions required to short-circuit the process. If we can't 
> consider S3Guard the source of truth, we need to defer to S3 more.
> We'd need to be extra sure of any locality / time zone issues if we start 
> relying on mod_time more directly, but currently we're tracking the 
> modification time as returned by S3 anyway.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-16002) TestBlockStorageMovementAttemptedItems#testNoBlockMovementAttemptFinishedReportAdded fails sporadically in Trunk

2018-12-12 Thread Ayush Saxena (JIRA)
Ayush Saxena created HADOOP-16002:
-

 Summary: 
TestBlockStorageMovementAttemptedItems#testNoBlockMovementAttemptFinishedReportAdded
 fails sporadically in Trunk
 Key: HADOOP-16002
 URL: https://issues.apache.org/jira/browse/HADOOP-16002
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ayush Saxena
Assignee: Ayush Saxena


Reference:

https://builds.apache.org/job/PreCommit-HDFS-Build/25739/testReport/junit/org.apache.hadoop.hdfs.server.namenode.sps/TestBlockStorageMovementAttemptedItems/testNoBlockMovementAttemptFinishedReportAdded/

https://builds.apache.org/job/PreCommit-HDFS-Build/25746/testReport/junit/org.apache.hadoop.hdfs.server.namenode.sps/TestBlockStorageMovementAttemptedItems/testNoBlockMovementAttemptFinishedReportAdded/

https://builds.apache.org/job/PreCommit-HDFS-Build/25768/testReport/junit/org.apache.hadoop.hdfs.server.namenode.sps/TestBlockStorageMovementAttemptedItems/testNoBlockMovementAttemptFinishedReportAdded/



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-15129) Datanode caches namenode DNS lookup failure and cannot startup

2018-12-12 Thread Ravi Prakash (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16718798#comment-16718798
 ] 

Ravi Prakash edited comment on HADOOP-15129 at 12/12/18 11:10 AM:
--

Hi Karthik! Thanks for your contribution. Could you please rebase the patch to 
the latest trunk? I usually apply patches using
{code:java}
$ git apply {code}
A few suggestions:
 # Could you please use short descriptions in JIRA? [I was told a long time 
ago|https://issues.apache.org/jira/browse/HDFS-2011?focusedCommentId=13041707=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13041707].
 :)
 # When using JIRA numbers, could you please write HDFS-8068 (instead of just 
8068) because issues often cut across several different projects, and this way 
JIRA creates nice links for viewers to click on?

Patches are usually committed to trunk *first* and then a (possibly) different 
version of the patch may be committed to earlier branches like branch-2. So 
technically you could have used neat Lambdas in the trunk patch. ;) It's a nit, 
though.

I'm trying to find the wikipage that tried to explain certain errors. I'm 
afraid I rarely found them useful (its probably because we didn't really expand 
on those wiki pages ever), so I'm fine with a more helpful error in the logs.

 


was (Author: raviprak):
Hi Karthik! Thanks for your contribution. Could you please rebase the patch to 
the latest trunk? I usually apply patches using
{code:java}
$ git apply {code}
A few suggestions:
 # Could you please use short descriptions in JIRA? I was told a long time ago. 
:)
 # When using JIRA numbers, could you please write HDFS-8068 (instead of just 
8068) because issues often cut across several different projects, and this way 
JIRA creates nice links for viewers to click on?

Patches are usually committed to trunk *first* and then a (possibly) different 
version of the patch may be committed to earlier branches like branch-2. So 
technically you could have used neat Lambdas in the trunk patch. ;) It's a nit, 
though.

I'm trying to find the wikipage that tried to explain certain errors. I'm 
afraid I rarely found them useful (its probably because we didn't really expand 
on those wiki pages ever), so I'm fine with a more helpful error in the logs.

 

> Datanode caches namenode DNS lookup failure and cannot startup
> --
>
> Key: HADOOP-15129
> URL: https://issues.apache.org/jira/browse/HADOOP-15129
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 2.8.2
> Environment: Google Compute Engine.
> I'm using Java 8, Debian 8, Hadoop 2.8.2.
>Reporter: Karthik Palaniappan
>Assignee: Karthik Palaniappan
>Priority: Minor
> Attachments: HADOOP-15129.001.patch, HADOOP-15129.002.patch
>
>
> On startup, the Datanode creates an InetSocketAddress to register with each 
> namenode. Though there are retries on connection failure throughout the 
> stack, the same InetSocketAddress is reused.
> InetSocketAddress is an interesting class, because it resolves DNS names to 
> IP addresses on construction, and it is never refreshed. Hadoop re-creates an 
> InetSocketAddress in some cases just in case the remote IP has changed for a 
> particular DNS name: https://issues.apache.org/jira/browse/HADOOP-7472.
> Anyway, on startup, you can see the Datanode log: "Namenode...remains 
> unresolved" -- referring to the fact that DNS lookup failed.
> {code:java}
> 2017-11-02 16:01:55,115 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Refresh request received for nameservices: null
> 2017-11-02 16:01:55,153 WARN org.apache.hadoop.hdfs.DFSUtilClient: Namenode 
> for null remains unresolved for ID null. Check your hdfs-site.xml file to 
> ensure namenodes are configured properly.
> 2017-11-02 16:01:55,156 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Starting BPOfferServices for nameservices: 
> 2017-11-02 16:01:55,169 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Block pool  (Datanode Uuid unassigned) service to 
> cluster-32f5-m:8020 starting to offer service
> {code}
> The Datanode then proceeds to use this unresolved address, as it may work if 
> the DN is configured to use a proxy. Since I'm not using a proxy, it forever 
> prints out this message:
> {code:java}
> 2017-12-15 00:13:40,712 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Problem connecting to server: cluster-32f5-m:8020
> 2017-12-15 00:13:45,712 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Problem connecting to server: cluster-32f5-m:8020
> 2017-12-15 00:13:50,712 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Problem connecting to server: cluster-32f5-m:8020
> 2017-12-15 00:13:55,713 WARN 

[jira] [Commented] (HADOOP-15129) Datanode caches namenode DNS lookup failure and cannot startup

2018-12-12 Thread Ravi Prakash (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16718798#comment-16718798
 ] 

Ravi Prakash commented on HADOOP-15129:
---

Hi Karthik! Thanks for your contribution. Could you please rebase the patch to 
the latest trunk? I usually apply patches using
{code:java}
$ git apply {code}
A few suggestions:
 # Could you please use short descriptions in JIRA? I was told a long time ago. 
:)
 # When using JIRA numbers, could you please write HDFS-8068 (instead of just 
8068) because issues often cut across several different projects, and this way 
JIRA creates nice links for viewers to click on?

Patches are usually committed to trunk *first* and then a (possibly) different 
version of the patch may be committed to earlier branches like branch-2. So 
technically you could have used neat Lambdas in the trunk patch. ;) It's a nit, 
though.

I'm trying to find the wikipage that tried to explain certain errors. I'm 
afraid I rarely found them useful (its probably because we didn't really expand 
on those wiki pages ever), so I'm fine with a more helpful error in the logs.

 

> Datanode caches namenode DNS lookup failure and cannot startup
> --
>
> Key: HADOOP-15129
> URL: https://issues.apache.org/jira/browse/HADOOP-15129
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 2.8.2
> Environment: Google Compute Engine.
> I'm using Java 8, Debian 8, Hadoop 2.8.2.
>Reporter: Karthik Palaniappan
>Assignee: Karthik Palaniappan
>Priority: Minor
> Attachments: HADOOP-15129.001.patch, HADOOP-15129.002.patch
>
>
> On startup, the Datanode creates an InetSocketAddress to register with each 
> namenode. Though there are retries on connection failure throughout the 
> stack, the same InetSocketAddress is reused.
> InetSocketAddress is an interesting class, because it resolves DNS names to 
> IP addresses on construction, and it is never refreshed. Hadoop re-creates an 
> InetSocketAddress in some cases just in case the remote IP has changed for a 
> particular DNS name: https://issues.apache.org/jira/browse/HADOOP-7472.
> Anyway, on startup, you can see the Datanode log: "Namenode...remains 
> unresolved" -- referring to the fact that DNS lookup failed.
> {code:java}
> 2017-11-02 16:01:55,115 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Refresh request received for nameservices: null
> 2017-11-02 16:01:55,153 WARN org.apache.hadoop.hdfs.DFSUtilClient: Namenode 
> for null remains unresolved for ID null. Check your hdfs-site.xml file to 
> ensure namenodes are configured properly.
> 2017-11-02 16:01:55,156 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Starting BPOfferServices for nameservices: 
> 2017-11-02 16:01:55,169 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Block pool  (Datanode Uuid unassigned) service to 
> cluster-32f5-m:8020 starting to offer service
> {code}
> The Datanode then proceeds to use this unresolved address, as it may work if 
> the DN is configured to use a proxy. Since I'm not using a proxy, it forever 
> prints out this message:
> {code:java}
> 2017-12-15 00:13:40,712 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Problem connecting to server: cluster-32f5-m:8020
> 2017-12-15 00:13:45,712 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Problem connecting to server: cluster-32f5-m:8020
> 2017-12-15 00:13:50,712 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Problem connecting to server: cluster-32f5-m:8020
> 2017-12-15 00:13:55,713 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Problem connecting to server: cluster-32f5-m:8020
> 2017-12-15 00:14:00,713 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Problem connecting to server: cluster-32f5-m:8020
> {code}
> Unfortunately, the log doesn't contain the exception that triggered it, but 
> the culprit is actually in IPC Client: 
> https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java#L444.
> This line was introduced in https://issues.apache.org/jira/browse/HADOOP-487 
> to give a clear error message when somebody misspells an address.
> However, the fix in HADOOP-7472 doesn't apply here, because that code happens 
> in Client#getConnection after the Connection is constructed.
> My proposed fix (will attach a patch) is to move this exception out of the 
> constructor and into a place that will trigger HADOOP-7472's logic to 
> re-resolve addresses. If the DNS failure was temporary, this will allow the 
> connection to succeed. If not, the connection will fail after ipc client 
> retries (default 10 seconds worth of retries).
> I want to fix this in ipc client rather than just in Datanode 

[jira] [Commented] (HADOOP-15169) "hadoop.ssl.enabled.protocols" should be considered in httpserver2

2018-12-12 Thread Akira Ajisaka (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16718809#comment-16718809
 ] 

Akira Ajisaka commented on HADOOP-15169:


In Apache Hadoop 3.x, the Jetty version is greater than 9.3.12, which accepts 
only TLS 1.2 by default. For now, I don't want to add a setting that accepts 
TLS 1.1 or older protocols and opens a security hole. Once we have migrated to 
Java 11 and Jetty 9.4.x to use TLS 1.3, we can add the setting for the Jetty 
server.

On the other hand, in Apache Hadoop 2.x, adding the setting for HttpServer2 
makes sense to me. That way we can avoid using SSLv2Hello, TLSv1, or TLSv1.1 in 
HttpServer2.
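
For illustration, once HttpServer2 honours the property, restricting the
protocols would be a one-line configuration change (the value shown is only an
example):

{code:java}
import org.apache.hadoop.conf.Configuration;

// Sketch: limit the HTTP servers to TLS 1.2 only.
Configuration conf = new Configuration();
conf.set("hadoop.ssl.enabled.protocols", "TLSv1.2");
{code}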

> "hadoop.ssl.enabled.protocols" should be considered in httpserver2
> --
>
> Key: HADOOP-15169
> URL: https://issues.apache.org/jira/browse/HADOOP-15169
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>Priority: Major
> Attachments: HADOOP-15169-branch-2.patch, HADOOP-15169.patch
>
>
> As of now, *"hadoop.ssl.enabled.protocols"* will not take effect for all the 
> HTTP servers (only the DataNode HTTP server will use this config).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-15129) Datanode caches namenode DNS lookup failure and cannot startup

2018-12-12 Thread Ravi Prakash (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16718798#comment-16718798
 ] 

Ravi Prakash edited comment on HADOOP-15129 at 12/12/18 11:13 AM:
--

Hi Karthik! Thanks for your contribution. Could you please rebase the patch to 
the latest trunk? I usually apply patches using
{code:java}
$ git apply {code}
A few suggestions:
 # Could you please use short descriptions in JIRA? I was told a long time ago. 
:)
 # When using JIRA numbers, could you please write HDFS-8068 (instead of just 
8068) because issues often cut across several different projects, and this way 
JIRA creates nice links for viewers to click on?

Patches are usually committed to trunk *first* and then a (possibly) different 
version of the patch may be committed to earlier branches like branch-2. So 
technically you could have used neat Lambdas in the trunk patch. ;) It's a nit, 
though.

I'm trying to find the wikipage that tried to explain certain errors. I'm 
afraid I rarely found them useful (its probably because we didn't really expand 
on those wiki pages ever), so I'm fine with a more helpful error in the logs.

Could you please also comment on whether you have been running with this patch 
in production for any amount of time and seen / not seen any issues with it?

I concur that this is extremely important code, so it behooves us to tread very 
carefully. 


was (Author: raviprak):
Hi Karthik! Thanks for your contribution. Could you please rebase the patch to 
the latest trunk? I usually apply patches using
{code:java}
$ git apply {code}
A few suggestions:
 # Could you please use short descriptions in JIRA? [I was told a long time 
ago|https://issues.apache.org/jira/browse/HDFS-2011?focusedCommentId=13041707=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13041707].
 :)
 # When using JIRA numbers, could you please write HDFS-8068 (instead of just 
8068) because issues often cut across several different projects, and this way 
JIRA creates nice links for viewers to click on?

Patches are usually committed to trunk *first* and then a (possibly) different 
version of the patch may be committed to earlier branches like branch-2. So 
technically you could have used neat Lambdas in the trunk patch. ;) It's a nit, 
though.

I'm trying to find the wikipage that tried to explain certain errors. I'm 
afraid I rarely found them useful (its probably because we didn't really expand 
on those wiki pages ever), so I'm fine with a more helpful error in the logs.

 

> Datanode caches namenode DNS lookup failure and cannot startup
> --
>
> Key: HADOOP-15129
> URL: https://issues.apache.org/jira/browse/HADOOP-15129
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 2.8.2
> Environment: Google Compute Engine.
> I'm using Java 8, Debian 8, Hadoop 2.8.2.
>Reporter: Karthik Palaniappan
>Assignee: Karthik Palaniappan
>Priority: Minor
> Attachments: HADOOP-15129.001.patch, HADOOP-15129.002.patch
>
>
> On startup, the Datanode creates an InetSocketAddress to register with each 
> namenode. Though there are retries on connection failure throughout the 
> stack, the same InetSocketAddress is reused.
> InetSocketAddress is an interesting class, because it resolves DNS names to 
> IP addresses on construction, and it is never refreshed. Hadoop re-creates an 
> InetSocketAddress in some cases just in case the remote IP has changed for a 
> particular DNS name: https://issues.apache.org/jira/browse/HADOOP-7472.
> Anyway, on startup, you can see the Datanode log: "Namenode...remains 
> unresolved" -- referring to the fact that DNS lookup failed.
> {code:java}
> 2017-11-02 16:01:55,115 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Refresh request received for nameservices: null
> 2017-11-02 16:01:55,153 WARN org.apache.hadoop.hdfs.DFSUtilClient: Namenode 
> for null remains unresolved for ID null. Check your hdfs-site.xml file to 
> ensure namenodes are configured properly.
> 2017-11-02 16:01:55,156 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Starting BPOfferServices for nameservices: 
> 2017-11-02 16:01:55,169 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Block pool  (Datanode Uuid unassigned) service to 
> cluster-32f5-m:8020 starting to offer service
> {code}
> The Datanode then proceeds to use this unresolved address, as it may work if 
> the DN is configured to use a proxy. Since I'm not using a proxy, it forever 
> prints out this message:
> {code:java}
> 2017-12-15 00:13:40,712 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Problem connecting to server: cluster-32f5-m:8020
> 2017-12-15 00:13:45,712 WARN 

[jira] [Commented] (HADOOP-15129) Datanode caches namenode DNS lookup failure and cannot startup

2018-12-12 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15129?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16718800#comment-16718800
 ] 

Hadoop QA commented on HADOOP-15129:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  7s{color} 
| {color:red} HADOOP-15129 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-15129 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12905521/HADOOP-15129.002.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15649/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> Datanode caches namenode DNS lookup failure and cannot startup
> --
>
> Key: HADOOP-15129
> URL: https://issues.apache.org/jira/browse/HADOOP-15129
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 2.8.2
> Environment: Google Compute Engine.
> I'm using Java 8, Debian 8, Hadoop 2.8.2.
>Reporter: Karthik Palaniappan
>Assignee: Karthik Palaniappan
>Priority: Minor
> Attachments: HADOOP-15129.001.patch, HADOOP-15129.002.patch
>
>
> On startup, the Datanode creates an InetSocketAddress to register with each 
> namenode. Though there are retries on connection failure throughout the 
> stack, the same InetSocketAddress is reused.
> InetSocketAddress is an interesting class, because it resolves DNS names to 
> IP addresses on construction, and it is never refreshed. Hadoop re-creates an 
> InetSocketAddress in some cases just in case the remote IP has changed for a 
> particular DNS name: https://issues.apache.org/jira/browse/HADOOP-7472.
> Anyway, on startup, you can see the Datanode log: "Namenode...remains 
> unresolved" -- referring to the fact that DNS lookup failed.
> {code:java}
> 2017-11-02 16:01:55,115 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Refresh request received for nameservices: null
> 2017-11-02 16:01:55,153 WARN org.apache.hadoop.hdfs.DFSUtilClient: Namenode 
> for null remains unresolved for ID null. Check your hdfs-site.xml file to 
> ensure namenodes are configured properly.
> 2017-11-02 16:01:55,156 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Starting BPOfferServices for nameservices: 
> 2017-11-02 16:01:55,169 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Block pool  (Datanode Uuid unassigned) service to 
> cluster-32f5-m:8020 starting to offer service
> {code}
> The Datanode then proceeds to use this unresolved address, as it may work if 
> the DN is configured to use a proxy. Since I'm not using a proxy, it forever 
> prints out this message:
> {code:java}
> 2017-12-15 00:13:40,712 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Problem connecting to server: cluster-32f5-m:8020
> 2017-12-15 00:13:45,712 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Problem connecting to server: cluster-32f5-m:8020
> 2017-12-15 00:13:50,712 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Problem connecting to server: cluster-32f5-m:8020
> 2017-12-15 00:13:55,713 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Problem connecting to server: cluster-32f5-m:8020
> 2017-12-15 00:14:00,713 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
> Problem connecting to server: cluster-32f5-m:8020
> {code}
> Unfortunately, the log doesn't contain the exception that triggered it, but 
> the culprit is actually in IPC Client: 
> https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Client.java#L444.
> This line was introduced in https://issues.apache.org/jira/browse/HADOOP-487 
> to give a clear error message when somebody misspells an address.
> However, the fix in HADOOP-7472 doesn't apply here, because that code happens 
> in Client#getConnection after the Connection is constructed.
> My proposed fix (will attach a patch) is to move this exception out of the 
> constructor and into a place that will trigger HADOOP-7472's logic to 
> re-resolve addresses. If the DNS failure was temporary, this will allow the 
> connection to succeed. If not, the connection will fail after ipc client 
> retries (default 10 seconds worth of retries).
> I want to fix this in ipc client rather than just in Datanode startup, as 
> this fixes temporary DNS issues for all of Hadoop.
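
To make the pitfall concrete, a standalone sketch (host name taken from the
logs above):

{code:java}
import java.net.InetSocketAddress;

// InetSocketAddress resolves DNS once, at construction, and never again.
InetSocketAddress addr = new InetSocketAddress("cluster-32f5-m", 8020);
if (addr.isUnresolved()) {
  // Re-creating the address retries the lookup; this is the essence of
  // the HADOOP-7472 re-resolve logic the patch wants to trigger.
  addr = new InetSocketAddress(addr.getHostName(), addr.getPort());
}
{code}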



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HADOOP-15169) "hadoop.ssl.enabled.protocols" should be considered in httpserver2

2018-12-12 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16718764#comment-16718764
 ] 

Hadoop QA commented on HADOOP-15169:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 17m 
25s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
36s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m  
5s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
37s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
15s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
4s{color} | {color:green} branch-2 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}210m 48s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red}  1m  
0s{color} | {color:red} The patch generated 2 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black}279m 31s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:a716388 |
| JIRA Issue | HADOOP-15169 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12905859/HADOOP-15169-branch-2.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 68202ab2315b 4.4.0-134-generic #160~14.04.1-Ubuntu SMP Fri Aug 
17 11:07:07 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | branch-2 / eb8b1ea |
| maven | version: Apache Maven 3.3.9 
(bb52d8502b132ec0a5a3f4c09453c07478323dc5; 2015-11-10T16:41:47+00:00) |
| Default Java | 1.7.0_181 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15645/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15645/testReport/ |
| asflicense | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15645/artifact/out/patch-asflicense-problems.txt
 |
| Max. process+thread count | 1475 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15645/console |
| Powered by | Apache Yetus 0.8.0   http://yetus.apache.org |


This message was automatically generated.



> "hadoop.ssl.enabled.protocols" should be considered in httpserver2
> --
>
> Key: HADOOP-15169
> URL: 

[GitHub] hadoop issue #445: YARN-9095. Removed Unused field from Resource: NUM_MANDAT...

2018-12-12 Thread szilard-nemeth
Github user szilard-nemeth commented on the issue:

https://github.com/apache/hadoop/pull/445
  
Hi @vbmudalige !
Thanks for this patch.
LGTM + 1 (non-binding)


---

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[GitHub] hadoop issue #444: YARN-9093. Remove commented code block from the beginning...

2018-12-12 Thread szilard-nemeth
Github user szilard-nemeth commented on the issue:

https://github.com/apache/hadoop/pull/444
  
Hi @vbmudalige !
Thanks for this patch.
LGTM + 1 (non-binding)


---

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-16001) ZKDelegationTokenSecretManager should use KerberosName#getShortName to get the user name for ZK ACL

2018-12-12 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16718590#comment-16718590
 ] 

Hadoop QA commented on HADOOP-16001:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 17s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 13m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 17s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m  5s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
30s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 90m 15s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.util.TestReadWriteDiskValidator |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-16001 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12951321/HDFS-14136.001.patch |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  |
| uname | Linux d6ad4cde423a 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / fb55e52 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15648/artifact/out/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15648/testReport/ |
| Max. process+thread count | 1640 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console 

[jira] [Commented] (HADOOP-16000) Remove TLSv1 and SSLv2Hello from the default value of hadoop.ssl.enabled.protocols

2018-12-12 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-16000?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16718587#comment-16718587
 ] 

Hadoop QA commented on HADOOP-16000:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 25m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 16m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 33s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
2s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 15m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 57s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
24s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
38s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}103m 26s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:8f97d6f |
| JIRA Issue | HADOOP-16000 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12951467/HADOOP-16000.001.patch
 |
| Optional Tests |  dupname  asflicense  compile  javac  javadoc  mvninstall  
mvnsite  unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux 7e385959a334 4.4.0-138-generic #164~14.04.1-Ubuntu SMP Fri Oct 
5 08:56:16 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / fb55e52 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/15647/testReport/ |
| Max. process+thread count | 1410 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 

[jira] [Comment Edited] (HADOOP-15616) Incorporate Tencent Cloud COS File System Implementation

2018-12-12 Thread YangY (JIRA)


[ 
https://issues.apache.org/jira/browse/HADOOP-15616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16718572#comment-16718572
 ] 

YangY edited comment on HADOOP-15616 at 12/12/18 8:01 AM:
--

Thanks [~xyao] for commenting on this code.

Here are the answers to your comments:

1. Changes under hadoop-tools/hadoop-aliyun unrelated to this patch.
 This was likely an accidental change made while formatting my code; the error 
has been corrected in the new patch.

2. Should we put hadoop-cos under the hadoop-tools project like s3a, adls, etc. 
instead of hadoop-cloud-storage-project?
 At first, I also thought I should put it under the hadoop-tools project. 
However, per Steve's comment above, using "hadoop-cloud-storage-project" seems 
more appropriate, doesn't it?

3. More description for the keys.
 Thank you for the reminder; I will add some detailed descriptions to our 
documentation.

4. BufferPool.java: since it sets the disk buffer file to delete on exit, does 
it support recovery if the client restarts?
 BufferPool is a shared buffer pool. It initially provides two buffer types, 
Memory and Disk. The latter uses a memory-mapped file to construct a byte 
buffer that other classes can use uniformly.
 Therefore, it cannot support recovery if the client restarts: the disk buffer 
is mapped to a temporary file, which is cleaned up automatically when the Java 
Virtual Machine exits.

In the latest patch, I further optimize it by combining the two buffer types, 
gaining improvements in both memory usage and buffer performance. For this 
reason, the buffer types here will not be visible to the user.

Finally, I look forward to more of your comments.


was (Author: yuyang733):
Thanks [~xyao] for commenting on this code.

Here are the answers to your comments:

1. Changes under hadoop-tools/hadoop-aliyun unrelated to this patch.
 This was likely an accidental change made while formatting my code; the error 
has been corrected in the new patch.

2. Should we put hadoop-cos under the hadoop-tools project like s3a, adls, etc. 
instead of hadoop-cloud-storage-project?
 At first, I also thought I should put it under the hadoop-tools project. 
However, per Steve's comment above, using "hadoop-cloud-storage-project" seems 
more appropriate, doesn't it?

3. More description for the keys.
 Thank you for the reminder; I will add some detailed descriptions to our 
documentation.

4. BufferPool.java: since it sets the disk buffer file to delete on exit, does 
it support recovery if the client restarts?
 BufferPool is a shared buffer pool. It initially provides two buffer types, 
Memory and Disk. The latter uses a memory-mapped file to construct a byte 
buffer that other classes can use uniformly.
 Therefore, it cannot support recovery if the client restarts: the disk buffer 
is mapped to a temporary file, which is cleaned up automatically when the Java 
Virtual Machine exits.

In the latest patch, I further optimize it by combining two buffer types: 
memory usage and buffer performance. For this reason, the buffer types here 
will not be visible to the user.

Finally, I look forward to more of your comments.

> Incorporate Tencent Cloud COS File System Implementation
> 
>
> Key: HADOOP-15616
> URL: https://issues.apache.org/jira/browse/HADOOP-15616
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/cos
>Reporter: Junping Du
>Assignee: YangY
>Priority: Major
> Attachments: HADOOP-15616.001.patch, HADOOP-15616.002.patch, 
> HADOOP-15616.003.patch, HADOOP-15616.004.patch, HADOOP-15616.005.patch, 
> Tencent-COS-Integrated.pdf
>
>
> Tencent Cloud is one of the top 2 cloud vendors in the China market, and its 
> object store COS ([https://intl.cloud.tencent.com/product/cos]) is widely 
> used among China's cloud users, but it is currently hard for Hadoop users to 
> access data stored on COS because Hadoop has no native support for COS.
> This work aims to integrate Tencent Cloud COS with Hadoop/Spark/Hive, just 
> like what we did before for S3, ADL, OSS, etc. With simple configuration, 
> Hadoop applications can read/write data from COS without any code change.
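
As a concrete illustration of the disk-buffer approach discussed in the
comments above, a minimal sketch (class and method names are hypothetical):

{code:java}
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;

// Sketch: back a ByteBuffer with a memory-mapped temporary file that is
// removed when the JVM exits, hence no recovery across client restarts.
public final class MappedDiskBuffer {
  public static ByteBuffer allocate(int size) throws IOException {
    File tmp = File.createTempFile("cos-buffer", ".tmp");
    tmp.deleteOnExit();
    try (RandomAccessFile raf = new RandomAccessFile(tmp, "rw")) {
      // The mapping stays valid after the channel is closed.
      return raf.getChannel().map(FileChannel.MapMode.READ_WRITE, 0, size);
    }
  }
}
{code}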



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org