[jira] [Commented] (HADOOP-13972) ADLS to support per-store configuration

2018-02-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16374004#comment-16374004
 ] 

ASF GitHub Bot commented on HADOOP-13972:
-

Github user ssonker commented on the issue:

https://github.com/apache/hadoop/pull/339
  
This PR is merged as [this 
commit](https://github.com/apache/hadoop/commit/481d79fedc48942654dab08e23e71e80c8eb2aca).
 Therefore, closing it.


> ADLS to support per-store configuration
> ---
>
> Key: HADOOP-13972
> URL: https://issues.apache.org/jira/browse/HADOOP-13972
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/adl
>Affects Versions: 3.0.0-alpha2
>Reporter: John Zhuge
>Assignee: Sharad Sonker
>Priority: Major
> Fix For: 3.1.0, 2.10.0, 3.0.2
>
>
> Useful when distcp needs to access two Data Lake stores with different SPIs.
> A workaround is to grant the same SPI access permission to both stores, but 
> that is not always feasible.
> One idea is to embed the store name in the configuration property names, 
> e.g., {{dfs.adls.oauth2.<store>.client.id}}. Per-store keys would be consulted 
> first, falling back to the global keys.
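
A minimal sketch of the proposed lookup order; the helper class, method name, 
and the <store> placeholder are illustrative assumptions, not the committed API:

{code:java}
import org.apache.hadoop.conf.Configuration;

// Hypothetical helper: consult the per-store key first, then fall back to the
// global key. All names here are illustrative only.
public final class AdlConfLookup {
  public static String getOAuth2Setting(Configuration conf, String store,
      String suffix) {
    String perStore = conf.get("dfs.adls.oauth2." + store + "." + suffix);
    return perStore != null
        ? perStore
        : conf.get("dfs.adls.oauth2." + suffix);  // global fallback
  }
}
{code}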



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13972) ADLS to support per-store configuration

2018-02-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16374005#comment-16374005
 ] 

ASF GitHub Bot commented on HADOOP-13972:
-

Github user ssonker closed the pull request at:

https://github.com/apache/hadoop/pull/339


> ADLS to support per-store configuration
> ---
>
> Key: HADOOP-13972
> URL: https://issues.apache.org/jira/browse/HADOOP-13972
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/adl
>Affects Versions: 3.0.0-alpha2
>Reporter: John Zhuge
>Assignee: Sharad Sonker
>Priority: Major
> Fix For: 3.1.0, 2.10.0, 3.0.2
>
>
> Useful when distcp needs to access two Data Lake stores with different SPIs.
> A workaround is to grant the same SPI access permission to both stores, but 
> that is not always feasible.
> One idea is to embed the store name in the configuration property names, 
> e.g., {{dfs.adls.oauth2.<store>.client.id}}. Per-store keys would be consulted 
> first, falling back to the global keys.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15223) Replace Collections.EMPTY* with empty* when available

2018-02-22 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16373943#comment-16373943
 ] 

Akira Ajisaka commented on HADOOP-15223:


Cherry-picked to branch-3.

> Replace Collections.EMPTY* with empty* when available
> -
>
> Key: HADOOP-15223
> URL: https://issues.apache.org/jira/browse/HADOOP-15223
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Akira Ajisaka
>Assignee: fang zhenyi
>Priority: Minor
>  Labels: newbie
> Fix For: 3.1.0
>
> Attachments: HADOOP-15223.001.patch, HADOOP-15223.002.patch, 
> HADOOP-15223.003.patch, HADOOP-15223.004.patch, HADOOP-15223.005.patch
>
>
> The use of {{Collections.EMPTY_SET}} and {{Collections.EMPTY_MAP}} often 
> causes unchecked-assignment warnings, so they should be replaced with 
> {{Collections.emptySet()}} and {{Collections.emptyMap()}}. 
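
For illustration, a minimal self-contained example of the warning in question 
(class and variable names are ours):

{code:java}
import java.util.Collections;
import java.util.Set;

public class EmptySetExample {
  public static void main(String[] args) {
    // Raw-typed constant: compiles, but triggers an unchecked-assignment warning.
    Set<String> raw = Collections.EMPTY_SET;
    // Generic factory method: type-safe, no warning.
    Set<String> typed = Collections.emptySet();
    System.out.println(raw.isEmpty() && typed.isEmpty());
  }
}
{code}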



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15236) Fix typo in RequestHedgingProxyProvider and RequestHedgingRMFailoverProxyProvider

2018-02-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16373939#comment-16373939
 ] 

Hudson commented on HADOOP-15236:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #13703 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/13703/])
HADOOP-15236. Fix typo in RequestHedgingProxyProvider and (aajisaka: rev 
c36b4aa31ce25fbe5fa173bce36da2950d74a475)
* (edit) 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/src/main/java/org/apache/hadoop/yarn/client/RequestHedgingRMFailoverProxyProvider.java
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/server/namenode/ha/RequestHedgingProxyProvider.java


> Fix typo in RequestHedgingProxyProvider and 
> RequestHedgingRMFailoverProxyProvider
> -
>
> Key: HADOOP-15236
> URL: https://issues.apache.org/jira/browse/HADOOP-15236
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Reporter: Akira Ajisaka
>Assignee: Gabor Bota
>Priority: Trivial
>  Labels: newbie
> Fix For: 3.1.0, 2.10.0
>
> Attachments: HADOOP-15236.001.patch, HADOOP-15236.002.patch
>
>
> Typo 'configred' in RequestHedgingProxyProvider and 
> RequestHedgingRMFailoverProxyProvider.
> {noformat}
>  * standbys. Once it receive a response from any one of the configred proxies,
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15223) Replace Collections.EMPTY* with empty* when available

2018-02-22 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16373935#comment-16373935
 ] 

Akira Ajisaka commented on HADOOP-15223:


Sorry, I forgot to cherry-pick this to branch-3. I'll cherry-pick it now.

> Replace Collections.EMPTY* with empty* when available
> -
>
> Key: HADOOP-15223
> URL: https://issues.apache.org/jira/browse/HADOOP-15223
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Akira Ajisaka
>Assignee: fang zhenyi
>Priority: Minor
>  Labels: newbie
> Fix For: 3.1.0
>
> Attachments: HADOOP-15223.001.patch, HADOOP-15223.002.patch, 
> HADOOP-15223.003.patch, HADOOP-15223.004.patch, HADOOP-15223.005.patch
>
>
> The use of {{Collections.EMPTY_SET}} and {{Collections.EMPTY_MAP}} often 
> causes unchecked-assignment warnings, so they should be replaced with 
> {{Collections.emptySet()}} and {{Collections.emptyMap()}}. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15236) Fix typo in RequestHedgingProxyProvider and RequestHedgingRMFailoverProxyProvider

2018-02-22 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15236?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-15236:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.10.0
   3.1.0
   Status: Resolved  (was: Patch Available)

Committed this to trunk, branch-3, branch-3.1, and branch-2. Thanks 
[~gabor.bota] for the contribution!

> Fix typo in RequestHedgingProxyProvider and 
> RequestHedgingRMFailoverProxyProvider
> -
>
> Key: HADOOP-15236
> URL: https://issues.apache.org/jira/browse/HADOOP-15236
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Reporter: Akira Ajisaka
>Assignee: Gabor Bota
>Priority: Trivial
>  Labels: newbie
> Fix For: 3.1.0, 2.10.0
>
> Attachments: HADOOP-15236.001.patch, HADOOP-15236.002.patch
>
>
> Typo 'configred' in RequestHedgingProxyProvider and 
> RequestHedgingRMFailoverProxyProvider.
> {noformat}
>  * standbys. Once it receive a response from any one of the configred proxies,
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15236) Fix typo in RequestHedgingProxyProvider and RequestHedgingRMFailoverProxyProvider

2018-02-22 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16373863#comment-16373863
 ] 

Akira Ajisaka commented on HADOOP-15236:


+1

> Fix typo in RequestHedgingProxyProvider and 
> RequestHedgingRMFailoverProxyProvider
> -
>
> Key: HADOOP-15236
> URL: https://issues.apache.org/jira/browse/HADOOP-15236
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Reporter: Akira Ajisaka
>Assignee: Gabor Bota
>Priority: Trivial
>  Labels: newbie
> Attachments: HADOOP-15236.001.patch, HADOOP-15236.002.patch
>
>
> Typo 'configred' in RequestHedgingProxyProvider and 
> RequestHedgingRMFailoverProxyProvider.
> {noformat}
>  * standbys. Once it receive a response from any one of the configred proxies,
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15254) Correct the wrong word spelling 'intialize'

2018-02-22 Thread fang zhenyi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15254?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fang zhenyi updated HADOOP-15254:
-
Description: The misspelled word 'intialize' should be corrected to 'initialize'.

> Correct the wrong word spelling 'intialize'
> ---
>
> Key: HADOOP-15254
> URL: https://issues.apache.org/jira/browse/HADOOP-15254
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: fang zhenyi
>Assignee: fang zhenyi
>Priority: Minor
> Attachments: HADOOP-15254.001.patch
>
>
> The misspelled word 'intialize' should be corrected to 'initialize'.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15254) Correct the wrong word spelling 'intialize'

2018-02-22 Thread fang zhenyi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15254?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fang zhenyi updated HADOOP-15254:
-
Status: Patch Available  (was: Open)

> Correct the wrong word spelling 'intialize'
> ---
>
> Key: HADOOP-15254
> URL: https://issues.apache.org/jira/browse/HADOOP-15254
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: fang zhenyi
>Assignee: fang zhenyi
>Priority: Minor
> Attachments: HADOOP-15254.001.patch
>
>
> The misspelled word 'intialize' should be corrected to 'initialize'.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15254) Correct the wrong word spelling 'intialize'

2018-02-22 Thread fang zhenyi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15254?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

fang zhenyi updated HADOOP-15254:
-
Attachment: HADOOP-15254.001.patch

> Correct the wrong word spelling 'intialize'
> ---
>
> Key: HADOOP-15254
> URL: https://issues.apache.org/jira/browse/HADOOP-15254
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: fang zhenyi
>Assignee: fang zhenyi
>Priority: Minor
> Attachments: HADOOP-15254.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15254) Correct the wrong word spelling 'intialize'

2018-02-22 Thread fang zhenyi (JIRA)
fang zhenyi created HADOOP-15254:


 Summary: Correct the wrong word spelling 'intialize'
 Key: HADOOP-15254
 URL: https://issues.apache.org/jira/browse/HADOOP-15254
 Project: Hadoop Common
  Issue Type: Bug
Reporter: fang zhenyi
Assignee: fang zhenyi






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15253) Should update maxQueueSize when refresh call queue

2018-02-22 Thread Tao Jie (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15253?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tao Jie updated HADOOP-15253:
-
Attachment: HADOOP-15253.001.patch

> Should update maxQueueSize when refresh call queue
> --
>
> Key: HADOOP-15253
> URL: https://issues.apache.org/jira/browse/HADOOP-15253
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Tao Jie
>Assignee: Tao Jie
>Priority: Minor
> Attachments: HADOOP-15253.001.patch
>
>
> When calling {{dfsadmin -refreshCallQueue}} to replace the CallQueue instance, 
> {{maxQueueSize}} should also be updated.
> When switching the CallQueue instance to FairCallQueue, each sub-queue in 
> FairCallQueue is only 1/priorityLevels of the original DefaultCallQueue 
> length, so it would be helpful to be able to set the call queue length to a 
> proper value.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-15253) Should update maxQueueSize when refresh call queue

2018-02-22 Thread Tao Jie (JIRA)
Tao Jie created HADOOP-15253:


 Summary: Should update maxQueueSize when refresh call queue
 Key: HADOOP-15253
 URL: https://issues.apache.org/jira/browse/HADOOP-15253
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Tao Jie
Assignee: Tao Jie


When calling {{dfsadmin -refreshCallQueue}} to replace the CallQueue instance, 
{{maxQueueSize}} should also be updated.
When switching the CallQueue instance to FairCallQueue, each sub-queue in 
FairCallQueue is only 1/priorityLevels of the original DefaultCallQueue 
length, so it would be helpful to be able to set the call queue length to a 
proper value.
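
To make the sizing concern concrete, a small sketch; the queue length of 1000 
and the 4 priority levels are assumptions for illustration, not values from 
the patch:

{code:java}
public class FairCallQueueSizing {
  public static void main(String[] args) {
    int maxQueueSize = 1000;   // assumed configured call-queue length
    int priorityLevels = 4;    // assumed FairCallQueue priority levels
    // Each FairCallQueue sub-queue holds 1/priorityLevels of the original
    // length, i.e. 250 slots here -- hence the wish to resize on refresh.
    int perQueueCapacity = maxQueueSize / priorityLevels;
    System.out.println("per-queue capacity: " + perQueueCapacity);
  }
}
{code}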




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14903) Add json-smart explicitly to pom.xml

2018-02-22 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14903?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16373819#comment-16373819
 ] 

Brahma Reddy Battula commented on HADOOP-14903:
---

Done, thanks.

> Add json-smart explicitly to pom.xml
> 
>
> Key: HADOOP-14903
> URL: https://issues.apache.org/jira/browse/HADOOP-14903
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 3.0.0-beta1
>Reporter: Ray Chiang
>Assignee: Ray Chiang
>Priority: Major
> Fix For: 3.0.0-beta1, 2.10.0, 2.9.1, 2.8.4
>
> Attachments: HADOOP-14903-003-branch-2.patch, 
> HADOOP-14903-branch-2-003.patch, 
> HADOOP-14903-branch-2-004-ForExecutingTests.patch, 
> HADOOP-14903-branch-2-004.patch, HADOOP-14903.001.patch, 
> HADOOP-14903.002.patch, HADOOP-14903.003.patch
>
>
> With the library update in HADOOP-14799, maven knows how to pull in 
> net.minidev:json-smart for tests, but not for packaging.  This needs to be 
> added to the main project pom in order to avoid this warning:
> {noformat}
> [WARNING] The POM for net.minidev:json-smart:jar:2.3-SNAPSHOT is missing, no 
> dependency information available
> {noformat}
> This is pulled in from a few places:
> {noformat}
> [INFO] |  +- org.apache.hadoop:hadoop-auth:jar:3.1.0-SNAPSHOT:compile
> [INFO] |  |  +- com.nimbusds:nimbus-jose-jwt:jar:4.41.1:compile
> [INFO] |  |  |  +- com.github.stephenc.jcip:jcip-annotations:jar:1.0-1:compile
> [INFO] |  |  |  \- net.minidev:json-smart:jar:2.3:compile
> [INFO] |  |  \- org.apache.kerby:token-provider:jar:1.0.1:compile
> [INFO] |  | \- com.nimbusds:nimbus-jose-jwt:jar:4.41.1:compile
> [INFO] |  |+- 
> com.github.stephenc.jcip:jcip-annotations:jar:1.0-1:compile
> [INFO] |  |\- net.minidev:json-smart:jar:2.3:compile
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14903) Add json-smart explicitly to pom.xml

2018-02-22 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14903?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HADOOP-14903:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

> Add json-smart explicitly to pom.xml
> 
>
> Key: HADOOP-14903
> URL: https://issues.apache.org/jira/browse/HADOOP-14903
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 3.0.0-beta1
>Reporter: Ray Chiang
>Assignee: Ray Chiang
>Priority: Major
> Fix For: 2.10.0, 2.9.1, 2.8.4, 3.0.0-beta1
>
> Attachments: HADOOP-14903-003-branch-2.patch, 
> HADOOP-14903-branch-2-003.patch, 
> HADOOP-14903-branch-2-004-ForExecutingTests.patch, 
> HADOOP-14903-branch-2-004.patch, HADOOP-14903.001.patch, 
> HADOOP-14903.002.patch, HADOOP-14903.003.patch
>
>
> With the library update in HADOOP-14799, maven knows how to pull in 
> net.minidev:json-smart for tests, but not for packaging.  This needs to be 
> added to the main project pom in order to avoid this warning:
> {noformat}
> [WARNING] The POM for net.minidev:json-smart:jar:2.3-SNAPSHOT is missing, no 
> dependency information available
> {noformat}
> This is pulled in from a few places:
> {noformat}
> [INFO] |  +- org.apache.hadoop:hadoop-auth:jar:3.1.0-SNAPSHOT:compile
> [INFO] |  |  +- com.nimbusds:nimbus-jose-jwt:jar:4.41.1:compile
> [INFO] |  |  |  +- com.github.stephenc.jcip:jcip-annotations:jar:1.0-1:compile
> [INFO] |  |  |  \- net.minidev:json-smart:jar:2.3:compile
> [INFO] |  |  \- org.apache.kerby:token-provider:jar:1.0.1:compile
> [INFO] |  | \- com.nimbusds:nimbus-jose-jwt:jar:4.41.1:compile
> [INFO] |  |+- 
> com.github.stephenc.jcip:jcip-annotations:jar:1.0-1:compile
> [INFO] |  |\- net.minidev:json-smart:jar:2.3:compile
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15250) Split-DNS MultiHomed Server Network Cluster Network IPC Client Bind Addr Wrong

2018-02-22 Thread Greg Senia (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16373778#comment-16373778
 ] 

Greg Senia commented on HADOOP-15250:
-

[~ste...@apache.org] so I have some good news after doing some digging this 
afternoon. I originally found this IPC client bind-address issue while 
attempting to *distcp* between two clusters that have no explicit cross-realm 
Kerberos trust between them, other than both having a one-way cross-realm 
trust to our Active Directory environment. The example below shows how I was 
able to perform a *distcp* without a cross-realm trust between the two 
clusters. But I would still like to verify what is occurring here with 
IPC/NetUtils outbound client connections, especially in regard to firewalls 
and containers.

 

Example/Details: 

*Distcp with Kerberos between Secure Clusters without Cross-realm 
Authentication*


Two clusters with the realms: *Source Realm (PROD.HDP.EXAMPLE.COM) and 
Destination Realm (MODL.HDP.EXAMPLE.COM)*

Data moves from the *Source Cluster (hdfs://prod) to the Destination Cluster 
(hdfs://modl)*


A *one-way cross-realm* trust exists between *Source Realm 
(PROD.HDP.EXAMPLE.COM) and Active Directory (NT.EXAMPLE.COM), and Destination 
Realm (MODL.HDP.EXAMPLE.COM) and Active Directory (NT.EXAMPLE.COM).*


Both the Source Cluster (prod) and Destination Cluster (modl) are running a 
Hadoop Distribution with the following patches: 

https://issues.apache.org/jira/browse/HDFS-7546 

https://issues.apache.org/jira/browse/YARN-3021

We set the mapreduce.job.hdfs-servers.token-renewal.exclude property, which 
instructs the ResourceManagers on either cluster to skip delegation token 
renewal for the listed NameNode hosts, and we also set the 
dfs.namenode.kerberos.principal.pattern property to * to allow distcp 
irrespective of the principal patterns of the source and destination clusters.

*Example of Command that works:*

hadoop distcp -Ddfs.namenode.kerberos.principal.pattern=* 
-Dmapreduce.job.hdfs-servers.token-renewal.exclude=modl hdfs:///public_data 
hdfs://modl/public_data/gss_test2

  

> Split-DNS MultiHomed Server Network Cluster Network IPC Client Bind Addr Wrong
> --
>
> Key: HADOOP-15250
> URL: https://issues.apache.org/jira/browse/HADOOP-15250
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc, net
>Affects Versions: 2.7.3, 2.9.0, 3.0.0
> Environment: Multihome cluster with split DNS and rDNS lookup of 
> localhost returning non-routable IPAddr
>Reporter: Greg Senia
>Priority: Critical
> Attachments: HADOOP-15250.patch
>
>
> We run our Hadoop clusters with two networks attached to each node. These 
> networks are as follows: a server network that is firewalled with firewalld, 
> allowing inbound traffic only for SSH and services such as Knox, HiveServer2, 
> and the HTTP YARN RM/ATS and MR History Server; and a cluster network on the 
> second network interface, which uses jumbo frames, is open with no 
> restrictions, and allows all cluster traffic to flow between nodes. 
>  
> To resolve DNS within the Hadoop cluster we use DNS views via BIND, so if the 
> traffic originates from nodes on the cluster network we return the internal 
> DNS record for the nodes. This all works fine with the multi-homing features 
> added to Hadoop 2.x.
>  Some logic around views:
> a. The internal view is used by cluster machines when performing lookups, so 
> hosts on the cluster network get answers from the internal view in DNS.
> b. The external view is used by non-local-cluster machines when performing 
> lookups, so hosts not on the cluster network get answers from the external 
> view in DNS.
>  
> So this brings me to our problem. We created some firewall rules to allow 
> inbound traffic from each cluster's server network so that distcp could 
> occur. But we noticed a problem almost immediately: when YARN attempted to 
> talk to the remote cluster, it bound outgoing traffic to the cluster network 
> interface, which IS NOT routable. After researching the code we noticed the 
> following in NetUtils.java and Client.java.
> Basically, Client.java takes the hostname and attempts to bind to whatever 
> that hostname resolves to. This is not valid in a multi-homed network with 
> one routable interface and one non-routable interface. After reading through 
> the java.net.Socket documentation, it is valid to perform socket.bind(null), 
> which allows the OS routing table and DNS to send the traffic to the correct 
> interface. I will also attach the network traces and a test patch for the 
> 2.7.x and 3.x code bases. I have this test fix below in my Hadoop Test 
> Cluster.
> Client.java:
>     
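
The quoted Client.java excerpt is cut off above. As a rough, hedged sketch of 
the idea described (not the actual patch): per the java.net.Socket javadoc, 
binding to null lets the system pick a valid local address, so the OS routing 
table chooses the outgoing interface. The host name and port below are made up:

{code:java}
import java.net.InetSocketAddress;
import java.net.Socket;

public class BindNullSketch {
  public static void main(String[] args) throws Exception {
    Socket socket = new Socket();
    // A null bind address defers local-address selection to the OS, instead
    // of binding to whatever the local hostname happens to resolve to.
    socket.bind(null);
    socket.connect(new InetSocketAddress("remote.example.com", 8020), 10000);
    socket.close();
  }
}
{code}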

[jira] [Comment Edited] (HADOOP-9747) Reduce unnecessary UGI synchronization

2018-02-22 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16373760#comment-16373760
 ] 

Bharat Viswanadham edited comment on HADOOP-9747 at 2/23/18 1:01 AM:
-

Hi [~daryn]

Thanks for the updated patch.

I am +1 on the latest patch V04.

Regarding my earlier synchronization question, you can ignore it; it has been 
addressed by the patch, and the comments are clear.

I have one general question about a pre-existing issue: if multiple threads 
call getLoginUser() (and the KRB5CCNAME system property is set), each thread 
tries to spawn an auto-renewal thread here. Is a renewal thread required for 
each caller?


was (Author: bharatviswa):
Hi [~daryn]

Thanks for the updated patch.

I am +1 on the latest patch V04.

 

I have one question in general, this is an already existing issue, if multiple 
threads try to call getLoginUser() (And system properties

KRB5CCNAME is set), so each thread tries to spawn Auto renewal thread here. So, 
is it required for renew for each thread?

> Reduce unnecessary UGI synchronization
> --
>
> Key: HADOOP-9747
> URL: https://issues.apache.org/jira/browse/HADOOP-9747
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0-alpha1
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Critical
> Attachments: HADOOP-9747-trunk-03.patch, HADOOP-9747-trunk-04.patch, 
> HADOOP-9747-trunk.01.patch, HADOOP-9747-trunk.02.patch, 
> HADOOP-9747.2.branch-2.patch, HADOOP-9747.2.trunk.patch, 
> HADOOP-9747.branch-2.patch, HADOOP-9747.trunk.patch
>
>
> Jstacks of heavily loaded NNs show up to dozens of threads blocking in the 
> UGI.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-9747) Reduce unnecessary UGI synchronization

2018-02-22 Thread Bharat Viswanadham (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16373760#comment-16373760
 ] 

Bharat Viswanadham commented on HADOOP-9747:


Hi [~daryn]

Thanks for the updated patch.

I am +1 on the latest patch V04.

 

I have one general question about a pre-existing issue: if multiple threads 
call getLoginUser() (and the KRB5CCNAME system property is set), each thread 
tries to spawn an auto-renewal thread here. Is a renewal thread required for 
each caller?

> Reduce unnecessary UGI synchronization
> --
>
> Key: HADOOP-9747
> URL: https://issues.apache.org/jira/browse/HADOOP-9747
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0-alpha1
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Critical
> Attachments: HADOOP-9747-trunk-03.patch, HADOOP-9747-trunk-04.patch, 
> HADOOP-9747-trunk.01.patch, HADOOP-9747-trunk.02.patch, 
> HADOOP-9747.2.branch-2.patch, HADOOP-9747.2.trunk.patch, 
> HADOOP-9747.branch-2.patch, HADOOP-9747.trunk.patch
>
>
> Jstacks of heavily loaded NNs show up to dozens of threads blocking in the 
> UGI.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13761) S3Guard: implement retries for DDB failures and throttling; translate exceptions

2018-02-22 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16373730#comment-16373730
 ] 

genericqa commented on HADOOP-13761:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 7 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 59s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 13s{color} | {color:orange} hadoop-tools/hadoop-aws: The patch generated 1 
new + 13 unchanged - 1 fixed = 14 total (was 14) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 22s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
44s{color} | {color:red} hadoop-tools/hadoop-aws generated 7 new + 0 unchanged 
- 0 fixed = 7 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
54s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 51m 25s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-tools/hadoop-aws |
|  |  org.apache.hadoop.fs.s3a.InconsistentS3Object is Serializable; consider 
declaring a serialVersionUID  At InconsistentS3Object.java:a serialVersionUID  
At InconsistentS3Object.java:[lines 39-181] |
|  |  The field org.apache.hadoop.fs.s3a.InconsistentS3Object.wrapped is 
transient but isn't set by deserialization  In InconsistentS3Object.java:but 
isn't set by deserialization  In InconsistentS3Object.java |
|  |  Inconsistent synchronization of 
org.apache.hadoop.fs.s3a.S3AInputStream.contentRangeFinish; locked 88% of time  
Unsynchronized access at S3AInputStream.java:88% of time  Unsynchronized access 
at S3AInputStream.java:[line 299] |
|  |  Inconsistent synchronization of 
org.apache.hadoop.fs.s3a.S3AInputStream.nextReadPos; locked 93% of time  
Unsynchronized access at S3AInputStream.java:93% of time  Unsynchronized access 
at S3AInputStream.java:[line 361] |
|  |  Inconsistent synchronization of 
org.apache.hadoop.fs.s3a.S3AInputStream.pos; locked 56% of time  Unsynchronized 
access at S3AInputStream.java:56% of time  Unsynchronized access at 
S3AInputStream.java:[line 244] |
|  |  

[jira] [Commented] (HADOOP-15251) Upgrade surefire version in branch-2

2018-02-22 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16373717#comment-16373717
 ] 

Chris Douglas commented on HADOOP-15251:


The reverts were in trunk, also. After it was committed to trunk, stability 
[improved|https://issues.apache.org/jira/browse/HADOOP-13514?focusedCommentId=16259532=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16259532].

> Upgrade surefire version in branch-2
> 
>
> Key: HADOOP-15251
> URL: https://issues.apache.org/jira/browse/HADOOP-15251
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Reporter: Chris Douglas
>Priority: Major
> Attachments: HADOOP-15251-branch-2.001.patch
>
>
> Tests in branch-2 are not running reliably in Jenkins, and due to 
> SUREFIRE-524, these are not being cleaned up properly (see HADOOP-15153).
> Upgrading to a more recent version of the surefire plugin will make the 
> problem easier to address in branch-2.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15007) Stabilize and document Configuration <tag> element

2018-02-22 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16373694#comment-16373694
 ] 

genericqa commented on HADOOP-15007:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
12m  6s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 13m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 22s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
28s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
35s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 87m 25s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HADOOP-15007 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12911619/HADOOP-15007.003.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  xml  |
| uname | Linux 7ee1dcc3c89e 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 95904f6 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14192/testReport/ |
| Max. process+thread count | 1397 (vs. ulimit of 1) |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14192/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> 

[jira] [Updated] (HADOOP-13761) S3Guard: implement retries for DDB failures and throttling; translate exceptions

2018-02-22 Thread Aaron Fabbri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron Fabbri updated HADOOP-13761:
--
Attachment: HADOOP-13761-009.patch

> S3Guard: implement retries for DDB failures and throttling; translate 
> exceptions
> 
>
> Key: HADOOP-13761
> URL: https://issues.apache.org/jira/browse/HADOOP-13761
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Aaron Fabbri
>Assignee: Aaron Fabbri
>Priority: Blocker
> Attachments: HADOOP-13761-004-to-005.patch, 
> HADOOP-13761-005-to-006-approx.diff.txt, HADOOP-13761-005.patch, 
> HADOOP-13761-006.patch, HADOOP-13761-007.patch, HADOOP-13761-008.patch, 
> HADOOP-13761-009.patch, HADOOP-13761.001.patch, HADOOP-13761.002.patch, 
> HADOOP-13761.003.patch, HADOOP-13761.004.patch
>
>
> Following the S3AFileSystem integration patch in HADOOP-13651, we need to add 
> retry logic.
> In HADOOP-13651, I added TODO comments in most of the places retry loops are 
> needed, including:
> - open(path).  If MetadataStore reflects recent create/move of file path, but 
> we fail to read it from S3, retry.
> - delete(path).  If deleteObject() on S3 fails, but MetadataStore shows the 
> file exists, retry.
> - rename(src,dest).  If source path is not visible in S3 yet, retry.
> - listFiles(). Skip for now. Not currently implemented in S3Guard. I will 
> create a separate JIRA for this as it will likely require interface changes 
> (i.e. prefix or subtree scan).
> We may miss some cases initially and we should do failure injection testing 
> to make sure we're covered.  Failure injection tests can be a separate JIRA 
> to make this easier to review.
> We also need basic configuration parameters around retry policy.  There 
> should be a way to specify maximum retry duration, as some applications would 
> prefer to receive an error eventually, than waiting indefinitely.  We should 
> also be keeping statistics when inconsistency is detected and we enter a 
> retry loop.
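
As a rough sketch of the kind of bounded retry loop described above; the 
helper name, the fixed backoff, and the use of FileNotFoundException as the 
inconsistency signal are all assumptions, not the patch's actual API:

{code:java}
import java.io.FileNotFoundException;
import java.util.concurrent.Callable;

public final class RetrySketch {
  // Retry an S3 operation while the MetadataStore says the path should exist
  // but S3 has not caught up yet; fail after a bounded number of attempts so
  // callers get an error eventually rather than waiting indefinitely.
  public static <T> T retryOnInconsistency(Callable<T> op, int maxAttempts,
      long delayMs) throws Exception {
    for (int attempt = 1; ; attempt++) {
      try {
        return op.call();
      } catch (FileNotFoundException e) {
        if (attempt >= maxAttempts) {
          throw e;               // bounded retry duration
        }
        Thread.sleep(delayMs);   // fixed backoff keeps the sketch simple
      }
    }
  }
}
{code}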



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14734) add option to tag DDB table(s) created

2018-02-22 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16373623#comment-16373623
 ] 

genericqa commented on HADOOP-14734:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m  5s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 40s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
55s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 51m  6s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HADOOP-14734 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12911617/HADOOP-14734-003.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 7a5f3212bb29 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 95904f6 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14190/testReport/ |
| Max. process+thread count | 340 (vs. ulimit of 1) |
| modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14190/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> add option to tag DDB table(s) created
> --
>
> Key: HADOOP-14734
> URL: 

[jira] [Commented] (HADOOP-14903) Add json-smart explicitly to pom.xml

2018-02-22 Thread Jitendra Nath Pandey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14903?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16373581#comment-16373581
 ] 

Jitendra Nath Pandey commented on HADOOP-14903:
---

Can this be resolved as fixed?

> Add json-smart explicitly to pom.xml
> 
>
> Key: HADOOP-14903
> URL: https://issues.apache.org/jira/browse/HADOOP-14903
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 3.0.0-beta1
>Reporter: Ray Chiang
>Assignee: Ray Chiang
>Priority: Major
> Fix For: 3.0.0-beta1, 2.10.0, 2.9.1, 2.8.4
>
> Attachments: HADOOP-14903-003-branch-2.patch, 
> HADOOP-14903-branch-2-003.patch, 
> HADOOP-14903-branch-2-004-ForExecutingTests.patch, 
> HADOOP-14903-branch-2-004.patch, HADOOP-14903.001.patch, 
> HADOOP-14903.002.patch, HADOOP-14903.003.patch
>
>
> With the library update in HADOOP-14799, maven knows how to pull in 
> net.minidev:json-smart for tests, but not for packaging.  This needs to be 
> added to the main project pom in order to avoid this warning:
> {noformat}
> [WARNING] The POM for net.minidev:json-smart:jar:2.3-SNAPSHOT is missing, no 
> dependency information available
> {noformat}
> This is pulled in from a few places:
> {noformat}
> [INFO] |  +- org.apache.hadoop:hadoop-auth:jar:3.1.0-SNAPSHOT:compile
> [INFO] |  |  +- com.nimbusds:nimbus-jose-jwt:jar:4.41.1:compile
> [INFO] |  |  |  +- com.github.stephenc.jcip:jcip-annotations:jar:1.0-1:compile
> [INFO] |  |  |  \- net.minidev:json-smart:jar:2.3:compile
> [INFO] |  |  \- org.apache.kerby:token-provider:jar:1.0.1:compile
> [INFO] |  | \- com.nimbusds:nimbus-jose-jwt:jar:4.41.1:compile
> [INFO] |  |+- 
> com.github.stephenc.jcip:jcip-annotations:jar:1.0-1:compile
> [INFO] |  |\- net.minidev:json-smart:jar:2.3:compile
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15007) Stabilize and document Configuration <tag> element

2018-02-22 Thread Ajay Kumar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ajay Kumar updated HADOOP-15007:

Attachment: HADOOP-15007.003.patch

> Stabilize and document Configuration <tag> element
> --
>
> Key: HADOOP-15007
> URL: https://issues.apache.org/jira/browse/HADOOP-15007
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: conf
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Ajay Kumar
>Priority: Blocker
> Attachments: HADOOP-15007.000.patch, HADOOP-15007.001.patch, 
> HADOOP-15007.002.patch, HADOOP-15007.003.patch
>
>
> HDFS-12350 (moved to HADOOP-15005). Adds the ability to tag properties with a 
> <tag> value.
> We need to make sure that this feature is backwards compatible & usable in 
> production. That's docs, testing, marshalling etc.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15007) Stabilize and document Configuration <tag> element

2018-02-22 Thread Ajay Kumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16373563#comment-16373563
 ] 

Ajay Kumar commented on HADOOP-15007:
-

Patch v3 to address checkstyle issue.

> Stabilize and document Configuration <tag> element
> --
>
> Key: HADOOP-15007
> URL: https://issues.apache.org/jira/browse/HADOOP-15007
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: conf
>Affects Versions: 3.2.0
>Reporter: Steve Loughran
>Assignee: Ajay Kumar
>Priority: Blocker
> Attachments: HADOOP-15007.000.patch, HADOOP-15007.001.patch, 
> HADOOP-15007.002.patch, HADOOP-15007.003.patch
>
>
> HDFS-12350 (moved to HADOOP-15005). Adds the ability to tag properties with a 
> <tag> value.
> We need to make sure that this feature is backwards compatible & usable in 
> production. That's docs, testing, marshalling etc.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15146) Remove DataOutputByteBuffer

2018-02-22 Thread BELUGA BEHR (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16373561#comment-16373561
 ] 

BELUGA BEHR commented on HADOOP-15146:
--

[~ste...@apache.org] :)

> Remove DataOutputByteBuffer
> ---
>
> Key: HADOOP-15146
> URL: https://issues.apache.org/jira/browse/HADOOP-15146
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: common
>Affects Versions: 3.0.0
>Reporter: BELUGA BEHR
>Assignee: BELUGA BEHR
>Priority: Minor
> Attachments: HADOOP-15146.1.patch, HADOOP-15146.2.patch, 
> HADOOP-15146.3.patch, HADOOP-15146.4.patch, HADOOP-15146.5.patch
>
>
> I can't seem to find any references to {{DataOutputByteBuffer}}; maybe it 
> should be deprecated or simply removed?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15252) Checkstyle version is not compatible with IDEA's checkstyle plugin

2018-02-22 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-15252:
--
Status: Patch Available  (was: Open)

> Checkstyle version is not compatible with IDEA's checkstyle plugin
> --
>
> Key: HADOOP-15252
> URL: https://issues.apache.org/jira/browse/HADOOP-15252
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Major
> Attachments: HADOOP-15252.001.patch, idea_checkstyle_settings.png
>
>
> After upgrading to the latest IDEA, the IDE throws error messages every few 
> minutes, like
> {code:java}
> The Checkstyle rules file could not be parsed.
> SuppressionCommentFilter is not allowed as a child in Checker
> The file has been blacklisted for 60s.{code}
> This is caused by some backward-incompatible changes in the checkstyle source 
> code:
>  [http://checkstyle.sourceforge.net/releasenotes.html]
>  * 8.1: Make SuppressionCommentFilter and SuppressWithNearbyCommentFilter 
> children of TreeWalker.
>  * 8.2: Remove the FileContentsHolder module, as the FileContents object is 
> available for filters on TreeWalker in the TreeWalkerAuditEvent.
> IDEA uses Checkstyle 8.8.
> We should upgrade our checkstyle version to be compatible with IDEA's 
> checkstyle plugin.
>  It is also a good time to upgrade maven-checkstyle-plugin to the brand-new 
> 3.0.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14734) add option to tag DDB table(s) created

2018-02-22 Thread Abraham Fine (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16373543#comment-16373543
 ] 

Abraham Fine commented on HADOOP-14734:
---

[~ste...@apache.org]-
* I think I fixed the checkstyle issue.
* I reverted dynamoDBDocumentClient to dynamoDB.
* {{tagTable}} is called from {{createTable}}.
* I believe I fixed the import ordering.
* Can you expand on "test needs to skip when"?
* I don't think this change will make moving off of {{TestMetadataStore}} much 
harder. I wanted to put this code in that class, but tagging is not supported 
on local DynamoDB instances, so I needed to create a new ITest. Unless I 
missed one, there is no other appropriate place to put this test.

> add option to tag DDB table(s) created
> --
>
> Key: HADOOP-14734
> URL: https://issues.apache.org/jira/browse/HADOOP-14734
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Steve Loughran
>Assignee: Abraham Fine
>Priority: Minor
> Attachments: HADOOP-14734-001.patch, HADOOP-14734-002.patch, 
> HADOOP-14734-003.patch
>
>
> Many organisations have a "no untagged resources" policy; s3guard runs into 
> this when a table is created untagged. If there's a strict "delete untagged 
> resources" policy, the tables will go without warning.
> Proposed: add an option which can be used to declare the tags for a table, 
> and use it at creation time. No need to worry about updating/viewing tags, 
> as the AWS console can do that.
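
A hedged sketch of what tagging at creation could look like; the helper class 
and the tag key/value are assumptions, while the TagResourceRequest call is 
from the AWS SDK v1 DynamoDB API:

{code:java}
import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.model.Tag;
import com.amazonaws.services.dynamodbv2.model.TagResourceRequest;

public final class TagTableSketch {
  // Hypothetical: tag a just-created S3Guard table so it survives strict
  // "delete untagged resources" policies. Key and value are illustrative.
  public static void tagTable(AmazonDynamoDB ddb, String tableArn) {
    ddb.tagResource(new TagResourceRequest()
        .withResourceArn(tableArn)
        .withTags(new Tag().withKey("owner").withValue("data-platform")));
  }
}
{code}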



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14734) add option to tag DDB table(s) created

2018-02-22 Thread Abraham Fine (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14734?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abraham Fine updated HADOOP-14734:
--
Attachment: HADOOP-14734-003.patch

> add option to tag DDB table(s) created
> --
>
> Key: HADOOP-14734
> URL: https://issues.apache.org/jira/browse/HADOOP-14734
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Steve Loughran
>Assignee: Abraham Fine
>Priority: Minor
> Attachments: HADOOP-14734-001.patch, HADOOP-14734-002.patch, 
> HADOOP-14734-003.patch
>
>
> Many organisations have a "no untagged resources" policy; s3guard runs into 
> this when a table is created untagged. If there's a strict "delete untagged 
> resources" policy, the tables will go without warning.
> Proposed: add an option which can be used to declare the tags for a table, 
> and use it at creation time. No need to worry about updating/viewing tags, 
> as the AWS console can do that.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13761) S3Guard: implement retries for DDB failures and throttling; translate exceptions

2018-02-22 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16373534#comment-16373534
 ] 

genericqa commented on HADOOP-13761:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
28s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 7 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
30s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 17s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 13s{color} | {color:orange} hadoop-tools/hadoop-aws: The patch generated 13 
new + 13 unchanged - 1 fixed = 26 total (was 14) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 42s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
42s{color} | {color:red} hadoop-tools/hadoop-aws generated 7 new + 0 unchanged 
- 0 fixed = 7 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
39s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 47m 10s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-tools/hadoop-aws |
|  |  org.apache.hadoop.fs.s3a.InconsistentS3Object is Serializable; consider 
declaring a serialVersionUID  At InconsistentS3Object.java:a serialVersionUID  
At InconsistentS3Object.java:[lines 39-181] |
|  |  The field org.apache.hadoop.fs.s3a.InconsistentS3Object.wrapped is 
transient but isn't set by deserialization  In InconsistentS3Object.java:but 
isn't set by deserialization  In InconsistentS3Object.java |
|  |  Inconsistent synchronization of 
org.apache.hadoop.fs.s3a.S3AInputStream.contentRangeFinish; locked 88% of time  
Unsynchronized access at S3AInputStream.java:88% of time  Unsynchronized access 
at S3AInputStream.java:[line 299] |
|  |  Inconsistent synchronization of 
org.apache.hadoop.fs.s3a.S3AInputStream.nextReadPos; locked 93% of time  
Unsynchronized access at S3AInputStream.java:93% of time  Unsynchronized access 
at S3AInputStream.java:[line 361] |
|  |  Inconsistent synchronization of 
org.apache.hadoop.fs.s3a.S3AInputStream.pos; locked 56% of time  Unsynchronized 
access at S3AInputStream.java:56% of time  Unsynchronized access at 
S3AInputStream.java:[line 244] |
|  |  

[jira] [Commented] (HADOOP-15251) Upgrade surefire version in branch-2

2018-02-22 Thread Gabor Bota (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16373504#comment-16373504
 ] 

Gabor Bota commented on HADOOP-15251:
-

I checked branch-2, and the following appears: 
381e5161479 [Steve Loughran] 2016-10-27 Revert "HADOOP-13514. Upgrade maven 
surefire plugin to 2.19.1. Contributed by Ewan Higgs."
5d80b49602f [Steve Loughran] 2016-10-27 Revert "Addendum patch for HADOOP-13514 
Upgrade maven surefire plugin to 2.19.1. Contributed by Akira Ajisaka."
0c96ceaca9d [Wei-Chiu Chuang] 2016-10-26 Addendum patch for HADOOP-13514 
Upgrade maven surefire plugin to 2.19.1. Contributed by Akira Ajisaka.
6bb23a14b6f [Akira Ajisaka] 2016-10-25 HADOOP-13514. Upgrade maven surefire 
plugin to 2.19.1. Contributed by Ewan Higgs.

So HADOOP-13514 was committed and then reverted. I don't know whether it would 
be a good idea to apply it again.

> Upgrade surefire version in branch-2
> 
>
> Key: HADOOP-15251
> URL: https://issues.apache.org/jira/browse/HADOOP-15251
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Reporter: Chris Douglas
>Priority: Major
> Attachments: HADOOP-15251-branch-2.001.patch
>
>
> Tests in branch-2 are not running reliably in Jenkins, and due to 
> SUREFIRE-524, these are not being cleaned up properly (see HADOOP-15153).
> Upgrading to a more recent version of the surefire plugin will help make the 
> problem easier to address in branch-2



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15251) Upgrade surefire version in branch-2

2018-02-22 Thread Gabor Bota (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16373471#comment-16373471
 ] 

Gabor Bota commented on HADOOP-15251:
-

I'll try to apply HADOOP-13514 to branch-2 then run the integration tests 
against hadoop-aws and hadoop-aws failsafe.
I have not set up an environment yet to run the integration tests, but I'll 
look into it. 

> Upgrade surefire version in branch-2
> 
>
> Key: HADOOP-15251
> URL: https://issues.apache.org/jira/browse/HADOOP-15251
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Reporter: Chris Douglas
>Priority: Major
> Attachments: HADOOP-15251-branch-2.001.patch
>
>
> Tests in branch-2 are not running reliably in Jenkins, and due to 
> SUREFIRE-524, these are not being cleaned up properly (see HADOOP-15153).
> Upgrading to a more recent version of the surefire plugin will help make the 
> problem easier to address in branch-2



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-15251) Upgrade surefire version in branch-2

2018-02-22 Thread Gabor Bota (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16373471#comment-16373471
 ] 

Gabor Bota edited comment on HADOOP-15251 at 2/22/18 9:21 PM:
--

I'll try to apply HADOOP-13514 to branch-2, then run the integration tests 
against hadoop-aws.
I have not set up an environment yet to run the integration tests, but I'll 
look into it. 


was (Author: gabor.bota):
I'll try to apply HADOOP-13514 to branch-2 then run the integration tests 
against hadoop-aws and hadoop-aws failsafe.
I have not set up an environment yet to run the integration tests, but I'll 
look into it. 

> Upgrade surefire version in branch-2
> 
>
> Key: HADOOP-15251
> URL: https://issues.apache.org/jira/browse/HADOOP-15251
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Reporter: Chris Douglas
>Priority: Major
> Attachments: HADOOP-15251-branch-2.001.patch
>
>
> Tests in branch-2 are not running reliably in Jenkins, and due to 
> SUREFIRE-524, these are not being cleaned up properly (see HADOOP-15153).
> Upgrading to a more recent version of the surefire plugin will help make the 
> problem easier to address in branch-2



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13761) S3Guard: implement retries for DDB failures and throttling; translate exceptions

2018-02-22 Thread Aaron Fabbri (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aaron Fabbri updated HADOOP-13761:
--
Attachment: HADOOP-13761-008.patch

> S3Guard: implement retries for DDB failures and throttling; translate 
> exceptions
> 
>
> Key: HADOOP-13761
> URL: https://issues.apache.org/jira/browse/HADOOP-13761
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Aaron Fabbri
>Assignee: Aaron Fabbri
>Priority: Blocker
> Attachments: HADOOP-13761-004-to-005.patch, 
> HADOOP-13761-005-to-006-approx.diff.txt, HADOOP-13761-005.patch, 
> HADOOP-13761-006.patch, HADOOP-13761-007.patch, HADOOP-13761-008.patch, 
> HADOOP-13761.001.patch, HADOOP-13761.002.patch, HADOOP-13761.003.patch, 
> HADOOP-13761.004.patch
>
>
> Following the S3AFileSystem integration patch in HADOOP-13651, we need to add 
> retry logic.
> In HADOOP-13651, I added TODO comments in most of the places retry loops are 
> needed, including:
> - open(path).  If MetadataStore reflects recent create/move of file path, but 
> we fail to read it from S3, retry.
> - delete(path).  If deleteObject() on S3 fails, but MetadataStore shows the 
> file exists, retry.
> - rename(src,dest).  If source path is not visible in S3 yet, retry.
> - listFiles(). Skip for now. Not currently implemented in S3Guard. I will 
> create a separate JIRA for this as it will likely require interface changes 
> (i.e. prefix or subtree scan).
> We may miss some cases initially and we should do failure injection testing 
> to make sure we're covered.  Failure injection tests can be a separate JIRA 
> to make this easier to review.
> We also need basic configuration parameters around retry policy.  There 
> should be a way to specify maximum retry duration, as some applications would 
> prefer to receive an error eventually, than waiting indefinitely.  We should 
> also be keeping statistics when inconsistency is detected and we enter a 
> retry loop.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13761) S3Guard: implement retries for DDB failures and throttling; translate exceptions

2018-02-22 Thread Aaron Fabbri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16373419#comment-16373419
 ] 

Aaron Fabbri commented on HADOOP-13761:
---

Cleaning up checkstyle / findbugs now... new patch soon.

> S3Guard: implement retries for DDB failures and throttling; translate 
> exceptions
> 
>
> Key: HADOOP-13761
> URL: https://issues.apache.org/jira/browse/HADOOP-13761
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0-beta1
>Reporter: Aaron Fabbri
>Assignee: Aaron Fabbri
>Priority: Blocker
> Attachments: HADOOP-13761-004-to-005.patch, 
> HADOOP-13761-005-to-006-approx.diff.txt, HADOOP-13761-005.patch, 
> HADOOP-13761-006.patch, HADOOP-13761-007.patch, HADOOP-13761.001.patch, 
> HADOOP-13761.002.patch, HADOOP-13761.003.patch, HADOOP-13761.004.patch
>
>
> Following the S3AFileSystem integration patch in HADOOP-13651, we need to add 
> retry logic.
> In HADOOP-13651, I added TODO comments in most of the places retry loops are 
> needed, including:
> - open(path).  If MetadataStore reflects recent create/move of file path, but 
> we fail to read it from S3, retry.
> - delete(path).  If deleteObject() on S3 fails, but MetadataStore shows the 
> file exists, retry.
> - rename(src,dest).  If source path is not visible in S3 yet, retry.
> - listFiles(). Skip for now. Not currently implemented in S3Guard. I will 
> create a separate JIRA for this as it will likely require interface changes 
> (i.e. prefix or subtree scan).
> We may miss some cases initially and we should do failure injection testing 
> to make sure we're covered.  Failure injection tests can be a separate JIRA 
> to make this easier to review.
> We also need basic configuration parameters around retry policy.  There 
> should be a way to specify maximum retry duration, as some applications would 
> prefer to receive an error eventually, than waiting indefinitely.  We should 
> also be keeping statistics when inconsistency is detected and we enter a 
> retry loop.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15251) Upgrade surefire version in branch-2

2018-02-22 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16373341#comment-16373341
 ] 

Chris Douglas commented on HADOOP-15251:


bq. So I should try to apply HADOOP-13514 to branch-2 and also include this 
version upgrade?
Yup, sounds good. I looked for surefire JIRAs, but somehow missed HADOOP-13514.

bq. I'd also like to see the results of running the maven integration tests 
against hadoop-aws and hadoop-aws as they use failsafe and depend on property 
passdown
Are these backported to branch-2? Are you set up to run these?

> Upgrade surefire version in branch-2
> 
>
> Key: HADOOP-15251
> URL: https://issues.apache.org/jira/browse/HADOOP-15251
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Reporter: Chris Douglas
>Priority: Major
> Attachments: HADOOP-15251-branch-2.001.patch
>
>
> Tests in branch-2 are not running reliably in Jenkins, and due to 
> SUREFIRE-524, these are not being cleaned up properly (see HADOOP-15153).
> Upgrading to a more recent version of the surefire plugin will help make the 
> problem easier to address in branch-2



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13972) ADLS to support per-store configuration

2018-02-22 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16373283#comment-16373283
 ] 

ASF GitHub Bot commented on HADOOP-13972:
-

Github user steveloughran commented on the issue:

https://github.com/apache/hadoop/pull/339
  
@ssonker can you close this PR now that it's been merged? Thanks


> ADLS to support per-store configuration
> ---
>
> Key: HADOOP-13972
> URL: https://issues.apache.org/jira/browse/HADOOP-13972
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/adl
>Affects Versions: 3.0.0-alpha2
>Reporter: John Zhuge
>Assignee: Sharad Sonker
>Priority: Major
> Fix For: 3.1.0, 2.10.0, 3.0.2
>
>
> Useful when distcp needs to access 2 Data Lake stores with different SPIs.
> Of course, a workaround is to grant the same SPI access permission to both 
> stores, but sometimes it might not be feasible.
> One idea is to embed the store name in the configuration property names, 
> e.g., {{dfs.adls.oauth2..client.id}}. Per-store keys will be consulted 
> first, then fall back to the global keys.
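
A minimal sketch of the fallback lookup described above, assuming a 
store-qualified key pattern (the store placeholder is elided in the issue text, 
so the exact pattern here is illustrative, not the merged implementation):

{code}
import org.apache.hadoop.conf.Configuration;

class PerStoreKeys {
  /** Consult the per-store key first, then fall back to the global key. */
  static String lookup(Configuration conf, String store, String suffix) {
    // e.g. dfs.adls.oauth2.<store>.client.id, then dfs.adls.oauth2.client.id
    String perStore = conf.get("dfs.adls.oauth2." + store + "." + suffix);
    return perStore != null
        ? perStore
        : conf.get("dfs.adls.oauth2." + suffix);
  }
}
{code}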



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15183) S3Guard store becomes inconsistent after partial failure of rename

2018-02-22 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16373230#comment-16373230
 ] 

genericqa commented on HADOOP-15183:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
30s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 18m 
 2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 15m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
14m 54s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
25s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 14m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 14m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
 7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
10m 24s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
29s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
33s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  4m 
22s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
33s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}103m 23s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HADOOP-15183 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12911572/HADOOP-15183-002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 5a4c17d16e17 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 3132709 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14187/testReport/ |
| Max. process+thread count | 1362 (vs. ulimit of 5500) |
| modules | C: 

[jira] [Commented] (HADOOP-15200) Missing DistCpOptions constructor breaks downstream DistCp projects in 3.0

2018-02-22 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16373157#comment-16373157
 ] 

genericqa commented on HADOOP-15200:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 15s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
32s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 12s{color} | {color:orange} hadoop-tools/hadoop-distcp: The patch generated 
1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 28s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 14m 
51s{color} | {color:green} hadoop-distcp in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
24s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 60m  3s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HADOOP-15200 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12911574/HADOOP-15200.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 7a624e4b9c86 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 3132709 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14188/artifact/out/diff-checkstyle-hadoop-tools_hadoop-distcp.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14188/testReport/ |
| Max. process+thread count | 334 (vs. ulimit of 5500) |
| modules | C: hadoop-tools/hadoop-distcp U: hadoop-tools/hadoop-distcp |
| Console output | 

[jira] [Updated] (HADOOP-15200) Missing DistCpOptions constructor breaks downstream DistCp projects in 3.0

2018-02-22 Thread Kuhu Shukla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kuhu Shukla updated HADOOP-15200:
-
Status: Patch Available  (was: Open)

I am not super happy about the copy constructor, which I had to tweak since the 
members of the class are now final. I'd appreciate any comments while I clean 
this up a bit more. A sketch of the underlying API change is below.
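
For context, a sketch of the API change that bites downstream code (the paths 
and the chosen option are illustrative, and this is not the attached patch):

{code}
import java.util.Collections;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.tools.DistCpOptions;

class DistCpOptionsUsage {
  static DistCpOptions build() {
    // 2.x style, removed by HADOOP-14267:
    //   new DistCpOptions(Collections.singletonList(new Path("/src")),
    //       new Path("/dest"));
    // 3.0 style: the options object is immutable, built via the Builder
    return new DistCpOptions.Builder(
        Collections.singletonList(new Path("/src")), new Path("/dest"))
        .withSyncFolder(true)
        .build();
  }
}
{code}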

> Missing DistCpOptions constructor breaks downstream DistCp projects in 3.0
> --
>
> Key: HADOOP-15200
> URL: https://issues.apache.org/jira/browse/HADOOP-15200
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 3.0.0
>Reporter: Kuhu Shukla
>Priority: Critical
> Attachments: HADOOP-15200.001.patch
>
>
> Post HADOOP-14267, the constructor for DistCpOptions was removed, which will 
> break any project using it for a Java-based implementation/usage of DistCp. 
> This JIRA tracks the next steps required to reconcile/fix this 
> incompatibility. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15200) Missing DistCpOptions constructor breaks downstream DistCp projects in 3.0

2018-02-22 Thread Kuhu Shukla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kuhu Shukla updated HADOOP-15200:
-
Attachment: HADOOP-15200.001.patch

> Missing DistCpOptions constructor breaks downstream DistCp projects in 3.0
> --
>
> Key: HADOOP-15200
> URL: https://issues.apache.org/jira/browse/HADOOP-15200
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 3.0.0
>Reporter: Kuhu Shukla
>Priority: Critical
> Attachments: HADOOP-15200.001.patch
>
>
> Post HADOOP-14267, the constructor for DistCpOptions was removed, which will 
> break any project using it for a Java-based implementation/usage of DistCp. 
> This JIRA tracks the next steps required to reconcile/fix this 
> incompatibility. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-15200) Missing DistCpOptions constructor breaks downstream DistCp projects in 3.0

2018-02-22 Thread Kuhu Shukla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kuhu Shukla reassigned HADOOP-15200:


Assignee: Kuhu Shukla

> Missing DistCpOptions constructor breaks downstream DistCp projects in 3.0
> --
>
> Key: HADOOP-15200
> URL: https://issues.apache.org/jira/browse/HADOOP-15200
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools/distcp
>Affects Versions: 3.0.0
>Reporter: Kuhu Shukla
>Assignee: Kuhu Shukla
>Priority: Critical
> Attachments: HADOOP-15200.001.patch
>
>
> Post HADOOP-14267, the constructor for DistCpOptions was removed, which will 
> break any project using it for a Java-based implementation/usage of DistCp. 
> This JIRA tracks the next steps required to reconcile/fix this 
> incompatibility. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15183) S3Guard store becomes inconsistent after partial failure of rename

2018-02-22 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15183:

Status: Patch Available  (was: Open)

Testing: S3A Ireland.

ITestAssumeRole does fail in rename with s3guard turned on, which is what I 
expect: the tombstone marker from the first delete is causing confusion.

But the enhanced delete test isn't picking up the failure yet, even though the 
multidelete exception is being thrown, and all the successfully deleted files 
are no longer there. The metadata shouldn't have been updated yet.

{code}
[ERROR]   
ITestAssumeRole.testRestrictedRenameReadOnlyData:499->executeRenameReadOnlyData:583->assertFileCount:864->Assert.fail:88
 files copied to the destination: expected 11 files in 
s3a://hwdev-steve-london/test/testRestrictedRenameReadOnlyData/renameDest but 
got 10
s3a://hwdev-steve-london/test/testRestrictedRenameReadOnlyData/renameDest/file-1
s3a://hwdev-steve-london/test/testRestrictedRenameReadOnlyData/renameDest/file-10
s3a://hwdev-steve-london/test/testRestrictedRenameReadOnlyData/renameDest/file-2
s3a://hwdev-steve-london/test/testRestrictedRenameReadOnlyData/renameDest/file-3
s3a://hwdev-steve-london/test/testRestrictedRenameReadOnlyData/renameDest/file-4
s3a://hwdev-steve-london/test/testRestrictedRenameReadOnlyData/renameDest/file-5
s3a://hwdev-steve-london/test/testRestrictedRenameReadOnlyData/renameDest/file-6
s3a://hwdev-steve-london/test/testRestrictedRenameReadOnlyData/renameDest/file-7
s3a://hwdev-steve-london/test/testRestrictedRenameReadOnlyData/renameDest/file-8
s3a://hwdev-steve-london/test/testRestrictedRenameReadOnlyData/renameDest/file-9
[ERROR]   
ITestAssumeRole.testRestrictedRenameReadOnlySingleDelete:507->executeRenameReadOnlyData:583->assertFileCount:864->Assert.fail:88
 files copied to the destination: expected 11 files in 
s3a://hwdev-steve-london/test/testRestrictedRenameReadOnlySingleDelete/renameDest
 but got 10
s3a://hwdev-steve-london/test/testRestrictedRenameReadOnlySingleDelete/renameDest/file-1
s3a://hwdev-steve-london/test/testRestrictedRenameReadOnlySingleDelete/renameDest/file-10
s3a://hwdev-steve-london/test/testRestrictedRenameReadOnlySingleDelete/renameDest/file-2
s3a://hwdev-steve-london/test/testRestrictedRenameReadOnlySingleDelete/renameDest/file-3
s3a://hwdev-steve-london/test/testRestrictedRenameReadOnlySingleDelete/renameDest/file-4
s3a://hwdev-steve-london/test/testRestrictedRenameReadOnlySingleDelete/renameDest/file-5
s3a://hwdev-steve-london/test/testRestrictedRenameReadOnlySingleDelete/renameDest/file-6
s3a://hwdev-steve-london/test/testRestrictedRenameReadOnlySingleDelete/renameDest/file-7
s3a://hwdev-steve-london/test/testRestrictedRenameReadOnlySingleDelete/renameDest/file-8
s3a://hwdev-steve-london/test/testRestrictedRenameReadOnlySingleDelete/renameDest/file-9
{code}
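
For anyone reproducing this, the failing sequence from the issue description 
(quoted below), restated as a rough sketch; the bucket and paths are 
illustrative:

{code}
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

class PartialRenameRepro {
  public static void main(String[] args) throws Exception {
    // the caller has read-only access to src/file1...file10
    FileSystem fs = FileSystem.get(new URI("s3a://bucket/"), new Configuration());
    try {
      fs.rename(new Path("/src/file1"), new Path("/dest"));
    } catch (Exception e) {
      // expect AccessDeniedException in the delete phase; dest/file1 exists
    }
    fs.delete(new Path("/dest/file1"), false);  // leaves a tombstone in S3Guard
    try {
      fs.rename(new Path("/src"), new Path("/dest"));  // fails partway
    } catch (Exception e) {
      // expected
    }
    fs.listStatus(new Path("/dest"));  // dest/file1 is hidden by the tombstone
  }
}
{code}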

> S3Guard store becomes inconsistent after partial failure of rename
> --
>
> Key: HADOOP-15183
> URL: https://issues.apache.org/jira/browse/HADOOP-15183
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Blocker
> Attachments: HADOOP-15183-001.patch, HADOOP-15183-002.patch, 
> org.apache.hadoop.fs.s3a.auth.ITestAssumeRole-output.txt
>
>
> If an S3A rename() operation fails partway through, such as when the user 
> doesn't have permissions to delete the source files after copying to the 
> destination, then the s3guard view of the world ends up inconsistent. In 
> particular the sequence
>  (assuming src/file* is a list of files file1...file10, read-only to the caller)
>
> # create file rename src/file1 dest/ ; expect AccessDeniedException in the 
> delete, dest/file1 will exist
> # delete file dest/file1
> # rename src/file* dest/  ; expect failure
> # list dest; you will not see dest/file1
> You will not see file1 in the listing, presumably because it will have a 
> tombstone marker and the update at the end of the rename() didn't take place: 
> the old data is still there.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-15183) S3Guard store becomes inconsistent after partial failure of rename

2018-02-22 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16373049#comment-16373049
 ] 

Steve Loughran edited comment on HADOOP-15183 at 2/22/18 4:43 PM:
--

Patch 002.

Trying to make the s3guard inconsistency visible in a unit test. Failing. Will 
need to either go for 1K+ test files, or allow the page size for delete 
requests to be configured (best).

For setting the page size, two options:
# Add a secret fs.s3a.x-test.delete.page.size. Simple, may get used, may need 
to be maintained.
# Make it a config option of the inconsistent FS client, so only if you switch 
to the inconsistent client do you get to turn it on.

#2 is a bit more convoluted, but I like it. Initially, though, I'll do the 
secret config option.






was (Author: ste...@apache.org):
Patch 003.

Trying to make the s3guard inconsistency visible in a unit test. Failing. Will 
need to either go for 1K+ test files, or allow the page size for delete 
requests to be configured (best).

For setting the page size, two options
# Add a secret fs.s3a.x-test.delete.page.size. Simple, may get used, may need 
to be maintained.
# make it a config option of the inconsistent FS client, so iff you switch to 
the inconsistent client do you get to turn it on.

#2 is a bit more convoluted, but I like it. Initially though I'll do the secret 
config option





> S3Guard store becomes inconsistent after partial failure of rename
> --
>
> Key: HADOOP-15183
> URL: https://issues.apache.org/jira/browse/HADOOP-15183
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Blocker
> Attachments: HADOOP-15183-001.patch, HADOOP-15183-002.patch, 
> org.apache.hadoop.fs.s3a.auth.ITestAssumeRole-output.txt
>
>
> If an S3A rename() operation fails partway through, such as when the user 
> doesn't have permissions to delete the source files after copying to the 
> destination, then the s3guard view of the world ends up inconsistent. In 
> particular the sequence
>  (assuming src/file* is a list of files file1...file10, read-only to the caller)
>
> # create file rename src/file1 dest/ ; expect AccessDeniedException in the 
> delete, dest/file1 will exist
> # delete file dest/file1
> # rename src/file* dest/  ; expect failure
> # list dest; you will not see dest/file1
> You will not see file1 in the listing, presumably because it will have a 
> tombstone marker and the update at the end of the rename() didn't take place: 
> the old data is still there.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15183) S3Guard store becomes inconsistent after partial failure of rename

2018-02-22 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15183?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16373049#comment-16373049
 ] 

Steve Loughran commented on HADOOP-15183:
-

Patch 003.

Trying to make the s3guard inconsistency visible in a unit test. Failing. Will 
need to either go for 1K+ test files, or allow the page size for delete 
requests to be configured (best).

For setting the page size, two options
# Add a secret fs.s3a.x-test.delete.page.size. Simple, may get used, may need 
to be maintained.
# make it a config option of the inconsistent FS client, so iff you switch to 
the inconsistent client do you get to turn it on.

#2 is a bit more convoluted, but I like it. Initially though I'll do the secret 
config option
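
A minimal sketch of option #1 above, assuming a test-only key consulted when 
batching up the multi-object delete (the key and helper are hypothetical until 
a patch lands):

{code}
import org.apache.hadoop.conf.Configuration;

class DeletePaging {
  // hypothetical test-only override; S3 DeleteObjects takes at most 1000 keys
  static final String X_TEST_DELETE_PAGE_SIZE = "fs.s3a.x-test.delete.page.size";
  static final int S3_DELETE_LIMIT = 1000;

  static int deletePageSize(Configuration conf) {
    int size = conf.getInt(X_TEST_DELETE_PAGE_SIZE, S3_DELETE_LIMIT);
    return Math.min(size, S3_DELETE_LIMIT);
  }
}
{code}

With the page size forced down to, say, 4, a handful of test files is enough to 
trigger a partial multi-delete failure, avoiding the 1K+ file alternative.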





> S3Guard store becomes inconsistent after partial failure of rename
> --
>
> Key: HADOOP-15183
> URL: https://issues.apache.org/jira/browse/HADOOP-15183
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Blocker
> Attachments: HADOOP-15183-001.patch, HADOOP-15183-002.patch, 
> org.apache.hadoop.fs.s3a.auth.ITestAssumeRole-output.txt
>
>
> If an S3A rename() operation fails partway through, such as when the user 
> doesn't have permissions to delete the source files after copying to the 
> destination, then the s3guard view of the world ends up inconsistent. In 
> particular the sequence
>  (assuming src/file* is a list of files file1...file10, read-only to the caller)
>
> # create file rename src/file1 dest/ ; expect AccessDeniedException in the 
> delete, dest/file1 will exist
> # delete file dest/file1
> # rename src/file* dest/  ; expect failure
> # list dest; you will not see dest/file1
> You will not see file1 in the listing, presumably because it will have a 
> tombstone marker and the update at the end of the rename() didn't take place: 
> the old data is still there.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15183) S3Guard store becomes inconsistent after partial failure of rename

2018-02-22 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15183:

Status: Open  (was: Patch Available)

> S3Guard store becomes inconsistent after partial failure of rename
> --
>
> Key: HADOOP-15183
> URL: https://issues.apache.org/jira/browse/HADOOP-15183
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Blocker
> Attachments: HADOOP-15183-001.patch, HADOOP-15183-002.patch, 
> org.apache.hadoop.fs.s3a.auth.ITestAssumeRole-output.txt
>
>
> If an S3A rename() operation fails partway through, such as when the user 
> doesn't have permissions to delete the source files after copying to the 
> destination, then the s3guard view of the world ends up inconsistent. In 
> particular the sequence
>  (assuming src/file* is a list of files file1...file10, read-only to the caller)
>
> # create file rename src/file1 dest/ ; expect AccessDeniedException in the 
> delete, dest/file1 will exist
> # delete file dest/file1
> # rename src/file* dest/  ; expect failure
> # list dest; you will not see dest/file1
> You will not see file1 in the listing, presumably because it will have a 
> tombstone marker and the update at the end of the rename() didn't take place: 
> the old data is still there.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15183) S3Guard store becomes inconsistent after partial failure of rename

2018-02-22 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15183:

Attachment: HADOOP-15183-002.patch

> S3Guard store becomes inconsistent after partial failure of rename
> --
>
> Key: HADOOP-15183
> URL: https://issues.apache.org/jira/browse/HADOOP-15183
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Blocker
> Attachments: HADOOP-15183-001.patch, HADOOP-15183-002.patch, 
> org.apache.hadoop.fs.s3a.auth.ITestAssumeRole-output.txt
>
>
> If an S3A rename() operation fails partway through, such as when the user 
> doesn't have permissions to delete the source files after copying to the 
> destination, then the s3guard view of the world ends up inconsistent. In 
> particular the sequence
>  (assuming src/file* is a list of files file1...file10, read-only to the caller)
>
> # create file rename src/file1 dest/ ; expect AccessDeniedException in the 
> delete, dest/file1 will exist
> # delete file dest/file1
> # rename src/file* dest/  ; expect failure
> # list dest; you will not see dest/file1
> You will not see file1 in the listing, presumably because it will have a 
> tombstone marker and the update at the end of the rename() didn't take place: 
> the old data is still there.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15250) Split-DNS MultiHomed Server Network Cluster Network IPC Client Bind Addr Wrong

2018-02-22 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16373043#comment-16373043
 ] 

Steve Loughran commented on HADOOP-15250:
-

bq. These parameters, the two DNS ones, would never work in a split-view DNS 
environment with DNS servers correctly configured to determine the hostname. 
Remember, in our scenario we want all local traffic to the cluster to use the 
cluster network and anything destined for the datacenter networks to use the 
server interfaces. This means you should NOT be binding to a non-routable 
address. 

I guess what we are looking at is that specific situation of "bind to all nics, 
rather than just the specific one whose IP address matches what an nslookup of 
your hostname says it is bonded to". That is different, AFAIK, from the 
multihome support in there right now, which handles multihomed servers but only 
brings up the cluster on one of the set of interfaces.

I think I see the problem now, but like I said, I'm not the one to be reviewing 
changes to Client. I'm also wondering if this situation is replicated 
elsewhere. Specifically, if you want HDFS to be remotely visible, then the DN 
block services need to be exported on both interfaces, YARN apps, etc., etc. If 
so, you don't want whatever fixes are needed to go into IPC Client; doing it in 
NetUtils would be the place for broader adoption.

bq. This also raises the question that anyone putting values into /etc/hosts 
would cause that to bind incorrectly, say to something like 127.0.0.10. 

An [eternal problem|https://wiki.apache.org/hadoop/ConnectionRefused], 
especially on Ubuntu.
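
To make the proposal concrete, a minimal sketch of the bind behaviour being 
requested, gated on an opt-in flag like the attached patch's 
ipc.client.nobind.local.addr (everything else here is illustrative):

{code}
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

class ClientBindSketch {
  static void connect(Socket socket, InetSocketAddress remote,
      InetSocketAddress resolvedLocalHost, boolean noBindLocalAddr,
      int timeout) throws IOException {
    if (noBindLocalAddr) {
      // bind(null): the OS routing table picks the source address/interface
      socket.bind(null);
    } else {
      // current behaviour: pin the source to whatever the local hostname
      // resolves to, which in this split-DNS setup is the non-routable
      // cluster address
      socket.bind(resolvedLocalHost);
    }
    socket.connect(remote, timeout);
  }
}
{code}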

> Split-DNS MultiHomed Server Network Cluster Network IPC Client Bind Addr Wrong
> --
>
> Key: HADOOP-15250
> URL: https://issues.apache.org/jira/browse/HADOOP-15250
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc, net
>Affects Versions: 2.7.3, 2.9.0, 3.0.0
> Environment: Multihome cluster with split DNS and rDNS lookup of 
> localhost returning non-routable IPAddr
>Reporter: Greg Senia
>Priority: Critical
> Attachments: HADOOP-15250.patch
>
>
> We run our Hadoop clusters with two networks attached to each node. These 
> networks are as follows: a server network that is firewalled with firewalld, 
> allowing inbound traffic only for SSH and things like Knox and Hiveserver2 and 
> the HTTP YARN RM/ATS and MR History Server. The second network is the cluster 
> network on the second network interface; this uses jumbo frames and is open, 
> no restrictions, and allows all cluster traffic to flow between nodes. 
>  
> To resolve DNS within the Hadoop Cluster we use DNS Views via BIND so if the 
> traffic is originating from nodes with cluster networks we return the 
> internal DNS record for the nodes. This all works fine with all the 
> multi-homing features added to Hadoop 2.x
>  Some logic around views:
> a. The internal view is used by cluster machines when performing lookups. So 
> hosts on the cluster network should get answers from the internal view in DNS
> b. The external view is used by non-local-cluster machines when performing 
> lookups. So hosts not on the cluster network should get answers from the 
> external view in DNS
>  
> So this brings me to our problem. We created some firewall rules to allow 
> inbound traffic from each cluster's server network to allow distcp to occur. 
> But we noticed a problem almost immediately: when YARN attempted to talk 
> to the Remote Cluster, it was binding outgoing traffic to the cluster network 
> interface, which IS NOT routable. So after researching the code we noticed the 
> following in NetUtils.java and Client.java. 
> Basically, in Client.java it looks as if it takes whatever the hostname is and 
> attempts to bind to whatever the hostname is resolved to. This is not valid 
> in a multi-homed network with one routable interface and one non-routable 
> interface. After reading through the java.net.Socket documentation, it is 
> valid to perform socket.bind(null), which will allow the OS routing table and 
> DNS to send the traffic to the correct interface. I will also attach the 
> network traces and a test patch for the 2.7.x and 3.x code base. I have this 
> test fix below in my Hadoop Test Cluster.
> Client.java:
> /*
>  * Bind the socket to the host specified in the principal name of the
>  * client, to ensure Server matching address of the client connection
>  * to host name in principal passed.
>  */
> InetSocketAddress bindAddr = null;
> if (ticket != null && ticket.hasKerberosCredentials()) {
>   KerberosInfo krbInfo =
>       remoteId.getProtocol().getAnnotation(KerberosInfo.class);
>   if (krbInfo != null) {
>     String principal = ticket.getUserName();
>     String host = 

[jira] [Commented] (HADOOP-15250) Split-DNS MultiHomed Server Network Cluster Network IPC Client Bind Addr Wrong

2018-02-22 Thread Greg Senia (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16372935#comment-16372935
 ] 

Greg Senia commented on HADOOP-15250:
-

These parameters, the two DNS ones, would never work in a split-view DNS 
environment with DNS servers correctly configured to determine the hostname. 
Remember, in our scenario we want all local traffic to the cluster to use the 
cluster network and anything destined for the datacenter networks to use the 
server interfaces. This means you should NOT be binding to a non-routable 
address. This also raises the question that anyone putting values into 
/etc/hosts would cause that to bind incorrectly, say to something like 
127.0.0.10. Hadoop should be relying on the OS for DNS and IP routing 
information like other software stacks in the middleware space do. So I guess 
the question is: is Hadoop going to support this multi-homing configuration, 
which is similar to the SDN/Docker setup and our setup here? Nothing in the HWX 
articles states that it won't:
https://community.hortonworks.com/articles/24277/parameters-for-multi-homing.html


hadoop.security.dns.nameserver

The host name or IP address of the name server (DNS) which a service Node 
should use to determine its own host name for Kerberos Login. Requires 
hadoop.security.dns.interface. Most clusters will not require this setting.

hadoop.security.dns.interface 
 The name of the Network Interface from which the service should determine its 
host name for Kerberos login. e.g. eth2. In a multi-homed environment, the 
setting can be used to affect the _HOST substitution in the service Kerberos 
principal. If this configuration value is not set, the service will use its 
default hostname as returned by 
InetAddress.getLocalHost().getCanonicalHostName(). Most clusters will not 
require this setting.

In regards to hadoop.rpc.protection: shouldn't this be what guards against 
man-in-the-middle attacks, not a null check on whether a hostname has an IP 
associated with it to bind outbound?

hadoop.rpc.protection privacy
 A comma-separated list of protection values for secured sasl connections. 
Possible values are authentication, integrity and privacy. authentication means 
authentication only and no integrity or privacy; integrity implies 
authentication and integrity are enabled; and privacy implies all of 
authentication, integrity and privacy are enabled. 
hadoop.security.saslproperties.resolver.class can be used to override the 
hadoop.rpc.protection for a connection at the server side.

Here are all of our values for binding to support multi-homed networks per the 
documentation. Unfortunately, using the DNS options is not a valid solution 
with our network design. We did our due diligence and spent almost a week 
formulating a solution to this problem; do not just assume we didn't set the 
parameters.

 

core-site.xml:
ipc.client.nobind.local.addr=true
hadoop.rpc.protection=privacy

hdfs-site.xml:
dfs.client.use.datanode.hostname=true
dfs.datanode.use.datanode.hostname=true
dfs.namenode.http-bind-host=0.0.0.0
dfs.namenode.https-bind-host=0.0.0.0
dfs.namenode.rpc-bind-host=0.0.0.0
dfs.namenode.lifeline.rpc-bind-host=0.0.0.0
dfs.namenode.servicerpc-bind-host=0.0.0.0
dfs.datanode.address=0.0.0.0:1019
dfs.datanode.http.address=0.0.0.0:1022
dfs.datanode.https.address=0.0.0.0:50475
dfs.datanode.ipc.address=0.0.0.0:8010
dfs.journalnode.http-address=0.0.0.0:8480
dfs.journalnode.https-address=0.0.0.0:8481
dfs.namenode.http-address.tech.nn1=ha21t51nn.tech.hdp.example.com:50070
dfs.namenode.http-address.tech.nn2=ha21t52nn.tech.hdp.example.com:50070
dfs.namenode.http-address.unit.nn1=ha21d51nn.unit.hdp.example.com:50070
dfs.namenode.http-address.unit.nn2=ha21d52nn.unit.hdp.example.com:50070
dfs.namenode.https-address.tech.nn1=ha21t51nn.tech.hdp.example.com:50470
dfs.namenode.https-address.tech.nn2=ha21t52nn.tech.hdp.example.com:50470
dfs.namenode.lifeline.rpc-address.tech.nn1=ha21t51nn.tech.hdp.example.com:8050
dfs.namenode.lifeline.rpc-address.tech.nn2=ha21t52nn.tech.hdp.example.com:8050
dfs.namenode.rpc-address.tech.nn1=ha21t51nn.tech.hdp.example.com:8020
dfs.namenode.rpc-address.tech.nn2=ha21t52nn.tech.hdp.example.com:8020
dfs.namenode.rpc-address.unit.nn1=ha21d51nn.unit.hdp.example.com:8020
dfs.namenode.rpc-address.unit.nn2=ha21d52nn.unit.hdp.example.com:8020
dfs.namenode.servicerpc-address.tech.nn1=ha21t51nn.tech.hdp.example.com:8040
dfs.namenode.servicerpc-address.tech.nn2=ha21t52nn.tech.hdp.example.com:8040
dfs.namenode.servicerpc-address.unit.nn1=ha21d51nn.unit.hdp.example.com:8040
dfs.namenode.servicerpc-address.unit.nn2=ha21d52nn.unit.hdp.example.com:8040

hbase-site.xml:
hbase.master.ipc.address=0.0.0.0
hbase.regionserver.ipc.address=0.0.0.0
hbase.master.info.bindAddress=0.0.0.0

mapred-site.xml:
mapreduce.jobhistory.bind-host=0.0.0.0
mapreduce.jobhistory.address=ha21t52mn.tech.hdp.example.com:10020

[jira] [Comment Edited] (HADOOP-15250) Split-DNS MultiHomed Server Network Cluster Network IPC Client Bind Addr Wrong

2018-02-22 Thread Axton Grams (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16372896#comment-16372896
 ] 

Axton Grams edited comment on HADOOP-15250 at 2/22/18 3:11 PM:
---

I work with Greg on the same clusters.  To add some color to the DNS/split view 
configuration:
 * DNS is configured with 2 views:
 ** Internal: Used by cluster machines to resolve Hadoop nodes to cluster 
network segment IP
 ** External: Used by non-cluster machines to resolve Hadoop nodes to routable 
network segment IP
 * All nodes with a presence on the cluster network resolve machines to the 
cluster (non-routable) IP address
 * All nodes without a presence on the cluster network resolve machines to the 
routable IP address

We implemented this pattern for the following reasons:
 * We can allow unfettered access (iptables/firewalld) between cluster nodes
 * We use jumbo frames on the cluster network to ease network load

You have to understand that the interface the service binds to depends on the 
origin of the traffic, not on how the server knows itself according to DNS or 
Kerberos.  Different nodes know the server by the same name with different IP 
addresses, depending on whether they have a presence on the cluster network 
segment.  All Hadoop nodes know themselves by the cluster IP address, which is 
non-routable.

This design is compatible with the Linux network stack, DNS view practices, 
multi-homing practices, and all other related technology domains, just not 
Hadoop.

We operate with the following assumptions:
 * The network stack provided by the OS knows how to properly route traffic
 * The information in DNS is properly managed and accurate
 * The hostname matches the Kerberos principal name, but the IP answer is 
different for different clients


was (Author: agrams):
I work with Greg on the same clusters.  To add some color to the DNS/split view 
configuration:
 * DNS is configured with 2 views:
 ** Internal: Used by cluster machines to resolve Hadoop nodes to cluster 
network segment IP
 ** External: Used by non-cluster machines to resolve Hadoop nodes to routable 
network segment IP
 * All nodes with a presence on the cluster network resolve machines to the 
cluster (non-routable) IP address
 * All nodes without a presence on the cluster network resolve machines to the 
routable IP address

We implemented this pattern for the following reasons:
 * We can allow unfettered access (iptables/firewalld) between cluster nodes
 * We use jumbo frames on the cluster network to ease network load

You have to understand that the interface the service binds to is conditional 
depending on the origin of the traffic, not how the server knows itself 
according to DNS or Kerberos.  Different nodes know the server by the same name 
with different IP addresses, depending on whether they have a presence on the 
cluster network segment.  All Hadoop nodes know themselves by the cluster IP 
address, which is non-routable.

This design is compatible with the Linux network stack, DNS view practices, 
multi-homing practices, and all other related technology domains, just not 
Hadoop.

> Split-DNS MultiHomed Server Network Cluster Network IPC Client Bind Addr Wrong
> --
>
> Key: HADOOP-15250
> URL: https://issues.apache.org/jira/browse/HADOOP-15250
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc, net
>Affects Versions: 2.7.3, 2.9.0, 3.0.0
> Environment: Multihome cluster with split DNS and rDNS lookup of 
> localhost returning non-routable IPAddr
>Reporter: Greg Senia
>Priority: Critical
> Attachments: HADOOP-15250.patch
>
>
> We run our Hadoop clusters with two networks attached to each node. These 
> networks are as follows: a server network that is firewalled with 
> firewalld, allowing inbound traffic only for SSH and things like Knox, 
> HiveServer2, and the HTTP YARN RM/ATS and MR History Server; and the 
> cluster network on the second network interface, which uses jumbo frames 
> and is open with no restrictions, allowing all cluster traffic to flow 
> between nodes. 
>  
> To resolve DNS within the Hadoop cluster we use DNS views via BIND, so if 
> the traffic originates from nodes with cluster networks we return the 
> internal DNS record for those nodes. This all works fine with all the 
> multi-homing features added to Hadoop 2.x.
>  Some logic around views:
> a. The internal view is used by cluster machines when performing lookups. So 
> hosts on the cluster network should get answers from the internal view in DNS
> b. The external view is used by non-local-cluster machines when performing 
> lookups. So hosts not on the cluster network should get answers from the 
> external view in 

[jira] [Commented] (HADOOP-15250) Split-DNS MultiHomed Server Network Cluster Network IPC Client Bind Addr Wrong

2018-02-22 Thread Axton Grams (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16372896#comment-16372896
 ] 

Axton Grams commented on HADOOP-15250:
--

I work with Greg on the same clusters.  To add some color to the DNS/split view 
configuration:
 * DNS is configured with 2 views:
 ** Internal: Used by cluster machines to resolve Hadoop nodes to cluster 
network segment IP
 ** External: Used by non-cluster machines to resolve Hadoop nodes to routable 
network segment IP
 * All nodes with a presence on the cluster network resolve machines to the 
cluster (non-routable) IP address
 * All nodes without a presence on the cluster network resolve machines to the 
routable IP address

We implemented this pattern for the following reasons:
 * We can allow unfettered access (iptables/firewalld) between cluster nodes
 * We use jumbo frames on the cluster network to ease network load

The interface the service should bind to depends on the origin of the 
traffic, not on how the server knows itself according to DNS or Kerberos. 
Different nodes know the server by the same name but with different IP 
addresses, depending on whether they have a presence on the cluster network 
segment. All Hadoop nodes know themselves by the cluster IP address, which is 
non-routable.

This design is compatible with the Linux network stack, DNS view practices, 
multi-homing practices, and all other related technology domains, just not 
Hadoop.

> Split-DNS MultiHomed Server Network Cluster Network IPC Client Bind Addr Wrong
> --
>
> Key: HADOOP-15250
> URL: https://issues.apache.org/jira/browse/HADOOP-15250
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc, net
>Affects Versions: 2.7.3, 2.9.0, 3.0.0
> Environment: Multihome cluster with split DNS and rDNS lookup of 
> localhost returning non-routable IPAddr
>Reporter: Greg Senia
>Priority: Critical
> Attachments: HADOOP-15250.patch
>
>
> We run our Hadoop clusters with two networks attached to each node. These 
> networks are as follows: a server network that is firewalled with 
> firewalld, allowing inbound traffic only for SSH and things like Knox, 
> HiveServer2, and the HTTP YARN RM/ATS and MR History Server; and the 
> cluster network on the second network interface, which uses jumbo frames 
> and is open with no restrictions, allowing all cluster traffic to flow 
> between nodes. 
>  
> To resolve DNS within the Hadoop cluster we use DNS views via BIND, so if 
> the traffic originates from nodes with cluster networks we return the 
> internal DNS record for those nodes. This all works fine with all the 
> multi-homing features added to Hadoop 2.x.
>  Some logic around views:
> a. The internal view is used by cluster machines when performing lookups. So 
> hosts on the cluster network should get answers from the internal view in DNS
> b. The external view is used by non-local-cluster machines when performing 
> lookups. So hosts not on the cluster network should get answers from the 
> external view in DNS
>  
> So this brings me to our problem. We created some firewall rules to allow 
> inbound traffic from each cluster's server network so that distcp could 
> occur, but we noticed almost immediately that when YARN attempted to talk 
> to the remote cluster, it was binding outgoing traffic to the cluster 
> network interface, which IS NOT routable. After researching the code, we 
> noticed the following in NetUtils.java and Client.java. 
> Basically, Client.java takes whatever the hostname is and attempts to bind 
> to whatever that hostname resolves to. This is not valid in a multi-homed 
> network with one routable interface and one non-routable interface. After 
> reading through the java.net.Socket documentation, it is valid to perform 
> socket.bind(null), which allows the OS routing table and DNS to send the 
> traffic out the correct interface. I will also attach the network traces 
> and a test patch for the 2.7.x and 3.x code bases. I have this test fix 
> below in my Hadoop test cluster.
> Client.java:
>       
> {code:java}
> /*
>  * Bind the socket to the host specified in the principal name of the
>  * client, to ensure Server matching address of the client connection
>  * to host name in principal passed.
>  */
> InetSocketAddress bindAddr = null;
> if (ticket != null && ticket.hasKerberosCredentials()) {
>   KerberosInfo krbInfo =
>       remoteId.getProtocol().getAnnotation(KerberosInfo.class);
>   if (krbInfo != null) {
>     String principal = ticket.getUserName();
>     String host = SecurityUtil.getHostFromPrincipal(principal);
>     // If host name is a valid local address then bind socket to it
> {code}

[jira] [Commented] (HADOOP-15250) Split-DNS MultiHomed Server Network Cluster Network IPC Client Bind Addr Wrong

2018-02-22 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16372858#comment-16372858
 ] 

Steve Loughran commented on HADOOP-15250:
-

I've cut the Java source code from the comment. Please don't put Oracle 
source in here, as it complicates the provenance of patches. Just assume 
everyone has a copy of the source; all you need to put in is a ref to the 
class+method. thx

> Split-DNS MultiHomed Server Network Cluster Network IPC Client Bind Addr Wrong
> --
>
> Key: HADOOP-15250
> URL: https://issues.apache.org/jira/browse/HADOOP-15250
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc, net
>Affects Versions: 2.7.3, 2.9.0, 3.0.0
> Environment: Multihome cluster with split DNS and rDNS lookup of 
> localhost returning non-routable IPAddr
>Reporter: Greg Senia
>Priority: Critical
> Attachments: HADOOP-15250.patch
>
>
> We run our Hadoop clusters with two networks attached to each node. These 
> networks are as follows: a server network that is firewalled with 
> firewalld, allowing inbound traffic only for SSH and things like Knox, 
> HiveServer2, and the HTTP YARN RM/ATS and MR History Server; and the 
> cluster network on the second network interface, which uses jumbo frames 
> and is open with no restrictions, allowing all cluster traffic to flow 
> between nodes. 
>  
> To resolve DNS within the Hadoop cluster we use DNS views via BIND, so if 
> the traffic originates from nodes with cluster networks we return the 
> internal DNS record for those nodes. This all works fine with all the 
> multi-homing features added to Hadoop 2.x.
>  Some logic around views:
> a. The internal view is used by cluster machines when performing lookups. So 
> hosts on the cluster network should get answers from the internal view in DNS
> b. The external view is used by non-local-cluster machines when performing 
> lookups. So hosts not on the cluster network should get answers from the 
> external view in DNS
>  
> So this brings me to our problem. We created some firewall rules to allow 
> inbound traffic from each cluster's server network so that distcp could 
> occur, but we noticed almost immediately that when YARN attempted to talk 
> to the remote cluster, it was binding outgoing traffic to the cluster 
> network interface, which IS NOT routable. After researching the code, we 
> noticed the following in NetUtils.java and Client.java. 
> Basically, Client.java takes whatever the hostname is and attempts to bind 
> to whatever that hostname resolves to. This is not valid in a multi-homed 
> network with one routable interface and one non-routable interface. After 
> reading through the java.net.Socket documentation, it is valid to perform 
> socket.bind(null), which allows the OS routing table and DNS to send the 
> traffic out the correct interface. I will also attach the network traces 
> and a test patch for the 2.7.x and 3.x code bases. I have this test fix 
> below in my Hadoop test cluster.
> Client.java:
>       
> {code:java}
> /*
>  * Bind the socket to the host specified in the principal name of the
>  * client, to ensure Server matching address of the client connection
>  * to host name in principal passed.
>  */
> InetSocketAddress bindAddr = null;
> if (ticket != null && ticket.hasKerberosCredentials()) {
>   KerberosInfo krbInfo =
>       remoteId.getProtocol().getAnnotation(KerberosInfo.class);
>   if (krbInfo != null) {
>     String principal = ticket.getUserName();
>     String host = SecurityUtil.getHostFromPrincipal(principal);
>     // If host name is a valid local address then bind socket to it
>     InetAddress localAddr = NetUtils.getLocalInetAddress(host);
>     if (localAddr != null) {
>       this.socket.setReuseAddress(true);
>       if (LOG.isDebugEnabled()) {
>         LOG.debug("Binding " + principal + " to " + localAddr);
>       }
>       bindAddr = new InetSocketAddress(localAddr, 0);
>     }
>   }
> }
> {code}
>  
> So in my Hadoop 2.7.x Cluster I made the following changes and traffic flows 
> correctly out the correct interfaces:
>  
> diff --git 
> a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
>  
> b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
> index e1be271..c5b4a42 100644
> --- 
> a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
> +++ 
> b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
> @@ -305,6 +305,9 @@
>    public 

[jira] [Commented] (HADOOP-15252) Checkstyle version is not compatible with IDEA's checkstyle plugin

2018-02-22 Thread Zsolt Venczel (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16372854#comment-16372854
 ] 

Zsolt Venczel commented on HADOOP-15252:


While I agree on such an upgrade, the IDEA checkstyle plugin can handle the 
old version if you explicitly provide the version number: 
!idea_checkstyle_settings.png!

There is one odd behavior we noticed with IDEA though: due to a caching 
issue, you should set the Checkstyle version and hit OK, then reopen the 
settings and add the config file.

> Checkstyle version is not compatible with IDEA's checkstyle plugin
> --
>
> Key: HADOOP-15252
> URL: https://issues.apache.org/jira/browse/HADOOP-15252
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Major
> Attachments: HADOOP-15252.001.patch, idea_checkstyle_settings.png
>
>
> After upgrading to the latest IDEA the IDE throws error messages in every few 
> minutes like
> {code:java}
> The Checkstyle rules file could not be parsed.
> SuppressionCommentFilter is not allowed as a child in Checker
> The file has been blacklisted for 60s.{code}
> This is caused by some backward incompatible changes in checkstyle source 
> code:
>  [http://checkstyle.sourceforge.net/releasenotes.html]
>  * 8.1: Make SuppressionCommentFilter and SuppressWithNearbyCommentFilter 
> children of TreeWalker.
>  * 8.2: remove FileContentsHolder module as FileContents object is available 
> for filters on TreeWalker in TreeWalkerAudit Event.
> IDEA uses checkstyle 8.8
> We should upgrade our checkstyle version to be compatible with IDEA's 
> checkstyle plugin.
>  Also it's a good time to upgrade maven-checkstyle-plugin as well to brand 
> new 3.0.






[jira] [Commented] (HADOOP-15250) Split-DNS MultiHomed Server Network Cluster Network IPC Client Bind Addr Wrong

2018-02-22 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16372853#comment-16372853
 ] 

Steve Loughran commented on HADOOP-15250:
-

OK, it's specifically about split-DNS...subtly different than just multihomed. 
Added to text and env.

Have you tried setting "hadoop.security.dns.nameserver" to the IPAddr of the 
specific DNS server you want to use? That should be how the name server used 
to tell your machine its own IP address is chosen when there is more than 
one DNS server.
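
For reference, a minimal sketch of setting it programmatically (the resolver 
address is purely illustrative):
{code:java}
import org.apache.hadoop.conf.Configuration;

public class DnsNameserverExample {
  public static void main(String[] args) {
    // Point the node's own host-name resolution at a specific resolver.
    Configuration conf = new Configuration();
    conf.set("hadoop.security.dns.nameserver", "10.0.0.53");
    System.out.println(conf.get("hadoop.security.dns.nameserver"));
  }
}
{code}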

I'm going to point everyone at [multihomed 
HDFS|https://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-hdfs/HdfsMultihoming.html]
 and [Multihome 
YARN|https://hortonworks.com/blog/multihoming-on-hadoop-yarn-clusters/]. Before 
worrying about changes to the code (with review, issues of testing, 
backporting, etc.), it's important to make sure that all the documented 
options have been exhausted.

> Split-DNS MultiHomed Server Network Cluster Network IPC Client Bind Addr Wrong
> --
>
> Key: HADOOP-15250
> URL: https://issues.apache.org/jira/browse/HADOOP-15250
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc, net
>Affects Versions: 2.7.3, 2.9.0, 3.0.0
> Environment: Multihome cluster with split DNS and rDNS lookup of 
> localhost returning non-routable IPAddr
>Reporter: Greg Senia
>Priority: Critical
> Attachments: HADOOP-15250.patch
>
>
> We run our Hadoop clusters with two networks attached to each node. These 
> networks are as follows: a server network that is firewalled with 
> firewalld, allowing inbound traffic only for SSH and things like Knox, 
> HiveServer2, and the HTTP YARN RM/ATS and MR History Server; and the 
> cluster network on the second network interface, which uses jumbo frames 
> and is open with no restrictions, allowing all cluster traffic to flow 
> between nodes. 
>  
> To resolve DNS within the Hadoop cluster we use DNS views via BIND, so if 
> the traffic originates from nodes with cluster networks we return the 
> internal DNS record for those nodes. This all works fine with all the 
> multi-homing features added to Hadoop 2.x.
>  Some logic around views:
> a. The internal view is used by cluster machines when performing lookups. So 
> hosts on the cluster network should get answers from the internal view in DNS
> b. The external view is used by non-local-cluster machines when performing 
> lookups. So hosts not on the cluster network should get answers from the 
> external view in DNS
>  
> So this brings me to our problem. We created some firewall rules to allow 
> inbound traffic from each cluster's server network so that distcp could 
> occur, but we noticed almost immediately that when YARN attempted to talk 
> to the remote cluster, it was binding outgoing traffic to the cluster 
> network interface, which IS NOT routable. After researching the code, we 
> noticed the following in NetUtils.java and Client.java. 
> Basically, Client.java takes whatever the hostname is and attempts to bind 
> to whatever that hostname resolves to. This is not valid in a multi-homed 
> network with one routable interface and one non-routable interface. After 
> reading through the java.net.Socket documentation, it is valid to perform 
> socket.bind(null), which allows the OS routing table and DNS to send the 
> traffic out the correct interface. I will also attach the network traces 
> and a test patch for the 2.7.x and 3.x code bases. I have this test fix 
> below in my Hadoop test cluster.
> Client.java:
>       
> {code:java}
> /*
>  * Bind the socket to the host specified in the principal name of the
>  * client, to ensure Server matching address of the client connection
>  * to host name in principal passed.
>  */
> InetSocketAddress bindAddr = null;
> if (ticket != null && ticket.hasKerberosCredentials()) {
>   KerberosInfo krbInfo =
>       remoteId.getProtocol().getAnnotation(KerberosInfo.class);
>   if (krbInfo != null) {
>     String principal = ticket.getUserName();
>     String host = SecurityUtil.getHostFromPrincipal(principal);
>     // If host name is a valid local address then bind socket to it
>     InetAddress localAddr = NetUtils.getLocalInetAddress(host);
>     if (localAddr != null) {
>       this.socket.setReuseAddress(true);
>       if (LOG.isDebugEnabled()) {
>         LOG.debug("Binding " + principal + " to " + localAddr);
>       }
>       bindAddr = new InetSocketAddress(localAddr, 0);
>     }
>   }
> }
> {code}
>  
> So in my Hadoop 2.7.x Cluster I made the following changes and traffic flows 
> correctly out the correct interfaces:

[jira] [Updated] (HADOOP-15252) Checkstyle version is not compatible with IDEA's checkstyle plugin

2018-02-22 Thread Zsolt Venczel (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zsolt Venczel updated HADOOP-15252:
---
Attachment: idea_checkstyle_settings.png

> Checkstyle version is not compatible with IDEA's checkstyle plugin
> --
>
> Key: HADOOP-15252
> URL: https://issues.apache.org/jira/browse/HADOOP-15252
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Major
> Attachments: HADOOP-15252.001.patch, idea_checkstyle_settings.png
>
>
> After upgrading to the latest IDEA the IDE throws error messages in every few 
> minutes like
> {code:java}
> The Checkstyle rules file could not be parsed.
> SuppressionCommentFilter is not allowed as a child in Checker
> The file has been blacklisted for 60s.{code}
> This is caused by some backward incompatible changes in checkstyle source 
> code:
>  [http://checkstyle.sourceforge.net/releasenotes.html]
>  * 8.1: Make SuppressionCommentFilter and SuppressWithNearbyCommentFilter 
> children of TreeWalker.
>  * 8.2: remove FileContentsHolder module as FileContents object is available 
> for filters on TreeWalker in TreeWalkerAudit Event.
> IDEA uses checkstyle 8.8
> We should upgrade our checkstyle version to be compatible with IDEA's 
> checkstyle plugin.
>  Also it's a good time to upgrade maven-checkstyle-plugin as well to brand 
> new 3.0.






[jira] [Comment Edited] (HADOOP-15250) Split-DNS MultiHomed Server Network Cluster Network IPC Client Bind Addr Wrong

2018-02-22 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16372818#comment-16372818
 ] 

Steve Loughran edited comment on HADOOP-15250 at 2/22/18 2:16 PM:
--

[~ste...@apache.org] I think that by binding to what is resolved by DNS you 
inherently break the ability to do multi-homing with a routable and a 
non-routable network where you have split-view DNS, as in our case. We 
actually ran this design up the support chain of our Hadoop vendor two years 
ago and it was passed as being just fine. The thought with split-view DNS is 
as follows: the server interface is resolved to 
ha21d52mn.unit.hdp.example.com outside the cluster, while inside the cluster 
ha21d52mn.unit.hdp.newyorklife.com is resolved to the cluster network's 
interface; this is why we went with DNS split views, to support multi-homing 
correctly. If this functionality is NOT supported, then the Hadoop project 
should remove the multi-homing features, as things going to remote clusters 
will not work, as shown by my trace/logs above. Our Unit/Dev cluster network 
is not routable to our Tech/Test cluster network, so traffic has to go out 
the server interfaces to get to another cluster; hence the split-view DNS is 
valid, and a fix along these lines should at least allow a flag, like the 
server components have, to bind to 0.0.0.0. I do see that HADOOP-7215 is what 
introduced this, but I don't think multi-homing was thought about when that 
code was implemented, and the code really only checks for nulls. So if this 
is not the right place, then a solution needs to be found for how to allow a 
client to bind without using a specific IP/hostname, as that is not a valid 
method on a multi-homed network. Also, Hadoop traffic destined for other 
hosts in the cluster would go via the cluster network, which allows use of 
jumbo frames since it is not routable.

When my co-workers and I decided to test this scenario out, we modified the 
code to allow traffic to bind to a valid local address depending on where the 
traffic is destined; this patch is working in our tech/test cluster. As the 
java.net.Socket doc shows, you can call bind and it will bind to a valid 
local address. Attempting to bind outside of the OS opens up risks like the 
one we are hitting, with distcp attempting to bind to an invalid source 
address that is non-routable.

[https://docs.oracle.com/javase/8/docs/api/java/net/Socket.html#bind-java.net.SocketAddress-]



was (Author: gss2002):
[~ste...@apache.org] I think by binding to what is resolved by DNS you 
inherently break the ability to do Multi-homing with a routable and 
non-routable networks where you have split view DNS as in our case. We actually 
ran this design two years ago up the support chain of our Hadoop Vendor and was 
passed as being just fine. So the thought with splitview DNS is as follows: the 
server interface is resolved to ha21d52mn.unit.hdp.example.com when outside the 
cluster when inside the cluster ha21d52mn.unit.hdp.newyorklife.com is resolved 
to the cluster networks interface this was why we went with the DNS split views 
to support multi-homing correctly. If this functionality is NOT supported than 
Hadoop Project should remove the multi-homing features as things going to 
remote clusters will not work as shown by my trace/logs above. As Unit/Dev 
Cluster Network is not routable to our Tech/Test Cluster Network. So traffic 
would have to go out the server interfaces to get to another cluster hence why 
the splitview DNS is valid and a fix along these lines should at least allow a 
flag like the server components do to bind to 0.0.0.0. I do see HADOOP-7215 is 
what introduced this but I don't think multi-homing was thought about when this 
code was implemented and this code is really is only checking for null's.  So 
if this is not the right place than I think a solution needs to be determined 
to determine how to allow a Client to bind without using a specific IP/hostname 
as that is not a valid method with a multi-homed network. Also hadoop traffic 
destined for other hosts in the cluster would go via the cluster network and 
allow from use of jumbo frames as its not routable.

So when my co-workers and I decided to test this scenario out we modifed the 
code to allow traffic to bind to a valid local address depending on where the 
traffic needs to be destined this patch is working in our tech/test cluster. As 
the java.net.Socket doc shows you can run bind and it will bind to a valid 
local address. I mean attempting to bind outside of the OS opens up risks like 
we are hitting with distcp attempting to bind to an invalid source address that 
is non-routable

[https://docs.oracle.com/javase/8/docs/api/java/net/Socket.html#bind-java.net.SocketAddress-]

 

public void bind(


[jira] [Updated] (HADOOP-15250) Split-DNS MultiHomed Server Network Cluster Network IPC Client Bind Addr Wrong

2018-02-22 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15250?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15250:

Environment: Multihome cluster with split DNS and rDNS lookup of localhost 
returning non-routable IPAddr

> Split-DNS MultiHomed Server Network Cluster Network IPC Client Bind Addr Wrong
> --
>
> Key: HADOOP-15250
> URL: https://issues.apache.org/jira/browse/HADOOP-15250
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc, net
>Affects Versions: 2.7.3, 2.9.0, 3.0.0
> Environment: Multihome cluster with split DNS and rDNS lookup of 
> localhost returning non-routable IPAddr
>Reporter: Greg Senia
>Priority: Critical
> Attachments: HADOOP-15250.patch
>
>
> We run our Hadoop clusters with two networks attached to each node. These 
> networks are as follows: a server network that is firewalled with 
> firewalld, allowing inbound traffic only for SSH and things like Knox, 
> HiveServer2, and the HTTP YARN RM/ATS and MR History Server; and the 
> cluster network on the second network interface, which uses jumbo frames 
> and is open with no restrictions, allowing all cluster traffic to flow 
> between nodes. 
>  
> To resolve DNS within the Hadoop cluster we use DNS views via BIND, so if 
> the traffic originates from nodes with cluster networks we return the 
> internal DNS record for those nodes. This all works fine with all the 
> multi-homing features added to Hadoop 2.x.
>  Some logic around views:
> a. The internal view is used by cluster machines when performing lookups. So 
> hosts on the cluster network should get answers from the internal view in DNS
> b. The external view is used by non-local-cluster machines when performing 
> lookups. So hosts not on the cluster network should get answers from the 
> external view in DNS
>  
> So this brings me to our problem. We created some firewall rules to allow 
> inbound traffic from each cluster's server network so that distcp could 
> occur, but we noticed almost immediately that when YARN attempted to talk 
> to the remote cluster, it was binding outgoing traffic to the cluster 
> network interface, which IS NOT routable. After researching the code, we 
> noticed the following in NetUtils.java and Client.java. 
> Basically, Client.java takes whatever the hostname is and attempts to bind 
> to whatever that hostname resolves to. This is not valid in a multi-homed 
> network with one routable interface and one non-routable interface. After 
> reading through the java.net.Socket documentation, it is valid to perform 
> socket.bind(null), which allows the OS routing table and DNS to send the 
> traffic out the correct interface. I will also attach the network traces 
> and a test patch for the 2.7.x and 3.x code bases. I have this test fix 
> below in my Hadoop test cluster.
> Client.java:
>       
> {code:java}
> /*
>  * Bind the socket to the host specified in the principal name of the
>  * client, to ensure Server matching address of the client connection
>  * to host name in principal passed.
>  */
> InetSocketAddress bindAddr = null;
> if (ticket != null && ticket.hasKerberosCredentials()) {
>   KerberosInfo krbInfo =
>       remoteId.getProtocol().getAnnotation(KerberosInfo.class);
>   if (krbInfo != null) {
>     String principal = ticket.getUserName();
>     String host = SecurityUtil.getHostFromPrincipal(principal);
>     // If host name is a valid local address then bind socket to it
>     InetAddress localAddr = NetUtils.getLocalInetAddress(host);
>     if (localAddr != null) {
>       this.socket.setReuseAddress(true);
>       if (LOG.isDebugEnabled()) {
>         LOG.debug("Binding " + principal + " to " + localAddr);
>       }
>       bindAddr = new InetSocketAddress(localAddr, 0);
>     }
>   }
> }
> {code}
>  
> So in my Hadoop 2.7.x Cluster I made the following changes and traffic flows 
> correctly out the correct interfaces:
>  
> diff --git 
> a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
>  
> b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
> index e1be271..c5b4a42 100644
> --- 
> a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
> +++ 
> b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
> @@ -305,6 +305,9 @@
>    public static final String  IPC_CLIENT_FALLBACK_TO_SIMPLE_AUTH_ALLOWED_KEY 
> = "ipc.client.fallback-to-simple-auth-allowed";
>    public static final boolean 
> 

[jira] [Commented] (HADOOP-15250) Split-DNS MultiHomed Server Network Cluster Network IPC Client Bind Addr Wrong

2018-02-22 Thread Shane Kumpf (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16372838#comment-16372838
 ] 

Shane Kumpf commented on HADOOP-15250:
--

[~gss2002] Thanks for opening this. It's comical that this was opened 
yesterday, considering I spent a good portion of yesterday trying to work 
around this limitation. In my case I was trying to run a HDFS client in a 
Docker container configured to use an overlay network. This HDFS client is 
trying to talk to a non-containerized/bare metal secure HDFS cluster.

The reason I hit this is that Docker's overlay network uses two network 
interfaces; the overlay network for cross container communications 
(10.0.0.X/eth0) and a NAT'd bridge (172.18.0.0/eth1) for egress out of the 
container. When using an overlay network, Docker runs an embedded DNS server 
with limited configuration capabilities. Docker also manages /etc/hosts and it 
is populated with two entries for the single hostname:
{code:java}
10.0.0.4 hadoop-client.foo.site
172.18.0.4 hadoop-client.foo.site
{code}
The {{NetUtils.getLocalInetAddress(host)}} call will always return 10.0.0.4 in 
this case because it is in the first /etc/hosts entry, but all traffic routes 
through 172.18.0.4 - which results in the issue called out here. Modifying 
/etc/hosts isn't an option.

Instead of avoiding binding altogether, I was thinking we could have a 
setting that lets us specify the routable interface (eth1 in my case), use 
{{DNS.getIPsAsInetAddressList(bindInterface, false)}} to obtain the IP of 
that interface, and then bind to that IP. Is your interface name consistent 
across nodes?
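
A rough, JDK-only sketch of that idea, assuming an interface name of eth1 
(Hadoop's {{DNS}} utility would wrap something similar):
{code:java}
import java.net.InetAddress;
import java.net.NetworkInterface;
import java.util.Enumeration;

public class IfaceAddrSketch {
  public static void main(String[] args) throws Exception {
    // Resolve a configured interface name to its addresses; the client
    // could then bind its socket to one of these instead of whatever the
    // hostname resolves to.
    NetworkInterface nif = NetworkInterface.getByName("eth1");
    Enumeration<InetAddress> addrs = nif.getInetAddresses();
    while (addrs.hasMoreElements()) {
      System.out.println(addrs.nextElement().getHostAddress());
    }
  }
}
{code}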

I'm not sure if that approach satisfies the security requirements to avoid 
MITM, but SDN usage continues to grow, resulting in very non-traditional 
network deployments and a need for more flexibility here (assuming it can be 
done safely).

> Split-DNS MultiHomed Server Network Cluster Network IPC Client Bind Addr Wrong
> --
>
> Key: HADOOP-15250
> URL: https://issues.apache.org/jira/browse/HADOOP-15250
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc, net
>Affects Versions: 2.7.3, 2.9.0, 3.0.0
> Environment: Multihome cluster with split DNS and rDNS lookup of 
> localhost returning non-routable IPAddr
>Reporter: Greg Senia
>Priority: Critical
> Attachments: HADOOP-15250.patch
>
>
> We run our Hadoop clusters with two networks attached to each node. These 
> networks are as follows: a server network that is firewalled with 
> firewalld, allowing inbound traffic only for SSH and things like Knox, 
> HiveServer2, and the HTTP YARN RM/ATS and MR History Server; and the 
> cluster network on the second network interface, which uses jumbo frames 
> and is open with no restrictions, allowing all cluster traffic to flow 
> between nodes. 
>  
> To resolve DNS within the Hadoop cluster we use DNS views via BIND, so if 
> the traffic originates from nodes with cluster networks we return the 
> internal DNS record for those nodes. This all works fine with all the 
> multi-homing features added to Hadoop 2.x.
>  Some logic around views:
> a. The internal view is used by cluster machines when performing lookups. So 
> hosts on the cluster network should get answers from the internal view in DNS
> b. The external view is used by non-local-cluster machines when performing 
> lookups. So hosts not on the cluster network should get answers from the 
> external view in DNS
>  
> So this brings me to our problem. We created some firewall rules to allow 
> inbound traffic from each cluster's server network so that distcp could 
> occur, but we noticed almost immediately that when YARN attempted to talk 
> to the remote cluster, it was binding outgoing traffic to the cluster 
> network interface, which IS NOT routable. After researching the code, we 
> noticed the following in NetUtils.java and Client.java. 
> Basically, Client.java takes whatever the hostname is and attempts to bind 
> to whatever that hostname resolves to. This is not valid in a multi-homed 
> network with one routable interface and one non-routable interface. After 
> reading through the java.net.Socket documentation, it is valid to perform 
> socket.bind(null), which allows the OS routing table and DNS to send the 
> traffic out the correct interface. I will also attach the network traces 
> and a test patch for the 2.7.x and 3.x code bases. I have this test fix 
> below in my Hadoop test cluster.
> Client.java:
>       
> {code:java}
> /*
>  * Bind the socket to the host specified in the principal name of the
>  * client, to ensure Server matching address of the client connection
>  * to host name in principal passed.
>  */
> {code}

[jira] [Updated] (HADOOP-15250) Split-DNS MultiHomed Server Network Cluster Network IPC Client Bind Addr Wrong

2018-02-22 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15250?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-15250:

Summary: Split-DNS MultiHomed Server Network Cluster Network IPC Client 
Bind Addr Wrong  (was: MultiHomed Server Network Cluster Network IPC Client 
Bind Addr Wrong)

> Split-DNS MultiHomed Server Network Cluster Network IPC Client Bind Addr Wrong
> --
>
> Key: HADOOP-15250
> URL: https://issues.apache.org/jira/browse/HADOOP-15250
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc, net
>Affects Versions: 2.7.3, 2.9.0, 3.0.0
>Reporter: Greg Senia
>Priority: Critical
> Attachments: HADOOP-15250.patch
>
>
> We run our Hadoop clusters with two networks attached to each node. These 
> networks are as follows: a server network that is firewalled with 
> firewalld, allowing inbound traffic only for SSH and things like Knox, 
> HiveServer2, and the HTTP YARN RM/ATS and MR History Server; and the 
> cluster network on the second network interface, which uses jumbo frames 
> and is open with no restrictions, allowing all cluster traffic to flow 
> between nodes. 
>  
> To resolve DNS within the Hadoop cluster we use DNS views via BIND, so if 
> the traffic originates from nodes with cluster networks we return the 
> internal DNS record for those nodes. This all works fine with all the 
> multi-homing features added to Hadoop 2.x.
>  Some logic around views:
> a. The internal view is used by cluster machines when performing lookups. So 
> hosts on the cluster network should get answers from the internal view in DNS
> b. The external view is used by non-local-cluster machines when performing 
> lookups. So hosts not on the cluster network should get answers from the 
> external view in DNS
>  
> So this brings me to our problem. We created some firewall rules to allow 
> inbound traffic from each cluster's server network so that distcp could 
> occur, but we noticed almost immediately that when YARN attempted to talk 
> to the remote cluster, it was binding outgoing traffic to the cluster 
> network interface, which IS NOT routable. After researching the code, we 
> noticed the following in NetUtils.java and Client.java. 
> Basically, Client.java takes whatever the hostname is and attempts to bind 
> to whatever that hostname resolves to. This is not valid in a multi-homed 
> network with one routable interface and one non-routable interface. After 
> reading through the java.net.Socket documentation, it is valid to perform 
> socket.bind(null), which allows the OS routing table and DNS to send the 
> traffic out the correct interface. I will also attach the network traces 
> and a test patch for the 2.7.x and 3.x code bases. I have this test fix 
> below in my Hadoop test cluster.
> Client.java:
>       
> {code:java}
> /*
>  * Bind the socket to the host specified in the principal name of the
>  * client, to ensure Server matching address of the client connection
>  * to host name in principal passed.
>  */
> InetSocketAddress bindAddr = null;
> if (ticket != null && ticket.hasKerberosCredentials()) {
>   KerberosInfo krbInfo =
>       remoteId.getProtocol().getAnnotation(KerberosInfo.class);
>   if (krbInfo != null) {
>     String principal = ticket.getUserName();
>     String host = SecurityUtil.getHostFromPrincipal(principal);
>     // If host name is a valid local address then bind socket to it
>     InetAddress localAddr = NetUtils.getLocalInetAddress(host);
>     if (localAddr != null) {
>       this.socket.setReuseAddress(true);
>       if (LOG.isDebugEnabled()) {
>         LOG.debug("Binding " + principal + " to " + localAddr);
>       }
>       bindAddr = new InetSocketAddress(localAddr, 0);
>     }
>   }
> }
> {code}
>  
> So in my Hadoop 2.7.x Cluster I made the following changes and traffic flows 
> correctly out the correct interfaces:
>  
> diff --git 
> a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
>  
> b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
> index e1be271..c5b4a42 100644
> --- 
> a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
> +++ 
> b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
> @@ -305,6 +305,9 @@
>    public static final String  IPC_CLIENT_FALLBACK_TO_SIMPLE_AUTH_ALLOWED_KEY 
> = "ipc.client.fallback-to-simple-auth-allowed";
>    public static final boolean 
> IPC_CLIENT_FALLBACK_TO_SIMPLE_AUTH_ALLOWED_DEFAULT = false;
>  
> 

[jira] [Commented] (HADOOP-15250) MultiHomed Server Network Cluster Network IPC Client Bind Addr Wrong

2018-02-22 Thread Greg Senia (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16372818#comment-16372818
 ] 

Greg Senia commented on HADOOP-15250:
-

[~ste...@apache.org] I think that by binding to what is resolved by DNS you 
inherently break the ability to do multi-homing with a routable and a 
non-routable network where you have split-view DNS, as in our case. We 
actually ran this design up the support chain of our Hadoop vendor two years 
ago and it was passed as being just fine. The thought with split-view DNS is 
as follows: the server interface is resolved to 
ha21d52mn.unit.hdp.example.com outside the cluster, while inside the cluster 
ha21d52mn.unit.hdp.newyorklife.com is resolved to the cluster network's 
interface; this is why we went with DNS split views, to support multi-homing 
correctly. If this functionality is NOT supported, then the Hadoop project 
should remove the multi-homing features, as things going to remote clusters 
will not work, as shown by my trace/logs above. Our Unit/Dev cluster network 
is not routable to our Tech/Test cluster network, so traffic has to go out 
the server interfaces to get to another cluster; hence the split-view DNS is 
valid, and a fix along these lines should at least allow a flag, like the 
server components have, to bind to 0.0.0.0. I do see that HADOOP-7215 is what 
introduced this, but I don't think multi-homing was thought about when that 
code was implemented, and the code really only checks for nulls. So if this 
is not the right place, then a solution needs to be found for how to allow a 
client to bind without using a specific IP/hostname, as that is not a valid 
method on a multi-homed network. Also, Hadoop traffic destined for other 
hosts in the cluster would go via the cluster network, which allows use of 
jumbo frames since it is not routable.

When my co-workers and I decided to test this scenario out, we modified the 
code to allow traffic to bind to a valid local address depending on where the 
traffic is destined; this patch is working in our tech/test cluster. As the 
java.net.Socket doc shows, you can call bind and it will bind to a valid 
local address. Attempting to bind outside of the OS opens up risks like the 
one we are hitting, with distcp attempting to bind to an invalid source 
address that is non-routable.

[https://docs.oracle.com/javase/8/docs/api/java/net/Socket.html#bind-java.net.SocketAddress-]

 

{code:java}
public void bind(SocketAddress bindpoint) throws IOException
{code}
Binds the socket to a local address. If the address is {{null}}, then the 
system will pick up an ephemeral port and a valid local address to bind the 
socket.
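
A minimal sketch of that behaviour (the remote host and port here are 
placeholders, not anything from this cluster):
{code:java}
import java.net.InetSocketAddress;
import java.net.Socket;

public class BindNullDemo {
  public static void main(String[] args) throws Exception {
    Socket socket = new Socket();
    // Passing null lets the OS pick an ephemeral port and a valid local
    // address from its routing table for the eventual destination.
    socket.bind(null);
    socket.connect(new InetSocketAddress("remote.example.com", 8020), 10000);
    System.out.println("bound to " + socket.getLocalSocketAddress());
    socket.close();
  }
}
{code}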

 

 

 
{code:java}
/**
 * Checks if {@code host} is a local host name and return {@link InetAddress}
 * corresponding to that address.
 *
 * @param host the specified host
 * @return a valid local {@link InetAddress} or null
 * @throws SocketException if an I/O error occurs
 */
public static InetAddress getLocalInetAddress(String host)
    throws SocketException {
  if (host == null) {
    return null;
  }
  InetAddress addr = null;
  try {
    addr = SecurityUtil.getByName(host);
    if (NetworkInterface.getByInetAddress(addr) == null) {
      addr = null; // Not a local address
    }
  } catch (UnknownHostException ignore) { }
  return addr;
}
{code}

> MultiHomed Server Network Cluster Network IPC Client Bind Addr Wrong
> 
>
> Key: HADOOP-15250
> URL: https://issues.apache.org/jira/browse/HADOOP-15250
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc, net
>Affects Versions: 2.7.3, 2.9.0, 3.0.0
>Reporter: Greg Senia
>Priority: Critical
> Attachments: HADOOP-15250.patch
>
>
> We run our Hadoop clusters with two networks attached to each node. These 
> networks are as follows: a server network that is firewalled with 
> firewalld, allowing inbound traffic only for SSH and things like Knox, 
> HiveServer2, and the HTTP YARN RM/ATS and MR History Server; and the 
> cluster network on the second network interface, which uses jumbo frames 
> and is open with no restrictions, allowing all cluster traffic to flow 
> between nodes. 
>  
> To resolve DNS within the Hadoop cluster we use DNS views via BIND, so if 
> the traffic originates from nodes with cluster networks we return the 
> internal DNS record for those nodes. This all works fine with all the 
> multi-homing features added to Hadoop 2.x.
>  Some logic around views:
> a. The internal view is used by cluster machines when performing lookups. So 
> hosts on the cluster network should get 

[jira] [Comment Edited] (HADOOP-14799) Update nimbus-jose-jwt to 4.41.1

2018-02-22 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14799?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16372793#comment-16372793
 ] 

Brahma Reddy Battula edited comment on HADOOP-14799 at 2/22/18 1:29 PM:


Committed to branch-2, branch-2.9 and branch-2.8. [~vinayrpet] thanks for the 
review.


was (Author: brahmareddy):
Committed to branch-2*. thanks [~vinayrpet] for great  Vinayakumar B for quick 
review.

> Update nimbus-jose-jwt to 4.41.1
> 
>
> Key: HADOOP-14799
> URL: https://issues.apache.org/jira/browse/HADOOP-14799
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ray Chiang
>Assignee: Ray Chiang
>Priority: Major
> Fix For: 3.0.0-beta1, 2.10.0, 2.9.1, 2.8.4
>
> Attachments: HADOOP-14799.001.patch, HADOOP-14799.002.patch, 
> HADOOP-14799.003.patch
>
>
> Update the dependency
> com.nimbusds:nimbus-jose-jwt:3.9
> to the latest (4.41.1)






[jira] [Commented] (HADOOP-14903) Add json-smart explicitly to pom.xml

2018-02-22 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14903?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16372800#comment-16372800
 ] 

Brahma Reddy Battula commented on HADOOP-14903:
---

Committed to branch-2, branch-2.9 and branch-2.8. [~vinayrpet] thanks for the 
quick review.

> Add json-smart explicitly to pom.xml
> 
>
> Key: HADOOP-14903
> URL: https://issues.apache.org/jira/browse/HADOOP-14903
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 3.0.0-beta1
>Reporter: Ray Chiang
>Assignee: Ray Chiang
>Priority: Major
> Fix For: 3.0.0-beta1, 2.10.0, 2.9.1, 2.8.4
>
> Attachments: HADOOP-14903-003-branch-2.patch, 
> HADOOP-14903-branch-2-003.patch, 
> HADOOP-14903-branch-2-004-ForExecutingTests.patch, 
> HADOOP-14903-branch-2-004.patch, HADOOP-14903.001.patch, 
> HADOOP-14903.002.patch, HADOOP-14903.003.patch
>
>
> With the library update in HADOOP-14799, maven knows how to pull in 
> net.minidev:json-smart for tests, but not for packaging.  This needs to be 
> added to the main project pom in order to avoid this warning:
> {noformat}
> [WARNING] The POM for net.minidev:json-smart:jar:2.3-SNAPSHOT is missing, no 
> dependency information available
> {noformat}
> This is pulled in from a few places:
> {noformat}
> [INFO] |  +- org.apache.hadoop:hadoop-auth:jar:3.1.0-SNAPSHOT:compile
> [INFO] |  |  +- com.nimbusds:nimbus-jose-jwt:jar:4.41.1:compile
> [INFO] |  |  |  +- com.github.stephenc.jcip:jcip-annotations:jar:1.0-1:compile
> [INFO] |  |  |  \- net.minidev:json-smart:jar:2.3:compile
> [INFO] |  |  \- org.apache.kerby:token-provider:jar:1.0.1:compile
> [INFO] |  | \- com.nimbusds:nimbus-jose-jwt:jar:4.41.1:compile
> [INFO] |  |+- 
> com.github.stephenc.jcip:jcip-annotations:jar:1.0-1:compile
> [INFO] |  |\- net.minidev:json-smart:jar:2.3:compile
> {noformat}






[jira] [Updated] (HADOOP-14903) Add json-smart explicitly to pom.xml

2018-02-22 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14903?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HADOOP-14903:
--
Fix Version/s: 2.8.4
   2.9.1
   2.10.0

> Add json-smart explicitly to pom.xml
> 
>
> Key: HADOOP-14903
> URL: https://issues.apache.org/jira/browse/HADOOP-14903
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 3.0.0-beta1
>Reporter: Ray Chiang
>Assignee: Ray Chiang
>Priority: Major
> Fix For: 3.0.0-beta1, 2.10.0, 2.9.1, 2.8.4
>
> Attachments: HADOOP-14903-003-branch-2.patch, 
> HADOOP-14903-branch-2-003.patch, 
> HADOOP-14903-branch-2-004-ForExecutingTests.patch, 
> HADOOP-14903-branch-2-004.patch, HADOOP-14903.001.patch, 
> HADOOP-14903.002.patch, HADOOP-14903.003.patch
>
>
> With the library update in HADOOP-14799, maven knows how to pull in 
> net.minidev:json-smart for tests, but not for packaging.  This needs to be 
> added to the main project pom in order to avoid this warning:
> {noformat}
> [WARNING] The POM for net.minidev:json-smart:jar:2.3-SNAPSHOT is missing, no 
> dependency information available
> {noformat}
> This is pulled in from a few places:
> {noformat}
> [INFO] |  +- org.apache.hadoop:hadoop-auth:jar:3.1.0-SNAPSHOT:compile
> [INFO] |  |  +- com.nimbusds:nimbus-jose-jwt:jar:4.41.1:compile
> [INFO] |  |  |  +- com.github.stephenc.jcip:jcip-annotations:jar:1.0-1:compile
> [INFO] |  |  |  \- net.minidev:json-smart:jar:2.3:compile
> [INFO] |  |  \- org.apache.kerby:token-provider:jar:1.0.1:compile
> [INFO] |  | \- com.nimbusds:nimbus-jose-jwt:jar:4.41.1:compile
> [INFO] |  |+- 
> com.github.stephenc.jcip:jcip-annotations:jar:1.0-1:compile
> [INFO] |  |\- net.minidev:json-smart:jar:2.3:compile
> {noformat}






[jira] [Commented] (HADOOP-12020) Support AWS S3 reduced redundancy storage class

2018-02-22 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16372796#comment-16372796
 ] 

Steve Loughran commented on HADOOP-12020:
-

correction, hadoop 3.2+

For writing at different storage levels, we could say "the builder API can 
set it dynamically", as well as having a default value for the FS.

> Support AWS S3 reduced redundancy storage class
> ---
>
> Key: HADOOP-12020
> URL: https://issues.apache.org/jira/browse/HADOOP-12020
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.7.0
> Environment: Hadoop on AWS
>Reporter: Yann Landrin-Schweitzer
>Priority: Major
>
> Amazon S3 uses, by default, the NORMAL_STORAGE class for s3 objects.
> This offers, according to Amazon's material, 99.% reliability.
> For many applications, however, the 99.99% reliability offered by the 
> REDUCED_REDUNDANCY storage class is amply sufficient, and comes with a 
> significant cost saving.
> HDFS, when using the legacy s3n protocol, or the new s3a scheme, should 
> support overriding the default storage class of created s3 objects so that 
> users can take advantage of this cost benefit.
> This would require minor changes of the s3n and s3a drivers, using 
> a configuration property fs.s3n.storage.class to override the default storage 
> when desirable. 
> This override could be implemented in Jets3tNativeFileSystemStore with:
>   S3Object object = new S3Object(key);
>   ...
>   if(storageClass!=null)  object.setStorageClass(storageClass);
> It would take a more complex form in s3a, e.g. setting:
> InitiateMultipartUploadRequest initiateMPURequest =
> new InitiateMultipartUploadRequest(bucket, key, om);
> if(storageClass !=null ) {
> initiateMPURequest = 
> initiateMPURequest.withStorageClass(storageClass);
> }
> and similar statements in various places.






[jira] [Commented] (HADOOP-14799) Update nimbus-jose-jwt to 4.41.1

2018-02-22 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14799?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16372793#comment-16372793
 ] 

Brahma Reddy Battula commented on HADOOP-14799:
---

Committed to branch-2*. Thanks [~vinayrpet] (Vinayakumar B) for the quick 
review.

> Update nimbus-jose-jwt to 4.41.1
> 
>
> Key: HADOOP-14799
> URL: https://issues.apache.org/jira/browse/HADOOP-14799
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ray Chiang
>Assignee: Ray Chiang
>Priority: Major
> Fix For: 3.0.0-beta1, 2.10.0, 2.9.1, 2.8.4
>
> Attachments: HADOOP-14799.001.patch, HADOOP-14799.002.patch, 
> HADOOP-14799.003.patch
>
>
> Update the dependency
> com.nimbusds:nimbus-jose-jwt:3.9
> to the latest (4.41.1)






[jira] [Updated] (HADOOP-14799) Update nimbus-jose-jwt to 4.41.1

2018-02-22 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14799?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HADOOP-14799:
--
Fix Version/s: 2.8.4
   2.9.1
   2.10.0

> Update nimbus-jose-jwt to 4.41.1
> 
>
> Key: HADOOP-14799
> URL: https://issues.apache.org/jira/browse/HADOOP-14799
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ray Chiang
>Assignee: Ray Chiang
>Priority: Major
> Fix For: 3.0.0-beta1, 2.10.0, 2.9.1, 2.8.4
>
> Attachments: HADOOP-14799.001.patch, HADOOP-14799.002.patch, 
> HADOOP-14799.003.patch
>
>
> Update the dependency
> com.nimbusds:nimbus-jose-jwt:3.9
> to the latest (4.41.1)






[jira] [Updated] (HADOOP-15252) Checkstyle version is not compatible with IDEA's checkstyle plugin

2018-02-22 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-15252:
--
Attachment: HADOOP-15252.001.patch

> Checkstyle version is not compatible with IDEA's checkstyle plugin
> --
>
> Key: HADOOP-15252
> URL: https://issues.apache.org/jira/browse/HADOOP-15252
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>Priority: Major
> Attachments: HADOOP-15252.001.patch
>
>
> After upgrading to the latest IDEA the IDE throws error messages in every few 
> minutes like
> {code:java}
> The Checkstyle rules file could not be parsed.
> SuppressionCommentFilter is not allowed as a child in Checker
> The file has been blacklisted for 60s.{code}
> This is caused by some backward incompatible changes in checkstyle source 
> code:
>  [http://checkstyle.sourceforge.net/releasenotes.html]
>  * 8.1: Make SuppressionCommentFilter and SuppressWithNearbyCommentFilter 
> children of TreeWalker.
>  * 8.2: remove FileContentsHolder module as FileContents object is available 
> for filters on TreeWalker in TreeWalkerAudit Event.
> IDEA uses checkstyle 8.8
> We should upgrade our checkstyle version to be compatible with IDEA's 
> checkstyle plugin.
>  Also it's a good time to upgrade maven-checkstyle-plugin as well to brand 
> new 3.0.






[jira] [Created] (HADOOP-15252) Checkstyle version is not compatible with IDEA's checkstyle plugin

2018-02-22 Thread Andras Bokor (JIRA)
Andras Bokor created HADOOP-15252:
-

 Summary: Checkstyle version is not compatible with IDEA's 
checkstyle plugin
 Key: HADOOP-15252
 URL: https://issues.apache.org/jira/browse/HADOOP-15252
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Andras Bokor
Assignee: Andras Bokor


After upgrading to the latest IDEA, the IDE throws error messages every few 
minutes like
{code:java}
The Checkstyle rules file could not be parsed.
SuppressionCommentFilter is not allowed as a child in Checker
The file has been blacklisted for 60s.{code}
This is caused by some backward incompatible changes in checkstyle source code:
 [http://checkstyle.sourceforge.net/releasenotes.html]
 * 8.1: Make SuppressionCommentFilter and SuppressWithNearbyCommentFilter 
children of TreeWalker.
 * 8.2: remove FileContentsHolder module as FileContents object is available 
for filters on TreeWalker in TreeWalkerAuditEvent.

IDEA uses checkstyle 8.8.

We should upgrade our checkstyle version to be compatible with IDEA's 
checkstyle plugin.
 Also, it's a good time to upgrade maven-checkstyle-plugin to the brand-new 
3.0 release.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15236) Fix typo in RequestHedgingProxyProvider and RequestHedgingRMFailoverProxyProvider

2018-02-22 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16372770#comment-16372770
 ] 

genericqa commented on HADOOP-15236:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 19m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
17m 42s{color} | {color:green} branch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
46s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
24s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 20m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 20m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedclient {color} | {color:green} 
11m 55s{color} | {color:green} patch has no errors when building and testing 
our client artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
55s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
54s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
46s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
46s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}116m 57s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:5b98639 |
| JIRA Issue | HADOOP-15236 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12911533/HADOOP-15236.002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  findbugs  checkstyle  |
| uname | Linux 28b088107fa3 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | trunk / 3132709 |
| maven | version: Apache Maven 3.3.9 |
| Default Java | 1.8.0_151 |
| findbugs | v3.1.0-RC1 |
|  Test Results | 

[jira] [Commented] (HADOOP-14799) Update nimbus-jose-jwt to 4.41.1

2018-02-22 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14799?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16372740#comment-16372740
 ] 

Vinayakumar B commented on HADOOP-14799:


HADOOP-14903 is almost done; this can also go in for branch-2*.

> Update nimbus-jose-jwt to 4.41.1
> 
>
> Key: HADOOP-14799
> URL: https://issues.apache.org/jira/browse/HADOOP-14799
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Ray Chiang
>Assignee: Ray Chiang
>Priority: Major
> Fix For: 3.0.0-beta1
>
> Attachments: HADOOP-14799.001.patch, HADOOP-14799.002.patch, 
> HADOOP-14799.003.patch
>
>
> Update the dependency
> com.nimbusds:nimbus-jose-jwt:3.9
> to the latest (4.41.1)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14903) Add json-smart explicitly to pom.xml

2018-02-22 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14903?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16372738#comment-16372738
 ] 

Vinayakumar B commented on HADOOP-14903:


+1 for the branch-2 v4 patch.

> Add json-smart explicitly to pom.xml
> 
>
> Key: HADOOP-14903
> URL: https://issues.apache.org/jira/browse/HADOOP-14903
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Affects Versions: 3.0.0-beta1
>Reporter: Ray Chiang
>Assignee: Ray Chiang
>Priority: Major
> Fix For: 3.0.0-beta1
>
> Attachments: HADOOP-14903-003-branch-2.patch, 
> HADOOP-14903-branch-2-003.patch, 
> HADOOP-14903-branch-2-004-ForExecutingTests.patch, 
> HADOOP-14903-branch-2-004.patch, HADOOP-14903.001.patch, 
> HADOOP-14903.002.patch, HADOOP-14903.003.patch
>
>
> With the library update in HADOOP-14799, maven knows how to pull in 
> net.minidev:json-smart for tests, but not for packaging.  This needs to be 
> added to the main project pom in order to avoid this warning:
> {noformat}
> [WARNING] The POM for net.minidev:json-smart:jar:2.3-SNAPSHOT is missing, no 
> dependency information available
> {noformat}
> This is pulled in from a few places:
> {noformat}
> [INFO] |  +- org.apache.hadoop:hadoop-auth:jar:3.1.0-SNAPSHOT:compile
> [INFO] |  |  +- com.nimbusds:nimbus-jose-jwt:jar:4.41.1:compile
> [INFO] |  |  |  +- com.github.stephenc.jcip:jcip-annotations:jar:1.0-1:compile
> [INFO] |  |  |  \- net.minidev:json-smart:jar:2.3:compile
> [INFO] |  |  \- org.apache.kerby:token-provider:jar:1.0.1:compile
> [INFO] |  | \- com.nimbusds:nimbus-jose-jwt:jar:4.41.1:compile
> [INFO] |  |+- 
> com.github.stephenc.jcip:jcip-annotations:jar:1.0-1:compile
> [INFO] |  |\- net.minidev:json-smart:jar:2.3:compile
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15251) Upgrade surefire version in branch-2

2018-02-22 Thread Gabor Bota (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16372702#comment-16372702
 ] 

Gabor Bota commented on HADOOP-15251:
-

So should I try to apply HADOOP-13514 to branch-2 and also include this 
version upgrade? (That is, try to apply 
https://issues.apache.org/jira/secure/attachment/12896642/HADOOP-13514.006.patch
 ?)

> Upgrade surefire version in branch-2
> 
>
> Key: HADOOP-15251
> URL: https://issues.apache.org/jira/browse/HADOOP-15251
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Reporter: Chris Douglas
>Priority: Major
> Attachments: HADOOP-15251-branch-2.001.patch
>
>
> Tests in branch-2 are not running reliably in Jenkins, and due to 
> SUREFIRE-524, these are not being cleaned up properly (see HADOOP-15153).
> Upgrading to a more recent version of the surefire plugin will help make the 
> problem easier to address in branch-2



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-15251) Upgrade surefire version in branch-2

2018-02-22 Thread Gabor Bota (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15251?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota reassigned HADOOP-15251:
---

Assignee: (was: Gabor Bota)

> Upgrade surefire version in branch-2
> 
>
> Key: HADOOP-15251
> URL: https://issues.apache.org/jira/browse/HADOOP-15251
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Reporter: Chris Douglas
>Priority: Major
> Attachments: HADOOP-15251-branch-2.001.patch
>
>
> Tests in branch-2 are not running reliably in Jenkins, and due to 
> SUREFIRE-524, these are not being cleaned up properly (see HADOOP-15153).
> Upgrading to a more recent version of the surefire plugin will help make the 
> problem easier to address in branch-2



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15251) Upgrade surefire version in branch-2

2018-02-22 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16372696#comment-16372696
 ] 

Steve Loughran commented on HADOOP-15251:
-

-1

If you want the surefire upgrade, you need the whole of HADOOP-13514 pulled in. 

It does seem to be working nicely on trunk, so there's no fundamental reason 
not to do it, just due diligence. Again, that JIRA shows the problems which 
surfaced while making sure that the hdfs and yarn test suites ran.

I'd also like to see the results of running the maven integration tests 
against hadoop-aws, as it uses failsafe and depends on property passdown.

I'd expect [~aw] to have opinions here. 

> Upgrade surefire version in branch-2
> 
>
> Key: HADOOP-15251
> URL: https://issues.apache.org/jira/browse/HADOOP-15251
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Reporter: Chris Douglas
>Assignee: Gabor Bota
>Priority: Major
> Attachments: HADOOP-15251-branch-2.001.patch
>
>
> Tests in branch-2 are not running reliably in Jenkins, and due to 
> SUREFIRE-524, these are not being cleaned up properly (see HADOOP-15153).
> Upgrading to a more recent version of the surefire plugin will help make the 
> problem easier to address in branch-2



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15250) MultiHomed Server Network Cluster Network IPC Client Bind Addr Wrong

2018-02-22 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16372693#comment-16372693
 ] 

Steve Loughran commented on HADOOP-15250:
-

I'm not going to review this, sorry: I don't have enough knowledge of the 
deep IPC internals to do so safely.

But I do know that we try to bind to the hostname of the kerberos principal, 
so that your service is who it claims to be and the caller can avoid MITM 
problems. This kind of opens things up again, doesn't it? 

> MultiHomed Server Network Cluster Network IPC Client Bind Addr Wrong
> 
>
> Key: HADOOP-15250
> URL: https://issues.apache.org/jira/browse/HADOOP-15250
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc, net
>Affects Versions: 2.7.3, 2.9.0, 3.0.0
>Reporter: Greg Senia
>Priority: Critical
> Attachments: HADOOP-15250.patch
>
>
> We run our Hadoop clusters with two networks attached to each node. These 
> networks are as follows: a server network that is firewalled with firewalld, 
> allowing inbound traffic only for SSH and services like Knox, HiveServer2, 
> and the HTTP YARN RM/ATS and MR History Server; and a cluster network on the 
> second network interface, which uses jumbo frames and is open with no 
> restrictions, allowing all cluster traffic to flow between nodes. 
>  
> To resolve DNS within the Hadoop cluster we use DNS views via BIND, so if 
> the traffic originates from nodes on the cluster network we return the 
> internal DNS record for those nodes. This all works fine with the 
> multi-homing features added to Hadoop 2.x.
>  Some logic around views:
> a. The internal view is used by cluster machines when performing lookups. So 
> hosts on the cluster network should get answers from the internal view in DNS
> b. The external view is used by non-local-cluster machines when performing 
> lookups. So hosts not on the cluster network should get answers from the 
> external view in DNS
>  
> So this brings me to our problem. We created some firewall rules to allow 
> inbound traffic from each cluster's server network so that distcp could 
> occur. But we noticed a problem almost immediately: when YARN attempted to 
> talk to the remote cluster, it was binding outgoing traffic to the cluster 
> network interface, which IS NOT routable. After researching the code we 
> noticed the following in NetUtils.java and Client.java.
> Basically, in Client.java it takes the hostname and attempts to bind to 
> whatever that hostname resolves to. This is not valid in a multi-homed 
> network with one routable interface and one non-routable interface. After 
> reading through the java.net.Socket documentation, it is valid to perform 
> socket.bind(null), which allows the OS routing table and DNS to send the 
> traffic out the correct interface. I will also attach the network traces and 
> a test patch for the 2.7.x and 3.x code bases. I have the test fix below 
> running in my Hadoop test cluster.
> Client.java:
> {code:java}
> /*
>  * Bind the socket to the host specified in the principal name of the
>  * client, to ensure Server matching address of the client connection
>  * to host name in principal passed.
>  */
> InetSocketAddress bindAddr = null;
> if (ticket != null && ticket.hasKerberosCredentials()) {
>   KerberosInfo krbInfo =
>       remoteId.getProtocol().getAnnotation(KerberosInfo.class);
>   if (krbInfo != null) {
>     String principal = ticket.getUserName();
>     String host = SecurityUtil.getHostFromPrincipal(principal);
>     // If host name is a valid local address then bind socket to it
>     InetAddress localAddr = NetUtils.getLocalInetAddress(host);
>     if (localAddr != null) {
>       this.socket.setReuseAddress(true);
>       if (LOG.isDebugEnabled()) {
>         LOG.debug("Binding " + principal + " to " + localAddr);
>       }
>       bindAddr = new InetSocketAddress(localAddr, 0);
>     }
>   }
> }
> {code}
>  
> So in my Hadoop 2.7.x Cluster I made the following changes and traffic flows 
> correctly out the correct interfaces:
>  
> diff --git 
> a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
>  
> b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
> index e1be271..c5b4a42 100644
> --- 
> a/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
> +++ 
> b/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeys.java
> @@ -305,6 +305,9 @@
>    public static final String  
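
For illustration, a minimal, self-contained sketch of the socket.bind(null) 
behaviour described above (the remote host and port here are placeholders; 
this is a sketch of the idea, not the attached patch):

{code:java}
import java.net.InetSocketAddress;
import java.net.Socket;

public class BindNullSketch {
  public static void main(String[] args) throws Exception {
    try (Socket socket = new Socket()) {
      socket.setReuseAddress(true);
      // Bind with a null address: the OS picks an ephemeral port and a
      // valid local address, so the routing table (not the address the
      // principal's hostname resolves to) chooses the outgoing interface.
      socket.bind(null);
      socket.connect(new InetSocketAddress("remote.example.com", 8020),
          20000); // connect timeout in milliseconds
    }
  }
}
{code}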

[jira] [Updated] (HADOOP-15236) Fix typo in RequestHedgingProxyProvider and RequestHedgingRMFailoverProxyProvider

2018-02-22 Thread Gabor Bota (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15236?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-15236:

Status: Open  (was: Patch Available)

> Fix typo in RequestHedgingProxyProvider and 
> RequestHedgingRMFailoverProxyProvider
> -
>
> Key: HADOOP-15236
> URL: https://issues.apache.org/jira/browse/HADOOP-15236
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Reporter: Akira Ajisaka
>Assignee: Gabor Bota
>Priority: Trivial
>  Labels: newbie
> Attachments: HADOOP-15236.001.patch, HADOOP-15236.002.patch
>
>
> Typo 'configred' in RequestHedgingProxyProvider and 
> RequestHedgingRMFailoverProxyProvider.
> {noformat}
>  * standbys. Once it receive a response from any one of the configred proxies,
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15236) Fix typo in RequestHedgingProxyProvider and RequestHedgingRMFailoverProxyProvider

2018-02-22 Thread Gabor Bota (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15236?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-15236:

Status: Patch Available  (was: Open)

You are right, I missed it. I had searched for the string and no other result 
came up, just the RequestHedgingProxyProvider. I've submitted a new patch 
which fixes both.

> Fix typo in RequestHedgingProxyProvider and 
> RequestHedgingRMFailoverProxyProvider
> -
>
> Key: HADOOP-15236
> URL: https://issues.apache.org/jira/browse/HADOOP-15236
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Reporter: Akira Ajisaka
>Assignee: Gabor Bota
>Priority: Trivial
>  Labels: newbie
> Attachments: HADOOP-15236.001.patch, HADOOP-15236.002.patch
>
>
> Typo 'configred' in RequestHedgingProxyProvider and 
> RequestHedgingRMFailoverProxyProvider.
> {noformat}
>  * standbys. Once it receive a response from any one of the configred proxies,
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15236) Fix typo in RequestHedgingProxyProvider and RequestHedgingRMFailoverProxyProvider

2018-02-22 Thread Gabor Bota (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15236?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-15236:

Attachment: HADOOP-15236.002.patch

> Fix typo in RequestHedgingProxyProvider and 
> RequestHedgingRMFailoverProxyProvider
> -
>
> Key: HADOOP-15236
> URL: https://issues.apache.org/jira/browse/HADOOP-15236
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Reporter: Akira Ajisaka
>Assignee: Gabor Bota
>Priority: Trivial
>  Labels: newbie
> Attachments: HADOOP-15236.001.patch, HADOOP-15236.002.patch
>
>
> Typo 'configred' in RequestHedgingProxyProvider and 
> RequestHedgingRMFailoverProxyProvider.
> {noformat}
>  * standbys. Once it receive a response from any one of the configred proxies,
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15251) Upgrade surefire version in branch-2

2018-02-22 Thread genericqa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16372646#comment-16372646
 ] 

genericqa commented on HADOOP-15251:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 22m 
37s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 11m 
20s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
14s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
16s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} branch-2 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
10s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 37m  3s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hadoop:17213a0 |
| JIRA Issue | HADOOP-15251 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12911519/HADOOP-15251-branch-2.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  shadedclient  xml  |
| uname | Linux 6c1542e206b0 3.13.0-135-generic #184-Ubuntu SMP Wed Oct 18 
11:55:51 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/provided.sh |
| git revision | branch-2 / f7e5e45 |
| maven | version: Apache Maven 3.3.9 
(bb52d8502b132ec0a5a3f4c09453c07478323dc5; 2015-11-10T16:41:47+00:00) |
| Default Java | 1.7.0_151 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14185/testReport/ |
| Max. process+thread count | 66 (vs. ulimit of 5500) |
| modules | C: hadoop-project U: hadoop-project |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/14185/console |
| Powered by | Apache Yetus 0.8.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Upgrade surefire version in branch-2
> 
>
> Key: HADOOP-15251
> URL: https://issues.apache.org/jira/browse/HADOOP-15251
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Reporter: Chris Douglas
>Assignee: Gabor Bota
>Priority: Major
> Attachments: HADOOP-15251-branch-2.001.patch
>
>
> Tests in branch-2 are not running reliably in Jenkins, and due to 
> SUREFIRE-524, these are not being cleaned up properly (see HADOOP-15153).
> Upgrading to a more recent version of the surefire plugin will help make the 
> problem easier to address in branch-2



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HADOOP-15193) add bulk delete call to metastore API & DDB impl

2018-02-22 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16372638#comment-16372638
 ] 

Steve Loughran commented on HADOOP-15193:
-

I'm thinking of doing this with some explicit {{BulkOperationInfo extends 
Closeable}} class and

{code}
BulkOperationInfo initiateDirectoryDelete(path)
void deleteBatch(BulkOperationInfo, List<Path>) // every path must be under
the path specified in the bulk operation
void completeBulkOperation(BulkOperationInfo, boolean wasSuccessful)
{code}

This lines us up for setting up other bulk ops, like an explicit rename.

Why this way? It allows us to tell the store that the batches are all part of 
the same rmdir call, and that there is little/no need to create any parent dir 
markers, etc, etc, because everything is expected to work. The complete call 
can do that and choose what to use as a success/failure marker.

The base impl will do nothing but verify that, in a batch delete, all paths 
are valid, then issue the deletes one by one; nothing is done in complete().
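
A minimal Java sketch of that shape (the interface name BulkDeleteSupport, 
the getRoot method, and the exact signatures are illustrative assumptions, 
not a committed API):

{code:java}
import java.io.Closeable;
import java.io.IOException;
import java.util.List;
import org.apache.hadoop.fs.Path;

/** Handle for one bulk operation, e.g. a recursive directory delete. */
interface BulkOperationInfo extends Closeable {
  /** Root of the operation; every batched path must be under it. */
  Path getRoot();
}

interface BulkDeleteSupport {
  /** Start a bulk delete rooted at the given directory. */
  BulkOperationInfo initiateDirectoryDelete(Path path) throws IOException;

  /** Delete one batch of paths, all of which must be under the root. */
  void deleteBatch(BulkOperationInfo op, List<Path> paths) throws IOException;

  /**
   * Signal completion; only now does the store need to decide about
   * parent directory markers and success/failure handling.
   */
  void completeBulkOperation(BulkOperationInfo op, boolean wasSuccessful)
      throws IOException;
}
{code}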



> add bulk delete call to metastore API & DDB impl
> 
>
> Key: HADOOP-15193
> URL: https://issues.apache.org/jira/browse/HADOOP-15193
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 3.0.0
>Reporter: Steve Loughran
>Priority: Major
>
> recursive dir delete (and any future bulk delete API like HADOOP-15191) 
> benefits from using the DDB bulk table delete call, which takes a list of 
> deletes and executes. Hopefully this will offer better perf. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-15236) Fix typo in RequestHedgingProxyProvider and RequestHedgingRMFailoverProxyProvider

2018-02-22 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16372628#comment-16372628
 ] 

Akira Ajisaka edited comment on HADOOP-15236 at 2/22/18 10:26 AM:
--

This issue is the ticket. Would you include the fix into the patch to fix the 
typo in this issue?


was (Author: ajisakaa):
There is no ticket for it. Would you include the fix into the patch to fix the 
typo in this issue?

> Fix typo in RequestHedgingProxyProvider and 
> RequestHedgingRMFailoverProxyProvider
> -
>
> Key: HADOOP-15236
> URL: https://issues.apache.org/jira/browse/HADOOP-15236
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Reporter: Akira Ajisaka
>Assignee: Gabor Bota
>Priority: Trivial
>  Labels: newbie
> Attachments: HADOOP-15236.001.patch
>
>
> Typo 'configred' in RequestHedgingProxyProvider and 
> RequestHedgingRMFailoverProxyProvider.
> {noformat}
>  * standbys. Once it receive a response from any one of the configred proxies,
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15236) Fix typo in RequestHedgingProxyProvider and RequestHedgingRMFailoverProxyProvider

2018-02-22 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16372628#comment-16372628
 ] 

Akira Ajisaka commented on HADOOP-15236:


There is no ticket for it. Would you include the fix into the patch to fix the 
typo in this issue?

> Fix typo in RequestHedgingProxyProvider and 
> RequestHedgingRMFailoverProxyProvider
> -
>
> Key: HADOOP-15236
> URL: https://issues.apache.org/jira/browse/HADOOP-15236
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Reporter: Akira Ajisaka
>Assignee: Gabor Bota
>Priority: Trivial
>  Labels: newbie
> Attachments: HADOOP-15236.001.patch
>
>
> Typo 'configred' in RequestHedgingProxyProvider and 
> RequestHedgingRMFailoverProxyProvider.
> {noformat}
>  * standbys. Once it receive a response from any one of the configred proxies,
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15236) Fix typo in RequestHedgingProxyProvider and RequestHedgingRMFailoverProxyProvider

2018-02-22 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15236?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-15236:
---
Summary: Fix typo in RequestHedgingProxyProvider and 
RequestHedgingRMFailoverProxyProvider  (was: Fix typo in 
RequestHedgingProxyProvider)

> Fix typo in RequestHedgingProxyProvider and 
> RequestHedgingRMFailoverProxyProvider
> -
>
> Key: HADOOP-15236
> URL: https://issues.apache.org/jira/browse/HADOOP-15236
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Reporter: Akira Ajisaka
>Assignee: Gabor Bota
>Priority: Trivial
>  Labels: newbie
> Attachments: HADOOP-15236.001.patch
>
>
> Typo 'configred' in RequestHedgingProxyProvider and 
> RequestHedgingRMFailoverProxyProvider.
> {noformat}
>  * standbys. Once it receive a response from any one of the configred proxies,
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15251) Upgrade surefire version in branch-2

2018-02-22 Thread Gabor Bota (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15251?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-15251:

Status: Patch Available  (was: Open)

Updated to 2.20.1, as this is the version in trunk; it also includes the 
SUREFIRE-524 fix.

> Upgrade surefire version in branch-2
> 
>
> Key: HADOOP-15251
> URL: https://issues.apache.org/jira/browse/HADOOP-15251
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Reporter: Chris Douglas
>Assignee: Gabor Bota
>Priority: Major
> Attachments: HADOOP-15251-branch-2.001.patch
>
>
> Tests in branch-2 are not running reliably in Jenkins, and due to 
> SUREFIRE-524, these are not being cleaned up properly (see HADOOP-15153).
> Upgrading to a more recent version of the surefire plugin will help make the 
> problem easier to address in branch-2



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15251) Upgrade surefire version in branch-2

2018-02-22 Thread Gabor Bota (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15251?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-15251:

Attachment: HADOOP-15251-branch-2.001.patch

> Upgrade surefire version in branch-2
> 
>
> Key: HADOOP-15251
> URL: https://issues.apache.org/jira/browse/HADOOP-15251
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Reporter: Chris Douglas
>Assignee: Gabor Bota
>Priority: Major
> Attachments: HADOOP-15251-branch-2.001.patch
>
>
> Tests in branch-2 are not running reliably in Jenkins, and due to 
> SUREFIRE-524, these are not being cleaned up properly (see HADOOP-15153).
> Upgrading to a more recent version of the surefire plugin will help make the 
> problem easier to address in branch-2



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15251) Upgrade surefire version in branch-2

2018-02-22 Thread Gabor Bota (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15251?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-15251:

Attachment: (was: HADOOP-15251.001.patch)

> Upgrade surefire version in branch-2
> 
>
> Key: HADOOP-15251
> URL: https://issues.apache.org/jira/browse/HADOOP-15251
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Reporter: Chris Douglas
>Assignee: Gabor Bota
>Priority: Major
> Attachments: HADOOP-15251-branch-2.001.patch
>
>
> Tests in branch-2 are not running reliably in Jenkins, and due to 
> SUREFIRE-524, these are not being cleaned up properly (see HADOOP-15153).
> Upgrading to a more recent version of the surefire plugin will help make the 
> problem easier to address in branch-2



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-15251) Upgrade surefire version in branch-2

2018-02-22 Thread Gabor Bota (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15251?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota updated HADOOP-15251:

Attachment: HADOOP-15251.001.patch

> Upgrade surefire version in branch-2
> 
>
> Key: HADOOP-15251
> URL: https://issues.apache.org/jira/browse/HADOOP-15251
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Reporter: Chris Douglas
>Assignee: Gabor Bota
>Priority: Major
> Attachments: HADOOP-15251-branch-2.001.patch
>
>
> Tests in branch-2 are not running reliably in Jenkins, and due to 
> SUREFIRE-524, these are not being cleaned up properly (see HADOOP-15153).
> Upgrading to a more recent version of the surefire plugin will help make the 
> problem easier to address in branch-2



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-15251) Upgrade surefire version in branch-2

2018-02-22 Thread Gabor Bota (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-15251?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gabor Bota reassigned HADOOP-15251:
---

Assignee: Gabor Bota

> Upgrade surefire version in branch-2
> 
>
> Key: HADOOP-15251
> URL: https://issues.apache.org/jira/browse/HADOOP-15251
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Reporter: Chris Douglas
>Assignee: Gabor Bota
>Priority: Major
>
> Tests in branch-2 are not running reliably in Jenkins, and due to 
> SUREFIRE-524, these are not being cleaned up properly (see HADOOP-15153).
> Upgrading to a more recent version of the surefire plugin will help make the 
> problem easier to address in branch-2



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-15100) Configuration#Resource constructor change broke Hive tests

2018-02-22 Thread Daniel Voros (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16372591#comment-16372591
 ] 

Daniel Voros commented on HADOOP-15100:
---

Hive fix is going to be HIVE-18327.

> Configuration#Resource constructor change broke Hive tests
> --
>
> Key: HADOOP-15100
> URL: https://issues.apache.org/jira/browse/HADOOP-15100
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.3, 2.7.5, 3.0.0, 2.9.1
>Reporter: Xiao Chen
>Priority: Critical
>
> In CDH's C6 rebased testing, the following Hive tests started failing:
> {noformat}
> org.apache.hive.minikdc.TestJdbcWithMiniKdcCookie.org.apache.hive.minikdc.TestJdbcWithMiniKdcCookie
> org.apache.hive.minikdc.TestJdbcWithMiniKdcCookie.org.apache.hive.minikdc.TestJdbcWithMiniKdcCookie
> org.apache.hive.minikdc.TestHiveAuthFactory.org.apache.hive.minikdc.TestHiveAuthFactory
> org.apache.hive.minikdc.TestJdbcWithMiniKdcSQLAuthHttp.org.apache.hive.minikdc.TestJdbcWithMiniKdcSQLAuthHttp
> org.apache.hive.minikdc.TestJdbcWithMiniKdcSQLAuthHttp.org.apache.hive.minikdc.TestJdbcWithMiniKdcSQLAuthHttp
> org.apache.hive.minikdc.TestJdbcWithDBTokenStoreNoDoAs.org.apache.hive.minikdc.TestJdbcWithDBTokenStoreNoDoAs
> org.apache.hive.minikdc.TestJdbcWithDBTokenStoreNoDoAs.org.apache.hive.minikdc.TestJdbcWithDBTokenStoreNoDoAs
> org.apache.hive.minikdc.TestJdbcWithMiniKdc.org.apache.hive.minikdc.TestJdbcWithMiniKdc
> org.apache.hive.minikdc.TestJdbcWithMiniKdc.org.apache.hive.minikdc.TestJdbcWithMiniKdc
> org.apache.hive.minikdc.TestHs2HooksWithMiniKdc.org.apache.hive.minikdc.TestHs2HooksWithMiniKdc
> org.apache.hive.minikdc.TestHs2HooksWithMiniKdc.org.apache.hive.minikdc.TestHs2HooksWithMiniKdc
> org.apache.hive.minikdc.TestJdbcNonKrbSASLWithMiniKdc.org.apache.hive.minikdc.TestJdbcNonKrbSASLWithMiniKdc
> org.apache.hive.minikdc.TestJdbcNonKrbSASLWithMiniKdc.org.apache.hive.minikdc.TestJdbcNonKrbSASLWithMiniKdc
> org.apache.hive.minikdc.TestJdbcWithMiniKdcSQLAuthBinary.org.apache.hive.minikdc.TestJdbcWithMiniKdcSQLAuthBinary
> org.apache.hive.minikdc.TestJdbcWithMiniKdcSQLAuthBinary.org.apache.hive.minikdc.TestJdbcWithMiniKdcSQLAuthBinary
> org.apache.hive.minikdc.TestMiniHiveKdc.testLogin
> org.apache.hive.minikdc.TestMiniHiveKdc.testLogin
> org.apache.hive.minikdc.TestJdbcWithDBTokenStore.org.apache.hive.minikdc.TestJdbcWithDBTokenStore
> org.apache.hive.minikdc.TestJdbcWithDBTokenStore.org.apache.hive.minikdc.TestJdbcWithDBTokenStore
> org.apache.hadoop.hive.ql.TestMetaStoreLimitPartitionRequest.testQueryWithInWithFallbackToORM
> org.apache.hive.jdbc.TestJdbcWithMiniHS2.testSelectThriftSerializeInTasks
> org.apache.hive.jdbc.TestJdbcWithMiniHS2.testEmptyResultsetThriftSerializeInTasks
> org.apache.hive.jdbc.TestJdbcWithMiniHS2.testParallelCompilation2
> org.apache.hive.jdbc.TestJdbcWithMiniHS2.testJoinThriftSerializeInTasks
> org.apache.hive.jdbc.TestJdbcWithMiniHS2.testParallelCompilation
> org.apache.hive.jdbc.TestJdbcWithMiniHS2.testConcurrentStatements
> org.apache.hive.jdbc.TestJdbcWithMiniHS2.testFloatCast2DoubleThriftSerializeInTasks
> org.apache.hive.jdbc.TestJdbcWithMiniHS2.testEnableThriftSerializeInTasks
> org.apache.hive.service.cli.TestEmbeddedThriftBinaryCLIService.testExecuteStatementParallel
> {noformat}
> The exception is
> {noformat}
> java.lang.ExceptionInInitializerError: null
>   at sun.security.krb5.Config.getRealmFromDNS(Config.java:1102)
>   at sun.security.krb5.Config.getDefaultRealm(Config.java:987)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:483)
>   at 
> org.apache.hadoop.security.authentication.util.KerberosUtil.getDefaultRealm(KerberosUtil.java:110)
>   at 
> org.apache.hadoop.security.HadoopKerberosName.setConfiguration(HadoopKerberosName.java:63)
>   at 
> org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:332)
>   at 
> org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:317)
>   at 
> org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:907)
>   at 
> org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:873)
>   at 
> org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:740)
>   at 
> org.apache.hadoop.conf.Configuration$Resource.getRestrictParserDefault(Configuration.java:261)
>   at 
> org.apache.hadoop.conf.Configuration$Resource.(Configuration.java:229)
>   at 
> 

[jira] [Commented] (HADOOP-15236) Fix typo in RequestHedgingProxyProvider

2018-02-22 Thread Gabor Bota (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-15236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16372573#comment-16372573
 ] 

Gabor Bota commented on HADOOP-15236:
-

Yes, sure. Is there a ticket for it? 


> Fix typo in RequestHedgingProxyProvider
> ---
>
> Key: HADOOP-15236
> URL: https://issues.apache.org/jira/browse/HADOOP-15236
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Reporter: Akira Ajisaka
>Assignee: Gabor Bota
>Priority: Trivial
>  Labels: newbie
> Attachments: HADOOP-15236.001.patch
>
>
> Typo 'configred' in RequestHedgingProxyProvider and 
> RequestHedgingRMFailoverProxyProvider.
> {noformat}
>  * standbys. Once it receive a response from any one of the configred proxies,
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-10571) Use Log.*(Object, Throwable) overload to log exceptions

2018-02-22 Thread Andras Bokor (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10571?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16372550#comment-16372550
 ] 

Andras Bokor commented on HADOOP-10571:
---

Thanks for the review and commit!

> Use Log.*(Object, Throwable) overload to log exceptions
> ---
>
> Key: HADOOP-10571
> URL: https://issues.apache.org/jira/browse/HADOOP-10571
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.4.0
>Reporter: Arpit Agarwal
>Assignee: Andras Bokor
>Priority: Major
> Fix For: 3.1.0, 3.0.1
>
> Attachments: HADOOP-10571-branch-3.0.001.patch, 
> HADOOP-10571-branch-3.0.002.patch, HADOOP-10571.01.patch, 
> HADOOP-10571.01.patch, HADOOP-10571.02.patch, HADOOP-10571.03.patch, 
> HADOOP-10571.04.patch, HADOOP-10571.05.patch, HADOOP-10571.06.patch, 
> HADOOP-10571.07.patch
>
>
> When logging an exception, we often convert the exception to string or call 
> {{.getMessage}}. Instead we can use the log method overloads which take 
> {{Throwable}} as a parameter.
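
For illustration, a minimal before/after sketch of that pattern (LOG stands 
for any commons-logging or slf4j logger, and the message text is 
hypothetical):

{code:java}
// Before: converting the exception to a string loses the stack trace.
LOG.warn("Unable to refresh cache: " + e.getMessage());

// After: pass the Throwable itself so the logger records the full trace.
LOG.warn("Unable to refresh cache", e);
{code}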



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org