[jira] [Commented] (HADOOP-9888) KerberosName static initialization gets default realm, which is unneeded in non-secure deployment.

2016-01-06 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15085280#comment-15085280
 ] 

Kai Zheng commented on HADOOP-9888:
---

The patch looks good to me. I would suggest we move the following block, along 
with the static value, to {{KerberosUtil}}, so that all the places that need 
the default realm can call {{KerberosUtil#getDefaultRealm()}} and share the 
value.
{code}
+  public static synchronized String getDefaultRealm() {
+    if (defaultRealm == null) {
+      try {
+        defaultRealm = KerberosUtil.getDefaultRealm();
+      } catch (Exception ke) {
+        LOG.debug("Kerberos krb5 configuration not found, setting default realm to empty");
+        defaultRealm = "";
+      }
+    }
     return defaultRealm;
   }
{code}
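
For illustration, here is a minimal sketch of what such a shared, lazily 
cached accessor in {{KerberosUtil}} could look like (the class skeleton, 
method name and logging are assumptions for the sketch, not the actual patch):
{code}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public final class KerberosUtil {
  private static final Logger LOG = LoggerFactory.getLogger(KerberosUtil.class);
  private static String cachedDefaultRealm;

  // Resolved lazily and only once, so non-secure callers that never ask for
  // the realm never pay for the krb5/DNS lookup at class-load time.
  public static synchronized String getDefaultRealmSafe() {
    if (cachedDefaultRealm == null) {
      try {
        cachedDefaultRealm = getDefaultRealm();
      } catch (Exception e) {
        LOG.debug("Kerberos krb5 configuration not found, setting default realm to empty", e);
        cachedDefaultRealm = "";
      }
    }
    return cachedDefaultRealm;
  }

  // Stand-in for the real reflection-based lookup in KerberosUtil.
  public static String getDefaultRealm() throws Exception {
    return sun.security.krb5.Config.getInstance().getDefaultRealm();
  }
}
{code}
Callers such as {{KerberosName}} would then delegate to the single cached 
value instead of each keeping their own copy of the realm.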

> KerberosName static initialization gets default realm, which is unneeded in 
> non-secure deployment.
> --
>
> Key: HADOOP-9888
> URL: https://issues.apache.org/jira/browse/HADOOP-9888
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.1.1-beta, 3.0.0
>Reporter: Chris Nauroth
>Assignee: Dmytro Kabakchei
> Attachments: HADOOP-9888.001.patch
>
>
> {{KerberosName}} has a static initialization block that looks up the default 
> realm.  Running with Oracle JDK7, this code path triggers a DNS query.  In 
> some environments, we've seen this DNS query block and time out after 30 
> seconds.  This is part of static initialization, and the class is referenced 
> from {{UserGroupInformation#initialize}}, so every daemon and every shell 
> command experiences this delay.  This occurs even for non-secure deployments, 
> which don't need the default realm.





[jira] [Commented] (HADOOP-9888) KerberosName static initialization gets default realm, which is unneeded in non-secure deployment.

2016-01-06 Thread Dmytro Kabakchei (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15085390#comment-15085390
 ] 

Dmytro Kabakchei commented on HADOOP-9888:
--

Got your point. Completely agree with you.
I'll move that block to KerberosUtil and reorganize everything to catch up 
with the changes.

> KerberosName static initialization gets default realm, which is unneeded in 
> non-secure deployment.
> --
>
> Key: HADOOP-9888
> URL: https://issues.apache.org/jira/browse/HADOOP-9888
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.1.1-beta, 3.0.0
>Reporter: Chris Nauroth
>Assignee: Dmytro Kabakchei
> Attachments: HADOOP-9888.001.patch
>
>
> {{KerberosName}} has a static initialization block that looks up the default 
> realm.  Running with Oracle JDK7, this code path triggers a DNS query.  In 
> some environments, we've seen this DNS query block and time out after 30 
> seconds.  This is part of static initialization, and the class is referenced 
> from {{UserGroupInformation#initialize}}, so every daemon and every shell 
> command experiences this delay.  This occurs even for non-secure deployments, 
> which don't need the default realm.





[jira] [Commented] (HADOOP-12687) Timeout for tests in TestYarnClient, TestAMRMClient and TestNMClient

2016-01-06 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15085438#comment-15085438
 ] 

Rohith Sharma K S commented on HADOOP-12687:


Hi [~sunilg], could you make a small update to the patch?
# Instead of catching UnknownHostException, can you move {{addr = 
InetAddress.getByName(host);}} down as below, so that there is no need to 
catch UnknownHostException? And add a comment there (see the fuller sketch 
after this list).
{code}
  addr = getByNameWithSearch(host);
  if (addr == null) {
    addr = getByExactName(host);
    if (addr == null) {
      // comment
      addr = InetAddress.getByName(host);
    }
  }
{code}
# Not related to the patch, but the summary of this JIRA needs to change to 
reflect the actual code change in Hadoop Common.
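
For context, here is a fuller, self-contained sketch of the lookup order 
proposed in the snippet above (the helper bodies are simplified stand-ins, not 
the actual Hadoop code):
{code}
import java.net.InetAddress;
import java.net.UnknownHostException;

public final class HostResolverSketch {
  // Proposed order: DNS search-list lookup, then exact-name lookup, then the
  // plain JDK resolver (which also consults /etc/hosts) as the last resort.
  public static InetAddress getByName(String host) throws UnknownHostException {
    InetAddress addr = getByNameWithSearch(host);
    if (addr == null) {
      addr = getByExactName(host);
      if (addr == null) {
        // Direct resolution so a bare hostname still resolves even when the
        // custom lookups find nothing (e.g. multiple loopback entries).
        addr = InetAddress.getByName(host);
      }
    }
    return addr;
  }

  // Simplified stand-ins: the real helpers append configured search domains
  // or a trailing dot, and return null when resolution fails.
  private static InetAddress getByNameWithSearch(String host) { return null; }
  private static InetAddress getByExactName(String host) { return null; }
}
{code}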

> Timeout for tests in TestYarnClient, TestAMRMClient and TestNMClient
> 
>
> Key: HADOOP-12687
> URL: https://issues.apache.org/jira/browse/HADOOP-12687
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Junping Du
>Assignee: Sunil G
>  Labels: security
> Attachments: 0001-YARN-4352.patch, 0002-YARN-4352.patch
>
>
> From 
> https://builds.apache.org/job/PreCommit-YARN-Build/9661/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client-jdk1.7.0_79.txt,
>  we can see that the tests in TestYarnClient, TestAMRMClient and TestNMClient 
> time out, which can be reproduced locally.





[jira] [Commented] (HADOOP-11828) Implement the Hitchhiker erasure coding algorithm

2016-01-06 Thread jack liuquan (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11828?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15085301#comment-15085301
 ] 

jack liuquan commented on HADOOP-11828:
---

Hi Kai,
I have seen the update in HADOOP-12685.
I will update a new patch this week.


> Implement the Hitchhiker erasure coding algorithm
> -
>
> Key: HADOOP-11828
> URL: https://issues.apache.org/jira/browse/HADOOP-11828
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Zhe Zhang
>Assignee: jack liuquan
> Attachments: 7715-hitchhikerXOR-v2-testcode.patch, 
> 7715-hitchhikerXOR-v2.patch, HADOOP-11828-hitchhikerXOR-V3.patch, 
> HADOOP-11828-hitchhikerXOR-V4.patch, HADOOP-11828-hitchhikerXOR-V5.patch, 
> HDFS-7715-hhxor-decoder.patch, HDFS-7715-hhxor-encoder.patch
>
>
> [Hitchhiker | 
> http://www.eecs.berkeley.edu/~nihar/publications/Hitchhiker_SIGCOMM14.pdf] is 
> a new erasure coding algorithm developed as a research project at UC 
> Berkeley. It has been shown to reduce network traffic and disk I/O by 25%-45% 
> during data reconstruction while retaining the same storage capacity and 
> failure tolerance capability as RS codes. This JIRA aims to introduce 
> Hitchhiker to the HDFS-EC framework, as one of the pluggable codec algorithms.
> The existing implementation is based on HDFS-RAID. 





[jira] [Updated] (HADOOP-12687) SecureUtil#getByName should also try to resolve direct hostname incase multiple loopback addresses are present in etc/hosts

2016-01-06 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated HADOOP-12687:
-
Description: 
From 
https://builds.apache.org/job/PreCommit-YARN-Build/9661/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client-jdk1.7.0_79.txt,
we can see that the tests in TestYarnClient, TestAMRMClient and TestNMClient 
time out, which can be reproduced locally.

When {{/etc/hosts}} has multiple loopback entries, 



  was:
From 
https://builds.apache.org/job/PreCommit-YARN-Build/9661/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client-jdk1.7.0_79.txt,
we can see that the tests in TestYarnClient, TestAMRMClient and TestNMClient 
time out, which can be reproduced locally.




> SecureUtil#getByName should also try to resolve direct hostname incase 
> multiple loopback addresses are present in etc/hosts
> ---
>
> Key: HADOOP-12687
> URL: https://issues.apache.org/jira/browse/HADOOP-12687
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Junping Du
>Assignee: Sunil G
>  Labels: security
> Attachments: 0001-YARN-4352.patch, 0002-YARN-4352.patch, 
> 0003-HADOOP-12687.patch
>
>
> From 
> https://builds.apache.org/job/PreCommit-YARN-Build/9661/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client-jdk1.7.0_79.txt,
>  we can see that the tests in TestYarnClient, TestAMRMClient and TestNMClient 
> time out, which can be reproduced locally.
> When {{/etc/hosts}} has multiple loopback entries, 





[jira] [Updated] (HADOOP-12687) SecureUtil#getByName should also try to resolve direct hostname incase multiple loopback addresses are present in etc/hosts

2016-01-06 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated HADOOP-12687:
-
Description: 
From 
https://builds.apache.org/job/PreCommit-YARN-Build/9661/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client-jdk1.7.0_79.txt,
we can see that the tests in TestYarnClient, TestAMRMClient and TestNMClient 
time out, which can be reproduced locally.



  was:From 
https://builds.apache.org/job/PreCommit-YARN-Build/9661/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client-jdk1.7.0_79.txt,
 we can see that the tests in TestYarnClient, TestAMRMClient and TestNMClient 
time out, which can be reproduced locally.


> SecureUtil#getByName should also try to resolve direct hostname incase 
> multiple loopback addresses are present in etc/hosts
> ---
>
> Key: HADOOP-12687
> URL: https://issues.apache.org/jira/browse/HADOOP-12687
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Junping Du
>Assignee: Sunil G
>  Labels: security
> Attachments: 0001-YARN-4352.patch, 0002-YARN-4352.patch, 
> 0003-HADOOP-12687.patch
>
>
> From 
> https://builds.apache.org/job/PreCommit-YARN-Build/9661/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client-jdk1.7.0_79.txt,
>  we can see that the tests in TestYarnClient, TestAMRMClient and TestNMClient 
> time out, which can be reproduced locally.





[jira] [Updated] (HADOOP-12687) Timeout for tests in TestYarnClient, TestAMRMClient and TestNMClient

2016-01-06 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated HADOOP-12687:
-
Attachment: 0003-HADOOP-12687.patch

Yes [~rohithsharma].
That change looks fine to me. Updating the patch as per your comment. 

> Timeout for tests in TestYarnClient, TestAMRMClient and TestNMClient
> 
>
> Key: HADOOP-12687
> URL: https://issues.apache.org/jira/browse/HADOOP-12687
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Junping Du
>Assignee: Sunil G
>  Labels: security
> Attachments: 0001-YARN-4352.patch, 0002-YARN-4352.patch, 
> 0003-HADOOP-12687.patch
>
>
> From 
> https://builds.apache.org/job/PreCommit-YARN-Build/9661/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client-jdk1.7.0_79.txt,
>  we can see that the tests in TestYarnClient, TestAMRMClient and TestNMClient 
> time out, which can be reproduced locally.





[jira] [Updated] (HADOOP-12687) SecureUtil#getByName should also try to resolve direct hostname incase multiple loopback addresses are present in etc/hosts

2016-01-06 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated HADOOP-12687:
-
Summary: SecureUtil#getByName should also try to resolve direct hostname 
incase multiple loopback addresses are present in etc/hosts  (was: Timeout for 
tests in TestYarnClient, TestAMRMClient and TestNMClient)

> SecureUtil#getByName should also try to resolve direct hostname incase 
> multiple loopback addresses are present in etc/hosts
> ---
>
> Key: HADOOP-12687
> URL: https://issues.apache.org/jira/browse/HADOOP-12687
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Junping Du
>Assignee: Sunil G
>  Labels: security
> Attachments: 0001-YARN-4352.patch, 0002-YARN-4352.patch, 
> 0003-HADOOP-12687.patch
>
>
> From 
> https://builds.apache.org/job/PreCommit-YARN-Build/9661/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client-jdk1.7.0_79.txt,
>  we can see that the tests in TestYarnClient, TestAMRMClient and TestNMClient 
> time out, which can be reproduced locally.





[jira] [Updated] (HADOOP-12687) SecureUtil#getByName should also try to resolve direct hostname incase multiple loopback addresses are present in etc/hosts

2016-01-06 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated HADOOP-12687:
-
Description: 
From 
https://builds.apache.org/job/PreCommit-YARN-Build/9661/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client-jdk1.7.0_79.txt,
we can see that the tests in TestYarnClient, TestAMRMClient and TestNMClient 
time out, which can be reproduced locally.

When {{/etc/hosts}} has multiple loopback entries, 
{{InetAddress.getByName(null)}} will return the first entry present in 
etc/hosts. Hence it's possible that the machine hostname can be second in the 
list and cause 



  was:
From 
https://builds.apache.org/job/PreCommit-YARN-Build/9661/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client-jdk1.7.0_79.txt,
we can see that the tests in TestYarnClient, TestAMRMClient and TestNMClient 
time out, which can be reproduced locally.

When {{/etc/hosts}} has multiple loopback entries, 




> SecureUtil#getByName should also try to resolve direct hostname incase 
> multiple loopback addresses are present in etc/hosts
> ---
>
> Key: HADOOP-12687
> URL: https://issues.apache.org/jira/browse/HADOOP-12687
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Junping Du
>Assignee: Sunil G
>  Labels: security
> Attachments: 0001-YARN-4352.patch, 0002-YARN-4352.patch, 
> 0003-HADOOP-12687.patch
>
>
> From 
> https://builds.apache.org/job/PreCommit-YARN-Build/9661/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client-jdk1.7.0_79.txt,
>  we can see that the tests in TestYarnClient, TestAMRMClient and TestNMClient 
> time out, which can be reproduced locally.
> When {{/etc/hosts}} has multiple loopback entries, 
> {{InetAddress.getByName(null)}} will return the first entry present in 
> etc/hosts. Hence it's possible that the machine hostname can be second in the 
> list and cause 





[jira] [Updated] (HADOOP-12687) SecureUtil#getByName should also try to resolve direct hostname incase multiple loopback addresses are present in etc/hosts

2016-01-06 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated HADOOP-12687:
-
Description: 
From 
https://builds.apache.org/job/PreCommit-YARN-Build/9661/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client-jdk1.7.0_79.txt,
we can see that the tests in TestYarnClient, TestAMRMClient and TestNMClient 
time out, which can be reproduced locally.

When {{/etc/hosts}} has multiple loopback entries, 
{{InetAddress.getByName(null)}} will return the first entry present in 
etc/hosts. Hence it's possible that the machine hostname can be second in the 
list and cause {{UnknownHostException}}.

Suggesting a direct resolve for such hostname scenarios.


  was:
From 
https://builds.apache.org/job/PreCommit-YARN-Build/9661/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client-jdk1.7.0_79.txt,
we can see that the tests in TestYarnClient, TestAMRMClient and TestNMClient 
time out, which can be reproduced locally.

When {{/etc/hosts}} has multiple loopback entries, 
{{InetAddress.getByName(null)}} will return the first entry present in 
etc/hosts. Hence it's possible that the machine hostname can be second in the 
list and cause 




> SecureUtil#getByName should also try to resolve direct hostname incase 
> multiple loopback addresses are present in etc/hosts
> ---
>
> Key: HADOOP-12687
> URL: https://issues.apache.org/jira/browse/HADOOP-12687
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Junping Du
>Assignee: Sunil G
>  Labels: security
> Attachments: 0001-YARN-4352.patch, 0002-YARN-4352.patch, 
> 0003-HADOOP-12687.patch
>
>
> From 
> https://builds.apache.org/job/PreCommit-YARN-Build/9661/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client-jdk1.7.0_79.txt,
>  we can see that the tests in TestYarnClient, TestAMRMClient and TestNMClient 
> time out, which can be reproduced locally.
> When {{/etc/hosts}} has multiple loopback entries, 
> {{InetAddress.getByName(null)}} will return the first entry present in 
> etc/hosts. Hence it's possible that the machine hostname can be second in the 
> list and cause {{UnknownHostException}}.
> Suggesting a direct resolve for such hostname scenarios.
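
To make the failure mode concrete, here is a minimal, pure-JDK snippet one 
could run on a machine whose /etc/hosts has several loopback entries 
(illustrative only):
{code}
import java.net.InetAddress;

public class LoopbackDemo {
  public static void main(String[] args) throws Exception {
    // getByName(null) returns a loopback address; with several loopback
    // entries in /etc/hosts, which name it carries depends on file order.
    System.out.println("loopback = " + InetAddress.getByName(null));

    // Resolving the machine's own hostname directly can still succeed even
    // when it is not the first loopback entry, which is the direct resolve
    // suggested above.
    String hostname = InetAddress.getLocalHost().getHostName();
    System.out.println("direct   = " + InetAddress.getByName(hostname));
  }
}
{code}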





[jira] [Commented] (HADOOP-12678) Handle empty rename pending metadata file during atomic rename in redo path

2016-01-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15085718#comment-15085718
 ] 

Hadoop QA commented on HADOOP-12678:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
39s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 15s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 15s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
9s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 21s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
28s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 12s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 12s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 13s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 13s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
8s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 19s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
9s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 4 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
32s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 1m 0s 
{color} | {color:red} hadoop-tools_hadoop-azure-jdk1.8.0_66 with JDK v1.8.0_66 
generated 18 new issues (was 26, now 26). {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 1m 26s 
{color} | {color:red} hadoop-tools_hadoop-azure-jdk1.7.0_91 with JDK v1.7.0_91 
generated 1 new issues (was 1, now 1). {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 10s 
{color} | {color:green} hadoop-azure in the patch passed with JDK v1.8.0_66. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 14s 
{color} | {color:green} hadoop-azure in the patch passed with JDK v1.7.0_91. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
16s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 15m 42s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12780770/HADOOP-12678.004.patch
 |
| JIRA Issue | HADOOP-12678 |
| Optional Tests |  

[jira] [Updated] (HADOOP-12678) Handle empty rename pending metadata file during atomic rename in redo path

2016-01-06 Thread madhumita chakraborty (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12678?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

madhumita chakraborty updated HADOOP-12678:
---
Status: Open  (was: Patch Available)

> Handle empty rename pending metadata file during atomic rename in redo path
> ---
>
> Key: HADOOP-12678
> URL: https://issues.apache.org/jira/browse/HADOOP-12678
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Reporter: madhumita chakraborty
>Assignee: madhumita chakraborty
>Priority: Critical
> Attachments: HADOOP-12678.001.patch, HADOOP-12678.002.patch, 
> HADOOP-12678.003.patch
>
>
> Handle empty rename pending metadata file during atomic rename in redo path.
> During an atomic rename we create a metadata file for the rename 
> (-renamePending.json). We create it in 2 steps:
> 1. We create an empty blob corresponding to the .json file in its real 
> location.
> 2. We create a scratch file to which we write the contents of the rename 
> pending, which is then copied over into the blob described in 1.
> If a process crash occurs after step 1 and before step 2 is complete, we will 
> be left with a zero-size blob corresponding to the pending rename metadata 
> file.
> This kind of scenario can happen in the /hbase/.tmp folder because it is 
> considered a candidate folder for atomic rename. When HMaster starts up, it 
> executes listStatus on the .tmp folder to clean up pending data. At this 
> stage, due to the lazy pending-rename completion process, we look for these 
> json files. On seeing an empty file the process simply throws a fatal 
> exception, assuming something went wrong.
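
For illustration, here is a minimal sketch of how a redo path could tolerate 
the zero-size metadata file instead of failing fatally (the names and 
structure are hypothetical, not the actual hadoop-azure code or the attached 
patches):
{code}
import java.io.IOException;

// Hypothetical redo handling: an empty -renamePending.json means the crash
// happened between step 1 and step 2, so no rename was actually in progress.
final class RenamePendingRedo {
  interface BlobStore {
    byte[] readAll(String path) throws IOException;
    void delete(String path) throws IOException;
  }

  static void redoIfPending(BlobStore store, String pendingJsonPath)
      throws IOException {
    byte[] contents = store.readAll(pendingJsonPath);
    if (contents.length == 0) {
      // Incomplete metadata: clean up the stale blob and continue, rather
      // than throwing a fatal exception on the empty file.
      store.delete(pendingJsonPath);
      return;
    }
    // ... otherwise parse the JSON and redo the pending rename steps ...
  }
}
{code}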





[jira] [Commented] (HADOOP-9888) KerberosName static initialization gets default realm, which is unneeded in non-secure deployment.

2016-01-06 Thread Dmytro Kabakchei (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9888?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15085705#comment-15085705
 ] 

Dmytro Kabakchei commented on HADOOP-9888:
--

I've finished reviewing what changes the refactoring proposed by Kai Zheng 
would require.
It needs changes to the KerberosUtil, KerberosName, HadoopKerberosName and 
RegistrySecurity classes, and to some tests. Although such changes are 
possible, the solutions for reworking the exception-handling logic of those 
classes are very ugly, and I'm afraid such a refactoring would bring some 
overhead.
Nevertheless, the DNS lookup stays as it was, but it is now skipped for 
non-secure deployments.

Somebody, please review the patch and approve or reject it with an 
explanation.

> KerberosName static initialization gets default realm, which is unneeded in 
> non-secure deployment.
> --
>
> Key: HADOOP-9888
> URL: https://issues.apache.org/jira/browse/HADOOP-9888
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.1.1-beta, 3.0.0
>Reporter: Chris Nauroth
>Assignee: Dmytro Kabakchei
> Attachments: HADOOP-9888.001.patch
>
>
> {{KerberosName}} has a static initialization block that looks up the default 
> realm.  Running with Oracle JDK7, this code path triggers a DNS query.  In 
> some environments, we've seen this DNS query block and time out after 30 
> seconds.  This is part of static initialization, and the class is referenced 
> from {{UserGroupInformation#initialize}}, so every daemon and every shell 
> command experiences this delay.  This occurs even for non-secure deployments, 
> which don't need the default realm.





[jira] [Updated] (HADOOP-12678) Handle empty rename pending metadata file during atomic rename in redo path

2016-01-06 Thread madhumita chakraborty (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12678?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

madhumita chakraborty updated HADOOP-12678:
---
Status: Patch Available  (was: Open)

> Handle empty rename pending metadata file during atomic rename in redo path
> ---
>
> Key: HADOOP-12678
> URL: https://issues.apache.org/jira/browse/HADOOP-12678
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Reporter: madhumita chakraborty
>Assignee: madhumita chakraborty
>Priority: Critical
> Attachments: HADOOP-12678.001.patch, HADOOP-12678.002.patch, 
> HADOOP-12678.003.patch, HADOOP-12678.004.patch
>
>
> Handle empty rename pending metadata file during atomic rename in redo path.
> During an atomic rename we create a metadata file for the rename 
> (-renamePending.json). We create it in 2 steps:
> 1. We create an empty blob corresponding to the .json file in its real 
> location.
> 2. We create a scratch file to which we write the contents of the rename 
> pending, which is then copied over into the blob described in 1.
> If a process crash occurs after step 1 and before step 2 is complete, we will 
> be left with a zero-size blob corresponding to the pending rename metadata 
> file.
> This kind of scenario can happen in the /hbase/.tmp folder because it is 
> considered a candidate folder for atomic rename. When HMaster starts up, it 
> executes listStatus on the .tmp folder to clean up pending data. At this 
> stage, due to the lazy pending-rename completion process, we look for these 
> json files. On seeing an empty file the process simply throws a fatal 
> exception, assuming something went wrong.





[jira] [Updated] (HADOOP-12678) Handle empty rename pending metadata file during atomic rename in redo path

2016-01-06 Thread madhumita chakraborty (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12678?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

madhumita chakraborty updated HADOOP-12678:
---
Attachment: HADOOP-12678.004.patch

> Handle empty rename pending metadata file during atomic rename in redo path
> ---
>
> Key: HADOOP-12678
> URL: https://issues.apache.org/jira/browse/HADOOP-12678
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Reporter: madhumita chakraborty
>Assignee: madhumita chakraborty
>Priority: Critical
> Attachments: HADOOP-12678.001.patch, HADOOP-12678.002.patch, 
> HADOOP-12678.003.patch, HADOOP-12678.004.patch
>
>
> Handle empty rename pending metadata file during atomic rename in redo path.
> During an atomic rename we create a metadata file for the rename 
> (-renamePending.json). We create it in 2 steps:
> 1. We create an empty blob corresponding to the .json file in its real 
> location.
> 2. We create a scratch file to which we write the contents of the rename 
> pending, which is then copied over into the blob described in 1.
> If a process crash occurs after step 1 and before step 2 is complete, we will 
> be left with a zero-size blob corresponding to the pending rename metadata 
> file.
> This kind of scenario can happen in the /hbase/.tmp folder because it is 
> considered a candidate folder for atomic rename. When HMaster starts up, it 
> executes listStatus on the .tmp folder to clean up pending data. At this 
> stage, due to the lazy pending-rename completion process, we look for these 
> json files. On seeing an empty file the process simply throws a fatal 
> exception, assuming something went wrong.





[jira] [Commented] (HADOOP-12563) Updated utility to create/modify token files

2016-01-06 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15085909#comment-15085909
 ] 

Allen Wittenauer commented on HADOOP-12563:
---

bq. I'm also curious about the choice of protobuf for the token rather than 
JWT. I'd like to understand the differences in portability that you see between 
the two. JWT has become a very popular format for such things.

* extremely portable; hooks for almost every language you can think of
* if the app is doing RPC (probably the majority case today for most DT file 
usage), protobuf libraries are already available
* changing from one serialization format to another is a fairly trivial change; 
the content is left mostly untouched, so we avoid the conversation of what goes 
where
* can be evolved to support more fields (e.g., service aliasing, something 
we've been discussing internally) as necessary

The ability to support more than one format is part of the design here.  If 
protobuf isn't sufficient to handle all use cases, another format could be 
added easily enough.  E.g., there's no reason why JWT couldn't be added as a 
third option at a later date.  
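
To illustrate the kind of pluggability described above (all names below are 
hypothetical, not the actual dtutil code), a serialization format can be 
isolated behind a small interface:
{code}
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

// Hypothetical sketch: each token-file format owns its serialization, so
// adding JWT later would just mean one more implementation of this interface.
interface TokenFileFormat {
  String name();
  void write(DataOutputStream out, byte[] tokenBytes) throws IOException;
  byte[] read(DataInputStream in) throws IOException;
}

final class ProtobufTokenFileFormat implements TokenFileFormat {
  @Override
  public String name() {
    return "protobuf";
  }

  @Override
  public void write(DataOutputStream out, byte[] tokenBytes) throws IOException {
    // Length-prefixed framing around the protobuf-encoded token payload.
    out.writeInt(tokenBytes.length);
    out.write(tokenBytes);
  }

  @Override
  public byte[] read(DataInputStream in) throws IOException {
    byte[] buf = new byte[in.readInt()];
    in.readFully(buf);
    return buf;
  }
}
{code}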

> Updated utility to create/modify token files
> 
>
> Key: HADOOP-12563
> URL: https://issues.apache.org/jira/browse/HADOOP-12563
> Project: Hadoop Common
>  Issue Type: New Feature
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Matthew Paduano
> Attachments: HADOOP-12563.01.patch, HADOOP-12563.02.patch, 
> HADOOP-12563.03.patch, HADOOP-12563.04.patch, HADOOP-12563.05.patch, 
> HADOOP-12563.06.patch, example_dtutil_commands_and_output.txt, 
> generalized_token_case.pdf
>
>
> hdfs fetchdt is missing some critical features and is geared almost 
> exclusively towards HDFS operations.  Additionally, the token files that are 
> created use Java serialization, which is hard/impossible to deal with in 
> other languages. It should be replaced with a better utility in common that 
> can read/write protobuf-based token files, has enough flexibility to be used 
> with other services, and offers key functionality such as append and rename. 
> The old version of the file format should still be supported for backward 
> compatibility, but will be effectively deprecated.
> A follow-on JIRA will deprecate fetchdt.





[jira] [Updated] (HADOOP-12678) Handle empty rename pending metadata file during atomic rename in redo path

2016-01-06 Thread madhumita chakraborty (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12678?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

madhumita chakraborty updated HADOOP-12678:
---
Status: Open  (was: Patch Available)

> Handle empty rename pending metadata file during atomic rename in redo path
> ---
>
> Key: HADOOP-12678
> URL: https://issues.apache.org/jira/browse/HADOOP-12678
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Reporter: madhumita chakraborty
>Assignee: madhumita chakraborty
>Priority: Critical
> Attachments: HADOOP-12678.001.patch, HADOOP-12678.002.patch, 
> HADOOP-12678.003.patch, HADOOP-12678.004.patch, HADOOP-12678.005.patch
>
>
> Handle empty rename pending metadata file during atomic rename in redo path.
> During an atomic rename we create a metadata file for the rename 
> (-renamePending.json). We create it in 2 steps:
> 1. We create an empty blob corresponding to the .json file in its real 
> location.
> 2. We create a scratch file to which we write the contents of the rename 
> pending, which is then copied over into the blob described in 1.
> If a process crash occurs after step 1 and before step 2 is complete, we will 
> be left with a zero-size blob corresponding to the pending rename metadata 
> file.
> This kind of scenario can happen in the /hbase/.tmp folder because it is 
> considered a candidate folder for atomic rename. When HMaster starts up, it 
> executes listStatus on the .tmp folder to clean up pending data. At this 
> stage, due to the lazy pending-rename completion process, we look for these 
> json files. On seeing an empty file the process simply throws a fatal 
> exception, assuming something went wrong.





[jira] [Updated] (HADOOP-12678) Handle empty rename pending metadata file during atomic rename in redo path

2016-01-06 Thread madhumita chakraborty (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12678?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

madhumita chakraborty updated HADOOP-12678:
---
Attachment: HADOOP-12678.005.patch

> Handle empty rename pending metadata file during atomic rename in redo path
> ---
>
> Key: HADOOP-12678
> URL: https://issues.apache.org/jira/browse/HADOOP-12678
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Reporter: madhumita chakraborty
>Assignee: madhumita chakraborty
>Priority: Critical
> Attachments: HADOOP-12678.001.patch, HADOOP-12678.002.patch, 
> HADOOP-12678.003.patch, HADOOP-12678.004.patch, HADOOP-12678.005.patch
>
>
> Handle empty rename pending metadata file during atomic rename in redo path.
> During an atomic rename we create a metadata file for the rename 
> (-renamePending.json). We create it in 2 steps:
> 1. We create an empty blob corresponding to the .json file in its real 
> location.
> 2. We create a scratch file to which we write the contents of the rename 
> pending, which is then copied over into the blob described in 1.
> If a process crash occurs after step 1 and before step 2 is complete, we will 
> be left with a zero-size blob corresponding to the pending rename metadata 
> file.
> This kind of scenario can happen in the /hbase/.tmp folder because it is 
> considered a candidate folder for atomic rename. When HMaster starts up, it 
> executes listStatus on the .tmp folder to clean up pending data. At this 
> stage, due to the lazy pending-rename completion process, we look for these 
> json files. On seeing an empty file the process simply throws a fatal 
> exception, assuming something went wrong.





[jira] [Commented] (HADOOP-12678) Handle empty rename pending metadata file during atomic rename in redo path

2016-01-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15085898#comment-15085898
 ] 

Hadoop QA commented on HADOOP-12678:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
26s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 14s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 17s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
9s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 23s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
30s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
17s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 12s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 12s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 14s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 14s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
8s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 20s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
37s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 1m 3s 
{color} | {color:red} hadoop-tools_hadoop-azure-jdk1.8.0_66 with JDK v1.8.0_66 
generated 18 new issues (was 26, now 26). {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 11s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 1m 32s 
{color} | {color:red} hadoop-tools_hadoop-azure-jdk1.7.0_91 with JDK v1.7.0_91 
generated 1 new issues (was 1, now 1). {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 12s 
{color} | {color:green} hadoop-azure in the patch passed with JDK v1.8.0_66. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 23s 
{color} | {color:green} hadoop-azure in the patch passed with JDK v1.7.0_91. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
17s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 17m 0s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12780788/HADOOP-12678.005.patch
 |
| JIRA Issue | HADOOP-12678 |
| Optional Tests |  asflicense  compile  javac  javadoc  

[jira] [Updated] (HADOOP-12678) Handle empty rename pending metadata file during atomic rename in redo path

2016-01-06 Thread madhumita chakraborty (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12678?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

madhumita chakraborty updated HADOOP-12678:
---
Status: Patch Available  (was: Open)

> Handle empty rename pending metadata file during atomic rename in redo path
> ---
>
> Key: HADOOP-12678
> URL: https://issues.apache.org/jira/browse/HADOOP-12678
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Reporter: madhumita chakraborty
>Assignee: madhumita chakraborty
>Priority: Critical
> Attachments: HADOOP-12678.001.patch, HADOOP-12678.002.patch, 
> HADOOP-12678.003.patch, HADOOP-12678.004.patch, HADOOP-12678.005.patch
>
>
> Handle empty rename pending metadata file during atomic rename in redo path.
> During an atomic rename we create a metadata file for the rename 
> (-renamePending.json). We create it in 2 steps:
> 1. We create an empty blob corresponding to the .json file in its real 
> location.
> 2. We create a scratch file to which we write the contents of the rename 
> pending, which is then copied over into the blob described in 1.
> If a process crash occurs after step 1 and before step 2 is complete, we will 
> be left with a zero-size blob corresponding to the pending rename metadata 
> file.
> This kind of scenario can happen in the /hbase/.tmp folder because it is 
> considered a candidate folder for atomic rename. When HMaster starts up, it 
> executes listStatus on the .tmp folder to clean up pending data. At this 
> stage, due to the lazy pending-rename completion process, we look for these 
> json files. On seeing an empty file the process simply throws a fatal 
> exception, assuming something went wrong.





[jira] [Commented] (HADOOP-12678) Handle empty rename pending metadata file during atomic rename in redo path

2016-01-06 Thread madhumita chakraborty (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15085902#comment-15085902
 ] 

madhumita chakraborty commented on HADOOP-12678:


[~cnauroth] I have addressed your comments. Could you please take a look?

> Handle empty rename pending metadata file during atomic rename in redo path
> ---
>
> Key: HADOOP-12678
> URL: https://issues.apache.org/jira/browse/HADOOP-12678
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Reporter: madhumita chakraborty
>Assignee: madhumita chakraborty
>Priority: Critical
> Attachments: HADOOP-12678.001.patch, HADOOP-12678.002.patch, 
> HADOOP-12678.003.patch, HADOOP-12678.004.patch, HADOOP-12678.005.patch
>
>
> Handle empty rename pending metadata file during atomic rename in redo path.
> During an atomic rename we create a metadata file for the rename 
> (-renamePending.json). We create it in 2 steps:
> 1. We create an empty blob corresponding to the .json file in its real 
> location.
> 2. We create a scratch file to which we write the contents of the rename 
> pending, which is then copied over into the blob described in 1.
> If a process crash occurs after step 1 and before step 2 is complete, we will 
> be left with a zero-size blob corresponding to the pending rename metadata 
> file.
> This kind of scenario can happen in the /hbase/.tmp folder because it is 
> considered a candidate folder for atomic rename. When HMaster starts up, it 
> executes listStatus on the .tmp folder to clean up pending data. At this 
> stage, due to the lazy pending-rename completion process, we look for these 
> json files. On seeing an empty file the process simply throws a fatal 
> exception, assuming something went wrong.





[jira] [Commented] (HADOOP-12041) Implement another Reed-Solomon coder in pure Java

2016-01-06 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15086391#comment-15086391
 ] 

Zhe Zhang commented on HADOOP-12041:


Thanks Kai for the work. I'm still reviewing the patch but would like to 
discuss a high level question. 

bq. The old HDFS-RAID originated coder will still be there for comparing, and 
converting old data from HDFS-RAID systems.
The old and new coders should generate the same results, right? I don't think 
we need the old coder to port data? I guess it depends on whether 
{{GaloisField}} has the same matrix at size 256 as the new {{GF256}}?

bq. The new Java RS coder will be favored and used by default
If that's our position, we should rename the existing coder as 
{{RSRawEncoderLegacy}} and name the new one as {{RSRawEncoder}} (same for the 
decoder). Alternatively, if we think the stability of the new coder needs more 
testing, we can keep the current naming in the v5 patch, implying that the new 
coder is in "beta" mode.

Some unused methods: {{genReedSolomonMatrix}}, {{gfBase}}, {{gfLogBase}}



> Implement another Reed-Solomon coder in pure Java
> -
>
> Key: HADOOP-12041
> URL: https://issues.apache.org/jira/browse/HADOOP-12041
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Attachments: HADOOP-12041-v1.patch, HADOOP-12041-v2.patch, 
> HADOOP-12041-v3.patch, HADOOP-12041-v4.patch, HADOOP-12041-v5.patch
>
>
> The currently existing Java RS coders based on the {{GaloisField}} 
> implementation have some drawbacks or limitations:
> * The decoder unnecessarily computes units that are not really erased 
> (HADOOP-11871);
> * The decoder requires parity units + data units order for the inputs in the 
> decode API (HADOOP-12040);
> * They need to support or align with native erasure coders regarding concrete 
> coding algorithms and matrices, so Java coders and native coders can be 
> easily swapped in/out, transparently to HDFS (HADOOP-12010);
> * They are unnecessarily flexible, which incurs some overhead; as HDFS 
> erasure coding is entirely a byte-based data system, we don't need to 
> consider symbol sizes other than 256.
> This calls for implementing another RS coder in pure Java, in addition to 
> the existing {{GaloisField}} one from HDFS-RAID. The new Java RS coder will 
> be favored and used by default to resolve the related issues. The old 
> HDFS-RAID originated coder will still be there for comparison, and for 
> converting old data from HDFS-RAID systems.





[jira] [Commented] (HADOOP-12546) Improve TestKMS

2016-01-06 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15086481#comment-15086481
 ] 

Zhe Zhang commented on HADOOP-12546:


I think the conflict is from HADOOP-12682. I tried applying the v03 patch but 
it failed.

> Improve TestKMS
> ---
>
> Key: HADOOP-12546
> URL: https://issues.apache.org/jira/browse/HADOOP-12546
> Project: Hadoop Common
>  Issue Type: Test
>  Components: test
>Affects Versions: 2.7.1
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
> Attachments: HADOOP-12546.001.patch, HADOOP-12546.002.patch, 
> HADOOP-12546.003.patch
>
>
> The TestKMS class has some issues:
> * It swallows some exceptions' stack traces
> * It swallows some exceptions altogether
> * Some of the tests aren't as tight as they could be
> * Asserts lack messages
> * Code style is a bit hodgepodge
> This JIRA is to clean all that up.





[jira] [Commented] (HADOOP-12546) Improve TestKMS

2016-01-06 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15086424#comment-15086424
 ] 

Daniel Templeton commented on HADOOP-12546:
---

TestKMS hasn't changed since Feb.  It shouldn't need a rebase.

> Improve TestKMS
> ---
>
> Key: HADOOP-12546
> URL: https://issues.apache.org/jira/browse/HADOOP-12546
> Project: Hadoop Common
>  Issue Type: Test
>  Components: test
>Affects Versions: 2.7.1
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
> Attachments: HADOOP-12546.001.patch, HADOOP-12546.002.patch, 
> HADOOP-12546.003.patch
>
>
> The TestKMS class has some issues:
> * It swallows some exceptions' stack traces
> * It swallows some exceptions altogether
> * Some of the tests aren't as tight as they could be
> * Asserts lack messages
> * Code style is a bit hodgepodge
> This JIRA is to clean all that up.





[jira] [Commented] (HADOOP-12041) Implement another Reed-Solomon coder in pure Java

2016-01-06 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12041?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15086713#comment-15086713
 ] 

Kai Zheng commented on HADOOP-12041:


Thanks Zhe for the good questions.
bq. The old and new coders should generate the same results, right?
Unfortunately not. That's why I propose to work on another pure Java coder 
here. Even though both are so-called {{Reed-Solomon}} coders, the HDFS-RAID 
one and the ISA-L one use different coding forms internally. Both use GF(256), 
as the targeted HDFS is a byte-based data system. The existing {{GaloisField}} 
facility used by HDFS-RAID also supports symbol sizes other than 256, but as 
that isn't needed in practice, the new {{GF256}} facility is much simplified. 
The new Java coder is developed to be compatible with the ISA-L coder, for 
development and experimental environments where the native library isn't 
available. The HDFS-RAID one isn't compatible, but can be used to port 
existing data from legacy systems if needed.

bq. we should rename the existing coder as RSRawEncoderLegacy and name the new 
one as RSRawEncoder
Excellent idea! Thanks.

bq. Some unused methods: genReedSolomonMatrix, gfBase, gfLogBase
{{GF256}} serves as a complete basic GF facility class, so I would suggest we 
keep them even though they are unused for now. {{genReedSolomonMatrix}} will 
be needed because people may want to support that coding-matrix generation in 
the algorithm.

Looking forward to more of your review comments. :)
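
For readers less familiar with the arithmetic, here is a minimal, 
self-contained sketch of GF(256) multiplication via log/antilog tables, using 
the 0x11d field polynomial commonly used by ISA-L-style coders (illustrative 
only, not the patch's {{GF256}} class):
{code}
public final class Gf256Sketch {
  private static final int[] EXP = new int[512]; // antilog table, doubled
  private static final int[] LOG = new int[256]; // log table

  static {
    int x = 1;
    for (int i = 0; i < 255; i++) {
      EXP[i] = x;
      LOG[x] = i;
      x <<= 1;               // multiply by the generator element 2
      if ((x & 0x100) != 0) {
        x ^= 0x11d;          // reduce modulo the field polynomial
      }
    }
    for (int i = 255; i < 512; i++) {
      EXP[i] = EXP[i - 255]; // duplicate so gfMul can skip a modulo
    }
  }

  static int gfMul(int a, int b) {
    // a * b = exp(log(a) + log(b)); zero has no logarithm.
    return (a == 0 || b == 0) ? 0 : EXP[LOG[a] + LOG[b]];
  }

  public static void main(String[] args) {
    System.out.printf("0x53 * 0xCA = 0x%02X%n", gfMul(0x53, 0xCA));
  }
}
{code}
Two coders produce identical output only if they agree on the field polynomial 
and the coding matrix, which is exactly the compatibility point discussed 
above.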

> Implement another Reed-Solomon coder in pure Java
> -
>
> Key: HADOOP-12041
> URL: https://issues.apache.org/jira/browse/HADOOP-12041
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Attachments: HADOOP-12041-v1.patch, HADOOP-12041-v2.patch, 
> HADOOP-12041-v3.patch, HADOOP-12041-v4.patch, HADOOP-12041-v5.patch
>
>
> The currently existing Java RS coders based on the {{GaloisField}} 
> implementation have some drawbacks or limitations:
> * The decoder unnecessarily computes units that are not really erased 
> (HADOOP-11871);
> * The decoder requires parity units + data units order for the inputs in the 
> decode API (HADOOP-12040);
> * They need to support or align with native erasure coders regarding concrete 
> coding algorithms and matrices, so Java coders and native coders can be 
> easily swapped in/out, transparently to HDFS (HADOOP-12010);
> * They are unnecessarily flexible, which incurs some overhead; as HDFS 
> erasure coding is entirely a byte-based data system, we don't need to 
> consider symbol sizes other than 256.
> This calls for implementing another RS coder in pure Java, in addition to 
> the existing {{GaloisField}} one from HDFS-RAID. The new Java RS coder will 
> be favored and used by default to resolve the related issues. The old 
> HDFS-RAID originated coder will still be there for comparison, and for 
> converting old data from HDFS-RAID systems.





[jira] [Updated] (HADOOP-12687) SecureUtil#getByName should also try to resolve direct hostname incase multiple loopback addresses are present in etc/hosts

2016-01-06 Thread Rohith Sharma K S (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S updated HADOOP-12687:
---
Status: Patch Available  (was: Open)

> SecureUtil#getByName should also try to resolve direct hostname incase 
> multiple loopback addresses are present in etc/hosts
> ---
>
> Key: HADOOP-12687
> URL: https://issues.apache.org/jira/browse/HADOOP-12687
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Junping Du
>Assignee: Sunil G
>  Labels: security
> Attachments: 0001-YARN-4352.patch, 0002-YARN-4352.patch, 
> 0003-HADOOP-12687.patch
>
>
> From 
> https://builds.apache.org/job/PreCommit-YARN-Build/9661/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client-jdk1.7.0_79.txt,
>  we can see that the tests in TestYarnClient, TestAMRMClient and TestNMClient 
> time out, which can be reproduced locally.
> When {{/etc/hosts}} has multiple loopback entries, 
> {{InetAddress.getByName(null)}} will return the first entry present in 
> etc/hosts. Hence it's possible that the machine hostname can be second in the 
> list and cause {{UnknownHostException}}.
> Suggesting a direct resolve for such hostname scenarios.





[jira] [Updated] (HADOOP-12687) SecureUtil#getByName should also try to resolve direct hostname incase multiple loopback addresses are present in etc/hosts

2016-01-06 Thread Rohith Sharma K S (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S updated HADOOP-12687:
---
Status: Open  (was: Patch Available)

> SecureUtil#getByName should also try to resolve direct hostname incase 
> multiple loopback addresses are present in etc/hosts
> ---
>
> Key: HADOOP-12687
> URL: https://issues.apache.org/jira/browse/HADOOP-12687
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Junping Du
>Assignee: Sunil G
>  Labels: security
> Attachments: 0001-YARN-4352.patch, 0002-YARN-4352.patch, 
> 0003-HADOOP-12687.patch
>
>
> From 
> https://builds.apache.org/job/PreCommit-YARN-Build/9661/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client-jdk1.7.0_79.txt,
>  we can see that the tests in TestYarnClient, TestAMRMClient and TestNMClient 
> time out, which can be reproduced locally.
> When {{/etc/hosts}} has multiple loopback entries, 
> {{InetAddress.getByName(null)}} will return the first entry present in 
> etc/hosts. Hence it's possible that the machine hostname can be second in the 
> list and cause {{UnknownHostException}}.
> Suggesting a direct resolve for such hostname scenarios.





[jira] [Updated] (HADOOP-12687) SecureUtil#getByName should also try to resolve direct hostname incase multiple loopback addresses are present in etc/hosts

2016-01-06 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated HADOOP-12687:
-
Attachment: 0004-HADOOP-12687.patch

Reattaching the patch to trigger Jenkins again.

> SecureUtil#getByName should also try to resolve direct hostname incase 
> multiple loopback addresses are present in etc/hosts
> ---
>
> Key: HADOOP-12687
> URL: https://issues.apache.org/jira/browse/HADOOP-12687
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Junping Du
>Assignee: Sunil G
>  Labels: security
> Attachments: 0001-YARN-4352.patch, 0002-YARN-4352.patch, 
> 0003-HADOOP-12687.patch, 0004-HADOOP-12687.patch
>
>
> From 
> https://builds.apache.org/job/PreCommit-YARN-Build/9661/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client-jdk1.7.0_79.txt,
>  we can see that the tests in TestYarnClient, TestAMRMClient and TestNMClient 
> time out, which can be reproduced locally.
> When {{/etc/hosts}} has multiple loopback entries, 
> {{InetAddress.getByName(null)}} will return the first entry present in 
> etc/hosts. Hence it's possible that the machine hostname can be second in the 
> list and cause {{UnknownHostException}}.
> Suggesting a direct resolve for such hostname scenarios.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12678) Handle empty rename pending metadata file during atomic rename in redo path

2016-01-06 Thread madhumita chakraborty (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12678?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

madhumita chakraborty updated HADOOP-12678:
---
Attachment: HADOOP-12678.006.patch

> Handle empty rename pending metadata file during atomic rename in redo path
> ---
>
> Key: HADOOP-12678
> URL: https://issues.apache.org/jira/browse/HADOOP-12678
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Reporter: madhumita chakraborty
>Assignee: madhumita chakraborty
>Priority: Critical
> Attachments: HADOOP-12678.001.patch, HADOOP-12678.002.patch, 
> HADOOP-12678.003.patch, HADOOP-12678.004.patch, HADOOP-12678.005.patch, 
> HADOOP-12678.006.patch
>
>
> Handle empty rename pending metadata file during atomic rename in redo path
> During an atomic rename we create a metadata file for the rename 
> (-renamePending.json). We create it in two steps:
> 1. We create an empty blob corresponding to the .json file in its real 
> location.
> 2. We create a scratch file to which we write the contents of the rename 
> pending metadata, which are then copied over into the blob described in step 1.
> If a process crash occurs after step 1 and before step 2 completes, we are 
> left with a zero-size blob corresponding to the pending rename metadata 
> file.
> This kind of scenario can happen in the /hbase/.tmp folder because it is 
> considered a candidate folder for atomic rename. When HMaster starts up, 
> it executes listStatus on the .tmp folder to clean up pending data. At this 
> stage, due to the lazy rename-completion process, we look for these JSON 
> files. On seeing an empty file, the process simply throws a fatal exception, 
> assuming something went wrong.
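For context, a minimal hypothetical sketch of how the redo path could treat such a zero-size -renamePending.json blob (illustrative only, with invented helper types; the attached patches contain the real hadoop-azure change):
{code}
import java.io.IOException;

// Hypothetical types and names, for illustration only.
class RenamePendingRedo {

  interface BlobStatus {
    long getLength();
    String getName();
  }

  interface BlobStore {
    void delete(String name) throws IOException;
    String readAll(String name) throws IOException;
  }

  /**
   * Returns the rename-pending JSON to redo, or null when the metadata
   * file is a zero-size leftover of a crash between step 1 and step 2.
   */
  static String readRenamePending(BlobStatus status, BlobStore store)
      throws IOException {
    if (status.getLength() == 0) {
      // The empty blob was created but the scratch contents were never
      // copied in: clean it up and skip, instead of throwing a fatal
      // exception.
      store.delete(status.getName());
      return null;
    }
    return store.readAll(status.getName());
  }
}
{code}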



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12678) Handle empty rename pending metadata file during atomic rename in redo path

2016-01-06 Thread madhumita chakraborty (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12678?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

madhumita chakraborty updated HADOOP-12678:
---
Status: Patch Available  (was: Open)

> Handle empty rename pending metadata file during atomic rename in redo path
> ---
>
> Key: HADOOP-12678
> URL: https://issues.apache.org/jira/browse/HADOOP-12678
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Reporter: madhumita chakraborty
>Assignee: madhumita chakraborty
>Priority: Critical
> Attachments: HADOOP-12678.001.patch, HADOOP-12678.002.patch, 
> HADOOP-12678.003.patch, HADOOP-12678.004.patch, HADOOP-12678.005.patch, 
> HADOOP-12678.006.patch
>
>
> Handle empty rename pending metadata file during atomic rename in redo path
> During an atomic rename we create a metadata file for the rename 
> (-renamePending.json). We create it in two steps:
> 1. We create an empty blob corresponding to the .json file in its real 
> location.
> 2. We create a scratch file to which we write the contents of the rename 
> pending metadata, which are then copied over into the blob described in step 1.
> If a process crash occurs after step 1 and before step 2 completes, we are 
> left with a zero-size blob corresponding to the pending rename metadata 
> file.
> This kind of scenario can happen in the /hbase/.tmp folder because it is 
> considered a candidate folder for atomic rename. When HMaster starts up, 
> it executes listStatus on the .tmp folder to clean up pending data. At this 
> stage, due to the lazy rename-completion process, we look for these JSON 
> files. On seeing an empty file, the process simply throws a fatal exception, 
> assuming something went wrong.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12678) Handle empty rename pending metadata file during atomic rename in redo path

2016-01-06 Thread madhumita chakraborty (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12678?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

madhumita chakraborty updated HADOOP-12678:
---
Status: Open  (was: Patch Available)

> Handle empty rename pending metadata file during atomic rename in redo path
> ---
>
> Key: HADOOP-12678
> URL: https://issues.apache.org/jira/browse/HADOOP-12678
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Reporter: madhumita chakraborty
>Assignee: madhumita chakraborty
>Priority: Critical
> Attachments: HADOOP-12678.001.patch, HADOOP-12678.002.patch, 
> HADOOP-12678.003.patch, HADOOP-12678.004.patch, HADOOP-12678.005.patch
>
>
> Handle empty rename pending metadata file during atomic rename in redo path
> During an atomic rename we create a metadata file for the rename 
> (-renamePending.json). We create it in two steps:
> 1. We create an empty blob corresponding to the .json file in its real 
> location.
> 2. We create a scratch file to which we write the contents of the rename 
> pending metadata, which are then copied over into the blob described in step 1.
> If a process crash occurs after step 1 and before step 2 completes, we are 
> left with a zero-size blob corresponding to the pending rename metadata 
> file.
> This kind of scenario can happen in the /hbase/.tmp folder because it is 
> considered a candidate folder for atomic rename. When HMaster starts up, 
> it executes listStatus on the .tmp folder to clean up pending data. At this 
> stage, due to the lazy rename-completion process, we look for these JSON 
> files. On seeing an empty file, the process simply throws a fatal exception, 
> assuming something went wrong.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12678) Handle empty rename pending metadata file during atomic rename in redo path

2016-01-06 Thread madhumita chakraborty (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12678?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

madhumita chakraborty updated HADOOP-12678:
---
Attachment: (was: HADOOP-12678.006.patch)

> Handle empty rename pending metadata file during atomic rename in redo path
> ---
>
> Key: HADOOP-12678
> URL: https://issues.apache.org/jira/browse/HADOOP-12678
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Reporter: madhumita chakraborty
>Assignee: madhumita chakraborty
>Priority: Critical
> Attachments: HADOOP-12678.001.patch, HADOOP-12678.002.patch, 
> HADOOP-12678.003.patch, HADOOP-12678.004.patch, HADOOP-12678.005.patch
>
>
> Handle empty rename pending metadata file during atomic rename in redo path
> During an atomic rename we create a metadata file for the rename 
> (-renamePending.json). We create it in two steps:
> 1. We create an empty blob corresponding to the .json file in its real 
> location.
> 2. We create a scratch file to which we write the contents of the rename 
> pending metadata, which are then copied over into the blob described in step 1.
> If a process crash occurs after step 1 and before step 2 completes, we are 
> left with a zero-size blob corresponding to the pending rename metadata 
> file.
> This kind of scenario can happen in the /hbase/.tmp folder because it is 
> considered a candidate folder for atomic rename. When HMaster starts up, 
> it executes listStatus on the .tmp folder to clean up pending data. At this 
> stage, due to the lazy rename-completion process, we look for these JSON 
> files. On seeing an empty file, the process simply throws a fatal exception, 
> assuming something went wrong.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12692) Maven's DependencyConvergence rule failed

2016-01-06 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12692?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-12692:
-
Description: 
I am seeing a Maven warning in Jenkins:
https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/761/console

This nightly job failed because a Maven enforcer rule failed:
{noformat}
[WARNING] Rule 0: org.apache.maven.plugins.enforcer.DependencyConvergence 
failed with message:
Failed while enforcing releasability the error(s) are [
Dependency convergence error for org.apache.hadoop:hadoop-auth:3.0.0-SNAPSHOT 
paths to dependency are:
+-org.apache.hadoop:hadoop-hdfs-httpfs:3.0.0-SNAPSHOT
  +-org.apache.hadoop:hadoop-auth:3.0.0-SNAPSHOT
and
+-org.apache.hadoop:hadoop-hdfs-httpfs:3.0.0-SNAPSHOT
  +-org.apache.hadoop:hadoop-common:3.0.0-SNAPSHOT
+-org.apache.hadoop:hadoop-auth:3.0.0-SNAPSHOT
and
+-org.apache.hadoop:hadoop-hdfs-httpfs:3.0.0-SNAPSHOT
  +-org.apache.hadoop:hadoop-common:3.0.0-SNAPSHOT
+-org.apache.hadoop:hadoop-auth:3.0.0-SNAPSHOT
and
+-org.apache.hadoop:hadoop-hdfs-httpfs:3.0.0-SNAPSHOT
  +-org.apache.hadoop:hadoop-auth:3.0.0-20160107.005725-7960
]
{noformat}

{noformat}
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-enforcer-plugin:1.3.1:enforce (depcheck) on 
project hadoop-hdfs-httpfs: Some Enforcer rules have failed. Look above for 
specific messages explaining why the rule failed. -> [Help 1]
{noformat}

It looks like httpfs depends on two versions of hadoop-auth: 3.0.0-SNAPSHOT 
and a timestamped one.

I think this can be fixed by updating one of the pom.xml files, but I am not 
exactly sure how to do it. We need a Maven expert here.

  was:
I am seeing a Maven warning in Jenkins:
https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/761/console

This nightly job failed because a Maven enforcer rule failed:
{noformat}
[WARNING] Rule 0: org.apache.maven.plugins.enforcer.DependencyConvergence 
failed with message:
Failed while enforcing releasability the error(s) are [
Dependency convergence error for org.apache.hadoop:hadoop-auth:3.0.0-SNAPSHOT 
paths to dependency are:
+-org.apache.hadoop:hadoop-hdfs-httpfs:3.0.0-SNAPSHOT
  +-org.apache.hadoop:hadoop-auth:3.0.0-SNAPSHOT
and
+-org.apache.hadoop:hadoop-hdfs-httpfs:3.0.0-SNAPSHOT
  +-org.apache.hadoop:hadoop-common:3.0.0-SNAPSHOT
+-org.apache.hadoop:hadoop-auth:3.0.0-SNAPSHOT
and
+-org.apache.hadoop:hadoop-hdfs-httpfs:3.0.0-SNAPSHOT
  +-org.apache.hadoop:hadoop-common:3.0.0-SNAPSHOT
+-org.apache.hadoop:hadoop-auth:3.0.0-SNAPSHOT
and
+-org.apache.hadoop:hadoop-hdfs-httpfs:3.0.0-SNAPSHOT
  +-org.apache.hadoop:hadoop-auth:3.0.0-20160107.005725-7960
]
{noformat}

{noformat}
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-enforcer-plugin:1.3.1:enforce (depcheck) on 
project hadoop-hdfs-httpfs: Some Enforcer rules have failed. Look above for 
specific messages explaining why the rule failed. -> [Help 1]
{noformat}

It looks like httpfs depends on two versions of hadoop-auth: 3.0.0-SNAPSHOT 
and a timestamped one.


> Maven's DependencyConvergence rule failed
> -
>
> Key: HADOOP-12692
> URL: https://issues.apache.org/jira/browse/HADOOP-12692
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
> Environment: Jenkins
>Reporter: Wei-Chiu Chuang
>
> I am seeing a Maven warning in Jenkins:
> https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/761/console
> This nightly job failed because a Maven enforcer rule failed:
> {noformat}
> [WARNING] Rule 0: org.apache.maven.plugins.enforcer.DependencyConvergence 
> failed with message:
> Failed while enforcing releasability the error(s) are [
> Dependency convergence error for org.apache.hadoop:hadoop-auth:3.0.0-SNAPSHOT 
> paths to dependency are:
> +-org.apache.hadoop:hadoop-hdfs-httpfs:3.0.0-SNAPSHOT
>   +-org.apache.hadoop:hadoop-auth:3.0.0-SNAPSHOT
> and
> +-org.apache.hadoop:hadoop-hdfs-httpfs:3.0.0-SNAPSHOT
>   +-org.apache.hadoop:hadoop-common:3.0.0-SNAPSHOT
> +-org.apache.hadoop:hadoop-auth:3.0.0-SNAPSHOT
> and
> +-org.apache.hadoop:hadoop-hdfs-httpfs:3.0.0-SNAPSHOT
>   +-org.apache.hadoop:hadoop-common:3.0.0-SNAPSHOT
> +-org.apache.hadoop:hadoop-auth:3.0.0-SNAPSHOT
> and
> +-org.apache.hadoop:hadoop-hdfs-httpfs:3.0.0-SNAPSHOT
>   +-org.apache.hadoop:hadoop-auth:3.0.0-20160107.005725-7960
> ]
> {noformat}
> {noformat}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-enforcer-plugin:1.3.1:enforce (depcheck) on 
> project hadoop-hdfs-httpfs: Some Enforcer rules have failed. Look above for 
> specific messages explaining why the rule failed. -> [Help 1]
> {noformat}
> It looks like httpfs depends on two versions of hadoop-auth: 3.0.0-SNAPSHOT 
> and a timestamped one.
> I think this can be fixed by updating one of the pom.xml files, but I am not 
> exactly sure how to do it. We need a Maven expert here.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

[jira] [Commented] (HADOOP-12687) SecureUtil#getByName should also try to resolve direct hostname incase multiple loopback addresses are present in etc/hosts

2016-01-06 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15086731#comment-15086731
 ] 

Rohith Sharma K S commented on HADOOP-12687:


Cancelled the patch and resubmitted it again to trigger Jenkins.

> SecureUtil#getByName should also try to resolve direct hostname incase 
> multiple loopback addresses are present in etc/hosts
> ---
>
> Key: HADOOP-12687
> URL: https://issues.apache.org/jira/browse/HADOOP-12687
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Junping Du
>Assignee: Sunil G
>  Labels: security
> Attachments: 0001-YARN-4352.patch, 0002-YARN-4352.patch, 
> 0003-HADOOP-12687.patch
>
>
> From 
> https://builds.apache.org/job/PreCommit-YARN-Build/9661/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client-jdk1.7.0_79.txt,
>  we can see that the tests in TestYarnClient, TestAMRMClient and TestNMClient 
> time out, which can be reproduced locally.
> When {{/etc/hosts}} has multiple loopback entries, 
> {{InetAddress.getByName(null)}} returns the first entry present in 
> /etc/hosts. Hence it is possible that the machine hostname is second in the 
> list, causing an {{UnknownHostException}}.
> Suggesting a direct resolve of the hostname for such scenarios.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12687) SecureUtil#getByName should also try to resolve direct hostname, incase multiple loopback addresses are present in /etc/hosts

2016-01-06 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15086808#comment-15086808
 ] 

Rohith Sharma K S commented on HADOOP-12687:


Committing shortly.

> SecureUtil#getByName should also try to resolve direct hostname, incase 
> multiple loopback addresses are present in /etc/hosts
> -
>
> Key: HADOOP-12687
> URL: https://issues.apache.org/jira/browse/HADOOP-12687
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Junping Du
>Assignee: Sunil G
>  Labels: security
> Attachments: 0001-YARN-4352.patch, 0002-YARN-4352.patch, 
> 0003-HADOOP-12687.patch, 0004-HADOOP-12687.patch
>
>
> From 
> https://builds.apache.org/job/PreCommit-YARN-Build/9661/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client-jdk1.7.0_79.txt,
>  we can see that the tests in TestYarnClient, TestAMRMClient and TestNMClient 
> time out, which can be reproduced locally.
> When {{/etc/hosts}} has multiple loopback entries, 
> {{InetAddress.getByName(null)}} returns the first entry present in 
> /etc/hosts. Hence it is possible that the machine hostname is second in the 
> list, causing an {{UnknownHostException}}.
> Suggesting a direct resolve of the hostname for such scenarios.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12692) Maven's DependencyConvergence rule failed

2016-01-06 Thread Wei-Chiu Chuang (JIRA)
Wei-Chiu Chuang created HADOOP-12692:


 Summary: Maven's DependencyConvergence rule failed
 Key: HADOOP-12692
 URL: https://issues.apache.org/jira/browse/HADOOP-12692
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0
 Environment: Jenkins
Reporter: Wei-Chiu Chuang


I am seeing a Maven warning in Jenkins:
https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/761/console

This nightly job failed because a Maven enforcer rule failed:
{noformat}
[WARNING] Rule 0: org.apache.maven.plugins.enforcer.DependencyConvergence 
failed with message:
Failed while enforcing releasability the error(s) are [
Dependency convergence error for org.apache.hadoop:hadoop-auth:3.0.0-SNAPSHOT 
paths to dependency are:
+-org.apache.hadoop:hadoop-hdfs-httpfs:3.0.0-SNAPSHOT
  +-org.apache.hadoop:hadoop-auth:3.0.0-SNAPSHOT
and
+-org.apache.hadoop:hadoop-hdfs-httpfs:3.0.0-SNAPSHOT
  +-org.apache.hadoop:hadoop-common:3.0.0-SNAPSHOT
+-org.apache.hadoop:hadoop-auth:3.0.0-SNAPSHOT
and
+-org.apache.hadoop:hadoop-hdfs-httpfs:3.0.0-SNAPSHOT
  +-org.apache.hadoop:hadoop-common:3.0.0-SNAPSHOT
+-org.apache.hadoop:hadoop-auth:3.0.0-SNAPSHOT
and
+-org.apache.hadoop:hadoop-hdfs-httpfs:3.0.0-SNAPSHOT
  +-org.apache.hadoop:hadoop-auth:3.0.0-20160107.005725-7960
]
{noformat}

{noformat}
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-enforcer-plugin:1.3.1:enforce (depcheck) on 
project hadoop-hdfs-httpfs: Some Enforcer rules have failed. Look above for 
specific messages explaining why the rule failed. -> [Help 1]
{noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12687) SecureUtil#getByName should also try to resolve direct hostname, incase multiple loopback addresses are present in /etc/hosts

2016-01-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15086801#comment-15086801
 ] 

Hadoop QA commented on HADOOP-12687:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
3s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 40s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 26s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
19s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 8s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
55s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 6s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
41s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 45s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 8m 45s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 31s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 9m 30s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
18s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 5s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 7s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 59s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 8s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 24s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.8.0_66. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 59s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.7.0_91. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
25s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 75m 41s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.7.0_91 Failed junit tests | hadoop.fs.shell.TestCopyPreserveFlag |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12780891/0004-HADOOP-12687.patch
 |
| JIRA Issue | HADOOP-12687 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 9162e44a1c18 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| 

[jira] [Updated] (HADOOP-12687) SecureUtil#getByName should also try to resolve direct hostname, incase multiple loopback addresses are present in /etc/hosts

2016-01-06 Thread Rohith Sharma K S (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S updated HADOOP-12687:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.9.0
   Status: Resolved  (was: Patch Available)

Committed to trunk/branch-2. Thanks [~sunilg] for the patch :-) and thanks 
[~vinayrpet] for the review.

> SecureUtil#getByName should also try to resolve direct hostname, incase 
> multiple loopback addresses are present in /etc/hosts
> -
>
> Key: HADOOP-12687
> URL: https://issues.apache.org/jira/browse/HADOOP-12687
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Junping Du
>Assignee: Sunil G
>  Labels: security
> Fix For: 2.9.0
>
> Attachments: 0001-YARN-4352.patch, 0002-YARN-4352.patch, 
> 0003-HADOOP-12687.patch, 0004-HADOOP-12687.patch
>
>
> From 
> https://builds.apache.org/job/PreCommit-YARN-Build/9661/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client-jdk1.7.0_79.txt,
>  we can see that the tests in TestYarnClient, TestAMRMClient and TestNMClient 
> time out, which can be reproduced locally.
> When {{/etc/hosts}} has multiple loopback entries, 
> {{InetAddress.getByName(null)}} returns the first entry present in 
> /etc/hosts. Hence it is possible that the machine hostname is second in the 
> list, causing an {{UnknownHostException}}.
> Suggesting a direct resolve of the hostname for such scenarios.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12693) Many misusage of assertEquals(expected, actual)

2016-01-06 Thread Akihiro Suda (JIRA)
Akihiro Suda created HADOOP-12693:
-

 Summary: Many misusage of assertEquals(expected, actual)
 Key: HADOOP-12693
 URL: https://issues.apache.org/jira/browse/HADOOP-12693
 Project: Hadoop Common
  Issue Type: Test
Reporter: Akihiro Suda
Priority: Trivial
 Attachments: just-rough-approx.txt

The first arg of {{org.junit.Assert.assertEquals()}} should be an {{expected}} 
value, and the second one should be an {{actual}} value.

{code}
void assertEquals(T expected, T actual);
{code}

http://junit.org/apidocs/org/junit/Assert.html#assertEquals(java.lang.Object, 
java.lang.Object)

However, there are so many violations in Hadoop, which can make a misleading 
message like this:
{code}
AssertionError: expected:<actual value> but was:<expected value>
{code}.

Please refer to {{just-rough-approx.txt}}.
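To illustrate the problem, a small hypothetical test snippet (not taken from the attached file) showing how swapped arguments produce a misleading report:
{code}
import static org.junit.Assert.assertEquals;

public class AssertOrderDemo {

  static int computeAnswer() {
    return 41; // deliberately wrong so the assertion fails
  }

  public static void main(String[] args) {
    int actual = computeAnswer();
    // Swapped order, assertEquals(actual, 42), would fail with
    // "expected:<41> but was:<42>" -- blaming the constant.
    // Correct order puts the expected value first:
    assertEquals(42, actual);  // fails with "expected:<42> but was:<41>"
  }
}
{code}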




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12692) Maven's DependencyConvergence rule failed

2016-01-06 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12692?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-12692:
-
Attachment: HADOOP-12692.001.patch

Rev01: force the Maven dependency enforcer not to match timestamped versions.
I am not sure if this is right, but according to 
https://maven.apache.org/enforcer/enforcer-rules/dependencyConvergence.html, 
setting the property to false makes the enforcer not look for timestamped 
versions.

> Maven's DependencyConvergence rule failed
> -
>
> Key: HADOOP-12692
> URL: https://issues.apache.org/jira/browse/HADOOP-12692
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
> Environment: Jenkins
>Reporter: Wei-Chiu Chuang
> Attachments: HADOOP-12692.001.patch
>
>
> I am seeing a Maven warning in Jenkins:
> https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/761/console
> This nightly job failed because a Maven enforcer rule failed:
> {noformat}
> [WARNING] Rule 0: org.apache.maven.plugins.enforcer.DependencyConvergence 
> failed with message:
> Failed while enforcing releasability the error(s) are [
> Dependency convergence error for org.apache.hadoop:hadoop-auth:3.0.0-SNAPSHOT 
> paths to dependency are:
> +-org.apache.hadoop:hadoop-hdfs-httpfs:3.0.0-SNAPSHOT
>   +-org.apache.hadoop:hadoop-auth:3.0.0-SNAPSHOT
> and
> +-org.apache.hadoop:hadoop-hdfs-httpfs:3.0.0-SNAPSHOT
>   +-org.apache.hadoop:hadoop-common:3.0.0-SNAPSHOT
> +-org.apache.hadoop:hadoop-auth:3.0.0-SNAPSHOT
> and
> +-org.apache.hadoop:hadoop-hdfs-httpfs:3.0.0-SNAPSHOT
>   +-org.apache.hadoop:hadoop-common:3.0.0-SNAPSHOT
> +-org.apache.hadoop:hadoop-auth:3.0.0-SNAPSHOT
> and
> +-org.apache.hadoop:hadoop-hdfs-httpfs:3.0.0-SNAPSHOT
>   +-org.apache.hadoop:hadoop-auth:3.0.0-20160107.005725-7960
> ]
> {noformat}
> {noformat}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-enforcer-plugin:1.3.1:enforce (depcheck) on 
> project hadoop-hdfs-httpfs: Some Enforcer rules have failed. Look above for 
> specific messages explaining why the rule failed. -> [Help 1]
> {noformat}
> It looks like httpfs depends on two versions of hadoop-auth: 3.0.0-SNAPSHOT 
> and a timestamped one.
> I think this can be fixed by updating one of the pom.xml files, but I am not 
> exactly sure how to do it. We need a Maven expert here.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12693) Many misusage of assertEquals(expected, actual)

2016-01-06 Thread Akihiro Suda (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12693?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akihiro Suda updated HADOOP-12693:
--
Attachment: just-rough-approx.txt

> Many misusage of assertEquals(expected, actual)
> ---
>
> Key: HADOOP-12693
> URL: https://issues.apache.org/jira/browse/HADOOP-12693
> Project: Hadoop Common
>  Issue Type: Test
>Reporter: Akihiro Suda
>Priority: Trivial
> Attachments: just-rough-approx.txt
>
>
> The first arg of {{org.junit.Assert.assertEquals()}} should be an 
> {{expected}} value, and the second one should be an {{actual}} value.
> {code}
> void assertEquals(T expected, T actual);
> {code}
> http://junit.org/apidocs/org/junit/Assert.html#assertEquals(java.lang.Object, 
> java.lang.Object)
> However, there are so many violations in Hadoop, which can make a misleading 
> message like this:
> {code}
> AssertionError: expected:<actual value> but was:<expected value>
> {code}.
> Please refer to {{just-rough-approx.txt}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12687) SecureUtil#getByName should also try to resolve direct hostname, incase multiple loopback addresses are present in /etc/hosts

2016-01-06 Thread Sunil G (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15086841#comment-15086841
 ] 

Sunil G commented on HADOOP-12687:
--

Thanks [~rohithsharma] for the review and commit. And thanks [~vinayrpet] for 
the review!

> SecureUtil#getByName should also try to resolve direct hostname, incase 
> multiple loopback addresses are present in /etc/hosts
> -
>
> Key: HADOOP-12687
> URL: https://issues.apache.org/jira/browse/HADOOP-12687
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Junping Du
>Assignee: Sunil G
>  Labels: security
> Fix For: 2.9.0
>
> Attachments: 0001-YARN-4352.patch, 0002-YARN-4352.patch, 
> 0003-HADOOP-12687.patch, 0004-HADOOP-12687.patch
>
>
> From 
> https://builds.apache.org/job/PreCommit-YARN-Build/9661/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client-jdk1.7.0_79.txt,
>  we can see that the tests in TestYarnClient, TestAMRMClient and TestNMClient 
> time out, which can be reproduced locally.
> When {{/etc/hosts}} has multiple loopback entries, 
> {{InetAddress.getByName(null)}} returns the first entry present in 
> /etc/hosts. Hence it is possible that the machine hostname is second in the 
> list, causing an {{UnknownHostException}}.
> Suggesting a direct resolve of the hostname for such scenarios.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12692) Maven's DependencyConvergence rule failed

2016-01-06 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12692?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-12692:
-
Description: 
I am seeing a Maven warning in Jenkins:
https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/761/console

This nightly job failed because a Maven enforcer rule failed:
{noformat}
[WARNING] Rule 0: org.apache.maven.plugins.enforcer.DependencyConvergence 
failed with message:
Failed while enforcing releasability the error(s) are [
Dependency convergence error for org.apache.hadoop:hadoop-auth:3.0.0-SNAPSHOT 
paths to dependency are:
+-org.apache.hadoop:hadoop-hdfs-httpfs:3.0.0-SNAPSHOT
  +-org.apache.hadoop:hadoop-auth:3.0.0-SNAPSHOT
and
+-org.apache.hadoop:hadoop-hdfs-httpfs:3.0.0-SNAPSHOT
  +-org.apache.hadoop:hadoop-common:3.0.0-SNAPSHOT
+-org.apache.hadoop:hadoop-auth:3.0.0-SNAPSHOT
and
+-org.apache.hadoop:hadoop-hdfs-httpfs:3.0.0-SNAPSHOT
  +-org.apache.hadoop:hadoop-common:3.0.0-SNAPSHOT
+-org.apache.hadoop:hadoop-auth:3.0.0-SNAPSHOT
and
+-org.apache.hadoop:hadoop-hdfs-httpfs:3.0.0-SNAPSHOT
  +-org.apache.hadoop:hadoop-auth:3.0.0-20160107.005725-7960
]
{noformat}

{noformat}
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-enforcer-plugin:1.3.1:enforce (depcheck) on 
project hadoop-hdfs-httpfs: Some Enforcer rules have failed. Look above for 
specific messages explaining why the rule failed. -> [Help 1]
{noformat}

It looks like httpfs depends on two versions of hadoop-auth: 3.0.0-SNAPSHOT 
and a timestamped one.

  was:
I am seeing a Maven warning in Jenkins:
https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/761/console

This nightly job failed because a Maven enforcer rule failed:
{noformat}
[WARNING] Rule 0: org.apache.maven.plugins.enforcer.DependencyConvergence 
failed with message:
Failed while enforcing releasability the error(s) are [
Dependency convergence error for org.apache.hadoop:hadoop-auth:3.0.0-SNAPSHOT 
paths to dependency are:
+-org.apache.hadoop:hadoop-hdfs-httpfs:3.0.0-SNAPSHOT
  +-org.apache.hadoop:hadoop-auth:3.0.0-SNAPSHOT
and
+-org.apache.hadoop:hadoop-hdfs-httpfs:3.0.0-SNAPSHOT
  +-org.apache.hadoop:hadoop-common:3.0.0-SNAPSHOT
+-org.apache.hadoop:hadoop-auth:3.0.0-SNAPSHOT
and
+-org.apache.hadoop:hadoop-hdfs-httpfs:3.0.0-SNAPSHOT
  +-org.apache.hadoop:hadoop-common:3.0.0-SNAPSHOT
+-org.apache.hadoop:hadoop-auth:3.0.0-SNAPSHOT
and
+-org.apache.hadoop:hadoop-hdfs-httpfs:3.0.0-SNAPSHOT
  +-org.apache.hadoop:hadoop-auth:3.0.0-20160107.005725-7960
]
{noformat}

{noformat}
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-enforcer-plugin:1.3.1:enforce (depcheck) on 
project hadoop-hdfs-httpfs: Some Enforcer rules have failed. Look above for 
specific messages explaining why the rule failed. -> [Help 1]
{noformat}


> Maven's DependencyConvergence rule failed
> -
>
> Key: HADOOP-12692
> URL: https://issues.apache.org/jira/browse/HADOOP-12692
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
> Environment: Jenkins
>Reporter: Wei-Chiu Chuang
>
> I am seeing a Maven warning in Jenkins:
> https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/761/console
> This nightly job failed because a Maven enforcer rule failed:
> {noformat}
> [WARNING] Rule 0: org.apache.maven.plugins.enforcer.DependencyConvergence 
> failed with message:
> Failed while enforcing releasability the error(s) are [
> Dependency convergence error for org.apache.hadoop:hadoop-auth:3.0.0-SNAPSHOT 
> paths to dependency are:
> +-org.apache.hadoop:hadoop-hdfs-httpfs:3.0.0-SNAPSHOT
>   +-org.apache.hadoop:hadoop-auth:3.0.0-SNAPSHOT
> and
> +-org.apache.hadoop:hadoop-hdfs-httpfs:3.0.0-SNAPSHOT
>   +-org.apache.hadoop:hadoop-common:3.0.0-SNAPSHOT
> +-org.apache.hadoop:hadoop-auth:3.0.0-SNAPSHOT
> and
> +-org.apache.hadoop:hadoop-hdfs-httpfs:3.0.0-SNAPSHOT
>   +-org.apache.hadoop:hadoop-common:3.0.0-SNAPSHOT
> +-org.apache.hadoop:hadoop-auth:3.0.0-SNAPSHOT
> and
> +-org.apache.hadoop:hadoop-hdfs-httpfs:3.0.0-SNAPSHOT
>   +-org.apache.hadoop:hadoop-auth:3.0.0-20160107.005725-7960
> ]
> {noformat}
> {noformat}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-enforcer-plugin:1.3.1:enforce (depcheck) on 
> project hadoop-hdfs-httpfs: Some Enforcer rules have failed. Look above for 
> specific messages explaining why the rule failed. -> [Help 1]
> {noformat}
> It looks like httpfs depends on two versions of hadoop-auth: 3.0.0-SNAPSHOT 
> and a timestamped one.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12678) Handle empty rename pending metadata file during atomic rename in redo path

2016-01-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15086787#comment-15086787
 ] 

Hadoop QA commented on HADOOP-12678:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
44s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 14s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 16s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
8s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 22s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
30s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 11s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 11s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 13s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 13s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 0m 8s 
{color} | {color:red} Patch generated 2 new checkstyle issues in 
hadoop-tools/hadoop-azure (total was 25, now 27). {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 18s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
9s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
35s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 59s 
{color} | {color:red} hadoop-tools_hadoop-azure-jdk1.8.0_66 with JDK v1.8.0_66 
generated 18 new issues (was 26, now 26). {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 9s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 1m 26s 
{color} | {color:red} hadoop-tools_hadoop-azure-jdk1.7.0_91 with JDK v1.7.0_91 
generated 1 new issues (was 1, now 1). {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 11s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 1s 
{color} | {color:green} hadoop-azure in the patch passed with JDK v1.8.0_66. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 17s 
{color} | {color:green} hadoop-azure in the patch passed with JDK v1.7.0_91. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
19s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 15m 44s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12780895/HADOOP-12678.006.patch
 |
| JIRA Issue | 

[jira] [Commented] (HADOOP-12687) SecureUtil#getByName should also try to resolve direct hostname, incase multiple loopback addresses are present in /etc/hosts

2016-01-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15086840#comment-15086840
 ] 

Hudson commented on HADOOP-12687:
-

FAILURE: Integrated in Hadoop-trunk-Commit #9063 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9063/])
HADOOP-12687. SecureUtil#QualifiedHostResolver#getByName should also try 
(rohithsharmaks: rev 2b252844e04eebd4f32815d4bd6f914c02994709)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SecurityUtil.java
* hadoop-common-project/hadoop-common/CHANGES.txt


> SecureUtil#getByName should also try to resolve direct hostname, incase 
> multiple loopback addresses are present in /etc/hosts
> -
>
> Key: HADOOP-12687
> URL: https://issues.apache.org/jira/browse/HADOOP-12687
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Junping Du
>Assignee: Sunil G
>  Labels: security
> Fix For: 2.9.0
>
> Attachments: 0001-YARN-4352.patch, 0002-YARN-4352.patch, 
> 0003-HADOOP-12687.patch, 0004-HADOOP-12687.patch
>
>
> From 
> https://builds.apache.org/job/PreCommit-YARN-Build/9661/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client-jdk1.7.0_79.txt,
>  we can see that the tests in TestYarnClient, TestAMRMClient and TestNMClient 
> time out, which can be reproduced locally.
> When {{/etc/hosts}} has multiple loopback entries, 
> {{InetAddress.getByName(null)}} returns the first entry present in 
> /etc/hosts. Hence it is possible that the machine hostname is second in the 
> list, causing an {{UnknownHostException}}.
> Suggesting a direct resolve of the hostname for such scenarios.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12620) Advanced Hadoop Architecture (AHA) - Common

2016-01-06 Thread Dinesh S. Atreya (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12620?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15086621#comment-15086621
 ] 

Dinesh S. Atreya commented on HADOOP-12620:
---

First of all, thank you very much for all of your inputs, especially 
[~ste...@apache.org], [Allen Wittenauer | 
https://issues.apache.org/jira/secure/ViewProfile.jspa?name=aw] and [~wheat9].

Regarding
{quote}
I'd be surprised if making HDFS R/W is "minimal" 
{quote}

The word *"minimal"* highlights the intention to keep changes minimal compared 
to other alternatives that may involve significant changes; the objective is to 
*minimize* changes to HDFS as far as possible, not fully R/W "a la" POSIX, but 
to allow in-place write/update. 

Kindly use this JIRA primarily for Business Needs. It will be preferable to 
delegate all technical discussions to the respective child JIRAs, now that 
there is a consensus regarding business benefits of update-in-place. 

*I am using this JIRA mainly to highlight Business/Use Cases.* It is very 
encouraging to see the interest.

I have copied [~wheat9]'s comments above to [JIRA HDFS-9607 | 
https://issues.apache.org/jira/browse/HDFS-9607 ].


> Advanced Hadoop Architecture (AHA) - Common
> ---
>
> Key: HADOOP-12620
> URL: https://issues.apache.org/jira/browse/HADOOP-12620
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: Dinesh S. Atreya
>Assignee: Dinesh S. Atreya
>
> Advance Hadoop Architecture (AHA) / Advance Hadoop Adaptabilities (AHA):
> See 
> https://issues.apache.org/jira/browse/HADOOP-12620?focusedCommentId=15046300=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15046300
>  for more details.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12678) Handle empty rename pending metadata file during atomic rename in redo path

2016-01-06 Thread madhumita chakraborty (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12678?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

madhumita chakraborty updated HADOOP-12678:
---
Attachment: HADOOP-12678.006.patch

> Handle empty rename pending metadata file during atomic rename in redo path
> ---
>
> Key: HADOOP-12678
> URL: https://issues.apache.org/jira/browse/HADOOP-12678
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Reporter: madhumita chakraborty
>Assignee: madhumita chakraborty
>Priority: Critical
> Attachments: HADOOP-12678.001.patch, HADOOP-12678.002.patch, 
> HADOOP-12678.003.patch, HADOOP-12678.004.patch, HADOOP-12678.005.patch, 
> HADOOP-12678.006.patch
>
>
> Handle empty rename pending metadata file during atomic rename in redo path
> During an atomic rename we create a metadata file for the rename 
> (-renamePending.json). We create it in two steps:
> 1. We create an empty blob corresponding to the .json file in its real 
> location.
> 2. We create a scratch file to which we write the contents of the rename 
> pending metadata, which are then copied over into the blob described in step 1.
> If a process crash occurs after step 1 and before step 2 completes, we are 
> left with a zero-size blob corresponding to the pending rename metadata 
> file.
> This kind of scenario can happen in the /hbase/.tmp folder because it is 
> considered a candidate folder for atomic rename. When HMaster starts up, 
> it executes listStatus on the .tmp folder to clean up pending data. At this 
> stage, due to the lazy rename-completion process, we look for these JSON 
> files. On seeing an empty file, the process simply throws a fatal exception, 
> assuming something went wrong.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12678) Handle empty rename pending metadata file during atomic rename in redo path

2016-01-06 Thread madhumita chakraborty (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12678?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

madhumita chakraborty updated HADOOP-12678:
---
Status: Patch Available  (was: Open)

> Handle empty rename pending metadata file during atomic rename in redo path
> ---
>
> Key: HADOOP-12678
> URL: https://issues.apache.org/jira/browse/HADOOP-12678
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Reporter: madhumita chakraborty
>Assignee: madhumita chakraborty
>Priority: Critical
> Attachments: HADOOP-12678.001.patch, HADOOP-12678.002.patch, 
> HADOOP-12678.003.patch, HADOOP-12678.004.patch, HADOOP-12678.005.patch, 
> HADOOP-12678.006.patch
>
>
> Handle empty rename pending metadata file during atomic rename in redo path
> During an atomic rename we create a metadata file for the rename 
> (-renamePending.json). We create it in two steps:
> 1. We create an empty blob corresponding to the .json file in its real 
> location.
> 2. We create a scratch file to which we write the contents of the rename 
> pending metadata, which are then copied over into the blob described in step 1.
> If a process crash occurs after step 1 and before step 2 completes, we are 
> left with a zero-size blob corresponding to the pending rename metadata 
> file.
> This kind of scenario can happen in the /hbase/.tmp folder because it is 
> considered a candidate folder for atomic rename. When HMaster starts up, 
> it executes listStatus on the .tmp folder to clean up pending data. At this 
> stage, due to the lazy rename-completion process, we look for these JSON 
> files. On seeing an empty file, the process simply throws a fatal exception, 
> assuming something went wrong.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12678) Handle empty rename pending metadata file during atomic rename in redo path

2016-01-06 Thread madhumita chakraborty (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12678?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

madhumita chakraborty updated HADOOP-12678:
---
Status: Open  (was: Patch Available)

> Handle empty rename pending metadata file during atomic rename in redo path
> ---
>
> Key: HADOOP-12678
> URL: https://issues.apache.org/jira/browse/HADOOP-12678
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Reporter: madhumita chakraborty
>Assignee: madhumita chakraborty
>Priority: Critical
> Attachments: HADOOP-12678.001.patch, HADOOP-12678.002.patch, 
> HADOOP-12678.003.patch, HADOOP-12678.004.patch, HADOOP-12678.005.patch, 
> HADOOP-12678.006.patch
>
>
> Handle empty rename pending metadata file during atomic rename in redo path
> During an atomic rename we create a metadata file for the rename 
> (-renamePending.json). We create it in two steps:
> 1. We create an empty blob corresponding to the .json file in its real 
> location.
> 2. We create a scratch file to which we write the contents of the rename 
> pending metadata, which are then copied over into the blob described in step 1.
> If a process crash occurs after step 1 and before step 2 completes, we are 
> left with a zero-size blob corresponding to the pending rename metadata 
> file.
> This kind of scenario can happen in the /hbase/.tmp folder because it is 
> considered a candidate folder for atomic rename. When HMaster starts up, 
> it executes listStatus on the .tmp folder to clean up pending data. At this 
> stage, due to the lazy rename-completion process, we look for these JSON 
> files. On seeing an empty file, the process simply throws a fatal exception, 
> assuming something went wrong.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12687) SecureUtil#getByName should also try to resolve direct hostname, incase multiple loopback addresses are present in /etc/hosts

2016-01-06 Thread Sunil G (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12687?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sunil G updated HADOOP-12687:
-
Summary: SecureUtil#getByName should also try to resolve direct hostname, 
incase multiple loopback addresses are present in /etc/hosts  (was: 
SecureUtil#getByName should also try to resolve direct hostname incase multiple 
loopback addresses are present in etc/hosts)

> SecureUtil#getByName should also try to resolve direct hostname, incase 
> multiple loopback addresses are present in /etc/hosts
> -
>
> Key: HADOOP-12687
> URL: https://issues.apache.org/jira/browse/HADOOP-12687
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Junping Du
>Assignee: Sunil G
>  Labels: security
> Attachments: 0001-YARN-4352.patch, 0002-YARN-4352.patch, 
> 0003-HADOOP-12687.patch, 0004-HADOOP-12687.patch
>
>
> From 
> https://builds.apache.org/job/PreCommit-YARN-Build/9661/artifact/patchprocess/patch-unit-hadoop-yarn-project_hadoop-yarn_hadoop-yarn-client-jdk1.7.0_79.txt,
>  we can see that the tests in TestYarnClient, TestAMRMClient and TestNMClient 
> time out, which can be reproduced locally.
> When {{/etc/hosts}} has multiple loopback entries, 
> {{InetAddress.getByName(null)}} returns the first entry present in 
> /etc/hosts. Hence it is possible that the machine hostname is second in the 
> list, causing an {{UnknownHostException}}.
> Suggesting a direct resolve of the hostname for such scenarios.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12678) Handle empty rename pending metadata file during atomic rename in redo path

2016-01-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15086818#comment-15086818
 ] 

Hadoop QA commented on HADOOP-12678:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
43s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 15s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 16s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
8s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 21s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
29s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 13s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 11s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 11s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 13s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 13s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
8s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 18s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
9s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
34s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 57s 
{color} | {color:red} hadoop-tools_hadoop-azure-jdk1.8.0_66 with JDK v1.8.0_66 
generated 18 new issues (was 26, now 26). {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 9s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 1m 24s 
{color} | {color:red} hadoop-tools_hadoop-azure-jdk1.7.0_91 with JDK v1.7.0_91 
generated 1 new issues (was 1, now 1). {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 11s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 1s 
{color} | {color:green} hadoop-azure in the patch passed with JDK v1.8.0_66. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 14s 
{color} | {color:green} hadoop-azure in the patch passed with JDK v1.7.0_91. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
18s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 15m 36s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12780899/HADOOP-12678.006.patch
 |
| JIRA Issue | HADOOP-12678 |
| Optional Tests |  asflicense  compile  javac  javadoc  

[jira] [Commented] (HADOOP-12693) Many misusage of assertEquals(expected, actual)

2016-01-06 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15086913#comment-15086913
 ] 

Kai Zheng commented on HADOOP-12693:


This can be corrected incrementally. For the long term, I would suggest we 
introduce [AssertJ|http://joel-costigliola.github.io/assertj/] for new tests, 
to avoid such mistakes. A quick illustration follows.
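
As a minimal sketch (class and method names here are made up for 
illustration), this is the swapped-argument pattern the issue describes, next 
to the AssertJ form suggested above:
{code}
import static org.assertj.core.api.Assertions.assertThat;
import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class ArgumentOrderExample {

  @Test
  public void demonstratesArgumentOrder() {
    int actual = compute();
    // Wrong order: the actual value lands in the "expected" slot, so a
    // failure message would swap the two values and mislead the reader.
    assertEquals(actual, 42);
    // Right order: expected first, actual second.
    assertEquals(42, actual);
    // AssertJ alternative: the fluent form leaves no room for ambiguity.
    assertThat(actual).isEqualTo(42);
  }

  // Hypothetical method under test.
  private int compute() {
    return 42;
  }
}
{code}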

> Many misusage of assertEquals(expected, actual)
> ---
>
> Key: HADOOP-12693
> URL: https://issues.apache.org/jira/browse/HADOOP-12693
> Project: Hadoop Common
>  Issue Type: Test
>Reporter: Akihiro Suda
>Priority: Trivial
> Attachments: just-rough-approx.txt
>
>
> The first arg of {{org.junit.Assert.assertEquals()}} should be the 
> {{expected}} value, and the second one should be the {{actual}} value.
> {code}
> void assertEquals(T expected, T actual);
> {code}
> http://junit.org/apidocs/org/junit/Assert.html#assertEquals(java.lang.Object, 
> java.lang.Object)
> However, there are so many violations in Hadoop, which can produce a 
> misleading message like this, with the two values swapped:
> {code}
> AssertionError: expected:<actual value> but was:<expected value>
> {code}
> Please refer to {{just-rough-approx.txt}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12693) Many misusage of assertEquals(expected, actual)

2016-01-06 Thread Akihiro Suda (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15086926#comment-15086926
 ] 

Akihiro Suda commented on HADOOP-12693:
---

+1


> Many misusage of assertEquals(expected, actual)
> ---
>
> Key: HADOOP-12693
> URL: https://issues.apache.org/jira/browse/HADOOP-12693
> Project: Hadoop Common
>  Issue Type: Test
>Reporter: Akihiro Suda
>Priority: Trivial
> Attachments: just-rough-approx.txt
>
>
> The first arg of {{org.junit.Assert.assertEquals()}} should be the 
> {{expected}} value, and the second one should be the {{actual}} value.
> {code}
> void assertEquals(T expected, T actual);
> {code}
> http://junit.org/apidocs/org/junit/Assert.html#assertEquals(java.lang.Object, 
> java.lang.Object)
> However, there are so many violations in Hadoop, which can produce a 
> misleading message like this, with the two values swapped:
> {code}
> AssertionError: expected:<actual value> but was:<expected value>
> {code}
> Please refer to {{just-rough-approx.txt}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11252) RPC client does not time out by default

2016-01-06 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11252?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HADOOP-11252:

Target Version/s: 2.8.0, 2.6.4  (was: 2.8.0)

> RPC client does not time out by default
> ---
>
> Key: HADOOP-11252
> URL: https://issues.apache.org/jira/browse/HADOOP-11252
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 2.5.0
>Reporter: Wilfred Spiegelenburg
>Assignee: Masatake Iwasaki
>Priority: Critical
> Fix For: 2.8.0, 2.7.3, 2.6.4
>
> Attachments: HADOOP-11252.002.patch, HADOOP-11252.003.patch, 
> HADOOP-11252.004.patch, HADOOP-11252.patch
>
>
> The RPC client has a default timeout of 0 when no timeout is passed in. 
> This means that the network connection created will not time out when used 
> to write data. The issue has shown up in YARN-2578 and HDFS-4858. Writes 
> then fall back to the TCP-level retry (configured via tcp_retries2) and 
> time out after 15-30 minutes, which is too long for a default behaviour.
> Using 0 as the default value for the timeout is incorrect. We should use a 
> sane value for the timeout, and the "ipc.ping.interval" configuration value 
> is a logical choice for it. The default behaviour should be changed from 0 
> to the ping interval value read from the Configuration.
> Fixing this in common makes more sense than finding and changing every 
> other point in the code that does not pass in a timeout.
> Offending code lines:
> https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/RPC.java#L488
> and 
> https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/RPC.java#L350
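
As a rough sketch of the fallback the description argues for (this is not the 
committed patch, and the "ipc.client.rpc-timeout.ms" key is an assumption for 
illustration), the default could be derived like this:
{code}
import org.apache.hadoop.conf.Configuration;

public final class RpcTimeoutSketch {

  /** Fall back to the ping interval instead of 0 when no timeout is set. */
  public static int getRpcTimeout(Configuration conf) {
    // Assumed key for an explicitly configured timeout; 0 means "unset",
    // which today disables the socket timeout entirely.
    int timeout = conf.getInt("ipc.client.rpc-timeout.ms", 0);
    if (timeout <= 0) {
      // "ipc.ping.interval" defaults to 60000 ms, a far saner bound than
      // the 15-30 minute TCP-level retry fallback described above.
      timeout = conf.getInt("ipc.ping.interval", 60000);
    }
    return timeout;
  }
}
{code}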



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12691) Add CSRF Filter to Hadoop Common

2016-01-06 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15086200#comment-15086200
 ] 

Larry McCay commented on HADOOP-12691:
--

I will write up a quick design document for this filter and attach it.
I intend to add it to the org/apache/hadoop/security/http/ package alongside 
CrossOriginFilter.java.

> Add CSRF Filter to Hadoop Common
> 
>
> Key: HADOOP-12691
> URL: https://issues.apache.org/jira/browse/HADOOP-12691
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Larry McCay
>Assignee: Larry McCay
> Fix For: 3.0.0
>
>
> CSRF prevention for REST APIs can be provided through a common servlet 
> filter. This filter would check for the existence of an expected 
> (configurable) HTTP header - such as X-Requested-By.
> The fact that CSRF attacks are entirely browser based means that the above 
> approach can ensure that requests come either from applications served by 
> the same origin as the REST API, or from another origin whose explicit 
> policy configuration allows setting such a header on XmlHttpRequest.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12678) Handle empty rename pending metadata file during atomic rename in redo path

2016-01-06 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15086136#comment-15086136
 ] 

Chris Nauroth commented on HADOOP-12678:


Thank you, [~madhuch-ms].  The error handling in {{deleteRenamePendingFile}} 
still needs some work.  Here is the code from your v005 patch.

{code}
  } catch (IOException e) {
// If the rename metadata was not found then somebody probably
// raced with us and finished the delete first
Throwable t = e.getCause();
if (t != null && t instanceof StorageException) {
  StorageException se = (StorageException) t;
  if (se.getErrorCode().equals(("BlobNotFound"))) {
LOG.warn("rename pending file " + redoFile + " is already deleted");
  } else {
throw e;
  }
}
  }
{code}

If there is a general {{IOException}} not caused by an Azure 
{{StorageException}}, then this logic would stifle the exception without either 
throwing it or logging it.  An example of this could be loss of network 
connectivity to the Azure Storage backend, which Java would report as an 
{{IOException}} with no cause and a message describing the network error.  We'd 
want to make sure errors like this propagate to the caller, so please stick 
with the code I gave in my last comment:

{code}
  } catch (IOException e) {
Throwable cause = e.getCause();
if (cause != null && cause instanceof StorageException &&
"BlobNotFound".equals(((StorageException)cause).getErrorCode())) {
  LOG.warn("rename pending file " + redoFile + " is already deleted");
} else {
  throw e;
}
  }
{code}

This ensures that only the BlobNotFound error would get swallowed, and any 
other {{IOException}}, whether or not its root cause is in Azure Storage, would 
propagate to the caller.  It also clarifies that there are really only two 
cases for this code: swallow BlobNotFound, else rethrow.

The JavaDoc warnings from the last pre-commit run don't require any action.  
These are pre-existing warnings unrelated to this patch.  The patch is shifting 
the line numbers and therefore making it appear that new warnings were 
introduced.

> Handle empty rename pending metadata file during atomic rename in redo path
> ---
>
> Key: HADOOP-12678
> URL: https://issues.apache.org/jira/browse/HADOOP-12678
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Reporter: madhumita chakraborty
>Assignee: madhumita chakraborty
>Priority: Critical
> Attachments: HADOOP-12678.001.patch, HADOOP-12678.002.patch, 
> HADOOP-12678.003.patch, HADOOP-12678.004.patch, HADOOP-12678.005.patch
>
>
> Handle empty rename pending metadata file during atomic rename in redo path.
> During an atomic rename we create a metadata file (-renamePending.json) for 
> the rename. We create it in 2 steps:
> 1. We create an empty blob corresponding to the .json file in its real 
> location.
> 2. We create a scratch file to which we write the contents of the rename 
> pending operation, which is then copied over into the blob described in 1.
> If a process crash occurs after step 1 and before step 2 completes, we will 
> be left with a zero-size blob corresponding to the pending rename metadata 
> file.
> This kind of scenario can happen in the /hbase/.tmp folder because it is 
> considered a candidate folder for atomic rename. When HMaster starts up, it 
> executes listStatus on the .tmp folder to clean up pending data. At this 
> stage, due to the lazy pending rename completion process, we look for these 
> json files. On seeing an empty file, the process simply throws a fatal 
> exception, assuming something went wrong.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12634) Change Lazy Rename Pending Operation Completion of WASB to address case of potential data loss due to partial copy

2016-01-06 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12634?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-12634:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

+1 for patch v02.  The JavaDoc warnings from the last pre-commit run are 
pre-existing warnings unrelated to this patch.  The patch shifted line numbers, 
which made them look like new warnings.

I have committed this to trunk, branch-2 and branch-2.8.  [~gouravk], thank you 
for the patch.

> Change Lazy Rename Pending Operation Completion of WASB to address case of 
> potential data loss due to partial copy
> --
>
> Key: HADOOP-12634
> URL: https://issues.apache.org/jira/browse/HADOOP-12634
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Gaurav Kanade
>Assignee: Gaurav Kanade
>Priority: Critical
> Fix For: 2.8.0
>
> Attachments: HADOOP-12634.01.patch, HADOOP-12634.02.patch
>
>
> HADOOP-12334 changed the copy operation of HBase WAL archiving to bypass 
> Azure Storage throttling after retries, via a client-side copy. However, a 
> process crash while the copy is partially done would leave the source and 
> destination blobs with different contents, and the lazy rename pending 
> operation would not handle this, thus causing data loss. We need to fix the 
> lazy rename pending operation to address this issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12691) Add CSRF Filter to Hadoop Common

2016-01-06 Thread Larry McCay (JIRA)
Larry McCay created HADOOP-12691:


 Summary: Add CSRF Filter to Hadoop Common
 Key: HADOOP-12691
 URL: https://issues.apache.org/jira/browse/HADOOP-12691
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Reporter: Larry McCay
Assignee: Larry McCay
 Fix For: 3.0.0


CSRF prevention for REST APIs can be provided through a common servlet filter. 
This filter would check for the existence of an expected (configurable) HTTP 
header - such as X-Requested-By.

The fact that CSRF attacks are entirely browser based means that the above 
approach can ensure that requests come either from applications served by the 
same origin as the REST API, or from another origin whose explicit policy 
configuration allows setting such a header on XmlHttpRequest.
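
As a minimal sketch of such a filter (an assumption of shape, not the eventual 
Hadoop implementation; the class name and init parameter name are made up):
{code}
import java.io.IOException;

import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class CsrfHeaderFilterSketch implements Filter {

  private String headerName;

  @Override
  public void init(FilterConfig config) {
    // The required header is configurable; X-Requested-By is one convention.
    headerName = config.getInitParameter("custom-header");
    if (headerName == null) {
      headerName = "X-Requested-By";
    }
  }

  @Override
  public void doFilter(ServletRequest request, ServletResponse response,
      FilterChain chain) throws IOException, ServletException {
    // Browsers cannot attach custom headers cross-origin without explicit
    // CORS policy, so requiring one rejects classic CSRF requests.
    if (((HttpServletRequest) request).getHeader(headerName) != null) {
      chain.doFilter(request, response);
    } else {
      ((HttpServletResponse) response).sendError(
          HttpServletResponse.SC_BAD_REQUEST,
          "Missing required header: " + headerName);
    }
  }

  @Override
  public void destroy() {
  }
}
{code}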



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12634) Change Lazy Rename Pending Operation Completion of WASB to address case of potential data loss due to partial copy

2016-01-06 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12634?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15086220#comment-15086220
 ] 

Hudson commented on HADOOP-12634:
-

FAILURE: Integrated in Hadoop-trunk-Commit #9059 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9059/])
HADOOP-12634. Change Lazy Rename Pending Operation Completion of WASB to 
(cnauroth: rev 978bbdfeb2d12efd6e750da6a14849e072fb814b)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/NativeAzureFileSystem.java
* 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/TestNativeAzureFileSystemLive.java


> Change Lazy Rename Pending Operation Completion of WASB to address case of 
> potential data loss due to partial copy
> --
>
> Key: HADOOP-12634
> URL: https://issues.apache.org/jira/browse/HADOOP-12634
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Gaurav Kanade
>Assignee: Gaurav Kanade
>Priority: Critical
> Fix For: 2.8.0
>
> Attachments: HADOOP-12634.01.patch, HADOOP-12634.02.patch
>
>
> HADOOP-12334 changed the copy operation of HBase WAL archiving to bypass 
> Azure Storage throttling after retries, via a client-side copy. However, a 
> process crash while the copy is partially done would leave the source and 
> destination blobs with different contents, and the lazy rename pending 
> operation would not handle this, thus causing data loss. We need to fix the 
> lazy rename pending operation to address this issue.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12546) Improve TestKMS

2016-01-06 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12546?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15086331#comment-15086331
 ] 

Zhe Zhang commented on HADOOP-12546:


03 patch LGTM as well. [~templedf] Do you mind rebasing the patch?

> Improve TestKMS
> ---
>
> Key: HADOOP-12546
> URL: https://issues.apache.org/jira/browse/HADOOP-12546
> Project: Hadoop Common
>  Issue Type: Test
>  Components: test
>Affects Versions: 2.7.1
>Reporter: Daniel Templeton
>Assignee: Daniel Templeton
> Attachments: HADOOP-12546.001.patch, HADOOP-12546.002.patch, 
> HADOOP-12546.003.patch
>
>
> The TestKMS class has some issues:
> * It swallows some exceptions' stack traces
> * It swallows some exceptions altogether
> * Some of the tests aren't as tight as they could be
> * Asserts lack messages
> * Code style is a bit hodgepodge
> This JIRA is to clean all that up.
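
For the "asserts lack messages" point, a hedged illustration (the 
key-creation helper here is hypothetical, not TestKMS code):
{code}
import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class AssertMessageExample {

  @Test
  public void createdKeyKeepsRequestedName() {
    String keyName = createKey("mykey"); // hypothetical helper
    // Weak: on failure this reports only the two values.
    assertEquals("mykey", keyName);
    // Tighter: the message pinpoints which step of the scenario failed.
    assertEquals("created key should keep the requested name",
        "mykey", keyName);
  }

  // Stand-in for whatever operation the real test exercises.
  private String createKey(String name) {
    return name;
  }
}
{code}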



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)