[jira] [Commented] (HADOOP-14066) VersionInfo should be public api

2017-02-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15865244#comment-15865244
 ] 

Hadoop QA commented on HADOOP-14066:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} hadoop-common-project/hadoop-common: The patch 
generated 0 new + 10 unchanged - 1 fixed = 10 total (was 11) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 35s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 56m 59s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.security.TestKDiag |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-14066 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12852506/HADOOP-14066.01.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 374349d9ef23 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 71c23c9 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11620/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11620/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11620/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> VersionInfo should be public api
> 
>
> Key: HADOOP-14066
> URL: https://issues.apache.org/jira/browse/HADOOP-14066
>

[jira] [Updated] (HADOOP-13924) Update checkstyle plugin to handle indentation of JDK8 Lambdas

2017-02-13 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-13924:
---
Target Version/s: 3.0.0-alpha3
  Status: Patch Available  (was: Open)

> Update checkstyle plugin to handle indentation of JDK8 Lambdas
> --
>
> Key: HADOOP-13924
> URL: https://issues.apache.org/jira/browse/HADOOP-13924
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Akira Ajisaka
> Attachments: HADOOP-13924.01.patch
>
>
> Have seen this lately from Jenkins runs. Propose adding the following to the 
> maven checkstyle plugin configuration to better handle this.
> {code}
> <dependency>
>   <groupId>com.puppycrawl.tools</groupId>
>   <artifactId>checkstyle</artifactId>
>   <version>7.3</version>
> </dependency>
> {code}
> {code}
> ./hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java:2049:
>   if (failedVolumes.size() > 0) {: 'if' have incorrect indentation 
> level 10, expected level should be 8.
> ./hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java:2050:
> LOG.warn("checkDiskErrorAsync callback got {} failed volumes: 
> {}",: 'if' child have incorrect indentation level 12, expected level should 
> be 10.
> ./hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java:2052:
>   } else {: 'if rcurly' have incorrect indentation level 10, expected 
> level should be 8.
> ./hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java:2053:
> LOG.debug("checkDiskErrorAsync: no volume failures detected");: 
> 'else' child have incorrect indentation level 12, expected level should be 10.
> ./hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java:2054:
>   }: 'else rcurly' have incorrect indentation level 10, expected 
> level should be 8.
> ./hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java:2055:
>   lastDiskErrorCheck = Time.monotonicNow();: 'block' child have 
> incorrect indentation level 10, expected level should be 8.
> ./hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java:2056:
>   handleVolumeFailures(failedVolumes);: 'block' child have incorrect 
> indentation level 10, expected level should be 8.
> ./hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java:2057:
> });: 'block rcurly' have incorrect indentation level 8, expected 
> level should be 6.
> {code}
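
For illustration only, here is a minimal, self-contained Java sketch of the JDK8 
lambda shape that older checkstyle releases flag with the indentation warnings 
quoted above (checkstyle 6.12+/7.x accepts this layout). It is not the actual 
DataNode code; the registerCallback method and class name are assumptions.

{code}
// Minimal sketch of a JDK8 lambda whose body indentation older checkstyle
// versions mis-handle; registerCallback is hypothetical, standing in for the
// checkDiskErrorAsync callback registration in DataNode.
import java.util.Collections;
import java.util.List;
import java.util.function.Consumer;

public class LambdaIndentationExample {

  static void registerCallback(Consumer<List<String>> callback) {
    callback.accept(Collections.<String>emptyList());
  }

  public static void main(String[] args) {
    registerCallback(failedVolumes -> {
      if (failedVolumes.size() > 0) {
        System.out.println("callback got " + failedVolumes.size() + " failed volumes");
      } else {
        System.out.println("no volume failures detected");
      }
    });
  }
}
{code}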



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13924) Update checkstyle plugin to handle indentation of JDK8 Lambdas

2017-02-13 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-13924:
---
Attachment: HADOOP-13924.01.patch

01 patch:
* Update the version of checkstyle and the checkstyle plugin.
* Undo HADOOP-13603 because the fix is no longer necessary after upgrading 
checkstyle to 6.12+.

> Update checkstyle plugin to handle indentation of JDK8 Lambdas
> --
>
> Key: HADOOP-13924
> URL: https://issues.apache.org/jira/browse/HADOOP-13924
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Akira Ajisaka
> Attachments: HADOOP-13924.01.patch
>
>
> Have seen this lately from Jenkins runs. Propose adding the following to the 
> maven checkstyle plugin configuration to better handle this.
> {code}
> <dependency>
>   <groupId>com.puppycrawl.tools</groupId>
>   <artifactId>checkstyle</artifactId>
>   <version>7.3</version>
> </dependency>
> {code}
> {code}
> ./hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java:2049:
>   if (failedVolumes.size() > 0) {: 'if' have incorrect indentation 
> level 10, expected level should be 8.
> ./hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java:2050:
> LOG.warn("checkDiskErrorAsync callback got {} failed volumes: 
> {}",: 'if' child have incorrect indentation level 12, expected level should 
> be 10.
> ./hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java:2052:
>   } else {: 'if rcurly' have incorrect indentation level 10, expected 
> level should be 8.
> ./hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java:2053:
> LOG.debug("checkDiskErrorAsync: no volume failures detected");: 
> 'else' child have incorrect indentation level 12, expected level should be 10.
> ./hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java:2054:
>   }: 'else rcurly' have incorrect indentation level 10, expected 
> level should be 8.
> ./hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java:2055:
>   lastDiskErrorCheck = Time.monotonicNow();: 'block' child have 
> incorrect indentation level 10, expected level should be 8.
> ./hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java:2056:
>   handleVolumeFailures(failedVolumes);: 'block' child have incorrect 
> indentation level 10, expected level should be 8.
> ./hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java:2057:
> });: 'block rcurly' have incorrect indentation level 8, expected 
> level should be 6.
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-13924) Update checkstyle plugin to handle indentation of JDK8 Lambdas

2017-02-13 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka reassigned HADOOP-13924:
--

Assignee: Akira Ajisaka  (was: Xiaoyu Yao)

> Update checkstyle plugin to handle indentation of JDK8 Lambdas
> --
>
> Key: HADOOP-13924
> URL: https://issues.apache.org/jira/browse/HADOOP-13924
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Akira Ajisaka
>
> Have seen this lately from Jenkins runs. Propose adding the following to the 
> maven checkstyle plugin configuration to better handle this.
> {code}
> <dependency>
>   <groupId>com.puppycrawl.tools</groupId>
>   <artifactId>checkstyle</artifactId>
>   <version>7.3</version>
> </dependency>
> {code}
> {code}
> ./hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java:2049:
>   if (failedVolumes.size() > 0) {: 'if' have incorrect indentation 
> level 10, expected level should be 8.
> ./hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java:2050:
> LOG.warn("checkDiskErrorAsync callback got {} failed volumes: 
> {}",: 'if' child have incorrect indentation level 12, expected level should 
> be 10.
> ./hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java:2052:
>   } else {: 'if rcurly' have incorrect indentation level 10, expected 
> level should be 8.
> ./hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java:2053:
> LOG.debug("checkDiskErrorAsync: no volume failures detected");: 
> 'else' child have incorrect indentation level 12, expected level should be 10.
> ./hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java:2054:
>   }: 'else rcurly' have incorrect indentation level 10, expected 
> level should be 8.
> ./hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java:2055:
>   lastDiskErrorCheck = Time.monotonicNow();: 'block' child have 
> incorrect indentation level 10, expected level should be 8.
> ./hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java:2056:
>   handleVolumeFailures(failedVolumes);: 'block' child have incorrect 
> indentation level 10, expected level should be 8.
> ./hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java:2057:
> });: 'block rcurly' have incorrect indentation level 8, expected 
> level should be 6.
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14066) VersionInfo should be public api

2017-02-13 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-14066:
---
Target Version/s: 2.9.0, 3.0.0-alpha3

> VersionInfo should be public api
> 
>
> Key: HADOOP-14066
> URL: https://issues.apache.org/jira/browse/HADOOP-14066
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Reporter: Thejas M Nair
>Assignee: Akira Ajisaka
>Priority: Critical
> Attachments: HADOOP-14066.01.patch
>
>
> org.apache.hadoop.util.VersionInfo is commonly used by applications that work 
> with multiple versions of Hadoop.
> In the case of Hive, it is used in a shims layer to identify the version of 
> Hadoop and select different shim code based on that version (and the 
> corresponding API it supports).
> I checked Pig and HBase as well, and they also use this class to get version 
> information.
> However, this class is annotated as "@private" and "@unstable".
> This code has actually been stable for a long time and is widely used like a 
> public API. I think we should mark it as such.
> Note that there are APIs to find the version of server components in Hadoop; 
> however, this class is necessary for finding the version of the client.
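
As a concrete illustration of the usage described above, here is a minimal Java 
sketch of how a downstream project can branch on the Hadoop client version via 
VersionInfo. The shim names are hypothetical, not Hive's actual shim classes; 
only the VersionInfo calls are real Hadoop API.

{code}
// Minimal sketch: pick a version-specific code path using
// org.apache.hadoop.util.VersionInfo. The shim names are illustrative only.
import org.apache.hadoop.util.VersionInfo;

public class HadoopShimSelector {

  static String selectShim() {
    String version = VersionInfo.getVersion();   // e.g. "2.7.3" or "3.0.0-alpha3"
    int major = Integer.parseInt(version.split("\\.")[0]);
    return major >= 3 ? "Hadoop3Shims" : "Hadoop2Shims";
  }

  public static void main(String[] args) {
    System.out.println("Hadoop " + VersionInfo.getVersion()
        + " (branch " + VersionInfo.getBranch() + ") -> " + selectShim());
  }
}
{code}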



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14066) VersionInfo should be public api

2017-02-13 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-14066:
---
Status: Patch Available  (was: Open)

> VersionInfo should be public api
> 
>
> Key: HADOOP-14066
> URL: https://issues.apache.org/jira/browse/HADOOP-14066
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Reporter: Thejas M Nair
>Assignee: Akira Ajisaka
>Priority: Critical
> Attachments: HADOOP-14066.01.patch
>
>
> org.apache.hadoop.util.VersionInfo is commonly used by applications that work 
> with multiple versions of Hadoop.
> In the case of Hive, it is used in a shims layer to identify the version of 
> Hadoop and select different shim code based on that version (and the 
> corresponding API it supports).
> I checked Pig and HBase as well, and they also use this class to get version 
> information.
> However, this class is annotated as "@private" and "@unstable".
> This code has actually been stable for a long time and is widely used like a 
> public API. I think we should mark it as such.
> Note that there are APIs to find the version of server components in Hadoop; 
> however, this class is necessary for finding the version of the client.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-14066) VersionInfo should be public api

2017-02-13 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka reassigned HADOOP-14066:
--

Assignee: Akira Ajisaka

> VersionInfo should be public api
> 
>
> Key: HADOOP-14066
> URL: https://issues.apache.org/jira/browse/HADOOP-14066
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Reporter: Thejas M Nair
>Assignee: Akira Ajisaka
>Priority: Critical
> Attachments: HADOOP-14066.01.patch
>
>
> org.apache.hadoop.util.VersionInfo is commonly used by applications that work 
> with multiple versions of Hadoop.
> In the case of Hive, it is used in a shims layer to identify the version of 
> Hadoop and select different shim code based on that version (and the 
> corresponding API it supports).
> I checked Pig and HBase as well, and they also use this class to get version 
> information.
> However, this class is annotated as "@private" and "@unstable".
> This code has actually been stable for a long time and is widely used like a 
> public API. I think we should mark it as such.
> Note that there are APIs to find the version of server components in Hadoop; 
> however, this class is necessary for finding the version of the client.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14066) VersionInfo should be public api

2017-02-13 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-14066:
---
Attachment: HADOOP-14066.01.patch

> VersionInfo should be public api
> 
>
> Key: HADOOP-14066
> URL: https://issues.apache.org/jira/browse/HADOOP-14066
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Reporter: Thejas M Nair
>Assignee: Akira Ajisaka
>Priority: Critical
> Attachments: HADOOP-14066.01.patch
>
>
> org.apache.hadoop.util.VersionInfo is commonly used by applications that work 
> with multiple versions of Hadoop.
> In the case of Hive, it is used in a shims layer to identify the version of 
> Hadoop and select different shim code based on that version (and the 
> corresponding API it supports).
> I checked Pig and HBase as well, and they also use this class to get version 
> information.
> However, this class is annotated as "@private" and "@unstable".
> This code has actually been stable for a long time and is widely used like a 
> public API. I think we should mark it as such.
> Note that there are APIs to find the version of server components in Hadoop; 
> however, this class is necessary for finding the version of the client.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-9743) TestStaticMapping test fails

2017-02-13 Thread Paurav Munshi (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15865138#comment-15865138
 ] 

Paurav Munshi commented on HADOOP-9743:
---

Hello,

I saw the issue is resolved, but I was still facing it when doing my first 
Hadoop build for 3.0.0-alpha1. Here is my observation.

I think the issue occurs depending on how your internet provider handles 
address lookups. When I am connected through my broadband provider I get the 
build failure. I observed that createQueryList() adds two dummy hosts, 'n1' and 
'unknown'. When I traceroute to these hosts while connected through broadband, 
I get back a random IP which is the same for both hostnames. But when I am not 
connected to the internet, or I am connected through my mobile data, traceroute 
gives the unknown-host error, and in that situation the build also succeeds for 
this test case in particular. So it makes me think that some ISPs provide a 
random IP for unknown hosts and thereby lead the client to think that a host 
exists at that IP address. In my observation, since both hosts are dummy ones, 
the ISP provides the same random IP for both, and the test therefore ends up 
with a single IP in the cache. I have not thought of any concrete resolution, 
but turning off the network resolves this issue (though it then fails the 
TestDNS test).

I thought I would share my observation; I hope this helps, if the issue still 
exists. Pardon me for missing the resolution if one is already provided.

Best Regards,
Paurav.
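
To make the observation above easy to reproduce, here is a small standalone Java 
check (an illustration, not part of the test suite): with a well-behaved 
resolver the dummy hostnames fail to resolve, while an ISP that hijacks NXDOMAIN 
responses returns a real, often identical, IP for both.

{code}
// Quick check for ISP NXDOMAIN hijacking using the same dummy hostnames the test uses.
import java.net.InetAddress;
import java.net.UnknownHostException;

public class DnsHijackCheck {
  public static void main(String[] args) {
    for (String host : new String[] {"n1", "unknown"}) {
      try {
        InetAddress addr = InetAddress.getByName(host);
        System.out.println(host + " resolves to " + addr.getHostAddress()
            + " (possible NXDOMAIN hijacking by the ISP)");
      } catch (UnknownHostException e) {
        System.out.println(host + " does not resolve (expected)");
      }
    }
  }
}
{code}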

> TestStaticMapping test fails
> 
>
> Key: HADOOP-9743
> URL: https://issues.apache.org/jira/browse/HADOOP-9743
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 3.0.0-alpha1
>Reporter: Jean-Baptiste Onofré
>
>   - testCachingRelaysResolveQueries(org.apache.hadoop.net.TestStaticMapping): 
> Expected two entries in the map Mapping: cached switch mapping relaying to 
> static mapping with single switch = false(..)
>   - 
> testCachingCachesNegativeEntries(org.apache.hadoop.net.TestStaticMapping): 
> Expected two entries in the map Mapping: cached switch mapping relaying to 
> static mapping with single switch = false(..)



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14066) VersionInfo should be public api

2017-02-13 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14066?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15865129#comment-15865129
 ] 

Akira Ajisaka commented on HADOOP-14066:


+1 for doing this.

> VersionInfo should be public api
> 
>
> Key: HADOOP-14066
> URL: https://issues.apache.org/jira/browse/HADOOP-14066
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: common
>Reporter: Thejas M Nair
>Priority: Critical
>
> org.apache.hadoop.util.VersionInfo is commonly used by applications that work 
> with multiple versions of Hadoop.
> In the case of Hive, it is used in a shims layer to identify the version of 
> Hadoop and select different shim code based on that version (and the 
> corresponding API it supports).
> I checked Pig and HBase as well, and they also use this class to get version 
> information.
> However, this class is annotated as "@private" and "@unstable".
> This code has actually been stable for a long time and is widely used like a 
> public API. I think we should mark it as such.
> Note that there are APIs to find the version of server components in Hadoop; 
> however, this class is necessary for finding the version of the client.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13929) ADLS connector should not check in contract-test-options.xml

2017-02-13 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13929?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-13929:

Release Note: To run live unit tests, create 
src/test/resources/auth-keys.xml with the same properties as in the deprecated 
contract-test-options.xml.  (was: To run live unit tests, create 
src/test/resources/auth-keys.xml which is compatible with the deprecated 
contract-test-options.xml.)

> ADLS connector should not check in contract-test-options.xml
> 
>
> Key: HADOOP-13929
> URL: https://issues.apache.org/jira/browse/HADOOP-13929
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/adl, test
>Affects Versions: 3.0.0-alpha2
>Reporter: John Zhuge
>Assignee: John Zhuge
> Fix For: 3.0.0-alpha3
>
> Attachments: HADOOP-13929.001.patch, HADOOP-13929.002.patch, 
> HADOOP-13929.003.patch, HADOOP-13929.004.patch, HADOOP-13929.005.patch, 
> HADOOP-13929.006.patch, HADOOP-13929.007.patch, HADOOP-13929.008.patch, 
> HADOOP-13929.009.patch, HADOOP-13929.010.patch, HADOOP-13929.011.patch
>
>
> Should not check in the file {{contract-test-options.xml}}. Make sure the 
> file is excluded by {{.gitignore}}. Make sure ADLS {{index.md}} provides a 
> complete example of this file.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13929) ADLS connector should not check in contract-test-options.xml

2017-02-13 Thread Vishwajeet Dusane (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15865057#comment-15865057
 ] 

Vishwajeet Dusane commented on HADOOP-13929:


Thanks [~jzhuge] - this is a good optimization. And thanks, [~eddyxu], for the 
commit. Thanks to [~ste...@apache.org] and [~cnauroth] for ensuring consistency.

> ADLS connector should not check in contract-test-options.xml
> 
>
> Key: HADOOP-13929
> URL: https://issues.apache.org/jira/browse/HADOOP-13929
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/adl, test
>Affects Versions: 3.0.0-alpha2
>Reporter: John Zhuge
>Assignee: John Zhuge
> Fix For: 3.0.0-alpha3
>
> Attachments: HADOOP-13929.001.patch, HADOOP-13929.002.patch, 
> HADOOP-13929.003.patch, HADOOP-13929.004.patch, HADOOP-13929.005.patch, 
> HADOOP-13929.006.patch, HADOOP-13929.007.patch, HADOOP-13929.008.patch, 
> HADOOP-13929.009.patch, HADOOP-13929.010.patch, HADOOP-13929.011.patch
>
>
> Should not check in the file {{contract-test-options.xml}}. Make sure the 
> file is excluded by {{.gitignore}}. Make sure ADLS {{index.md}} provides a 
> complete example of this file.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14077) Improve the patch of HADOOP-13119

2017-02-13 Thread Yuanbo Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15865021#comment-15865021
 ] 

Yuanbo Liu commented on HADOOP-14077:
-

[~eyang] Sorry to interrupt; would you mind reviewing the patch? Thanks in advance.

> Improve the patch of HADOOP-13119
> -
>
> Key: HADOOP-14077
> URL: https://issues.apache.org/jira/browse/HADOOP-14077
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Yuanbo Liu
>Assignee: Yuanbo Liu
> Attachments: HADOOP-14077.001.patch, HADOOP-14077.002.patch
>
>
> For some links (such as "/jmx" and "/stack"), blocking the links in the filter 
> chain due to an impersonation issue is not friendly for users. For example, 
> user "sam" is not allowed to be impersonated by user "knox", and the link 
> "/jmx" doesn't need any user to do authorization by default; it only needs 
> user "knox" to do authentication. In this case, it's not right to block the 
> access in the SPNEGO filter. We intend to check the impersonation permission 
> when the request's "getRemoteUser" method is used, so that such links ("/jmx", 
> "/stack") are not blocked by mistake.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14080) UserGroupInformation#loginUserFromKeytab does not load hadoop tokens ?

2017-02-13 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14080?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated HADOOP-14080:
-
Description: 
Found that UserGroupInformation#loginUserFromKeytab will not try to load Hadoop 
tokens from HADOOP_TOKEN_FILE_LOCATION the way the loginUserFromSubject method 
does. I know that typically, if you have the keytab, you probably won't need 
the token, but is this expected behavior?

The problem is that if a long-running app on YARN has its own keytabs and logs 
in via UserGroupInformation#loginUserFromKeytab, it will not load the Hadoop 
tokens passed by YARN, and YARN requires the token to be present for 
communication, e.g. AM - RM communication.




  was:
Found that UserGroupInformation#loginUserFromKeytab will not try to load Hadoop 
tokens from HADOOP_TOKEN_FILE_LOCATION the way the loginUserFromSubject method 
does. I know that typically, if you have the keytab, you probably won't need 
the token, but is this expected behavior?

The problem is that if a long-running app on YARN has its own keytabs and logs 
in via UserGroupInformation#loginUserFromKeytab, it will not load the Hadoop 
tokens passed by YARN.





> UserGroupInformation#loginUserFromKeytab does not load hadoop tokens ? 
> ---
>
> Key: HADOOP-14080
> URL: https://issues.apache.org/jira/browse/HADOOP-14080
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Jian He
>Priority: Critical
>
> Found that UserGroupInformation#loginUserFromKeytab will not try to load 
> Hadoop tokens from HADOOP_TOKEN_FILE_LOCATION the way the loginUserFromSubject 
> method does. I know that typically, if you have the keytab, you probably won't 
> need the token, but is this expected behavior?
> The problem is that if a long-running app on YARN has its own keytabs and logs 
> in via UserGroupInformation#loginUserFromKeytab, it will not load the Hadoop 
> tokens passed by YARN, and YARN requires the token to be present for 
> communication, e.g. AM - RM communication.
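
As a hedged sketch of a possible workaround for the behavior described above 
(an illustration under stated assumptions, not a statement of what UGI should 
do): after loginUserFromKeytab, the application can explicitly read the token 
file that YARN points to via HADOOP_TOKEN_FILE_LOCATION and attach its 
credentials to the login UGI, similar to what loginUserFromSubject does.

{code}
// Sketch of a possible application-side workaround: keytab login plus an
// explicit load of the delegation tokens localized by YARN.
import java.io.File;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.Credentials;
import org.apache.hadoop.security.UserGroupInformation;

public class KeytabLoginWithTokens {

  public static UserGroupInformation login(String principal, String keytab)
      throws Exception {
    UserGroupInformation.loginUserFromKeytab(principal, keytab);
    UserGroupInformation ugi = UserGroupInformation.getLoginUser();

    String tokenFile = System.getenv("HADOOP_TOKEN_FILE_LOCATION");
    if (tokenFile != null) {
      // Read the token storage file YARN localized for the container and merge it in.
      Credentials creds =
          Credentials.readTokenStorageFile(new File(tokenFile), new Configuration());
      ugi.addCredentials(creds);
    }
    return ugi;
  }
}
{code}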



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14080) UserGroupInformation#loginUserFromKeytab does not load hadoop tokens ?

2017-02-13 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14080?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated HADOOP-14080:
-
Target Version/s: 2.9.0

> UserGroupInformation#loginUserFromKeytab does not load hadoop tokens ? 
> ---
>
> Key: HADOOP-14080
> URL: https://issues.apache.org/jira/browse/HADOOP-14080
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Jian He
>Priority: Critical
>
> Found that UserGroupInformation#loginUserFromKeytab will not try to load 
> Hadoop tokens from HADOOP_TOKEN_FILE_LOCATION the way the loginUserFromSubject 
> method does. I know that typically, if you have the keytab, you probably won't 
> need the token, but is this expected behavior?
> The problem is that if a long-running app on YARN has its own keytabs and logs 
> in via UserGroupInformation#loginUserFromKeytab, it will not load the Hadoop 
> tokens passed by YARN.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-14080) UserGroupInformation#loginUserFromKeytab does not load hadoop tokens ?

2017-02-13 Thread Jian He (JIRA)
Jian He created HADOOP-14080:


 Summary: UserGroupInformation#loginUserFromKeytab does not load 
hadoop tokens ? 
 Key: HADOOP-14080
 URL: https://issues.apache.org/jira/browse/HADOOP-14080
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Jian He
Priority: Critical


Found that UserGroupInformation#loginUserFromKeytab will not try to load Hadoop 
tokens from HADOOP_TOKEN_FILE_LOCATION the way the loginUserFromSubject method 
does. I know that typically, if you have the keytab, you probably won't need 
the token, but is this expected behavior?

The problem is that if a long-running app on YARN has its own keytabs and logs 
in via UserGroupInformation#loginUserFromKeytab, it will not load the Hadoop 
tokens passed by YARN.






--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14078) TestRaceWhenRelogin fails occasionally

2017-02-13 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15864816#comment-15864816
 ] 

Duo Zhang commented on HADOOP-14078:


The error is expected, as we acquire a service ticket along with the relogin.

There may be a race between the relogin and the verification... Let me think 
about how to make the test stable... This is a test-case issue.

Thanks for opening this issue to track the problem.

> TestRaceWhenRelogin fails occasionally
> --
>
> Key: HADOOP-14078
> URL: https://issues.apache.org/jira/browse/HADOOP-14078
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.9.0, 2.7.4, 2.6.6, 2.8.1, 3.0.0-alpha3
> Environment: Precommit jenkins
>Reporter: Wei-Chiu Chuang
>
> HADOOP-13433 added this test class and it failed in a few precommit jobs like 
> this one: 
> https://builds.apache.org/job/PreCommit-HADOOP-Build/11616/testReport/org.apache.hadoop.security/TestRaceWhenRelogin/test/
> There were a lot of errors in the test, starting with this one
> {noformat}
> 2017-02-13 12:26:01,838 ERROR impl.DefaultKdcHandler 
> (DefaultKdcHandler.java:handleMessage(71)) - Error occured while processing 
> request:
> org.apache.kerby.kerberos.kerb.KrbException: Integrity check on decrypted 
> field failed
>   at 
> org.apache.kerby.kerberos.kerb.crypto.enc.KeKiEnc.decryptWith(KeKiEnc.java:127)
>   at 
> org.apache.kerby.kerberos.kerb.crypto.enc.AbstractEncTypeHandler.decrypt(AbstractEncTypeHandler.java:150)
>   at 
> org.apache.kerby.kerberos.kerb.crypto.enc.AbstractEncTypeHandler.decrypt(AbstractEncTypeHandler.java:138)
>   at 
> org.apache.kerby.kerberos.kerb.crypto.EncryptionHandler.decrypt(EncryptionHandler.java:228)
>   at 
> org.apache.kerby.kerberos.kerb.common.EncryptionUtil.unseal(EncryptionUtil.java:136)
>   at 
> org.apache.kerby.kerberos.kerb.server.request.TgsRequest.verifyAuthenticator(TgsRequest.java:138)
>   at 
> org.apache.kerby.kerberos.kerb.server.preauth.builtin.TgtPreauth.verify(TgtPreauth.java:41)
>   at 
> org.apache.kerby.kerberos.kerb.server.preauth.PreauthHandle.verify(PreauthHandle.java:46)
>   at 
> org.apache.kerby.kerberos.kerb.server.preauth.PreauthHandler.verify(PreauthHandler.java:101)
>   at 
> org.apache.kerby.kerberos.kerb.server.request.KdcRequest.preauth(KdcRequest.java:562)
>   at 
> org.apache.kerby.kerberos.kerb.server.request.KdcRequest.process(KdcRequest.java:181)
>   at 
> org.apache.kerby.kerberos.kerb.server.KdcHandler.handleMessage(KdcHandler.java:115)
>   at 
> org.apache.kerby.kerberos.kerb.server.impl.DefaultKdcHandler.handleMessage(DefaultKdcHandler.java:67)
>   at 
> org.apache.kerby.kerberos.kerb.server.impl.DefaultKdcHandler.run(DefaultKdcHandler.java:52)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14079) Fix breaking link in s3guard.md

2017-02-13 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15864721#comment-15864721
 ] 

Mingliang Liu commented on HADOOP-14079:


Thanks for your prompt review.

> Fix breaking link in s3guard.md
> ---
>
> Key: HADOOP-14079
> URL: https://issues.apache.org/jira/browse/HADOOP-14079
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
>Priority: Trivial
> Fix For: HADOOP-13345
>
> Attachments: HADOOP-14079-HADOOP-13345.000.patch
>
>
> See the initial patch.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14079) Fix breaking link in s3guard.md

2017-02-13 Thread Sean Mackrory (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Mackrory updated HADOOP-14079:
---
   Resolution: Fixed
Fix Version/s: HADOOP-13345
   Status: Resolved  (was: Patch Available)

Committed to HADOOP-13345 branch. Thanks, [~liuml07]!

> Fix breaking link in s3guard.md
> ---
>
> Key: HADOOP-14079
> URL: https://issues.apache.org/jira/browse/HADOOP-14079
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
>Priority: Trivial
> Fix For: HADOOP-13345
>
> Attachments: HADOOP-14079-HADOOP-13345.000.patch
>
>
> See the initial patch.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14079) Fix breaking link in s3guard.md

2017-02-13 Thread Sean Mackrory (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Mackrory updated HADOOP-14079:
---
Issue Type: Sub-task  (was: Bug)
Parent: HADOOP-13345

> Fix breaking link in s3guard.md
> ---
>
> Key: HADOOP-14079
> URL: https://issues.apache.org/jira/browse/HADOOP-14079
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
>Priority: Trivial
> Attachments: HADOOP-14079-HADOOP-13345.000.patch
>
>
> See the initial patch.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-14079) Fix breaking link in s3guard.md

2017-02-13 Thread Sean Mackrory (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15864653#comment-15864653
 ] 

Sean Mackrory edited comment on HADOOP-14079 at 2/13/17 11:26 PM:
--

+1 (binding). Will push after Yetus checks in.


was (Author: mackrorysd):
+1 (binding). Will push shortly.

> Fix breaking link in s3guard.md
> ---
>
> Key: HADOOP-14079
> URL: https://issues.apache.org/jira/browse/HADOOP-14079
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
>Priority: Trivial
> Attachments: HADOOP-14079-HADOOP-13345.000.patch
>
>
> See the initial patch.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14079) Fix breaking link in s3guard.md

2017-02-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15864657#comment-15864657
 ] 

Hadoop QA commented on HADOOP-14079:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
24s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
26s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 15m 15s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-14079 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12852435/HADOOP-14079-HADOOP-13345.000.patch
 |
| Optional Tests |  asflicense  mvnsite  |
| uname | Linux c73d3435e4b7 3.13.0-107-generic #154-Ubuntu SMP Tue Dec 20 
09:57:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | HADOOP-13345 / a7e6dbe |
| modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11619/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Fix breaking link in s3guard.md
> ---
>
> Key: HADOOP-14079
> URL: https://issues.apache.org/jira/browse/HADOOP-14079
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
>Priority: Trivial
> Attachments: HADOOP-14079-HADOOP-13345.000.patch
>
>
> See the initial patch.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14079) Fix breaking link in s3guard.md

2017-02-13 Thread Sean Mackrory (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15864653#comment-15864653
 ] 

Sean Mackrory commented on HADOOP-14079:


+1 (binding). Will push shortly.

> Fix breaking link in s3guard.md
> ---
>
> Key: HADOOP-14079
> URL: https://issues.apache.org/jira/browse/HADOOP-14079
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
>Priority: Trivial
> Attachments: HADOOP-14079-HADOOP-13345.000.patch
>
>
> See the initial patch.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14079) Fix breaking link in s3guard.md

2017-02-13 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15864637#comment-15864637
 ] 

Mingliang Liu commented on HADOOP-14079:


This is a trivial change. Ping [~ste...@apache.org] and [~mackrorysd]. I tried 
two Markdown editors and neither tolerated the current link format.

> Fix breaking link in s3guard.md
> ---
>
> Key: HADOOP-14079
> URL: https://issues.apache.org/jira/browse/HADOOP-14079
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
>Priority: Trivial
> Attachments: HADOOP-14079-HADOOP-13345.000.patch
>
>
> See the initial patch.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14079) Fix breaking link in s3guard.md

2017-02-13 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-14079:
---
Attachment: HADOOP-14079-HADOOP-13345.000.patch

> Fix breaking link in s3guard.md
> ---
>
> Key: HADOOP-14079
> URL: https://issues.apache.org/jira/browse/HADOOP-14079
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
>Priority: Trivial
> Attachments: HADOOP-14079-HADOOP-13345.000.patch
>
>
> See the initial patch.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14079) Fix breaking link in s3guard.md

2017-02-13 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-14079:
---
Status: Patch Available  (was: Open)

> Fix breaking link in s3guard.md
> ---
>
> Key: HADOOP-14079
> URL: https://issues.apache.org/jira/browse/HADOOP-14079
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
>Priority: Trivial
> Attachments: HADOOP-14079-HADOOP-13345.000.patch
>
>
> See the initial patch.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-14079) Fix breaking link in s3guard.md

2017-02-13 Thread Mingliang Liu (JIRA)
Mingliang Liu created HADOOP-14079:
--

 Summary: Fix breaking link in s3guard.md
 Key: HADOOP-14079
 URL: https://issues.apache.org/jira/browse/HADOOP-14079
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/s3
Affects Versions: HADOOP-13345
Reporter: Mingliang Liu
Assignee: Mingliang Liu
Priority: Trivial


See the initial patch.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13929) ADLS connector should not check in contract-test-options.xml

2017-02-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15864629#comment-15864629
 ] 

Hudson commented on HADOOP-13929:
-

ABORTED: Integrated in Jenkins build Hadoop-trunk-Commit #11241 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11241/])
HADOOP-13929. ADLS connector should not check in (lei: rev 
71c23c9fc94cfdf58de80effbc3f51c0925d0cfe)
* (edit) 
hadoop-tools/hadoop-azure-datalake/src/main/java/org/apache/hadoop/fs/adl/AdlFileSystem.java
* (delete) 
hadoop-tools/hadoop-azure-datalake/src/test/resources/contract-test-options.xml
* (edit) .gitignore
* (edit) hadoop-tools/hadoop-azure-datalake/src/site/markdown/index.md
* (edit) 
hadoop-tools/hadoop-azure-datalake/src/test/java/org/apache/hadoop/fs/adl/live/AdlStorageConfiguration.java
* (edit) hadoop-tools/hadoop-azure-datalake/src/test/resources/adls.xml


> ADLS connector should not check in contract-test-options.xml
> 
>
> Key: HADOOP-13929
> URL: https://issues.apache.org/jira/browse/HADOOP-13929
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/adl, test
>Affects Versions: 3.0.0-alpha2
>Reporter: John Zhuge
>Assignee: John Zhuge
> Fix For: 3.0.0-alpha3
>
> Attachments: HADOOP-13929.001.patch, HADOOP-13929.002.patch, 
> HADOOP-13929.003.patch, HADOOP-13929.004.patch, HADOOP-13929.005.patch, 
> HADOOP-13929.006.patch, HADOOP-13929.007.patch, HADOOP-13929.008.patch, 
> HADOOP-13929.009.patch, HADOOP-13929.010.patch, HADOOP-13929.011.patch
>
>
> Should not check in the file {{contract-test-options.xml}}. Make sure the 
> file is excluded by {{.gitignore}}. Make sure ADLS {{index.md}} provides a 
> complete example of this file.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13929) ADLS connector should not check in contract-test-options.xml

2017-02-13 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15864619#comment-15864619
 ] 

John Zhuge commented on HADOOP-13929:
-

Thanks [~eddyxu] for the review and commit! Appreciate the review from 
[~ste...@apache.org], [~cnauroth], and [~vishwajeet.dusane].

> ADLS connector should not check in contract-test-options.xml
> 
>
> Key: HADOOP-13929
> URL: https://issues.apache.org/jira/browse/HADOOP-13929
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/adl, test
>Affects Versions: 3.0.0-alpha2
>Reporter: John Zhuge
>Assignee: John Zhuge
> Fix For: 3.0.0-alpha3
>
> Attachments: HADOOP-13929.001.patch, HADOOP-13929.002.patch, 
> HADOOP-13929.003.patch, HADOOP-13929.004.patch, HADOOP-13929.005.patch, 
> HADOOP-13929.006.patch, HADOOP-13929.007.patch, HADOOP-13929.008.patch, 
> HADOOP-13929.009.patch, HADOOP-13929.010.patch, HADOOP-13929.011.patch
>
>
> Should not check in the file {{contract-test-options.xml}}. Make sure the 
> file is excluded by {{.gitignore}}. Make sure ADLS {{index.md}} provides a 
> complete example of this file.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13398) prevent user classes from loading classes in the parent classpath with ApplicationClassLoader

2017-02-13 Thread Sangjin Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sangjin Lee updated HADOOP-13398:
-
Attachment: hadoop-13398-notes.pdf

Notes attached.

> prevent user classes from loading classes in the parent classpath with 
> ApplicationClassLoader
> -
>
> Key: HADOOP-13398
> URL: https://issues.apache.org/jira/browse/HADOOP-13398
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: util
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
>Priority: Critical
> Attachments: HADOOP-13398-HADOOP-13070.01.patch, 
> HADOOP-13398-HADOOP-13070.02.patch, HADOOP-13398-HADOOP-13070.03.patch, 
> HADOOP-13398-HADOOP-13070.04.patch, hadoop-13398-notes.pdf
>
>
> Today, a user class is able to trigger loading a class from Hadoop's 
> dependencies, with or without the use of {{ApplicationClassLoader}}. This 
> creates an implicit dependence from users' code on Hadoop's dependencies and, 
> as a result, dependency conflicts.
> We should modify {{ApplicationClassLoader}} to prevent a user class from 
> loading a class from the parent classpath.
> This should also cover resource loading (including 
> {{ClassLoader.getResources()}} and, as a corollary, {{ServiceLoader}}).
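
For illustration, here is a minimal Java sketch (not Hadoop's actual 
ApplicationClassLoader and not the attached patch) of the general idea: a 
classloader that does not delegate user-triggered loads to the parent classpath, 
so user code cannot silently pick up Hadoop's dependencies. The class name and 
the package allow-list are assumptions.

{code}
// Minimal sketch: only JDK classes are delegated to the parent; everything else
// must be found on the user classpath, making the isolation explicit.
import java.net.URL;
import java.net.URLClassLoader;

public class IsolatingClassLoader extends URLClassLoader {

  public IsolatingClassLoader(URL[] userClasspath, ClassLoader parent) {
    super(userClasspath, parent);
  }

  @Override
  protected Class<?> loadClass(String name, boolean resolve) throws ClassNotFoundException {
    synchronized (getClassLoadingLock(name)) {
      Class<?> c = findLoadedClass(name);
      if (c == null) {
        if (name.startsWith("java.") || name.startsWith("javax.")) {
          c = super.loadClass(name, false);   // JDK classes still come from the parent
        } else {
          c = findClass(name);                // everything else: user classpath only
        }
      }
      if (resolve) {
        resolveClass(c);
      }
      return c;
    }
  }
}
{code}

The real ApplicationClassLoader additionally honors a configurable list of 
"system" packages; the sketch hard-codes java./javax. purely to keep the idea 
visible.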



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13866) Upgrade netty-all to 4.1.1.Final

2017-02-13 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15864588#comment-15864588
 ] 

Andrew Wang commented on HADOOP-13866:
--

I want the shading done because, from past experience, the Netty bumps have 
been very painful for our downstream users. In retrospect, I don't think we 
should have bumped commonly used dependencies like Netty that are also known to 
be source/binary incompatible, particularly in minor releases. This also 
applies to Jackson, Guava, PB, and other known offenders.

If someone wants this badly enough, they can also do the shading work to get it 
into branch-2. I don't want a drive-by contribution that just changes the 
version in a pom.xml without also doing the work to make the experience smooth 
for our downstream users.

> Upgrade netty-all to 4.1.1.Final
> 
>
> Key: HADOOP-13866
> URL: https://issues.apache.org/jira/browse/HADOOP-13866
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha1
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Blocker
> Attachments: HADOOP-13866.v1.patch, HADOOP-13866.v2.patch, 
> HADOOP-13866.v3.patch, HADOOP-13866.v4.patch, HADOOP-13866.v6.patch, 
> HADOOP-13866.v7.patch, HADOOP-13866.v8.patch, HADOOP-13866.v8.patch, 
> HADOOP-13866.v8.patch, HADOOP-13866.v9.patch
>
>
> netty-all 4.1.1.Final is a stable release which we should upgrade to.
> See the bottom of HADOOP-12927 for related discussion.
> This issue was discovered because hbase 2.0 uses netty 4.1.1.Final.
> When launching mapreduce job from hbase, /grid/0/hadoop/yarn/local/  
> usercache/hbase/appcache/application_1479850535804_0008/container_e01_1479850535804_0008_01_05/mr-framework/hadoop/share/hadoop/hdfs/lib/netty-all-4.0.23.Final.jar
>  (from hdfs) is ahead of 4.1.1.Final jar (from hbase) on the classpath.
> Resulting in the following exception:
> {code}
> 2016-12-01 20:17:26,678 WARN [Default-IPC-NioEventLoopGroup-1-1] 
> io.netty.util.concurrent.DefaultPromise: An exception was thrown by 
> org.apache.hadoop.hbase.ipc.NettyRpcConnection$3.operationComplete()
> java.lang.NoSuchMethodError: 
> io.netty.buffer.ByteBuf.retainedDuplicate()Lio/netty/buffer/ByteBuf;
> at 
> org.apache.hadoop.hbase.ipc.NettyRpcConnection$3.operationComplete(NettyRpcConnection.java:272)
> at 
> org.apache.hadoop.hbase.ipc.NettyRpcConnection$3.operationComplete(NettyRpcConnection.java:262)
> at 
> io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:680)
> at 
> io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:603)
> at 
> io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:563)
> at 
> io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:406)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13929) ADLS connector should not check in contract-test-options.xml

2017-02-13 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13929?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HADOOP-13929:
---
  Resolution: Fixed
Hadoop Flags: Incompatible change,Reviewed  (was: Incompatible change)
  Status: Resolved  (was: Patch Available)

Ok, thanks for the explanation, [~jzhuge]. It makes sense, thus +1.

Committed to trunk.  

Thanks for the patch, [~jzhuge], and thanks for the reviews, [~cnauroth]!


> ADLS connector should not check in contract-test-options.xml
> 
>
> Key: HADOOP-13929
> URL: https://issues.apache.org/jira/browse/HADOOP-13929
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/adl, test
>Affects Versions: 3.0.0-alpha2
>Reporter: John Zhuge
>Assignee: John Zhuge
> Fix For: 3.0.0-alpha3
>
> Attachments: HADOOP-13929.001.patch, HADOOP-13929.002.patch, 
> HADOOP-13929.003.patch, HADOOP-13929.004.patch, HADOOP-13929.005.patch, 
> HADOOP-13929.006.patch, HADOOP-13929.007.patch, HADOOP-13929.008.patch, 
> HADOOP-13929.009.patch, HADOOP-13929.010.patch, HADOOP-13929.011.patch
>
>
> Should not check in the file {{contract-test-options.xml}}. Make sure the 
> file is excluded by {{.gitignore}}. Make sure ADLS {{index.md}} provides a 
> complete example of this file.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-9749) Remove synchronization for UGI.getCurrentUser

2017-02-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9749?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15864567#comment-15864567
 ] 

Hadoop QA commented on HADOOP-9749:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 13m 
34s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
40s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
31s{color} | {color:green} branch-2 passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
30s{color} | {color:green} branch-2 passed with JDK v1.7.0_121 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
16s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
39s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} branch-2 passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} branch-2 passed with JDK v1.7.0_121 {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
35s{color} | {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
27s{color} | {color:green} the patch passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
27s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
57s{color} | {color:red} root in the patch failed with JDK v1.7.0_121. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 57s{color} 
| {color:red} root in the patch failed with JDK v1.7.0_121. {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 27s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 7 new + 140 unchanged - 4 fixed = 147 total (was 144) {color} |
| {color:red}-1{color} | {color:red} mvnsite {color} | {color:red}  0m 
42s{color} | {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
28s{color} | {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed with JDK v1.8.0_121 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed with JDK v1.7.0_121 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 44s{color} 
| {color:red} hadoop-common in the patch failed with JDK v1.7.0_121. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 67m 54s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_121 Timed out junit tests | 
org.apache.hadoop.http.TestHttpServerLifecycle |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:b59b8b7 |
| JIRA Issue | HADOOP-9749 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12852419/HADOOP-9749.branch-2.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux bb28b03b6fd8 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Commented] (HADOOP-13398) prevent user classes from loading classes in the parent classpath with ApplicationClassLoader

2017-02-13 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15864541#comment-15864541
 ] 

Sean Busbey commented on HADOOP-13398:
--

Thanks for the pointer. As a bonus, I remember that one. :)

> prevent user classes from loading classes in the parent classpath with 
> ApplicationClassLoader
> -
>
> Key: HADOOP-13398
> URL: https://issues.apache.org/jira/browse/HADOOP-13398
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: util
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
>Priority: Critical
> Attachments: HADOOP-13398-HADOOP-13070.01.patch, 
> HADOOP-13398-HADOOP-13070.02.patch, HADOOP-13398-HADOOP-13070.03.patch, 
> HADOOP-13398-HADOOP-13070.04.patch
>
>
> Today, a user class is able to trigger loading a class from Hadoop's 
> dependencies, with or without the use of {{ApplicationClassLoader}}, and it 
> creates an implicit dependence from users' code on Hadoop's dependencies, and 
> as a result dependency conflicts.
> We should modify {{ApplicationClassLoader}} to prevent a user class from 
> loading a class from the parent classpath.
> This should also cover resource loading (including 
> {{ClassLoader.getResources()}} and as a corollary {{ServiceLoader}}).
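
For illustration only, here is a toy sketch of the isolation idea described above. It is not the actual {{ApplicationClassLoader}} change; the class name and the java/javax allowlist are assumptions made for the sketch.
{code}
import java.net.URL;
import java.net.URLClassLoader;

// Toy sketch, not the real ApplicationClassLoader: user classes are resolved
// only from this loader's own URLs; delegation to the parent classloader is
// limited to JDK classes, so Hadoop's dependencies never leak in implicitly.
class IsolatingClassLoader extends URLClassLoader {
  IsolatingClassLoader(URL[] urls, ClassLoader parent) {
    super(urls, parent);
  }

  @Override
  protected Class<?> loadClass(String name, boolean resolve)
      throws ClassNotFoundException {
    synchronized (getClassLoadingLock(name)) {
      Class<?> c = findLoadedClass(name);
      if (c == null) {
        if (name.startsWith("java.") || name.startsWith("javax.")) {
          c = super.loadClass(name, false);  // JDK classes still come from the parent
        } else {
          c = findClass(name);               // everything else: this loader's URLs only
        }
      }
      if (resolve) {
        resolveClass(c);
      }
      return c;
    }
  }
}
{code}
A real change would also have to restrict {{getResources()}} in the same way so that {{ServiceLoader}} lookups do not fall through to the parent.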



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13398) prevent user classes from loading classes in the parent classpath with ApplicationClassLoader

2017-02-13 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15864495#comment-15864495
 ] 

Sangjin Lee commented on HADOOP-13398:
--

Thanks Sean. I'll add a doc that summarizes the changes soon. Until then, you 
might want to look at [this 
one|https://issues.apache.org/jira/secure/attachment/12803266/classloading-improvements-ideas-v.3.pdf]
 attached to the parent JIRA (HADOOP-13070). That needs a little update as well 
but is still pretty accurate.

> prevent user classes from loading classes in the parent classpath with 
> ApplicationClassLoader
> -
>
> Key: HADOOP-13398
> URL: https://issues.apache.org/jira/browse/HADOOP-13398
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: util
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
>Priority: Critical
> Attachments: HADOOP-13398-HADOOP-13070.01.patch, 
> HADOOP-13398-HADOOP-13070.02.patch, HADOOP-13398-HADOOP-13070.03.patch, 
> HADOOP-13398-HADOOP-13070.04.patch
>
>
> Today, a user class is able to trigger loading a class from Hadoop's 
> dependencies, with or without the use of {{ApplicationClassLoader}}, and it 
> creates an implicit dependence from users' code on Hadoop's dependencies, and 
> as a result dependency conflicts.
> We should modify {{ApplicationClassLoader}} to prevent a user class from 
> loading a class from the parent classpath.
> This should also cover resource loading (including 
> {{ClassLoader.getResources()}} and as a corollary {{ServiceLoader}}).



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14075) chown doesn't work with usernames containing '\' character

2017-02-13 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15864493#comment-15864493
 ] 

Wei-Chiu Chuang commented on HADOOP-14075:
--

While this patch itself looks good, I have been wondering whether there is a 
better way to address this issue.

The way I see the issue of the allowed character set in user names is that 
there is no single enforcement rule applied to all interfaces. For example, the 
SETOWNER operation in WebHDFS/HttpFS has a different rule than the command-line 
chown, and I am sure chown via fuse-dfs has yet another set of allowed characters.
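
To make the inconsistency concrete, here is a small illustrative sketch. The patterns are hypothetical, not the actual FsShell or WebHDFS validation code; they just show how a whitelist-style owner pattern rejects a down-level logon name while a more permissive one accepts it.
{code}
import java.util.regex.Pattern;

public class OwnerPatternSketch {
  // Hypothetical strict pattern: letters, digits and a few punctuation characters only.
  private static final Pattern STRICT =
      Pattern.compile("^[A-Za-z0-9._@/-]+(:[A-Za-z0-9._@/-]*)?$");
  // Hypothetical relaxed pattern: anything except ':' is allowed in the owner part.
  private static final Pattern RELAXED =
      Pattern.compile("^[^:]+(:[^:]*)?$");

  public static void main(String[] args) {
    String arg = "FOOBAR\\testuser";  // literal value: FOOBAR\testuser
    System.out.println("strict : " + STRICT.matcher(arg).matches());   // false, the -chown error above
    System.out.println("relaxed: " + RELAXED.matcher(arg).matches());  // true
  }
}
{code}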

> chown doesn't work with usernames containing '\' character
> --
>
> Key: HADOOP-14075
> URL: https://issues.apache.org/jira/browse/HADOOP-14075
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Attila Bukor
>Assignee: Attila Bukor
> Attachments: HADOOP-14075.001.patch, HADOOP-14075.002.patch
>
>
> Usernames containing backslash (e.g. down-level logon names) seem to work 
> fine with Hadoop, except for chown.
> {code}
> $ HADOOP_USER_NAME="FOOBAR\\testuser" hdfs dfs -mkdir /test/testfile1
> $ hdfs dfs -ls /test
> Found 1 items
> drwxrwxr-x   - FOOBAR\testuser supergroup  0 2017-02-10 12:49 
> /test/testfile1
> $ HADOOP_USER_NAME="testuser" hdfs dfs -mkdir /test/testfile2
> $ HADOOP_USER_NAME="hdfs" hdfs dfs -chown "FOOBAR\\testuser" /test/testfile2
> -chown: 'FOOBAR\testuser' does not match expected pattern for [owner][:group].
> Usage: hadoop fs [generic options] -chown [-R] [OWNER][:[GROUP]] PATH...
> $
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-9749) Remove synchronization for UGI.getCurrentUser

2017-02-13 Thread Daryn Sharp (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9749?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daryn Sharp updated HADOOP-9749:

Attachment: HADOOP-9749.branch-2.patch
HADOOP-9749.trunk.patch

Latest versions of age-old internal patches to avoid UGI synchronization issues 
that cause unnecessary contention and corruption of private credentials during 
relogin.

The synchronization model for the UGI is fundamentally flawed.  Instance-level 
synchronization is meaningless due to the many-to-one relationship of UGI to 
Subject.  Class-level synchronization only applies to other UGI methods, not to 
authenticators (e.g. GSSAPI or SPNEGO), which also modify the private credentials.

The current class synchronization is primarily intended to guard 
getCurrentUser/getLoginUser against a relogin.  This creates a contention point 
for common-case usage, yet it does not guard against authenticator modifications.

The comprehensive solution is removing class and instance synchronization and 
replacing it with authenticator-friendly synchronization on the underlying 
Subject's private credentials during:
# Instantiation of a new UGI, to guard the checks for the keytab and ticket.
# The entire relogin (logout/login), to avoid inconsistencies or corruption by 
authenticators.

There’s one wrinkle as detailed by another subtask.  The hadoop login conf 
relies on class statics for keytab and principal.  Until removed, this requires 
all login-related methods related to synchronize on a global login lock before 
synchronizing on the Subject’s private credentials.  Effectively this replaces 
the class level synchronization previously used to protect these fields, 
enabling getCurrentUser to become concurrent.

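A minimal sketch of that locking order, with placeholder names (illustrative only, not the actual UserGroupInformation change in the attached patches):
{code}
import java.util.Set;
import javax.security.auth.Subject;

// Illustrative only: LOGIN_LOCK stands in for the keytab/principal class
// statics, and logout/login are placeholders for the real relogin steps.
class ReloginLockingSketch {
  private static final Object LOGIN_LOCK = new Object();

  private final Subject subject;

  ReloginLockingSketch(Subject subject) {
    this.subject = subject;
  }

  void relogin(Runnable logout, Runnable login) {
    synchronized (LOGIN_LOCK) {
      // Per the comment above, the Subject's private credential set is what
      // authenticators also lock on, so holding it makes the whole
      // logout/login sequence atomic with respect to them.
      Set<Object> privateCreds = subject.getPrivateCredentials();
      synchronized (privateCreds) {
        logout.run();
        login.run();
      }
    }
  }
}
{code}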

> Remove synchronization for UGI.getCurrentUser
> -
>
> Key: HADOOP-9749
> URL: https://issues.apache.org/jira/browse/HADOOP-9749
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0-alpha1
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Critical
> Attachments: HADOOP-9749.branch-2.patch, HADOOP-9749.trunk.patch
>
>
> HADOOP-7854 added synchronization to {{getCurrentUser}} due to 
> {{ConcurrentModificationExceptions}}.  This degrades NN call handler 
> performance.
> The problem was not well understood at the time, but it's caused by a 
> collision between relogin and {{getCurrentUser}} due to a bug in 
> {{Krb5LoginModule}}.  Avoiding the collision will allow removal of the 
> synchronization.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-9749) Remove synchronization for UGI.getCurrentUser

2017-02-13 Thread Daryn Sharp (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9749?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daryn Sharp updated HADOOP-9749:

Target Version/s: 2.8.1
  Status: Patch Available  (was: Open)

The only difference between the trunk and branch-2 patches is the import order 
in the unit tests.

> Remove synchronization for UGI.getCurrentUser
> -
>
> Key: HADOOP-9749
> URL: https://issues.apache.org/jira/browse/HADOOP-9749
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 3.0.0-alpha1, 2.0.0-alpha, 0.23.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Critical
> Attachments: HADOOP-9749.branch-2.patch, HADOOP-9749.trunk.patch
>
>
> HADOOP-7854 added synchronization to {{getCurrentUser}} due to 
> {{ConcurrentModificationExceptions}}.  This degrades NN call handler 
> performance.
> The problem was not well understood at the time, but it's caused by a 
> collision between relogin and {{getCurrentUser}} due to a bug in 
> {{Krb5LoginModule}}.  Avoiding the collision will allow removal of the 
> synchronization.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13398) prevent user classes from loading classes in the parent classpath with ApplicationClassLoader

2017-02-13 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15864324#comment-15864324
 ] 

Sean Busbey commented on HADOOP-13398:
--

I'm happy to review; I should have time at the end of this week. A doc explaining 
the changes is always welcome, especially since I'm sure we'll both forget in 
9-12 months' time.

> prevent user classes from loading classes in the parent classpath with 
> ApplicationClassLoader
> -
>
> Key: HADOOP-13398
> URL: https://issues.apache.org/jira/browse/HADOOP-13398
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: util
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
>Priority: Critical
> Attachments: HADOOP-13398-HADOOP-13070.01.patch, 
> HADOOP-13398-HADOOP-13070.02.patch, HADOOP-13398-HADOOP-13070.03.patch, 
> HADOOP-13398-HADOOP-13070.04.patch
>
>
> Today, a user class is able to trigger loading a class from Hadoop's 
> dependencies, with or without the use of {{ApplicationClassLoader}}, and it 
> creates an implicit dependence from users' code on Hadoop's dependencies, and 
> as a result dependency conflicts.
> We should modify {{ApplicationClassLoader}} to prevent a user class from 
> loading a class from the parent classpath.
> This should also cover resource loading (including 
> {{ClassLoader.getResources()}} and as a corollary {{ServiceLoader}}).



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13945) Azure: Add Kerberos and Delegation token support to WASB client.

2017-02-13 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13945?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15864241#comment-15864241
 ] 

Mingliang Liu commented on HADOOP-13945:


Can you provide a new patch to get a clean Jenkins pre-commit run? Specifically, 
to fix the failing unit test {{hadoop.conf.TestCommonConfigurationFields}}, we 
can skip the newly added config in the method {{initializeMemberVariables()}}.
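
For reference, a rough sketch of the kind of change meant here. The real test inherits its skip set from a base class, so the local field below merely stands in for it, and the property name is a placeholder for whichever key the patch actually adds.
{code}
import java.util.HashSet;
import java.util.Set;

// Sketch only: field and property names are stand-ins, not the real
// TestCommonConfigurationFields members.
class SkipNewConfigSketch {
  Set<String> xmlPropsToSkipCompare;

  void initializeMemberVariables() {
    xmlPropsToSkipCompare = new HashSet<String>();
    // Exclude the newly added key from the XML vs. declared-constants comparison.
    xmlPropsToSkipCompare.add("fs.azure.example.new.key");
  }
}
{code}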

> Azure: Add Kerberos and Delegation token support to WASB client.
> 
>
> Key: HADOOP-13945
> URL: https://issues.apache.org/jira/browse/HADOOP-13945
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Affects Versions: 2.8.0
>Reporter: Santhosh G Nayak
>Assignee: Santhosh G Nayak
> Attachments: HADOOP-13945.1.patch, HADOOP-13945.2.patch, 
> HADOOP-13945.3.patch
>
>
> The current implementation of the Azure storage client for Hadoop ({{WASB}}) 
> does not support Kerberos authentication and FileSystem authorization, which 
> makes it unusable in secure environments with a multi-user setup. 
> To make {{WASB}} client more suitable to run in Secure environments, there 
> are 2 initiatives under way for providing the authorization (HADOOP-13930) 
> and fine grained access control (HADOOP-13863) support.
> This JIRA is created to add Kerberos and delegation token support to {{WASB}} 
> client to fetch Azure Storage SAS keys (from Remote service as discussed in 
> HADOOP-13863), which provides fine grained timed access to containers and 
> blobs. 
> For delegation token management, the proposal is to use the same REST service 
> that is being used to generate the SAS Keys.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13578) Add Codec for ZStandard Compression

2017-02-13 Thread churro morales (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15864240#comment-15864240
 ] 

churro morales commented on HADOOP-13578:
-

Hi [~redsky_luan] - I believe I can put up a patch for 2.7; it should be pretty 
easy. 

Let's create another JIRA ticket for this and move it out. 

> Add Codec for ZStandard Compression
> ---
>
> Key: HADOOP-13578
> URL: https://issues.apache.org/jira/browse/HADOOP-13578
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: churro morales
>Assignee: churro morales
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13578-branch-2.v9.patch, HADOOP-13578.patch, 
> HADOOP-13578.v1.patch, HADOOP-13578.v2.patch, HADOOP-13578.v3.patch, 
> HADOOP-13578.v4.patch, HADOOP-13578.v5.patch, HADOOP-13578.v6.patch, 
> HADOOP-13578.v7.patch, HADOOP-13578.v8.patch, HADOOP-13578.v9.patch
>
>
> ZStandard: https://github.com/facebook/zstd has been used in production for 6 
> months by facebook now.  v1.0 was recently released.  Create a codec for this 
> library.  



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13930) Azure: Add Authorization support to WASB

2017-02-13 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15864227#comment-15864227
 ] 

Mingliang Liu commented on HADOOP-13930:


Hi [~dchickabasapa], do you have an updated patch available that addresses all 
of Steve's comments? I'd like to get this into {{trunk}} and {{branch-2}}. Thanks,

> Azure: Add Authorization support to WASB
> 
>
> Key: HADOOP-13930
> URL: https://issues.apache.org/jira/browse/HADOOP-13930
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/azure
>Affects Versions: 2.8.0
>Reporter: Dushyanth
>Assignee: Dushyanth
> Attachments: HADOOP-13930.001.patch, HADOOP-13930.002.patch
>
>
> As highlighted in HADOOP-13863, the current implementation of WASB does not 
> support authorization for any FileSystem operations. This jira is created to 
> add authorization support for WASB. The current approach is to enforce 
> authorization via an external REST service (one approach could be to use a 
> component like Ranger to enforce authorization).  The support for 
> authorization would be hidden behind a configuration flag, 
> "fs.azure.enable.authorization", and the remote service is expected to be 
> provided via the config "fs.azure.remote.auth.service.url".
> The remote service is expected to provide support for the following REST 
> call: {URL}/CHECK_AUTHORIZATION
> An example request:
> {URL}/CHECK_AUTHORIZATION?wasb_absolute_path=<absolute path>&operation_type=<operation type>&delegation_token=<delegation token>
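
For context, a minimal sketch of wiring up the two flags named in the description above; the service URL is a made-up placeholder, and this is not code from the attached patches.
{code}
import org.apache.hadoop.conf.Configuration;

public class WasbAuthorizationConfigExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Flag and URL property names come from the issue description above.
    conf.setBoolean("fs.azure.enable.authorization", true);
    conf.set("fs.azure.remote.auth.service.url", "https://authz.example.internal/");
    System.out.println(conf.get("fs.azure.remote.auth.service.url"));
  }
}
{code}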



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13508) FsPermission string constructor does not recognize sticky bit

2017-02-13 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13508?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15864225#comment-15864225
 ] 

Wei-Chiu Chuang commented on HADOOP-13508:
--

Pushed the commit into branch-2.8

> FsPermission string constructor does not recognize sticky bit
> -
>
> Key: HADOOP-13508
> URL: https://issues.apache.org/jira/browse/HADOOP-13508
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Atul Sikaria
>Assignee: Atul Sikaria
> Fix For: 2.9.0, 3.0.0-alpha2, 2.8.1
>
> Attachments: HADOOP-13508.003.patch, HADOOP-13508.004.patch, 
> HADOOP-13508.005.patch, HADOOP-13508.006.patch, HADOOP-13508-1.patch, 
> HADOOP-13508-2.patch, HADOOP-13508.branch-2.patch
>
>
> FsPermissions's string constructor breaks on valid permission strings, like 
> "1777". 
> This is because the FsPermission class naïvely uses UmaskParser to do its 
> parsing of permissions: (from source code):
> public FsPermission(String mode) {
> this((new UmaskParser(mode)).getUMask());
> }
> The mode string UMask accepts is subtly different (esp wrt sticky bit), so 
> parsing Umask is not the same as parsing FsPermission. 
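
For context, a small usage sketch of what the fixed string constructor is expected to handle; the expected output is inferred from the description above rather than taken from the patches.
{code}
import org.apache.hadoop.fs.permission.FsPermission;

public class StickyBitParseExample {
  public static void main(String[] args) {
    // "1777" is rwxrwxrwx plus the sticky bit; the string constructor should accept it.
    FsPermission p = new FsPermission("1777");
    System.out.println(p);                 // expected: rwxrwxrwt
    System.out.println(p.getStickyBit());  // expected: true
  }
}
{code}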



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13508) FsPermission string constructor does not recognize sticky bit

2017-02-13 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13508?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-13508:
-
Fix Version/s: 2.8.1

> FsPermission string constructor does not recognize sticky bit
> -
>
> Key: HADOOP-13508
> URL: https://issues.apache.org/jira/browse/HADOOP-13508
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Atul Sikaria
>Assignee: Atul Sikaria
> Fix For: 2.9.0, 3.0.0-alpha2, 2.8.1
>
> Attachments: HADOOP-13508.003.patch, HADOOP-13508.004.patch, 
> HADOOP-13508.005.patch, HADOOP-13508.006.patch, HADOOP-13508-1.patch, 
> HADOOP-13508-2.patch, HADOOP-13508.branch-2.patch
>
>
> FsPermissions's string constructor breaks on valid permission strings, like 
> "1777". 
> This is because the FsPermission class naïvely uses UmaskParser to do its 
> parsing of permissions: (from source code):
> public FsPermission(String mode) {
> this((new UmaskParser(mode)).getUMask());
> }
> The mode string UMask accepts is subtly different (esp wrt sticky bit), so 
> parsing Umask is not the same as parsing FsPermission. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13233) help of stat is confusing

2017-02-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15864202#comment-15864202
 ] 

Hudson commented on HADOOP-13233:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #11239 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11239/])
HADOOP-13233. help of stat is confusing. Contributed by Attila Bukor. (weichiu: 
rev cc45da79fda7dfba2795ac397d62f40a858dcdd9)
* (edit) 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/shell/Stat.java
* (edit) hadoop-common-project/hadoop-common/src/test/resources/testConf.xml
* (edit) 
hadoop-common-project/hadoop-common/src/site/markdown/FileSystemShell.md


> help of stat is confusing
> -
>
> Key: HADOOP-13233
> URL: https://issues.apache.org/jira/browse/HADOOP-13233
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation, fs
>Affects Versions: 2.7.2
>Reporter: Xiaohe Lan
>Assignee: Attila Bukor
>Priority: Trivial
>  Labels: newbie
> Fix For: 2.9.0, 3.0.0-alpha3
>
> Attachments: HADOOP-13233.001.patch, HADOOP-13233.002.patch
>
>
> %b is actually printing the size of a file in bytes, while in help it says 
> filesize in blocks.
> {code}
> hdfs dfs -help stat
> -stat [format] <path> ... :
>   Print statistics about the file/directory at <path>
>   in the specified format. Format accepts filesize in
>   blocks (%b)
> ...
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13929) ADLS connector should not check in contract-test-options.xml

2017-02-13 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15864176#comment-15864176
 ] 

John Zhuge commented on HADOOP-13929:
-

[~cnauroth] recommended removing it from {{.gitignore}}:

bq. Patch 005 deleted contract-test-options.xml, but kept it listed in 
.gitignore. I don't think it needs to remain in .gitignore, because there is no 
other file anywhere in the source tree named contract-test-options.xml, besides 
the ADL one that the patch deletes.

> ADLS connector should not check in contract-test-options.xml
> 
>
> Key: HADOOP-13929
> URL: https://issues.apache.org/jira/browse/HADOOP-13929
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/adl, test
>Affects Versions: 3.0.0-alpha2
>Reporter: John Zhuge
>Assignee: John Zhuge
> Fix For: 3.0.0-alpha3
>
> Attachments: HADOOP-13929.001.patch, HADOOP-13929.002.patch, 
> HADOOP-13929.003.patch, HADOOP-13929.004.patch, HADOOP-13929.005.patch, 
> HADOOP-13929.006.patch, HADOOP-13929.007.patch, HADOOP-13929.008.patch, 
> HADOOP-13929.009.patch, HADOOP-13929.010.patch, HADOOP-13929.011.patch
>
>
> Should not check in the file {{contract-test-options.xml}}. Make sure the 
> file is excluded by {{.gitignore}}. Make sure ADLS {{index.md}} provides a 
> complete example of this file.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13233) help of stat is confusing

2017-02-13 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13233?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-13233:
-
   Resolution: Fixed
Fix Version/s: 3.0.0-alpha3
   2.9.0
   Status: Resolved  (was: Patch Available)

Committed the patch into trunk and branch-2. There are quite a few conflicts in 
branch-2.8 and below.

Thanks to [~r1pp3rj4ck] for the patch, [~xilan] for filing the jira and 
[~surendrasingh] for the review!

> help of stat is confusing
> -
>
> Key: HADOOP-13233
> URL: https://issues.apache.org/jira/browse/HADOOP-13233
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation, fs
>Affects Versions: 2.7.2
>Reporter: Xiaohe Lan
>Assignee: Attila Bukor
>Priority: Trivial
>  Labels: newbie
> Fix For: 2.9.0, 3.0.0-alpha3
>
> Attachments: HADOOP-13233.001.patch, HADOOP-13233.002.patch
>
>
> %b is actually printing the size of a file in bytes, while in help it says 
> filesize in blocks.
> {code}
> hdfs dfs -help stat
> -stat [format] <path> ... :
>   Print statistics about the file/directory at <path>
>   in the specified format. Format accepts filesize in
>   blocks (%b)
> ...
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13233) help of stat is confusing

2017-02-13 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13233?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-13233:
-
Component/s: fs
 documentation

> help of stat is confusing
> -
>
> Key: HADOOP-13233
> URL: https://issues.apache.org/jira/browse/HADOOP-13233
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation, fs
>Affects Versions: 2.7.2
>Reporter: Xiaohe Lan
>Assignee: Attila Bukor
>Priority: Trivial
>  Labels: newbie
> Attachments: HADOOP-13233.001.patch, HADOOP-13233.002.patch
>
>
> %b is actually printing the size of a file in bytes, while in help it says 
> filesize in blocks.
> {code}
> hdfs dfs -help stat
> -stat [format] <path> ... :
>   Print statistics about the file/directory at <path>
>   in the specified format. Format accepts filesize in
>   blocks (%b)
> ...
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13929) ADLS connector should not check in contract-test-options.xml

2017-02-13 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13929?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15864137#comment-15864137
 ] 

Lei (Eddy) Xu commented on HADOOP-13929:


Hi, [~jzhuge]

Should we also put {{contract-test-options.xml}} into {{.gitignore}}?  

> ADLS connector should not check in contract-test-options.xml
> 
>
> Key: HADOOP-13929
> URL: https://issues.apache.org/jira/browse/HADOOP-13929
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/adl, test
>Affects Versions: 3.0.0-alpha2
>Reporter: John Zhuge
>Assignee: John Zhuge
> Fix For: 3.0.0-alpha3
>
> Attachments: HADOOP-13929.001.patch, HADOOP-13929.002.patch, 
> HADOOP-13929.003.patch, HADOOP-13929.004.patch, HADOOP-13929.005.patch, 
> HADOOP-13929.006.patch, HADOOP-13929.007.patch, HADOOP-13929.008.patch, 
> HADOOP-13929.009.patch, HADOOP-13929.010.patch, HADOOP-13929.011.patch
>
>
> Should not check in the file {{contract-test-options.xml}}. Make sure the 
> file is excluded by {{.gitignore}}. Make sure ADLS {{index.md}} provides a 
> complete example of this file.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13233) help of stat is confusing

2017-02-13 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15864117#comment-15864117
 ] 

Wei-Chiu Chuang commented on HADOOP-13233:
--

+1
No need to fix the checkstyle warning. The failed tests are not reproducible.

> help of stat is confusing
> -
>
> Key: HADOOP-13233
> URL: https://issues.apache.org/jira/browse/HADOOP-13233
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.2
>Reporter: Xiaohe Lan
>Assignee: Attila Bukor
>Priority: Trivial
>  Labels: newbie
> Attachments: HADOOP-13233.001.patch, HADOOP-13233.002.patch
>
>
> %b is actually printing the size of a file in bytes, while in help it says 
> filesize in blocks.
> {code}
> hdfs dfs -help stat
> -stat [format] <path> ... :
>   Print statistics about the file/directory at <path>
>   in the specified format. Format accepts filesize in
>   blocks (%b)
> ...
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13433) Race in UGI.reloginFromKeytab

2017-02-13 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13433?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15864115#comment-15864115
 ] 

Wei-Chiu Chuang commented on HADOOP-13433:
--

The added test TestRaceWhenRelogin has failed occasionally in a few precommit 
jobs. Would anyone be interested in looking into this? I filed HADOOP-14078 to 
track it. Thanks!

> Race in UGI.reloginFromKeytab
> -
>
> Key: HADOOP-13433
> URL: https://issues.apache.org/jira/browse/HADOOP-13433
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.8.0, 2.7.3, 2.6.5, 3.0.0-alpha1
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.9.0, 2.7.4, 2.6.6, 2.8.1, 3.0.0-alpha3
>
> Attachments: HADOOP-13433-branch-2.7.patch, 
> HADOOP-13433-branch-2.7-v1.patch, HADOOP-13433-branch-2.7-v2.patch, 
> HADOOP-13433-branch-2.8.patch, HADOOP-13433-branch-2.8.patch, 
> HADOOP-13433-branch-2.8-v1.patch, HADOOP-13433-branch-2.patch, 
> HADOOP-13433.patch, HADOOP-13433-v1.patch, HADOOP-13433-v2.patch, 
> HADOOP-13433-v4.patch, HADOOP-13433-v5.patch, HADOOP-13433-v6.patch, 
> HBASE-13433-testcase-v3.patch
>
>
> This is a problem that has troubled us for several years. For our HBase 
> cluster, sometimes the RS will be stuck due to
> {noformat}
> 2016-06-20,03:44:12,936 INFO org.apache.hadoop.ipc.SecureClient: Exception 
> encountered while connecting to the server :
> javax.security.sasl.SaslException: GSS initiate failed [Caused by 
> GSSException: No valid credentials provided (Mechanism level: The ticket 
> isn't for us (35) - BAD TGS SERVER NAME)]
> at 
> com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:194)
> at 
> org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClient.java:140)
> at 
> org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection.setupSaslConnection(SecureClient.java:187)
> at 
> org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection.access$700(SecureClient.java:95)
> at 
> org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection$2.run(SecureClient.java:325)
> at 
> org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection$2.run(SecureClient.java:322)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.Subject.doAs(Subject.java:396)
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1781)
> at sun.reflect.GeneratedMethodAccessor23.invoke(Unknown Source)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> at java.lang.reflect.Method.invoke(Method.java:597)
> at org.apache.hadoop.hbase.util.Methods.call(Methods.java:37)
> at org.apache.hadoop.hbase.security.User.call(User.java:607)
> at org.apache.hadoop.hbase.security.User.access$700(User.java:51)
> at 
> org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:461)
> at 
> org.apache.hadoop.hbase.ipc.SecureClient$SecureConnection.setupIOstreams(SecureClient.java:321)
> at 
> org.apache.hadoop.hbase.ipc.HBaseClient.getConnection(HBaseClient.java:1164)
> at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:1004)
> at 
> org.apache.hadoop.hbase.ipc.SecureRpcEngine$Invoker.invoke(SecureRpcEngine.java:107)
> at $Proxy24.replicateLogEntries(Unknown Source)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.shipEdits(ReplicationSource.java:962)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.runLoop(ReplicationSource.java:466)
> at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSource.run(ReplicationSource.java:515)
> Caused by: GSSException: No valid credentials provided (Mechanism level: The 
> ticket isn't for us (35) - BAD TGS SERVER NAME)
> at 
> sun.security.jgss.krb5.Krb5Context.initSecContext(Krb5Context.java:663)
> at 
> sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:248)
> at 
> sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:180)
> at 
> com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:175)
> ... 23 more
> Caused by: KrbException: The ticket isn't for us (35) - BAD TGS SERVER NAME
> at sun.security.krb5.KrbTgsRep.(KrbTgsRep.java:64)
> at sun.security.krb5.KrbTgsReq.getReply(KrbTgsReq.java:185)
> at 
> sun.security.krb5.internal.CredentialsUtil.serviceCreds(CredentialsUtil.java:294)
> at 
> sun.security.krb5.internal.CredentialsUtil.acquireServiceCreds(CredentialsUtil.java:106)
> at 
> 

[jira] [Created] (HADOOP-14078) TestRaceWhenRelogin fails occasionally

2017-02-13 Thread Wei-Chiu Chuang (JIRA)
Wei-Chiu Chuang created HADOOP-14078:


 Summary: TestRaceWhenRelogin fails occasionally
 Key: HADOOP-14078
 URL: https://issues.apache.org/jira/browse/HADOOP-14078
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.9.0, 2.7.4, 2.6.6, 2.8.1, 3.0.0-alpha3
 Environment: Precommit jenkins
Reporter: Wei-Chiu Chuang


HADOOP-13433 added this test class and it failed in a few precommit jobs like 
this one: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11616/testReport/org.apache.hadoop.security/TestRaceWhenRelogin/test/

There were a lot of errors in the test, starting with this one
{noformat}
2017-02-13 12:26:01,838 ERROR impl.DefaultKdcHandler 
(DefaultKdcHandler.java:handleMessage(71)) - Error occured while processing 
request:
org.apache.kerby.kerberos.kerb.KrbException: Integrity check on decrypted field 
failed
at 
org.apache.kerby.kerberos.kerb.crypto.enc.KeKiEnc.decryptWith(KeKiEnc.java:127)
at 
org.apache.kerby.kerberos.kerb.crypto.enc.AbstractEncTypeHandler.decrypt(AbstractEncTypeHandler.java:150)
at 
org.apache.kerby.kerberos.kerb.crypto.enc.AbstractEncTypeHandler.decrypt(AbstractEncTypeHandler.java:138)
at 
org.apache.kerby.kerberos.kerb.crypto.EncryptionHandler.decrypt(EncryptionHandler.java:228)
at 
org.apache.kerby.kerberos.kerb.common.EncryptionUtil.unseal(EncryptionUtil.java:136)
at 
org.apache.kerby.kerberos.kerb.server.request.TgsRequest.verifyAuthenticator(TgsRequest.java:138)
at 
org.apache.kerby.kerberos.kerb.server.preauth.builtin.TgtPreauth.verify(TgtPreauth.java:41)
at 
org.apache.kerby.kerberos.kerb.server.preauth.PreauthHandle.verify(PreauthHandle.java:46)
at 
org.apache.kerby.kerberos.kerb.server.preauth.PreauthHandler.verify(PreauthHandler.java:101)
at 
org.apache.kerby.kerberos.kerb.server.request.KdcRequest.preauth(KdcRequest.java:562)
at 
org.apache.kerby.kerberos.kerb.server.request.KdcRequest.process(KdcRequest.java:181)
at 
org.apache.kerby.kerberos.kerb.server.KdcHandler.handleMessage(KdcHandler.java:115)
at 
org.apache.kerby.kerberos.kerb.server.impl.DefaultKdcHandler.handleMessage(DefaultKdcHandler.java:67)
at 
org.apache.kerby.kerberos.kerb.server.impl.DefaultKdcHandler.run(DefaultKdcHandler.java:52)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
{noformat}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14075) chown doesn't work with usernames containing '\' character

2017-02-13 Thread Attila Bukor (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15864108#comment-15864108
 ] 

Attila Bukor commented on HADOOP-14075:
---

Thanks [~jojochuang]. I ran these 6 tests locally and all of them passed; I had 
already run the previous 2 before.

> chown doesn't work with usernames containing '\' character
> --
>
> Key: HADOOP-14075
> URL: https://issues.apache.org/jira/browse/HADOOP-14075
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Attila Bukor
>Assignee: Attila Bukor
> Attachments: HADOOP-14075.001.patch, HADOOP-14075.002.patch
>
>
> Usernames containing backslash (e.g. down-level logon names) seem to work 
> fine with Hadoop, except for chown.
> {code}
> $ HADOOP_USER_NAME="FOOBAR\\testuser" hdfs dfs -mkdir /test/testfile1
> $ hdfs dfs -ls /test
> Found 1 items
> drwxrwxr-x   - FOOBAR\testuser supergroup  0 2017-02-10 12:49 
> /test/testfile1
> $ HADOOP_USER_NAME="testuser" hdfs dfs -mkdir /test/testfile2
> $ HADOOP_USER_NAME="hdfs" hdfs dfs -chown "FOOBAR\\testuser" /test/testfile2
> -chown: 'FOOBAR\testuser' does not match expected pattern for [owner][:group].
> Usage: hadoop fs [generic options] -chown [-R] [OWNER][:[GROUP]] PATH...
> $
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14075) chown doesn't work with usernames containing '\' character

2017-02-13 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15864049#comment-15864049
 ] 

Wei-Chiu Chuang commented on HADOOP-14075:
--

Thanks for the patch, [~r1pp3rj4ck]. There are almost always failed tests in 
precommit, but the ones that failed here don't look related to your patch. To 
verify, can you run these failed tests locally to double-check?

> chown doesn't work with usernames containing '\' character
> --
>
> Key: HADOOP-14075
> URL: https://issues.apache.org/jira/browse/HADOOP-14075
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.6.0
>Reporter: Attila Bukor
>Assignee: Attila Bukor
> Attachments: HADOOP-14075.001.patch, HADOOP-14075.002.patch
>
>
> Usernames containing backslash (e.g. down-level logon names) seem to work 
> fine with Hadoop, except for chown.
> {code}
> $ HADOOP_USER_NAME="FOOBAR\\testuser" hdfs dfs -mkdir /test/testfile1
> $ hdfs dfs -ls /test
> Found 1 items
> drwxrwxr-x   - FOOBAR\testuser supergroup  0 2017-02-10 12:49 
> /test/testfile1
> $ HADOOP_USER_NAME="testuser" hdfs dfs -mkdir /test/testfile2
> $ HADOOP_USER_NAME="hdfs" hdfs dfs -chown "FOOBAR\\testuser" /test/testfile2
> -chown: 'FOOBAR\testuser' does not match expected pattern for [owner][:group].
> Usage: hadoop fs [generic options] -chown [-R] [OWNER][:[GROUP]] PATH...
> $
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14058) Fix NativeS3FileSystemContractBaseTest#testDirWithDifferentMarkersWorks

2017-02-13 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15863852#comment-15863852
 ] 

Steve Loughran commented on HADOOP-14058:
-

Afraid I will have to be strict and say "no tests, no review". Sorry, but we 
have to be consistent.

I'm not going near s3n for now; if anyone can do a test run, that'd be great.

> Fix NativeS3FileSystemContractBaseTest#testDirWithDifferentMarkersWorks
> ---
>
> Key: HADOOP-14058
> URL: https://issues.apache.org/jira/browse/HADOOP-14058
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3, test
>Reporter: Akira Ajisaka
>Assignee: Yiqun Lin
>  Labels: s3
> Attachments: HADOOP-14058.001.patch, 
> HADOOP-14058-HADOOP-13345.001.patch
>
>
> In NativeS3FileSystemContractBaseTest#testDirWithDifferentMarkersWorks, 
> {code}
>   else if (i == 3) {
> // test both markers
> store.storeEmptyFile(base + "_$folder$");
> store.storeEmptyFile(base + "/dir_$folder$");
> store.storeEmptyFile(base + "/");
> store.storeEmptyFile(base + "/dir/");
>   }
> {code}
> the above test code is not executed. In the following code:
> {code}
> for (int i = 0; i < 3; i++) {
> {code}
> < should be <=.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13075) Add support for SSE-KMS and SSE-C in s3a filesystem

2017-02-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15863855#comment-15863855
 ] 

ASF GitHub Bot commented on HADOOP-13075:
-

Github user orngejaket commented on the issue:

https://github.com/apache/hadoop/pull/183
  
Closing because changes are already pushed in.


> Add support for SSE-KMS and SSE-C in s3a filesystem
> ---
>
> Key: HADOOP-13075
> URL: https://issues.apache.org/jira/browse/HADOOP-13075
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Andrew Olson
>Assignee: Steve Moist
> Fix For: 2.9.0, 3.0.0-beta1
>
> Attachments: HADOOP-13075-001.patch, HADOOP-13075-002.patch, 
> HADOOP-13075-003.patch, HADOOP-13075-branch2.002.patch
>
>
> S3 provides 3 types of server-side encryption [1],
> * SSE-S3 (Amazon S3-Managed Keys) [2]
> * SSE-KMS (AWS KMS-Managed Keys) [3]
> * SSE-C (Customer-Provided Keys) [4]
> Of which the S3AFileSystem in hadoop-aws only supports opting into SSE-S3 
> (HADOOP-10568) -- the underlying aws-java-sdk makes that very simple [5]. 
> With native support in aws-java-sdk already available it should be fairly 
> straightforward [6],[7] to support the other two types of SSE with some 
> additional fs.s3a configuration properties.
> [1] http://docs.aws.amazon.com/AmazonS3/latest/dev/serv-side-encryption.html
> [2] 
> http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingServerSideEncryption.html
> [3] http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingKMSEncryption.html
> [4] 
> http://docs.aws.amazon.com/AmazonS3/latest/dev/ServerSideEncryptionCustomerKeys.html
> [5] http://docs.aws.amazon.com/AmazonS3/latest/dev/SSEUsingJavaSDK.html
> [6] 
> http://docs.aws.amazon.com/AmazonS3/latest/dev/kms-using-sdks.html#kms-using-sdks-java
> [7] http://docs.aws.amazon.com/AmazonS3/latest/dev/sse-c-using-java-sdk.html



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13075) Add support for SSE-KMS and SSE-C in s3a filesystem

2017-02-13 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15863854#comment-15863854
 ] 

ASF GitHub Bot commented on HADOOP-13075:
-

Github user orngejaket closed the pull request at:

https://github.com/apache/hadoop/pull/183


> Add support for SSE-KMS and SSE-C in s3a filesystem
> ---
>
> Key: HADOOP-13075
> URL: https://issues.apache.org/jira/browse/HADOOP-13075
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Andrew Olson
>Assignee: Steve Moist
> Fix For: 2.9.0, 3.0.0-beta1
>
> Attachments: HADOOP-13075-001.patch, HADOOP-13075-002.patch, 
> HADOOP-13075-003.patch, HADOOP-13075-branch2.002.patch
>
>
> S3 provides 3 types of server-side encryption [1],
> * SSE-S3 (Amazon S3-Managed Keys) [2]
> * SSE-KMS (AWS KMS-Managed Keys) [3]
> * SSE-C (Customer-Provided Keys) [4]
> Of which the S3AFileSystem in hadoop-aws only supports opting into SSE-S3 
> (HADOOP-10568) -- the underlying aws-java-sdk makes that very simple [5]. 
> With native support in aws-java-sdk already available it should be fairly 
> straightforward [6],[7] to support the other two types of SSE with some 
> additional fs.s3a configuration properties.
> [1] http://docs.aws.amazon.com/AmazonS3/latest/dev/serv-side-encryption.html
> [2] 
> http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingServerSideEncryption.html
> [3] http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingKMSEncryption.html
> [4] 
> http://docs.aws.amazon.com/AmazonS3/latest/dev/ServerSideEncryptionCustomerKeys.html
> [5] http://docs.aws.amazon.com/AmazonS3/latest/dev/SSEUsingJavaSDK.html
> [6] 
> http://docs.aws.amazon.com/AmazonS3/latest/dev/kms-using-sdks.html#kms-using-sdks-java
> [7] http://docs.aws.amazon.com/AmazonS3/latest/dev/sse-c-using-java-sdk.html



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13866) Upgrade netty-all to 4.1.1.Final

2017-02-13 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15863817#comment-15863817
 ] 

Ted Yu commented on HADOOP-13866:
-

[~xiaochen] [~andrew.wang] [~djp]:
Can this be resolved this week?

Let me know what else I need to do.

> Upgrade netty-all to 4.1.1.Final
> 
>
> Key: HADOOP-13866
> URL: https://issues.apache.org/jira/browse/HADOOP-13866
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha1
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Blocker
> Attachments: HADOOP-13866.v1.patch, HADOOP-13866.v2.patch, 
> HADOOP-13866.v3.patch, HADOOP-13866.v4.patch, HADOOP-13866.v6.patch, 
> HADOOP-13866.v7.patch, HADOOP-13866.v8.patch, HADOOP-13866.v8.patch, 
> HADOOP-13866.v8.patch, HADOOP-13866.v9.patch
>
>
> netty-all 4.1.1.Final is a stable release which we should upgrade to.
> See bottom of HADOOP-12927 for related discussion.
> This issue was discovered since hbase 2.0 uses 4.1.1.Final of netty.
> When launching mapreduce job from hbase, /grid/0/hadoop/yarn/local/  
> usercache/hbase/appcache/application_1479850535804_0008/container_e01_1479850535804_0008_01_05/mr-framework/hadoop/share/hadoop/hdfs/lib/netty-all-4.0.23.Final.jar
>  (from hdfs) is ahead of 4.1.1.Final jar (from hbase) on the classpath.
> Resulting in the following exception:
> {code}
> 2016-12-01 20:17:26,678 WARN [Default-IPC-NioEventLoopGroup-1-1] 
> io.netty.util.concurrent.DefaultPromise: An exception was thrown by 
> org.apache.hadoop.hbase.ipc.NettyRpcConnection$3.operationComplete()
> java.lang.NoSuchMethodError: 
> io.netty.buffer.ByteBuf.retainedDuplicate()Lio/netty/buffer/ByteBuf;
> at 
> org.apache.hadoop.hbase.ipc.NettyRpcConnection$3.operationComplete(NettyRpcConnection.java:272)
> at 
> org.apache.hadoop.hbase.ipc.NettyRpcConnection$3.operationComplete(NettyRpcConnection.java:262)
> at 
> io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:680)
> at 
> io.netty.util.concurrent.DefaultPromise.notifyListeners0(DefaultPromise.java:603)
> at 
> io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:563)
> at 
> io.netty.util.concurrent.DefaultPromise.trySuccess(DefaultPromise.java:406)
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14076) Allow Configuration to be persisted given path to file

2017-02-13 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15863813#comment-15863813
 ] 

Ted Yu commented on HADOOP-14076:
-

That's what I did using JNI:
{code}
void writeConf(jobject conf, const char *filepath)
{
  jclass class_fdesc = (*env)->FindClass(env, "java/io/FileDescriptor");
  // construct a new FileDescriptor
  jmethodID const_fdesc = (*env)->GetMethodID(env, class_fdesc, "<init>", 
"()V");

  jobject file = (*env)->NewObject(env, class_fdesc, const_fdesc);
  jfieldID field_fd = (*env)->GetFieldID(env, class_fdesc, "fd", "I");

  int fd = open(filepath, O_RDWR | O_NONBLOCK | O_CREAT, S_IRWXU);
  if (fd < 0) {
printf("Couldn't open file %s\n", filepath);
exit(-1);
  }
  (*env)->SetIntField(env, file, field_fd, fd);

  jclass cls_outstream = (*env)->FindClass(env, "java/io/FileOutputStream");
  jmethodID ctor_stream = (*env)->GetMethodID(env, cls_outstream, "<init>",
"(Ljava/io/FileDescriptor;)V");
  if (ctor_stream == NULL) {
printf("Couldn't get ctor for FileOutputStream\n");
exit(-1);
  }
  jobject file_outstream = (*env)->NewObject(env, cls_outstream, ctor_stream, 
file);
  if (file_outstream == NULL) {
printf("Couldn't create FileOutputStream\n");
exit(-1);
  }
  jclass class_conf = (*env)->FindClass(env, HADOOP_CONF);
  jmethodID writeXmlMid = (*env)->GetMethodID(env, class_conf, "writeXml",
"(Ljava/io/OutputStream;)V");
  (*env)->CallObjectMethod(env, conf, writeXmlMid, file_outstream);
}
{code}
The code is tedious (manipulating the fd field directly).

> Allow Configuration to be persisted given path to file
> --
>
> Key: HADOOP-14076
> URL: https://issues.apache.org/jira/browse/HADOOP-14076
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Ted Yu
>
> Currently Configuration has the following methods for persistence:
> {code}
>   public void writeXml(OutputStream out) throws IOException {
>   public void writeXml(Writer out) throws IOException {
> {code}
> Adding an API for persisting to a file, given its path, would be useful:
> {code}
>   public void writeXml(String path) throws IOException {
> {code}
> Background: I recently worked on exporting Configuration to a file using JNI.
> Without the proposed API, I resorted to a trick such as the following:
> http://www.kfu.com/~nsayer/Java/jni-filedesc.html



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14076) Allow Configuration to be persisted given path to file

2017-02-13 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15863804#comment-15863804
 ] 

Steve Loughran commented on HADOOP-14076:
-

Ted, what's wrong with just creating a FileOutputStream and calling writeXml? 
It's not that hard.
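
For reference, the plain-Java equivalent of that suggestion is only a few lines (a sketch; the output path is a placeholder):
{code}
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import org.apache.hadoop.conf.Configuration;

public class WriteConfExample {
  public static void main(String[] args) throws IOException {
    Configuration conf = new Configuration();
    try (OutputStream out = new FileOutputStream("/tmp/exported-conf.xml")) {
      conf.writeXml(out);  // existing OutputStream overload quoted in the description
    }
  }
}
{code}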

> Allow Configuration to be persisted given path to file
> --
>
> Key: HADOOP-14076
> URL: https://issues.apache.org/jira/browse/HADOOP-14076
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Ted Yu
>
> Currently Configuration has the following methods for persistence:
> {code}
>   public void writeXml(OutputStream out) throws IOException {
>   public void writeXml(Writer out) throws IOException {
> {code}
> Adding an API for persisting to a file, given its path, would be useful:
> {code}
>   public void writeXml(String path) throws IOException {
> {code}
> Background: I recently worked on exporting Configuration to a file using JNI.
> Without the proposed API, I resorted to a trick such as the following:
> http://www.kfu.com/~nsayer/Java/jni-filedesc.html



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13075) Add support for SSE-KMS and SSE-C in s3a filesystem

2017-02-13 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15863761#comment-15863761
 ] 

Steve Loughran commented on HADOOP-13075:
-

Not my patch:  [~fed...@gmail.com] & [~moist]: hope they got the credits on the 
commit

> Add support for SSE-KMS and SSE-C in s3a filesystem
> ---
>
> Key: HADOOP-13075
> URL: https://issues.apache.org/jira/browse/HADOOP-13075
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Andrew Olson
>Assignee: Steve Moist
> Fix For: 2.9.0, 3.0.0-beta1
>
> Attachments: HADOOP-13075-001.patch, HADOOP-13075-002.patch, 
> HADOOP-13075-003.patch, HADOOP-13075-branch2.002.patch
>
>
> S3 provides 3 types of server-side encryption [1],
> * SSE-S3 (Amazon S3-Managed Keys) [2]
> * SSE-KMS (AWS KMS-Managed Keys) [3]
> * SSE-C (Customer-Provided Keys) [4]
> Of which the S3AFileSystem in hadoop-aws only supports opting into SSE-S3 
> (HADOOP-10568) -- the underlying aws-java-sdk makes that very simple [5]. 
> With native support in aws-java-sdk already available it should be fairly 
> straightforward [6],[7] to support the other two types of SSE with some 
> additional fs.s3a configuration properties.
> [1] http://docs.aws.amazon.com/AmazonS3/latest/dev/serv-side-encryption.html
> [2] 
> http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingServerSideEncryption.html
> [3] http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingKMSEncryption.html
> [4] 
> http://docs.aws.amazon.com/AmazonS3/latest/dev/ServerSideEncryptionCustomerKeys.html
> [5] http://docs.aws.amazon.com/AmazonS3/latest/dev/SSEUsingJavaSDK.html
> [6] 
> http://docs.aws.amazon.com/AmazonS3/latest/dev/kms-using-sdks.html#kms-using-sdks-java
> [7] http://docs.aws.amazon.com/AmazonS3/latest/dev/sse-c-using-java-sdk.html
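
As a rough illustration of how a client could opt in, here is a sketch that sets the existing {{fs.s3a.server-side-encryption-algorithm}} property; the SSE-KMS value and the key property name are assumptions to be checked against the committed patch, not confirmed configuration keys:
{code}
import org.apache.hadoop.conf.Configuration;

public class S3ASseKmsExample {
  public static void main(String[] args) {
    Configuration conf = new Configuration();
    // Existing property used today to opt into SSE-S3 (value "AES256").
    // The value and the key property below are assumptions for SSE-KMS.
    conf.set("fs.s3a.server-side-encryption-algorithm", "SSE-KMS");
    conf.set("fs.s3a.server-side-encryption.key", "your-kms-key-arn");
    // An S3A FileSystem created from this conf would then encrypt new
    // objects server-side with the configured KMS key.
  }
}
{code}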



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13233) help of stat is confusing

2017-02-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15863591#comment-15863591
 ] 

Hadoop QA commented on HADOOP-13233:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
29s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 
37s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 29s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 1 new + 48 unchanged - 1 fixed = 49 total (was 49) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 34s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 55m 45s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.security.TestRaceWhenRelogin |
|   | hadoop.security.TestKDiag |
|   | hadoop.net.TestDNS |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-13233 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12852331/HADOOP-13233.002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux a4ac6f4c9fea 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 243c0f3 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11616/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11616/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11616/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11616/console |
| Powered by | Apache Yetus 0.5.0-SNAPSHOT   

[jira] [Updated] (HADOOP-13233) help of stat is confusing

2017-02-13 Thread Attila Bukor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13233?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Bukor updated HADOOP-13233:
--
Attachment: HADOOP-13233.002.patch

> help of stat is confusing
> -
>
> Key: HADOOP-13233
> URL: https://issues.apache.org/jira/browse/HADOOP-13233
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.2
>Reporter: Xiaohe Lan
>Assignee: Attila Bukor
>Priority: Trivial
>  Labels: newbie
> Attachments: HADOOP-13233.001.patch, HADOOP-13233.002.patch
>
>
> %b actually prints the size of a file in bytes, while the help says it is the
> filesize in blocks.
> {code}
> hdfs dfs -help stat
> -stat [format] <path> ... :
>   Print statistics about the file/directory at <path>
>   in the specified format. Format accepts filesize in
>   blocks (%b)
> ...
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14077) Improve the patch of HADOOP-13119

2017-02-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15863217#comment-15863217
 ] 

Hadoop QA commented on HADOOP-14077:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 17m 
52s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
42s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m  
5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
53s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
35s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
35s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
 8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
57s{color} | {color:red} 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common 
generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
43s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 19s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
38s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m  
9s{color} | {color:green} hadoop-mapreduce-client-app in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
38s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}119m  7s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | 
module:hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common
 |
|  |  Redundant nullcheck of callerUGI, which is known to be non-null in 
org.apache.hadoop.yarn.server.webapp.AppBlock.render(HtmlBlock$Block)  
Redundant null check at AppBlock.java:is known to be non-null in 
org.apache.hadoop.yarn.server.webapp.AppBlock.render(HtmlBlock$Block)  
Redundant null check at AppBlock.java:[line 235] |
| Failed junit tests | hadoop.security.TestKDiag |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-14077 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12852285/HADOOP-14077.001.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 7efadf6e87cf 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 

[jira] [Commented] (HADOOP-14077) Improve the patch of HADOOP-13119

2017-02-13 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15863341#comment-15863341
 ] 

Hadoop QA commented on HADOOP-14077:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 12m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
31s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
58s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
51s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
36s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
16s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 11m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 11m  
2s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 36s{color} | {color:orange} root: The patch generated 1 new + 59 unchanged - 
7 fixed = 60 total (was 66) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
6s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
 7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
45s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 10s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
39s{color} | {color:green} hadoop-yarn-server-common in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m  
0s{color} | {color:green} hadoop-mapreduce-client-app in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
38s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}100m 56s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.security.TestRaceWhenRelogin |
|   | hadoop.security.TestKDiag |
|   | hadoop.net.TestDNS |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:a9ad5d6 |
| JIRA Issue | HADOOP-14077 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12852302/HADOOP-14077.002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 478d8f71a41f 3.13.0-106-generic #153-Ubuntu SMP Tue Dec 6 
15:44:32 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 243c0f3 |
| Default Java | 1.8.0_121 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/11615/artifact/patchprocess/diff-checkstyle-root.txt
 |
| unit | 

[jira] [Updated] (HADOOP-14077) Improve the patch of HADOOP-13119

2017-02-13 Thread Yuanbo Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuanbo Liu updated HADOOP-14077:

Attachment: HADOOP-14077.001.patch

> Improve the patch of HADOOP-13119
> -
>
> Key: HADOOP-14077
> URL: https://issues.apache.org/jira/browse/HADOOP-14077
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Yuanbo Liu
>Assignee: Yuanbo Liu
> Attachments: HADOOP-14077.001.patch
>
>
> For some links (such as "/jmx" and "/stack"), blocking the links in the filter
> chain due to an impersonation issue is not friendly for users. For example,
> user "sam" is not allowed to be impersonated by user "knox", and the link
> "/jmx" doesn't need any user to do authorization by default. It only needs
> user "knox" to do authentication, so in this case it's not right to block the
> access in the SPNEGO filter. We intend to check the impersonation permission
> when the request's "getRemoteUser" method is used, so that such links ("/jmx",
> "/stack") would not be blocked by mistake.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-14072) AliyunOSS: Failed to read from stream when seek beyond the download size.

2017-02-13 Thread Genmao Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15863348#comment-15863348
 ] 

Genmao Yu edited comment on HADOOP-14072 at 2/13/17 8:57 AM:
-

pending for [~drankye] 's review


was (Author: unclegen):
pending for @Kai Zheng 's review

> AliyunOSS: Failed to read from stream when seek beyond the download size.
> -
>
> Key: HADOOP-14072
> URL: https://issues.apache.org/jira/browse/HADOOP-14072
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/oss
>Affects Versions: 3.0.0-alpha2
>Reporter: Genmao Yu
>Assignee: Genmao Yu
> Attachments: HADOOP-14072.001.patch
>
>
> {code}
> public synchronized void seek(long pos) throws IOException {
> checkNotClosed();
> if (position == pos) {
>   return;
> } else if (pos > position && pos < position + partRemaining) {
>   AliyunOSSUtils.skipFully(wrappedStream, pos - position);
>   position = pos;
> } else {
>   reopen(pos);
> }
>   }
> {code}
> In seek function, we need to update the partRemaining when the seeking 
> position is located in downloaded part.
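
A sketch of the fix described above, assuming the only change needed is to keep {{partRemaining}} in sync when skipping inside the already-downloaded part (illustrative; see the attached patch for the actual change):
{code}
  public synchronized void seek(long pos) throws IOException {
    checkNotClosed();
    if (position == pos) {
      return;
    } else if (pos > position && pos < position + partRemaining) {
      long len = pos - position;
      AliyunOSSUtils.skipFully(wrappedStream, len);
      position = pos;
      // The missing bookkeeping: the downloaded part now has fewer bytes left.
      partRemaining -= len;
    } else {
      reopen(pos);
    }
  }
{code}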



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Work started] (HADOOP-13233) help of stat is confusing

2017-02-13 Thread Attila Bukor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13233?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-13233 started by Attila Bukor.
-
> help of stat is confusing
> -
>
> Key: HADOOP-13233
> URL: https://issues.apache.org/jira/browse/HADOOP-13233
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.2
>Reporter: Xiaohe Lan
>Assignee: Attila Bukor
>Priority: Trivial
>  Labels: newbie
> Attachments: HADOOP-13233.001.patch, HADOOP-13233.002.patch
>
>
> %b actually prints the size of a file in bytes, while the help says it is the
> filesize in blocks.
> {code}
> hdfs dfs -help stat
> -stat [format] <path> ... :
>   Print statistics about the file/directory at <path>
>   in the specified format. Format accepts filesize in
>   blocks (%b)
> ...
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14069) AliyunOSS: listStatus returns wrong file info

2017-02-13 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HADOOP-14069:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-alpha3
   Status: Resolved  (was: Patch Available)

+1 and committed to trunk. Thanks Fei Hui and Genmao.

> AliyunOSS: listStatus returns wrong file info
> -
>
> Key: HADOOP-14069
> URL: https://issues.apache.org/jira/browse/HADOOP-14069
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/oss
>Affects Versions: 3.0.0-alpha2
>Reporter: Fei Hui
>Assignee: Fei Hui
> Fix For: 3.0.0-alpha3
>
> Attachments: HADOOP-14069.001.patch
>
>
> When I use the command 'hadoop fs -ls oss://oss-for-hadoop-sh/', I find that
> the listed info is wrong:
> {quote}
> $bin/hadoop fs -ls oss://oss-for-hadoop-sh/ 
> Found 1 items
> drwxrwxrwx   -  0 1970-01-01 08:00 oss://oss-for-hadoop-sh/test00
> {quote}
> The modification time is wrong; it should not be 1970-01-01 08:00.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12404) Disable caching for JarURLConnection to avoid sharing JarFile with other users when loading resource from URL in Configuration class.

2017-02-13 Thread zhihai xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15863164#comment-15863164
 ] 

zhihai xu commented on HADOOP-12404:


[~anishek] We see this issue in hiveserver2 logs; all the queries share one JVM 
in different threads in hiveserver2. The finalizer will also close the 
InputStream of the ZipFile class, so it also depends on when garbage collection 
happens. Normally we saw this issue happen several times per day in a 
hiveserver2 instance that runs thousands of queries per day. I thought loading 
jar files should be very quick, which is why it happens rarely. After we 
disabled caching, this issue didn't happen any more. Did you see this issue as 
well? What is your environment?
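
For reference, a minimal sketch of the caching workaround discussed here: disable JarFile caching before reading a {{jar:}} URL, so that a shared JarFile closed by another caller cannot invalidate this stream (illustrative, not the exact hunk in the attached patch):
{code}
import java.io.IOException;
import java.io.InputStream;
import java.net.JarURLConnection;
import java.net.URL;
import java.net.URLConnection;

public class UncachedJarResource {
  /** Open a resource URL without sharing a cached JarFile with other callers. */
  public static InputStream open(URL url) throws IOException {
    URLConnection connection = url.openConnection();
    if (connection instanceof JarURLConnection) {
      // Without this, the underlying JarFile is cached and shared; another
      // user closing it would close our InputStream as well.
      connection.setUseCaches(false);
    }
    return connection.getInputStream();
  }
}
{code}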

> Disable caching for JarURLConnection to avoid sharing JarFile with other 
> users when loading resource from URL in Configuration class.
> -
>
> Key: HADOOP-12404
> URL: https://issues.apache.org/jira/browse/HADOOP-12404
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: conf
>Reporter: zhihai xu
>Assignee: zhihai xu
>Priority: Minor
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: HADOOP-12404.000.patch
>
>
> Disable caching for JarURLConnection to avoid sharing JarFile with other 
> users when loading resource from URL in Configuration class.
> Currently {{Configuration#parse}} will call {{url.openStream}} to get the 
> InputStream for {{DocumentBuilder}} to parse.
> Based on the JDK source code, the calling sequence is 
> url.openStream => 
> [handler.openConnection.getInputStream|http://grepcode.com/file/repository.grepcode.com/java/root/jdk/openjdk/6-b14/sun/net/www/protocol/jar/Handler.java]
>  => [new 
> JarURLConnection|http://grepcode.com/file/repository.grepcode.com/java/root/jdk/openjdk/6-b14/sun/net/www/protocol/jar/JarURLConnection.java#JarURLConnection]
>  => JarURLConnection.connect => [factory.get(getJarFileURL(), 
> getUseCaches())|http://grepcode.com/file/repository.grepcode.com/java/root/jdk/openjdk/6-b14/sun/net/www/protocol/jar/JarFileFactory.java]
>  =>  
> [URLJarFile.getInputStream|http://grepcode.com/file/repository.grepcode.com/java/root/jdk/openjdk/6-b14/sun/net/www/protocol/jar/URLJarFile.java#URLJarFile.getJarFile%28java.net.URL%2Csun.net.www.protocol.jar.URLJarFile.URLJarFileCloseController%29]=>[JarFile.getInputStream|http://grepcode.com/file/repository.grepcode.com/java/root/jdk/openjdk/6-b14/java/util/jar/JarFile.java#JarFile.getInputStream%28java.util.zip.ZipEntry%29]=>ZipFile.getInputStream
> If {{URLConnection#getUseCaches}} is true (by default), URLJarFile will be 
> shared for the same URL. If the shared URLJarFile is closed by other users, 
> all the InputStream returned by URLJarFile#getInputStream will be closed 
> based on the 
> [document|http://docs.oracle.com/javase/7/docs/api/java/util/zip/ZipFile.html#getInputStream(java.util.zip.ZipEntry)]
> So we saw the following exception in a heavy-load system at rare situation 
> which cause a hive job failed 
> {code}
> 2014-10-21 23:44:41,856 ERROR org.apache.hadoop.hive.ql.exec.Task: Ended 
> Job = job_1413909398487_3696 with exception 
> 'java.lang.RuntimeException(java.io.IOException: Stream closed)' 
> java.lang.RuntimeException: java.io.IOException: Stream closed 
> at 
> org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2484) 
> at 
> org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2337) 
> at 
> org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2254) 
> at org.apache.hadoop.conf.Configuration.get(Configuration.java:861) 
> at 
> org.apache.hadoop.mapred.JobConf.checkAndWarnDeprecation(JobConf.java:2030) 
> at org.apache.hadoop.mapred.JobConf.<init>(JobConf.java:479) 
> at org.apache.hadoop.mapred.JobConf.<init>(JobConf.java:469) 
> at org.apache.hadoop.mapreduce.Cluster.getJob(Cluster.java:187) 
> at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:582) 
> at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:580) 
> at java.security.AccessController.doPrivileged(Native Method) 
> at javax.security.auth.Subject.doAs(Subject.java:415) 
> at 
> org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614) 
> at 
> org.apache.hadoop.mapred.JobClient.getJobUsingCluster(JobClient.java:580) 
> at org.apache.hadoop.mapred.JobClient.getJob(JobClient.java:598) 
> at 
> org.apache.hadoop.hive.ql.exec.mr.HadoopJobExecHelper.progress(HadoopJobExecHelper.java:288) 
> at 
> org.apache.hadoop.hive.ql.exec.mr.HadoopJobExecHelper.progress(HadoopJobExecHelper.java:547) 
> at 
> org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:426) 
> at 
> 

[jira] [Updated] (HADOOP-13769) AliyunOSS: update oss sdk version

2017-02-13 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13769?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HADOOP-13769:
---
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

+1 and committed to trunk. Thanks Genmao and Steve!

> AliyunOSS: update oss sdk version
> -
>
> Key: HADOOP-13769
> URL: https://issues.apache.org/jira/browse/HADOOP-13769
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs, fs/oss
>Affects Versions: 3.0.0-alpha2
>Reporter: Genmao Yu
>Assignee: Genmao Yu
> Fix For: 3.0.0-alpha3
>
> Attachments: HADOOP-13769.001.patch, HADOOP-13769.002.patch
>
>
>  -AliyunOSS object inputstream.close() will read the remaining bytes of the 
> OSS object, potentially transferring a lot of bytes from OSS that are 
> discarded.-
> Just update the OSS SDK version. It fixes many bugs, including this 
> "inputstream.close()" performance issue.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14077) Improve the patch of HADOOP-13119

2017-02-13 Thread Yuanbo Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuanbo Liu updated HADOOP-14077:

Attachment: HADOOP-14077.002.patch

Uploaded v2 patch to address the findbugs issue.

> Improve the patch of HADOOP-13119
> -
>
> Key: HADOOP-14077
> URL: https://issues.apache.org/jira/browse/HADOOP-14077
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Yuanbo Liu
>Assignee: Yuanbo Liu
> Attachments: HADOOP-14077.001.patch, HADOOP-14077.002.patch
>
>
> For some links (such as "/jmx" and "/stack"), blocking the links in the filter
> chain due to an impersonation issue is not friendly for users. For example,
> user "sam" is not allowed to be impersonated by user "knox", and the link
> "/jmx" doesn't need any user to do authorization by default. It only needs
> user "knox" to do authentication, so in this case it's not right to block the
> access in the SPNEGO filter. We intend to check the impersonation permission
> when the request's "getRemoteUser" method is used, so that such links ("/jmx",
> "/stack") would not be blocked by mistake.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-14077) Improve the patch of HADOOP-13119

2017-02-13 Thread Yuanbo Liu (JIRA)
Yuanbo Liu created HADOOP-14077:
---

 Summary: Improve the patch of HADOOP-13119
 Key: HADOOP-14077
 URL: https://issues.apache.org/jira/browse/HADOOP-14077
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Yuanbo Liu
Assignee: Yuanbo Liu


For some links (such as "/jmx" and "/stack"), blocking the links in the filter 
chain due to an impersonation issue is not friendly for users. For example, user 
"sam" is not allowed to be impersonated by user "knox", and the link "/jmx" 
doesn't need any user to do authorization by default. It only needs user "knox" 
to do authentication, so in this case it's not right to block the access in the 
SPNEGO filter. We intend to check the impersonation permission when the 
request's "getRemoteUser" method is used, so that such links ("/jmx", "/stack") 
would not be blocked by mistake.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13769) AliyunOSS: update oss sdk version

2017-02-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13769?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15863232#comment-15863232
 ] 

Hudson commented on HADOOP-13769:
-

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #11237 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11237/])
HADOOP-13769. AliyunOSS: update oss sdk version. Contributed by Genmao 
(kai.zheng: rev 243c0f33ec2559c1b727f6c7ea73625df3ac3a43)
* (edit) hadoop-project/pom.xml


> AliyunOSS: update oss sdk version
> -
>
> Key: HADOOP-13769
> URL: https://issues.apache.org/jira/browse/HADOOP-13769
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs, fs/oss
>Affects Versions: 3.0.0-alpha2
>Reporter: Genmao Yu
>Assignee: Genmao Yu
> Fix For: 3.0.0-alpha3
>
> Attachments: HADOOP-13769.001.patch, HADOOP-13769.002.patch
>
>
>  -AliyunOSS object inputstream.close() will read the remaining bytes of the 
> OSS object, potentially transferring a lot of bytes from OSS that are 
> discarded.-
> Just update the OSS SDK version. It fixes many bugs, including this 
> "inputstream.close()" performance issue.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13233) help of stat is confusing

2017-02-13 Thread Attila Bukor (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15863512#comment-15863512
 ] 

Attila Bukor commented on HADOOP-13233:
---

Thanks for the feedback [~surendrasingh], I've uploaded the new patch.

> help of stat is confusing
> -
>
> Key: HADOOP-13233
> URL: https://issues.apache.org/jira/browse/HADOOP-13233
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.2
>Reporter: Xiaohe Lan
>Assignee: Attila Bukor
>Priority: Trivial
>  Labels: newbie
> Attachments: HADOOP-13233.001.patch, HADOOP-13233.002.patch
>
>
> %b actually prints the size of a file in bytes, while the help says it is the
> filesize in blocks.
> {code}
> hdfs dfs -help stat
> -stat [format] <path> ... :
>   Print statistics about the file/directory at <path>
>   in the specified format. Format accepts filesize in
>   blocks (%b)
> ...
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13233) help of stat is confusing

2017-02-13 Thread Attila Bukor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13233?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Bukor updated HADOOP-13233:
--
Status: Patch Available  (was: In Progress)

> help of stat is confusing
> -
>
> Key: HADOOP-13233
> URL: https://issues.apache.org/jira/browse/HADOOP-13233
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.2
>Reporter: Xiaohe Lan
>Assignee: Attila Bukor
>Priority: Trivial
>  Labels: newbie
> Attachments: HADOOP-13233.001.patch, HADOOP-13233.002.patch
>
>
> %b actually prints the size of a file in bytes, while the help says it is the
> filesize in blocks.
> {code}
> hdfs dfs -help stat
> -stat [format] <path> ... :
>   Print statistics about the file/directory at <path>
>   in the specified format. Format accepts filesize in
>   blocks (%b)
> ...
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14077) Improve the patch of HADOOP-13119

2017-02-13 Thread Yuanbo Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15863150#comment-15863150
 ] 

Yuanbo Liu commented on HADOOP-14077:
-

Also fixed some inappropriate handling of a null-pointer condition in the YARN 
app controller.

> Improve the patch of HADOOP-13119
> -
>
> Key: HADOOP-14077
> URL: https://issues.apache.org/jira/browse/HADOOP-14077
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Yuanbo Liu
>Assignee: Yuanbo Liu
>
> For some links (such as "/jmx" and "/stack"), blocking the links in the filter
> chain due to an impersonation issue is not friendly for users. For example,
> user "sam" is not allowed to be impersonated by user "knox", and the link
> "/jmx" doesn't need any user to do authorization by default. It only needs
> user "knox" to do authentication, so in this case it's not right to block the
> access in the SPNEGO filter. We intend to check the impersonation permission
> when the request's "getRemoteUser" method is used, so that such links ("/jmx",
> "/stack") would not be blocked by mistake.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14077) Improve the patch of HADOOP-13119

2017-02-13 Thread Yuanbo Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuanbo Liu updated HADOOP-14077:

Status: Patch Available  (was: Open)

> Improve the patch of HADOOP-13119
> -
>
> Key: HADOOP-14077
> URL: https://issues.apache.org/jira/browse/HADOOP-14077
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Yuanbo Liu
>Assignee: Yuanbo Liu
> Attachments: HADOOP-14077.001.patch
>
>
> For some links (such as "/jmx" and "/stack"), blocking the links in the filter
> chain due to an impersonation issue is not friendly for users. For example,
> user "sam" is not allowed to be impersonated by user "knox", and the link
> "/jmx" doesn't need any user to do authorization by default. It only needs
> user "knox" to do authentication, so in this case it's not right to block the
> access in the SPNEGO filter. We intend to check the impersonation permission
> when the request's "getRemoteUser" method is used, so that such links ("/jmx",
> "/stack") would not be blocked by mistake.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14072) AliyunOSS: Failed to read from stream when seek beyond the download size.

2017-02-13 Thread Genmao Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15863348#comment-15863348
 ] 

Genmao Yu commented on HADOOP-14072:


pending for @Kai Zheng 's review

> AliyunOSS: Failed to read from stream when seek beyond the download size.
> -
>
> Key: HADOOP-14072
> URL: https://issues.apache.org/jira/browse/HADOOP-14072
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/oss
>Affects Versions: 3.0.0-alpha2
>Reporter: Genmao Yu
>Assignee: Genmao Yu
> Attachments: HADOOP-14072.001.patch
>
>
> {code}
> public synchronized void seek(long pos) throws IOException {
> checkNotClosed();
> if (position == pos) {
>   return;
> } else if (pos > position && pos < position + partRemaining) {
>   AliyunOSSUtils.skipFully(wrappedStream, pos - position);
>   position = pos;
> } else {
>   reopen(pos);
> }
>   }
> {code}
> In seek function, we need to update the partRemaining when the seeking 
> position is located in downloaded part.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14069) AliyunOSS: listStatus returns wrong file info

2017-02-13 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15863231#comment-15863231
 ] 

Hudson commented on HADOOP-14069:
-

FAILURE: Integrated in Jenkins build Hadoop-trunk-Commit #11237 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/11237/])
HADOOP-14069. AliyunOSS: listStatus returns wrong file info. Contributed 
(kai.zheng: rev 01be4503c3b053d2cff0b179774dabfd267877db)
* (edit) 
hadoop-tools/hadoop-aliyun/src/main/java/org/apache/hadoop/fs/aliyun/oss/AliyunOSSFileSystem.java
* (edit) 
hadoop-tools/hadoop-aliyun/src/test/java/org/apache/hadoop/fs/aliyun/oss/TestAliyunOSSFileSystemContract.java


> AliyunOSS: listStatus returns wrong file info
> -
>
> Key: HADOOP-14069
> URL: https://issues.apache.org/jira/browse/HADOOP-14069
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/oss
>Affects Versions: 3.0.0-alpha2
>Reporter: Fei Hui
>Assignee: Fei Hui
> Fix For: 3.0.0-alpha3
>
> Attachments: HADOOP-14069.001.patch
>
>
> When I use the command 'hadoop fs -ls oss://oss-for-hadoop-sh/', I find that
> the listed info is wrong:
> {quote}
> $bin/hadoop fs -ls oss://oss-for-hadoop-sh/ 
> Found 1 items
> drwxrwxrwx   -  0 1970-01-01 08:00 oss://oss-for-hadoop-sh/test00
> {quote}
> The modification time is wrong; it should not be 1970-01-01 08:00.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13233) help of stat is confusing

2017-02-13 Thread Attila Bukor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13233?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Attila Bukor updated HADOOP-13233:
--
Status: Open  (was: Patch Available)

> help of stat is confusing
> -
>
> Key: HADOOP-13233
> URL: https://issues.apache.org/jira/browse/HADOOP-13233
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.2
>Reporter: Xiaohe Lan
>Assignee: Attila Bukor
>Priority: Trivial
>  Labels: newbie
> Attachments: HADOOP-13233.001.patch, HADOOP-13233.002.patch
>
>
> %b actually prints the size of a file in bytes, while the help says it is the
> filesize in blocks.
> {code}
> hdfs dfs -help stat
> -stat [format] <path> ... :
>   Print statistics about the file/directory at <path>
>   in the specified format. Format accepts filesize in
>   blocks (%b)
> ...
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13578) Add Codec for ZStandard Compression

2017-02-13 Thread James_Luan (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15863517#comment-15863517
 ] 

James_Luan commented on HADOOP-13578:
-

Hi there, is there any schedule for backporting zstd to 2.7? I'd be glad to test 
zstd on version 2.7.

> Add Codec for ZStandard Compression
> ---
>
> Key: HADOOP-13578
> URL: https://issues.apache.org/jira/browse/HADOOP-13578
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: churro morales
>Assignee: churro morales
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13578-branch-2.v9.patch, HADOOP-13578.patch, 
> HADOOP-13578.v1.patch, HADOOP-13578.v2.patch, HADOOP-13578.v3.patch, 
> HADOOP-13578.v4.patch, HADOOP-13578.v5.patch, HADOOP-13578.v6.patch, 
> HADOOP-13578.v7.patch, HADOOP-13578.v8.patch, HADOOP-13578.v9.patch
>
>
> ZStandard: https://github.com/facebook/zstd has been used in production for 6 
> months by facebook now.  v1.0 was recently released.  Create a codec for this 
> library.  



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13233) help of stat is confusing

2017-02-13 Thread Surendra Singh Lilhore (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15863273#comment-15863273
 ] 

Surendra Singh Lilhore commented on HADOOP-13233:
-

Thanks [~r1pp3rj4ck] for the patch.
Mostly looks good to me. Would you update {{FileSystemShell.md}} as well? I'm 
+1 (non-binding) if that is addressed.

> help of stat is confusing
> -
>
> Key: HADOOP-13233
> URL: https://issues.apache.org/jira/browse/HADOOP-13233
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.2
>Reporter: Xiaohe Lan
>Assignee: Attila Bukor
>Priority: Trivial
>  Labels: newbie
> Attachments: HADOOP-13233.001.patch
>
>
> %b actually prints the size of a file in bytes, while the help says it is the
> filesize in blocks.
> {code}
> hdfs dfs -help stat
> -stat [format] <path> ... :
>   Print statistics about the file/directory at <path>
>   in the specified format. Format accepts filesize in
>   blocks (%b)
> ...
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org