[jira] [Commented] (HADOOP-12570) HDFS Secure Mode Documentation updates

2015-12-16 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15060652#comment-15060652
 ] 

Steve Loughran commented on HADOOP-12570:
-

yeah, find a way to make my ZK client code not leave GSS API error 44 in the 
ZK logs

> HDFS Secure Mode Documentation updates
> --
>
> Key: HADOOP-12570
> URL: https://issues.apache.org/jira/browse/HADOOP-12570
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 2.7.1
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: HDFS-9254.01.patch, HDFS-9254.02.patch, 
> HDFS-9254.03.patch, HDFS-9254.04.patch
>
>
> Some Kerberos configuration parameters are not documented well enough. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HADOOP-12455) fs.Globber breaks on colon in filename; doesn't use Path's handling for colons

2015-12-16 Thread Rich Haase (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12455?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rich Haase reassigned HADOOP-12455:
---

Assignee: Rich Haase

> fs.Globber breaks on colon in filename; doesn't use Path's handling for colons
> --
>
> Key: HADOOP-12455
> URL: https://issues.apache.org/jira/browse/HADOOP-12455
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Reporter: Daniel Barclay (Drill)
>Assignee: Rich Haase
>
> {{org.apache.hadoop.fs.Globber.glob()}} breaks when a searched directory 
> contains a file whose simple name contains a colon.
> The problem seems to be in the code currently at lines 257 and 258 
> [https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/Globber.java#L257]:
> {noformat}
> 256:  // Set the child path based on the parent path.
> 257:  child.setPath(new Path(candidate.getPath(),
> 258:  child.getPath().getName()));
> {noformat}
> That last line should probably be:
> {noformat}
>   new Path(null, null, child.getPath().getName())));
> {noformat}
> 
> The bug in the current code is that:
> 1) {{child.getPath().getName()}} gets the simple name (last segment) of the 
> child {{Path}} as a _raw_ string (not necessarily the corresponding relative 
> _{{Path}}_ string), and
> 2) that raw string is passed as {{Path(Path, String)}}'s second argument, 
> which takes a _{{Path}}_ string.
> When that raw string contains a colon (e.g., {{xxx:yyy}}), it looks like a 
> {{Path}} string that specifies a scheme ("{{xxx}}") and has a relative path 
> "{{yyy}}"--but that combination isn't allowed, so trying to construct a 
> {{Path}} with it (as {{Path(Path, String)}} does internally) throws an 
> exception, aborting the entire {{glob()}} call.
> 
> Adding the call to {{Path(String, String, String)}} does the equivalent of 
> converting the raw string "{{xxx:yyy}}" to the {{Path}} string 
> "{{./xxx:yyy}}", so the part before the colon is not taken as a scheme.
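The misparse can be reproduced with plain {{java.net.URI}}, which Hadoop's {{Path}} is built on; a minimal sketch of the parsing behavior, not Hadoop code:

```java
import java.net.URI;

public class ColonDemo {
    public static void main(String[] args) {
        // A raw name containing a colon parses as scheme + opaque part,
        // analogous to how Path(Path, String) misreads "xxx:yyy".
        URI bare = URI.create("xxx:yyy");
        System.out.println(bare.getScheme()); // xxx (misparsed as a scheme)

        // Prefixing "./" keeps the colon inside a later path segment, so no
        // scheme is detected; this is effectively what the
        // Path(String, String, String) constructor achieves for such names.
        URI prefixed = URI.create("./xxx:yyy");
        System.out.println(prefixed.getScheme()); // null (relative reference)
        System.out.println(prefixed.getPath());   // ./xxx:yyy
    }
}
```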





[jira] [Updated] (HADOOP-12608) Fix error message in WASB in connecting through Anonymous Credential codepath

2015-12-16 Thread Dushyanth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dushyanth updated HADOOP-12608:
---
Attachment: HADOOP-12608.004.patch

Fixed the issue with creating the patch; attaching the corrected patch.

> Fix error message in WASB in connecting through Anonymous Credential codepath
> -
>
> Key: HADOOP-12608
> URL: https://issues.apache.org/jira/browse/HADOOP-12608
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 2.8.0
>Reporter: Dushyanth
>Assignee: Dushyanth
> Fix For: 2.8.0
>
> Attachments: HADOOP-12608.001.patch, HADOOP-12608.002.patch, 
> HADOOP-12608.003.patch, HADOOP-12608.004.patch
>
>
> Users of WASB have raised complaints about the error message returned from 
> WASB when they try to connect to Azure Storage with anonymous credentials. 
> The current implementation returns the correct message when a 
> StorageException is encountered. However, in scenarios such as querying 
> whether a container exists with a directly specified URI (anonymous access), 
> no StorageException is thrown; the check simply returns false, and the 
> resulting error message does not clearly state that credentials for the 
> storage account were not provided. This JIRA tracks fixing that error 
> message to match the one returned when a StorageException is hit, and also 
> corrects spelling mistakes in the error message.





[jira] [Commented] (HADOOP-12570) HDFS Secure Mode Documentation updates

2015-12-16 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15060487#comment-15060487
 ] 

Arpit Agarwal commented on HADOOP-12570:


[~ste...@apache.org], any way I can convince you to review the latest patch? I 
think it addresses your feedback.

> HDFS Secure Mode Documentation updates
> --
>
> Key: HADOOP-12570
> URL: https://issues.apache.org/jira/browse/HADOOP-12570
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 2.7.1
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: HDFS-9254.01.patch, HDFS-9254.02.patch, 
> HDFS-9254.03.patch, HDFS-9254.04.patch
>
>
> Some Kerberos configuration parameters are not documented well enough. 





[jira] [Commented] (HADOOP-12604) Exception may be swallowed in KMSClientProvider

2015-12-16 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15060634#comment-15060634
 ] 

Zhe Zhang commented on HADOOP-12604:


Thanks Yongjun for the work and Steve for the review.

The latest patch LGTM. +1.

> Exception may be swallowed in KMSClientProvider
> ---
>
> Key: HADOOP-12604
> URL: https://issues.apache.org/jira/browse/HADOOP-12604
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
>  Labels: supportability
> Attachments: HADOOP-12604.001.patch, HADOOP-12604.002.patch
>
>
> In KMSClientProvider# createConnection
> {code}
>   try {
> is = conn.getInputStream();
> ret = mapper.readValue(is, klass);
>   } catch (IOException ex) {
> if (is != null) {
>   is.close(); <== close may throw exception
> }
> throw ex;
>   } finally {
> if (is != null) {
>   is.close();
> }
>   }
> }
> {code}
> {{ex}} may be swallowed when the {{close}} highlighted in the code throws an 
> exception. Thanks [~qwertymaniac] for pointing this out.
> BTW, I think we should be able to consolidate the two {{is.close()}} calls in 
> the above code so we don't close the same stream twice. The one in the 
> {{finally}} block may run whether or not an exception was thrown, and it may 
> throw an exception too; we need to be careful not to swallow the original 
> exception there either.
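The masking can be demonstrated with a plain-JDK sketch; {{FailingStream}} and the method names here are illustrative, not KMSClientProvider code:

```java
import java.io.Closeable;
import java.io.IOException;

public class CloseMasking {
    static class FailingStream implements Closeable {
        @Override
        public void close() throws IOException {
            throw new IOException("close failed");
        }
    }

    // Shape of the current code: close() inside the catch block can throw,
    // replacing the original "read failed" exception before it is rethrown.
    static IOException brokenRead() {
        FailingStream is = new FailingStream();
        try {
            throw new IOException("read failed"); // simulate readValue() failing
        } catch (IOException ex) {
            try {
                is.close(); // throws, masking the read failure
            } catch (IOException closeEx) {
                return closeEx; // the caller sees only the close error
            }
            return ex;
        }
    }

    // Safer shape: close exactly once, in finally, and keep the primary
    // exception when close() also fails.
    static IOException saferRead() {
        FailingStream is = new FailingStream();
        IOException primary = null;
        try {
            throw new IOException("read failed");
        } catch (IOException ex) {
            primary = ex;
        } finally {
            try {
                is.close();
            } catch (IOException closeEx) {
                if (primary == null) {
                    primary = closeEx; // surface close errors only on their own
                }
                // otherwise log and drop it so "read failed" survives
            }
        }
        return primary;
    }

    public static void main(String[] args) {
        System.out.println(brokenRead().getMessage()); // close failed
        System.out.println(saferRead().getMessage());  // read failed
    }
}
```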





[jira] [Commented] (HADOOP-12608) Fix error message in WASB in connecting through Anonymous Credential codepath

2015-12-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15060817#comment-15060817
 ] 

Hadoop QA commented on HADOOP-12608:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
40s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 15s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 16s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
8s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 22s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
11s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
30s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 12s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
18s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 14s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 14s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 16s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 16s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
8s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 21s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 2 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
36s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 15s 
{color} | {color:green} hadoop-azure in the patch passed with JDK v1.8.0_66. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 25s 
{color} | {color:green} hadoop-azure in the patch passed with JDK v1.7.0_91. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
22s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 16m 29s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12778087/HADOOP-12608.004.patch
 |
| JIRA Issue | HADOOP-12608 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux d5f91f1e964b 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 0f708d4 |
| findbugs | v3.0.0 |
| whitespace | 

[jira] [Updated] (HADOOP-12615) Fix NPE in MiniKMS.start()

2015-12-16 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12615?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-12615:
-
Attachment: HADOOP-12615.006.patch

Rev06:
Fixed the bug discovered by Zhe. Also refactored MiniKMS to use a private 
helper method that error-handles copying the resource input stream in a 
consistent manner.

Similarly, created a convenience method getResourceAsStream() in MiniKdc to 
get a resource as a stream (basically replicating 
ThreadUtil.getResourceAsStream(), because MiniKdc does not depend on Hadoop)

> Fix NPE in MiniKMS.start()
> --
>
> Key: HADOOP-12615
> URL: https://issues.apache.org/jira/browse/HADOOP-12615
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms, test
>Affects Versions: 3.0.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
>  Labels: jenkins, supportability, test
> Attachments: HADOOP-12615.001.patch, HADOOP-12615.002.patch, 
> HADOOP-12615.003.patch, HADOOP-12615.004.patch, HADOOP-12615.005.patch, 
> HADOOP-12615.006.patch
>
>
> Sometimes the KMS resource file cannot be loaded. When this happens, an 
> InputStream variable is left null and subsequently causes an NPE.
> This is a supportability JIRA that makes the error message more explicit and 
> explains why the NPE is thrown, ultimately helping us understand why the 
> resource files cannot be loaded.
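A minimal sketch of the fix direction: fail with an explicit message when a classpath resource is missing, instead of letting a null stream cause a bare NPE later. The names are illustrative, not the actual MiniKMS/MiniKdc code:

```java
import java.io.InputStream;

public class ResourceLoad {
    // Load a classpath resource, converting a silent null into a loud,
    // descriptive failure at the point where the cause is still known.
    static InputStream getResourceAsStream(String name) {
        InputStream is = Thread.currentThread().getContextClassLoader()
            .getResourceAsStream(name);
        if (is == null) {
            throw new RuntimeException("Could not load resource '" + name
                + "' from the classpath; check the test classpath setup");
        }
        return is;
    }

    public static void main(String[] args) {
        try {
            getResourceAsStream("no-such-resource.xml");
        } catch (RuntimeException e) {
            System.out.println(e.getMessage());
        }
    }
}
```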





[jira] [Commented] (HADOOP-12650) Document all of the secret env vars

2015-12-16 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15060649#comment-15060649
 ] 

Steve Loughran commented on HADOOP-12650:
-

{{HADOOP_CREDSTORE_PASSWORD}}
{{"HADOOP_KEYSTORE_PASSWORD"}}
{{"hadoop.metrics.init.mode"}}

> Document all of the secret env vars
> ---
>
> Key: HADOOP-12650
> URL: https://issues.apache.org/jira/browse/HADOOP-12650
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Priority: Blocker
>
> Over the years, developers have added all kinds of magical environment 
> variables in the Java code without any concern or thought about either a) 
> documenting them or b) whether they are already used by something else.  We 
> need to update at least hadoop-env.sh to contain a list of these env vars so 
> that end users know whether they are private/unsafe and/or how they can be 
> used.
> Just one of many examples: HADOOP_JAAS_DEBUG.





[jira] [Commented] (HADOOP-10940) RPC client does no bounds checking of responses

2015-12-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15060880#comment-15060880
 ] 

Hadoop QA commented on HADOOP-10940:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 26m 
35s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 36m 
46s {color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 30m 
30s {color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
6s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 3m 58s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
49s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 6m 
42s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 3m 43s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 47s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
56s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 32m 
43s {color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 32m 43s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 20m 
25s {color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 20m 25s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
43s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 25s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
30s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 
52s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 17s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 22s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 19m 18s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_66. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 16m 47s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.7.0_91. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
56s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 223m 33s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_66 Failed junit tests | 
hadoop.security.ssl.TestReloadingX509TrustManager |
|   | hadoop.fs.TestLocalFsFCStatistics |
|   | hadoop.fs.shell.find.TestAnd |
|   | hadoop.io.compress.TestCodecPool |
|   | hadoop.fs.shell.find.TestPrint0 |
|   | hadoop.test.TestTimedOutTestsListener |
|   | hadoop.fs.shell.find.TestIname |
|   | hadoop.fs.shell.find.TestName |
|   | hadoop.fs.shell.find.TestFind |
|   | hadoop.ipc.TestRPCWaitForProxy |
| JDK v1.7.0_91 Failed junit tests | 

[jira] [Commented] (HADOOP-12570) HDFS Secure Mode Documentation updates

2015-12-16 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12570?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15060709#comment-15060709
 ] 

Arpit Agarwal commented on HADOOP-12570:


I have no idea what that's about. So I am going to see if I can find another 
reviewer.

> HDFS Secure Mode Documentation updates
> --
>
> Key: HADOOP-12570
> URL: https://issues.apache.org/jira/browse/HADOOP-12570
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation
>Affects Versions: 2.7.1
>Reporter: Arpit Agarwal
>Assignee: Arpit Agarwal
> Attachments: HDFS-9254.01.patch, HDFS-9254.02.patch, 
> HDFS-9254.03.patch, HDFS-9254.04.patch
>
>
> Some Kerberos configuration parameters are not documented well enough. 





[jira] [Updated] (HADOOP-10940) RPC client does no bounds checking of responses

2015-12-16 Thread Daryn Sharp (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10940?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daryn Sharp updated HADOOP-10940:
-
Attachment: HADOOP-10940.patch

Addressed style warnings.  Tests pass for me; pre-commit has become so 
unreliable.

> RPC client does no bounds checking of responses
> ---
>
> Key: HADOOP-10940
> URL: https://issues.apache.org/jira/browse/HADOOP-10940
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 2.0.0-alpha, 3.0.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Critical
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-10940.patch, HADOOP-10940.patch, 
> HADOOP-10940.patch, HADOOP-10940.patch, HADOOP-10940.patch
>
>
> The RPC client does no bounds checking of server responses.  When 
> communicating with an older, incompatible RPC, this may lead to OOM issues 
> and resource leaks.
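The kind of check at issue can be illustrated with a plain-JDK sketch: validate a length prefix before allocating a buffer for it, so a garbage response from an incompatible server fails cleanly instead of triggering a huge allocation. The names and the cap are assumptions, not the actual patch:

```java
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.IOException;

public class BoundedRead {
    static final int MAX_RESPONSE = 128 * 1024 * 1024; // assumed sanity cap

    // Read a length-prefixed response, rejecting implausible lengths before
    // the allocation that would otherwise cause the OOM.
    static byte[] readResponse(DataInputStream in) throws IOException {
        int len = in.readInt();
        if (len < 0 || len > MAX_RESPONSE) {
            throw new IOException("RPC response length " + len + " out of bounds");
        }
        byte[] buf = new byte[len];
        in.readFully(buf);
        return buf;
    }

    public static void main(String[] args) throws IOException {
        byte[] good = {0, 0, 0, 2, 7, 9}; // length 2, payload {7, 9}
        System.out.println(readResponse(new DataInputStream(
            new ByteArrayInputStream(good))).length); // 2
        byte[] bad = {(byte) 0x7f, -1, -1, -1}; // length near Integer.MAX_VALUE
        try {
            readResponse(new DataInputStream(new ByteArrayInputStream(bad)));
        } catch (IOException expected) {
            System.out.println("rejected: " + expected.getMessage());
        }
    }
}
```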





[jira] [Commented] (HADOOP-12192) update releasedocmaker commands

2015-12-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15060564#comment-15060564
 ] 

Hudson commented on HADOOP-12192:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #699 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/699/])
HADOOP-12192. update releasedocmaker command line (aw) (aw: rev 
607473e1d047ccd2a2c3804ae94e04f133af9cc2)
* hadoop-common-project/hadoop-common/pom.xml


> update releasedocmaker commands
> ---
>
> Key: HADOOP-12192
> URL: https://issues.apache.org/jira/browse/HADOOP-12192
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Blocker
> Fix For: 3.0.0
>
> Attachments: HADOOP-12192.00.patch, HADOOP-12192.01.patch
>
>
> If HADOOP-12135 gets committed and Hadoop switches to use 
> Yetus/Yetus-compatible, then the pom.xml that runs releasedocmaker will need 
> to get updated as well.





[jira] [Commented] (HADOOP-12645) Add option to allow multiple threads making calls on same RPC client not retry serially

2015-12-16 Thread Chang Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15060910#comment-15060910
 ] 

Chang Li commented on HADOOP-12645:
---

The .2 patch includes a unit test.

> Add option to allow multiple threads making calls on same RPC client not 
> retry serially
> ---
>
> Key: HADOOP-12645
> URL: https://issues.apache.org/jira/browse/HADOOP-12645
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Chang Li
>Assignee: Chang Li
> Attachments: HADOOP-12645.1.patch, HADOOP-12645.2.patch
>
>
> Currently, multiple threads making calls on the same RPC client retry 
> serially. For example, when a first call starts retrying for 10 minutes, the 
> second, third, and fourth calls made during that time get queued up. After 
> the first call fails, the second, third, and fourth calls will still 
> unwisely proceed serially through their own 10-minute retries and fail. We 
> propose adding an optional setting, default false, which when enabled allows 
> those queued-up subsequent calls to fail without wasting time retrying.
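The proposed behavior can be sketched with a shared failure timestamp that lets queued callers fail fast. The names, the flag, and the structure here are assumptions for illustration, not the actual patch:

```java
import java.util.concurrent.atomic.AtomicLong;

public class FailFastRetry {
    static final long FAILURE_MEMORY_MS = 10 * 60 * 1000; // the 10-minute window
    static final AtomicLong lastFailure = new AtomicLong(Long.MIN_VALUE);

    /** Returns true if the call succeeds. With failFast enabled, a caller
     *  skips its own retry loop when a sibling call recently exhausted its
     *  retries against the same client. */
    static boolean call(boolean serverUp, long now, boolean failFast) {
        long last = lastFailure.get();
        if (failFast && last != Long.MIN_VALUE && now - last < FAILURE_MEMORY_MS) {
            return false; // a sibling just failed; don't repeat the retry loop
        }
        if (!serverUp) {
            // ... the 10-minute retry loop would run here and give up ...
            lastFailure.set(now);
            return false;
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(call(false, 0, true));  // first caller retries, fails
        System.out.println(call(false, 1, true));  // queued caller fails fast
        System.out.println(call(true, FAILURE_MEMORY_MS + 1, true)); // window over
    }
}
```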





[jira] [Updated] (HADOOP-12645) Add option to allow multiple threads making calls on same RPC client not retry serially

2015-12-16 Thread Chang Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chang Li updated HADOOP-12645:
--
Attachment: HADOOP-12645.2.patch

> Add option to allow multiple threads making calls on same RPC client not 
> retry serially
> ---
>
> Key: HADOOP-12645
> URL: https://issues.apache.org/jira/browse/HADOOP-12645
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Chang Li
>Assignee: Chang Li
> Attachments: HADOOP-12645.1.patch, HADOOP-12645.2.patch
>
>
> Currently, multiple threads making calls on the same RPC client retry 
> serially. For example, when a first call starts retrying for 10 minutes, the 
> second, third, and fourth calls made during that time get queued up. After 
> the first call fails, the second, third, and fourth calls will still 
> unwisely proceed serially through their own 10-minute retries and fail. We 
> propose adding an optional setting, default false, which when enabled allows 
> those queued-up subsequent calls to fail without wasting time retrying.





[jira] [Updated] (HADOOP-12608) Fix error message in WASB in connecting through Anonymous Credential codepath

2015-12-16 Thread Dushyanth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dushyanth updated HADOOP-12608:
---
Attachment: HADOOP-12608.005.patch

Fixed the white-space tab issue.

> Fix error message in WASB in connecting through Anonymous Credential codepath
> -
>
> Key: HADOOP-12608
> URL: https://issues.apache.org/jira/browse/HADOOP-12608
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 2.8.0
>Reporter: Dushyanth
>Assignee: Dushyanth
> Fix For: 2.8.0
>
> Attachments: HADOOP-12608.001.patch, HADOOP-12608.002.patch, 
> HADOOP-12608.003.patch, HADOOP-12608.004.patch, HADOOP-12608.005.patch
>
>
> Users of WASB have raised complaints about the error message returned from 
> WASB when they try to connect to Azure Storage with anonymous credentials. 
> The current implementation returns the correct message when a 
> StorageException is encountered. However, in scenarios such as querying 
> whether a container exists with a directly specified URI (anonymous access), 
> no StorageException is thrown; the check simply returns false, and the 
> resulting error message does not clearly state that credentials for the 
> storage account were not provided. This JIRA tracks fixing that error 
> message to match the one returned when a StorageException is hit, and also 
> corrects spelling mistakes in the error message.





[jira] [Updated] (HADOOP-12415) hdfs and nfs builds broken on -missing compile-time dependency on netty

2015-12-16 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12415?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated HADOOP-12415:
-
Fix Version/s: 2.7.2

Merged this into branch-2.7, branch-2.7.2 per our discussion above.

> hdfs and nfs builds broken on -missing compile-time dependency on netty
> ---
>
> Key: HADOOP-12415
> URL: https://issues.apache.org/jira/browse/HADOOP-12415
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 2.7.1
> Environment: Bigtop, plain Linux distro of any kind
>Reporter: Konstantin Boudnik
>Assignee: Tom Zeng
> Fix For: 2.8.0, 2.7.2
>
> Attachments: HADOOP-12415.patch
>
>
> As discovered in BIGTOP-2049, {{hadoop-nfs}} module compilation is broken. 
> Looks like HADOOP-11489 is the root cause.





[jira] [Updated] (HADOOP-12652) Error message in Shell.checkIsBashSupported is misleading

2015-12-16 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12652?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-12652:
-
Labels: shell supportability  (was: )

> Error message in Shell.checkIsBashSupported is misleading
> -
>
> Key: HADOOP-12652
> URL: https://issues.apache.org/jira/browse/HADOOP-12652
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>  Labels: shell, supportability
>
> Shell.checkIsBashSupported() creates a bash shell command to verify if the 
> system supports bash. However, its error message is misleading, and the logic 
> should be updated.
> If the shell command throws an IOException, it does not necessarily imply 
> that bash failed to run. If the shell command process is interrupted, its 
> internal logic throws an InterruptedIOException, which is a subclass of 
> IOException.
> {code:title=Shell.checkIsBashSupported|borderStyle=solid}
> ShellCommandExecutor shexec;
> boolean supported = true;
> try {
>   String[] args = {"bash", "-c", "echo 1000"};
>   shexec = new ShellCommandExecutor(args);
>   shexec.execute();
> } catch (IOException ioe) {
>   LOG.warn("Bash is not supported by the OS", ioe);
>   supported = false;
> }
> {code}
> An example of it appeared in a recent jenkins job
> https://builds.apache.org/job/PreCommit-HADOOP-Build/8257/testReport/org.apache.hadoop.ipc/TestRPCWaitForProxy/testInterruptedWaitForProxy/
> {noformat}
> 2015-12-16 21:31:53,797 WARN  util.Shell 
> (Shell.java:checkIsBashSupported(718)) - Bash is not supported by the OS
> java.io.InterruptedIOException: java.lang.InterruptedException
>   at org.apache.hadoop.util.Shell.runCommand(Shell.java:930)
>   at org.apache.hadoop.util.Shell.run(Shell.java:838)
>   at 
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:1117)
>   at org.apache.hadoop.util.Shell.checkIsBashSupported(Shell.java:716)
>   at org.apache.hadoop.util.Shell.<clinit>(Shell.java:705)
>   at org.apache.hadoop.util.StringUtils.<clinit>(StringUtils.java:79)
>   at 
> org.apache.hadoop.security.SecurityUtil.getAuthenticationMethod(SecurityUtil.java:639)
>   at 
> org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:273)
>   at 
> org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:261)
>   at 
> org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:803)
>   at 
> org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:773)
>   at 
> org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:646)
>   at org.apache.hadoop.ipc.RPC.waitForProtocolProxy(RPC.java:397)
>   at org.apache.hadoop.ipc.RPC.waitForProtocolProxy(RPC.java:350)
>   at org.apache.hadoop.ipc.RPC.waitForProxy(RPC.java:330)
>   at 
> org.apache.hadoop.ipc.TestRPCWaitForProxy$RpcThread.run(TestRPCWaitForProxy.java:115)
> Caused by: java.lang.InterruptedException
>   at java.lang.Object.wait(Native Method)
>   at java.lang.Object.wait(Object.java:503)
>   at java.lang.UNIXProcess.waitFor(UNIXProcess.java:264)
>   at org.apache.hadoop.util.Shell.runCommand(Shell.java:920)
>   ... 15 more
> {noformat}
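One way the check could distinguish the two cases, sketched with illustrative names (not the actual patch):

```java
import java.io.IOException;
import java.io.InterruptedIOException;

public class BashProbe {
    // An interrupt during the bash probe means bash support is unknown,
    // not that bash is missing; it should also restore the interrupt status.
    static String classify(IOException ioe) {
        if (ioe instanceof InterruptedIOException) {
            Thread.currentThread().interrupt(); // re-set the interrupt flag
            return "interrupted, bash support unknown";
        }
        return "bash is not supported by the OS";
    }

    public static void main(String[] args) {
        System.out.println(classify(new InterruptedIOException("wait interrupted")));
        System.out.println(classify(new IOException("Cannot run program \"bash\"")));
    }
}
```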





[jira] [Updated] (HADOOP-12652) Error message in Shell.checkIsBashSupported is misleading

2015-12-16 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12652?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-12652:
-
Status: Patch Available  (was: Open)

> Error message in Shell.checkIsBashSupported is misleading
> -
>
> Key: HADOOP-12652
> URL: https://issues.apache.org/jira/browse/HADOOP-12652
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>  Labels: shell, supportability
> Attachments: HADOOP-12652.001.patch
>
>
> Shell.checkIsBashSupported() creates a bash shell command to verify if the 
> system supports bash. However, its error message is misleading, and the logic 
> should be updated.
> If the shell command throws an IOException, it does not imply that bash did 
> not run successfully. If the shell command process was interrupted, its 
> internal logic throws an InterruptedIOException, which is a subclass of 
> IOException.
> {code:title=Shell.checkIsBashSupported|borderStyle=solid}
> ShellCommandExecutor shexec;
> boolean supported = true;
> try {
>   String[] args = {"bash", "-c", "echo 1000"};
>   shexec = new ShellCommandExecutor(args);
>   shexec.execute();
> } catch (IOException ioe) {
>   LOG.warn("Bash is not supported by the OS", ioe);
>   supported = false;
> }
> {code}
> An example of it appeared in a recent jenkins job
> https://builds.apache.org/job/PreCommit-HADOOP-Build/8257/testReport/org.apache.hadoop.ipc/TestRPCWaitForProxy/testInterruptedWaitForProxy/
> {noformat}
> 2015-12-16 21:31:53,797 WARN  util.Shell 
> (Shell.java:checkIsBashSupported(718)) - Bash is not supported by the OS
> java.io.InterruptedIOException: java.lang.InterruptedException
>   at org.apache.hadoop.util.Shell.runCommand(Shell.java:930)
>   at org.apache.hadoop.util.Shell.run(Shell.java:838)
>   at 
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:1117)
>   at org.apache.hadoop.util.Shell.checkIsBashSupported(Shell.java:716)
>   at org.apache.hadoop.util.Shell.<clinit>(Shell.java:705)
>   at org.apache.hadoop.util.StringUtils.<clinit>(StringUtils.java:79)
>   at 
> org.apache.hadoop.security.SecurityUtil.getAuthenticationMethod(SecurityUtil.java:639)
>   at 
> org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:273)
>   at 
> org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:261)
>   at 
> org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:803)
>   at 
> org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:773)
>   at 
> org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:646)
>   at org.apache.hadoop.ipc.RPC.waitForProtocolProxy(RPC.java:397)
>   at org.apache.hadoop.ipc.RPC.waitForProtocolProxy(RPC.java:350)
>   at org.apache.hadoop.ipc.RPC.waitForProxy(RPC.java:330)
>   at 
> org.apache.hadoop.ipc.TestRPCWaitForProxy$RpcThread.run(TestRPCWaitForProxy.java:115)
> Caused by: java.lang.InterruptedException
>   at java.lang.Object.wait(Native Method)
>   at java.lang.Object.wait(Object.java:503)
>   at java.lang.UNIXProcess.waitFor(UNIXProcess.java:264)
>   at org.apache.hadoop.util.Shell.runCommand(Shell.java:920)
>   ... 15 more
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12615) Fix NPE in MiniKMS.start()

2015-12-16 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15060979#comment-15060979
 ] 

Zhe Zhang commented on HADOOP-12615:


Thanks Wei-Chiu. +1 on the latest patch pending a couple of minor suggestions:
# In {{ThreadUtil#getResourceAsStream}} I suggest we throw an exception when 
the current thread's CL is null. Please let me know if you have an explanation 
for why the same input stream would be returned from ThreadUtil's CL.
# Similarly, I suggest we remove the change in {{Server}}, or change it to 
throw an exception. Otherwise it could end up using ThreadUtil's CL.
{code}
if (is == null) {
  throw new IOException("Can not read resource file '" +
  resourceName + "'");
}
{code}
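The snippet above suggests failing fast instead of letting a null stream surface later as an NPE. A minimal self-contained sketch of that idea (hypothetical class and message wording, not the actual Hadoop patch) could look like:

```java
import java.io.IOException;
import java.io.InputStream;

public final class ResourceLoadSketch {
    // Fail fast with a descriptive IOException instead of returning null,
    // which would otherwise surface later as an opaque NPE in the caller.
    public static InputStream getResourceAsStream(String resourceName)
            throws IOException {
        ClassLoader cl = Thread.currentThread().getContextClassLoader();
        if (cl == null) {
            throw new IOException("Can not read resource file '" + resourceName
                + "': current thread has no context class loader");
        }
        InputStream is = cl.getResourceAsStream(resourceName);
        if (is == null) {
            throw new IOException("Can not read resource file '"
                + resourceName + "'");
        }
        return is;
    }

    public static void main(String[] args) {
        try {
            getResourceAsStream("definitely-missing-resource.xml");
            System.out.println("unexpected: resource found");
        } catch (IOException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

Either way, the error message names the missing resource, which is the supportability goal of the JIRA.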

> Fix NPE in MiniKMS.start()
> --
>
> Key: HADOOP-12615
> URL: https://issues.apache.org/jira/browse/HADOOP-12615
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms, test
>Affects Versions: 3.0.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
>  Labels: jenkins, supportability, test
> Attachments: HADOOP-12615.001.patch, HADOOP-12615.002.patch, 
> HADOOP-12615.003.patch, HADOOP-12615.004.patch, HADOOP-12615.005.patch, 
> HADOOP-12615.006.patch
>
>
> Sometimes, the KMS resource file cannot be loaded. When this happens, an 
> InputStream variable will be null, which subsequently causes an NPE.
> This is a supportability JIRA that makes the error message more explicit and 
> explains why the NPE is thrown, ultimately helping us understand why the 
> resource files cannot be loaded.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12587) Hadoop AuthToken refuses to work without a maxinactive attribute in issued token

2015-12-16 Thread Benoy Antony (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12587?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Benoy Antony updated HADOOP-12587:
--
Attachment: HADOOP-12587-001.patch

Attaching the patch

1. Changed the name of the configuration from 
hadoop.http.authentication.token.MaxInactiveInterval to 
hadoop.http.authentication.token.max-inactive-interval.
2. Set the default value to -1 so that the feature is disabled by default.
3. If the token does not contain the max-inactive attribute, it is still 
processed without throwing an exception.
4. Added test cases covering tokens with a valid activity interval, an expired 
activity interval, and a missing activity interval. Each test case runs against 
a server with the activity-interval feature enabled and one with it disabled.
5. Updated the documentation to reflect the changes.
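Items 2 and 3 of the list above describe tolerant handling of a missing attribute. A hypothetical illustration (attribute name and helper are illustrative only, not the patch's actual code) of that default-to-disabled behavior:

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

public class MaxInactiveSketch {
    // A token missing the max-inactive attribute is still processed:
    // it yields -1 (feature disabled) rather than raising an exception.
    static long parseMaxInactive(Map<String, String> tokenAttrs) {
        String v = tokenAttrs.get("max-inactive-interval");
        if (v == null) {
            return -1L;  // attribute absent: inactivity checking disabled
        }
        try {
            return Long.parseLong(v);
        } catch (NumberFormatException e) {
            return -1L;  // malformed value also treated as disabled
        }
    }

    public static void main(String[] args) {
        Map<String, String> withAttr = new HashMap<>();
        withAttr.put("max-inactive-interval", "1800");
        System.out.println(parseMaxInactive(withAttr));                // 1800
        System.out.println(parseMaxInactive(Collections.emptyMap()));  // -1
    }
}
```

This keeps older servers that never issue the attribute interoperable with newer clients, which is the point of the blocker.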

> Hadoop AuthToken refuses to work without a maxinactive attribute in issued 
> token
> 
>
> Key: HADOOP-12587
> URL: https://issues.apache.org/jira/browse/HADOOP-12587
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.8.0
> Environment: OSX Heimdal Kerberos client against a Linux KDC, talking 
> to a Hadoop 2.6.0 cluster
>Reporter: Steve Loughran
>Assignee: Benoy Antony
>Priority: Blocker
> Attachments: HADOOP-12587-001.patch
>
>
> If you don't have a max-inactive attribute in the auth token returned from 
> the web site, AuthToken will raise an exception. This prevents callers 
> without this attribute from submitting jobs to a secure Hadoop 2.6 YARN 
> cluster with the timeline server enabled. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12645) Add option to allow multiple threads making calls on same RPC client not retry serially

2015-12-16 Thread Chang Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15060913#comment-15060913
 ] 

Chang Li commented on HADOOP-12645:
---

[~daryn], could you help review?

> Add option to allow multiple threads making calls on same RPC client not 
> retry serially
> ---
>
> Key: HADOOP-12645
> URL: https://issues.apache.org/jira/browse/HADOOP-12645
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Chang Li
>Assignee: Chang Li
> Attachments: HADOOP-12645.1.patch, HADOOP-12645.2.patch
>
>
> Currently, multiple threads making calls on the same RPC client retry 
> serially. For example, when a first call starts retrying for 10 minutes, the 
> second, third, and fourth calls made during this time get queued up. After 
> the first call fails, those queued calls will still unwisely go ahead 
> serially, each doing its own 10-minute retry before failing. This proposes an 
> optional setting, false by default, which when enabled allows those queued-up 
> calls to fail immediately without wasting time on retries.
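The proposal in the quoted description can be sketched as a flag-guarded fail-fast check. This is a hypothetical illustration of the idea, not the attached patch: once one caller has exhausted its retries, a shared marker lets queued callers bail out immediately.

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class FailFastRetrySketch {
    private final boolean failFastEnabled;          // the proposed option
    private final AtomicBoolean serverDown = new AtomicBoolean(false);

    FailFastRetrySketch(boolean failFastEnabled) {
        this.failFastEnabled = failFastEnabled;
    }

    String call(boolean serverReachable) {
        if (failFastEnabled && serverDown.get()) {
            return "failed fast";        // skip the retry loop entirely
        }
        if (serverReachable) {
            return "ok";
        }
        // ...the long (e.g. 10-minute) retry loop would run here...
        serverDown.set(true);            // record that retries were exhausted
        return "failed after retries";
    }

    public static void main(String[] args) {
        FailFastRetrySketch client = new FailFastRetrySketch(true);
        System.out.println(client.call(false));  // failed after retries
        System.out.println(client.call(false));  // failed fast
        FailFastRetrySketch legacy = new FailFastRetrySketch(false);
        System.out.println(legacy.call(false));  // failed after retries
        System.out.println(legacy.call(false));  // failed after retries
    }
}
```

With the option off (the default), behavior is unchanged: every queued caller repeats the full retry sequence.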



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12615) Fix NPE in MiniKMS.start()

2015-12-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15060924#comment-15060924
 ] 

Hadoop QA commented on HADOOP-12615:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
34s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 56s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 42s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
0s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 1s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
53s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 4s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 29s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 44s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
37s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 55s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 55s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 43s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 8m 43s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
58s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 2s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
54s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
38s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 30s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 46s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 26s 
{color} | {color:green} hadoop-minikdc in the patch passed with JDK v1.8.0_66. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 6m 40s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.8.0_66. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 30s 
{color} | {color:green} hadoop-kms in the patch passed with JDK v1.8.0_66. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 29s 
{color} | {color:green} hadoop-kms in the patch passed with JDK v1.8.0_66. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 33s 
{color} | {color:green} hadoop-minikdc in the patch passed with JDK v1.7.0_91. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 6m 32s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.7.0_91. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 37s 
{color} | {color:green} hadoop-kms in the patch passed with JDK v1.7.0_91. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 37s 
{color} | {color:green} hadoop-kms in the patch passed 

[jira] [Commented] (HADOOP-10940) RPC client does no bounds checking of responses

2015-12-16 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10940?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15060925#comment-15060925
 ] 

Daryn Sharp commented on HADOOP-10940:
--

The failed tests pass for me locally.   Would someone please verify?  Wasting 
my time on bizarre pre-commit failures is really irking me...

> RPC client does no bounds checking of responses
> ---
>
> Key: HADOOP-10940
> URL: https://issues.apache.org/jira/browse/HADOOP-10940
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Affects Versions: 2.0.0-alpha, 3.0.0
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
>Priority: Critical
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-10940.patch, HADOOP-10940.patch, 
> HADOOP-10940.patch, HADOOP-10940.patch, HADOOP-10940.patch
>
>
> The rpc client does no bounds checking of server responses.  In the case of 
> communicating with an older and incompatible RPC, this may lead to OOM issues 
> and leaking of resources.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HADOOP-12587) Hadoop AuthToken refuses to work without a maxinactive attribute in issued token

2015-12-16 Thread Benoy Antony (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15061135#comment-15061135
 ] 

Benoy Antony edited comment on HADOOP-12587 at 12/16/15 11:39 PM:
--

Attaching the patch

1. changed the name of the configuration from 
hadoop.http.authentication.token.MaxInactiveInterval  to 
hadoop.http.authentication.token.max-inactive-interval.
2. Set the default value to -1 so that the feature is disabled by default.
3. If the token does not contain InactiveInterval, the token is still processed 
without throwing an exception.
4. Added test cases covering tokens with a valid activity interval, an expired 
activity interval, and a missing activity interval. Each test case runs against 
a server with the activity-interval feature enabled and one with it disabled.
5. Updated the documentation to reflect the changes.


was (Author: benoyantony):
Attaching the patch

1. changed the name of the configuration from 
hadoop.http.authentication.token.MaxInactiveInterval  to 
hadoop.http.authentication.token.max-inactive-interval.
2. Set the default value to -1 so that the feature is disabled by default.
3. If the token does not contain mxInterval, the token is still processed 
without throwing exception.
4. add test cases to test token with valid activity interval, expired activity 
interval and missing activity interval. Each test case runs against a server 
where activity interval feature is enabled and disabled.
5. Updated documentation to reflect changes.

> Hadoop AuthToken refuses to work without a maxinactive attribute in issued 
> token
> 
>
> Key: HADOOP-12587
> URL: https://issues.apache.org/jira/browse/HADOOP-12587
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.8.0
> Environment: OSX Heimdal Kerberos client against a Linux KDC, talking 
> to a Hadoop 2.6.0 cluster
>Reporter: Steve Loughran
>Assignee: Benoy Antony
>Priority: Blocker
> Attachments: HADOOP-12587-001.patch
>
>
> If you don't have a max-inactive attribute in the auth token returned from 
> the web site, AuthToken will raise an exception. This prevents callers 
> without this attribute from submitting jobs to a secure Hadoop 2.6 YARN 
> cluster with the timeline server enabled. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12645) Add option to allow multiple threads making calls on same RPC client not retry serially

2015-12-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12645?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15061099#comment-15061099
 ] 

Hadoop QA commented on HADOOP-12645:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
32s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 6s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 10s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
16s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 6s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
58s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 3s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 10s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
44s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 13s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 9m 13s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 6s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 6s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
18s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 7s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 1s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 7s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 0s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 8s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 52s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.8.0_66. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 6s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_91. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
24s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 75m 6s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12778106/HADOOP-12645.2.patch |
| JIRA Issue | HADOOP-12645 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  xml  |
| uname | Linux 19c6cce720c3 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Commented] (HADOOP-12415) hdfs and nfs builds broken on -missing compile-time dependency on netty

2015-12-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12415?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15061098#comment-15061098
 ] 

Hudson commented on HADOOP-12415:
-

FAILURE: Integrated in Hadoop-trunk-Commit #8979 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8979/])
HADOOP-12415. Fixed pom files to correctly include compile-time (vinodkv: rev 
ce16541c62728f69634111dc2ddb321690d3d29b)
* hadoop-common-project/hadoop-common/CHANGES.txt


> hdfs and nfs builds broken on -missing compile-time dependency on netty
> ---
>
> Key: HADOOP-12415
> URL: https://issues.apache.org/jira/browse/HADOOP-12415
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 2.7.1
> Environment: Bigtop, plain Linux distro of any kind
>Reporter: Konstantin Boudnik
>Assignee: Tom Zeng
> Fix For: 2.8.0, 2.7.2
>
> Attachments: HADOOP-12415.patch
>
>
> As discovered in BIGTOP-2049, {{hadoop-nfs}} module compilation is broken. 
> It looks like HADOOP-11489 is the root cause.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12615) Fix NPE in MiniKMS.start()

2015-12-16 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15060929#comment-15060929
 ] 

Wei-Chiu Chuang commented on HADOOP-12615:
--

The test failure is unrelated.

> Fix NPE in MiniKMS.start()
> --
>
> Key: HADOOP-12615
> URL: https://issues.apache.org/jira/browse/HADOOP-12615
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms, test
>Affects Versions: 3.0.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
>  Labels: jenkins, supportability, test
> Attachments: HADOOP-12615.001.patch, HADOOP-12615.002.patch, 
> HADOOP-12615.003.patch, HADOOP-12615.004.patch, HADOOP-12615.005.patch, 
> HADOOP-12615.006.patch
>
>
> Sometimes, the KMS resource file cannot be loaded. When this happens, an 
> InputStream variable will be null, which subsequently causes an NPE.
> This is a supportability JIRA that makes the error message more explicit and 
> explains why the NPE is thrown, ultimately helping us understand why the 
> resource files cannot be loaded.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12652) Error message in Shell.checkIsBashSupported is misleading

2015-12-16 Thread Wei-Chiu Chuang (JIRA)
Wei-Chiu Chuang created HADOOP-12652:


 Summary: Error message in Shell.checkIsBashSupported is misleading
 Key: HADOOP-12652
 URL: https://issues.apache.org/jira/browse/HADOOP-12652
 Project: Hadoop Common
  Issue Type: Bug
  Components: util
Reporter: Wei-Chiu Chuang
Assignee: Wei-Chiu Chuang


Shell.checkIsBashSupported() creates a bash shell command to verify whether the 
system supports bash. However, its error message is misleading, and the logic 
should be updated.

If the shell command throws an IOException, it does not imply that bash did not 
run successfully. If the shell command process was interrupted, its internal 
logic throws an InterruptedIOException, which is a subclass of IOException.
{code:title=Shell.checkIsBashSupported|borderStyle=solid}
ShellCommandExecutor shexec;
boolean supported = true;
try {
  String[] args = {"bash", "-c", "echo 1000"};
  shexec = new ShellCommandExecutor(args);
  shexec.execute();
} catch (IOException ioe) {
  LOG.warn("Bash is not supported by the OS", ioe);
  supported = false;
}
{code}
An example of it appeared in a recent jenkins job
https://builds.apache.org/job/PreCommit-HADOOP-Build/8257/testReport/org.apache.hadoop.ipc/TestRPCWaitForProxy/testInterruptedWaitForProxy/

{noformat}
2015-12-16 21:31:53,797 WARN  util.Shell (Shell.java:checkIsBashSupported(718)) 
- Bash is not supported by the OS
java.io.InterruptedIOException: java.lang.InterruptedException
at org.apache.hadoop.util.Shell.runCommand(Shell.java:930)
at org.apache.hadoop.util.Shell.run(Shell.java:838)
at 
org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:1117)
at org.apache.hadoop.util.Shell.checkIsBashSupported(Shell.java:716)
at org.apache.hadoop.util.Shell.<clinit>(Shell.java:705)
at org.apache.hadoop.util.StringUtils.<clinit>(StringUtils.java:79)
at 
org.apache.hadoop.security.SecurityUtil.getAuthenticationMethod(SecurityUtil.java:639)
at 
org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:273)
at 
org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:261)
at 
org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:803)
at 
org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:773)
at 
org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:646)
at org.apache.hadoop.ipc.RPC.waitForProtocolProxy(RPC.java:397)
at org.apache.hadoop.ipc.RPC.waitForProtocolProxy(RPC.java:350)
at org.apache.hadoop.ipc.RPC.waitForProxy(RPC.java:330)
at 
org.apache.hadoop.ipc.TestRPCWaitForProxy$RpcThread.run(TestRPCWaitForProxy.java:115)
Caused by: java.lang.InterruptedException
at java.lang.Object.wait(Native Method)
at java.lang.Object.wait(Object.java:503)
at java.lang.UNIXProcess.waitFor(UNIXProcess.java:264)
at org.apache.hadoop.util.Shell.runCommand(Shell.java:920)
... 15 more
{noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12652) Error message in Shell.checkIsBashSupported is misleading

2015-12-16 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12652?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-12652:
-
Attachment: HADOOP-12652.001.patch

Rev01: catch InterruptedIOException and log that the exception is due to 
interrupt.
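The handling pattern the Rev01 description implies can be sketched in isolation. This is a hedged illustration (the Cmd interface and return strings are hypothetical stand-ins, not the patch's code): because InterruptedIOException is a subclass of IOException, it must be caught first so an interrupt is not reported as "bash not supported".

```java
import java.io.IOException;
import java.io.InterruptedIOException;

public class BashCheckSketch {
    // Stand-in for the shell command so the catch ordering can be
    // exercised without actually spawning bash.
    interface Cmd {
        void execute() throws IOException;
    }

    static String classify(Cmd cmd) {
        try {
            cmd.execute();
            return "bash supported";
        } catch (InterruptedIOException iioe) {
            // Must precede the IOException handler: an interrupted check
            // says nothing about whether the OS supports bash.
            return "check interrupted";
        } catch (IOException ioe) {
            return "bash not supported";
        }
    }

    public static void main(String[] args) {
        System.out.println(classify(() -> { }));
        System.out.println(classify(() -> { throw new InterruptedIOException(); }));
        System.out.println(classify(() -> { throw new IOException("no bash"); }));
    }
}
```

Catch order matters here: swapping the two catch blocks would not compile differently but would route interrupts into the misleading "not supported" branch.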

> Error message in Shell.checkIsBashSupported is misleading
> -
>
> Key: HADOOP-12652
> URL: https://issues.apache.org/jira/browse/HADOOP-12652
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>  Labels: shell, supportability
> Attachments: HADOOP-12652.001.patch
>
>
> Shell.checkIsBashSupported() creates a bash shell command to verify if the 
> system supports bash. However, its error message is misleading, and the logic 
> should be updated.
> If the shell command throws an IOException, it does not imply that bash did 
> not run successfully. If the shell command process was interrupted, its 
> internal logic throws an InterruptedIOException, which is a subclass of 
> IOException.
> {code:title=Shell.checkIsBashSupported|borderStyle=solid}
> ShellCommandExecutor shexec;
> boolean supported = true;
> try {
>   String[] args = {"bash", "-c", "echo 1000"};
>   shexec = new ShellCommandExecutor(args);
>   shexec.execute();
> } catch (IOException ioe) {
>   LOG.warn("Bash is not supported by the OS", ioe);
>   supported = false;
> }
> {code}
> An example of it appeared in a recent jenkins job
> https://builds.apache.org/job/PreCommit-HADOOP-Build/8257/testReport/org.apache.hadoop.ipc/TestRPCWaitForProxy/testInterruptedWaitForProxy/
> {noformat}
> 2015-12-16 21:31:53,797 WARN  util.Shell 
> (Shell.java:checkIsBashSupported(718)) - Bash is not supported by the OS
> java.io.InterruptedIOException: java.lang.InterruptedException
>   at org.apache.hadoop.util.Shell.runCommand(Shell.java:930)
>   at org.apache.hadoop.util.Shell.run(Shell.java:838)
>   at 
> org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:1117)
>   at org.apache.hadoop.util.Shell.checkIsBashSupported(Shell.java:716)
>   at org.apache.hadoop.util.Shell.<clinit>(Shell.java:705)
>   at org.apache.hadoop.util.StringUtils.<clinit>(StringUtils.java:79)
>   at 
> org.apache.hadoop.security.SecurityUtil.getAuthenticationMethod(SecurityUtil.java:639)
>   at 
> org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:273)
>   at 
> org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:261)
>   at 
> org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:803)
>   at 
> org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:773)
>   at 
> org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:646)
>   at org.apache.hadoop.ipc.RPC.waitForProtocolProxy(RPC.java:397)
>   at org.apache.hadoop.ipc.RPC.waitForProtocolProxy(RPC.java:350)
>   at org.apache.hadoop.ipc.RPC.waitForProxy(RPC.java:330)
>   at 
> org.apache.hadoop.ipc.TestRPCWaitForProxy$RpcThread.run(TestRPCWaitForProxy.java:115)
> Caused by: java.lang.InterruptedException
>   at java.lang.Object.wait(Native Method)
>   at java.lang.Object.wait(Object.java:503)
>   at java.lang.UNIXProcess.waitFor(UNIXProcess.java:264)
>   at org.apache.hadoop.util.Shell.runCommand(Shell.java:920)
>   ... 15 more
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11127) Improve versioning and compatibility support in native library for downstream hadoop-common users.

2015-12-16 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15061083#comment-15061083
 ] 

Chris Nauroth commented on HADOOP-11127:


I've done some more thinking about this, and I'm now on board with 
version-stamping the library names.  I'd like us to proceed with that as the 
scope for this JIRA.

Version-stamping the libraries alone is an incomplete solution because of the 
issues I raised in my earlier comments.  However, I believe version-stamping 
the libraries is a pre-requisite for any more complete solution, such as 
bundling the native code into the jar.

There are a few remaining requirements though that were not addressed by the 
earlier proposals and patches:

# Fallback logic should make a best effort attempt to load a possibly 
compatible library if an exact match is not available.  For example, assuming 
hadoop-common-2.8.3.jar, it should attempt to load, in order, 
libhadoop-2.8.3.so, libhadoop-2.8.2.so, libhadoop-2.8.1.so, libhadoop-2.8.0.so, 
and finally libhadoop.so as the ultimate fallback.  This would help mitigate 
the deficiencies I described earlier when server-side deployment falls behind 
client-side upgrades.  A client who decides to upgrade from 
hadoop-common-2.8.2.jar to hadoop-common-2.8.3.jar should not suddenly see 
their application break because the library loading logic went all the way back 
to the old libhadoop.so.
# The solution must cover both *nix (libhadoop.so) and Windows (hadoop.dll and 
winutils.exe) completely.
# Let's review with someone from BigTop to make sure the change wouldn't have 
unintended consequences for rpm/deb packaging.
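The fallback order in requirement 1 can be sketched as a small helper that expands a hadoop-common jar version into candidate library names. This is illustrative only: the class and method names are hypothetical, not the actual Hadoop native-code loader.

```java
import java.util.ArrayList;
import java.util.List;

public class NativeLibFallback {
  // For a jar version like "2.8.3", produce the load order proposed above:
  // libhadoop-2.8.3.so, libhadoop-2.8.2.so, ..., libhadoop-2.8.0.so,
  // and finally the unversioned libhadoop.so as the ultimate fallback.
  public static List<String> candidateLibraryNames(String jarVersion) {
    List<String> names = new ArrayList<>();
    String[] parts = jarVersion.split("\\.");   // e.g. {"2", "8", "3"}
    int patch = Integer.parseInt(parts[2]);
    for (int p = patch; p >= 0; p--) {
      names.add("libhadoop-" + parts[0] + "." + parts[1] + "." + p + ".so");
    }
    names.add("libhadoop.so");                  // ultimate fallback
    return names;
  }
}
```

A real implementation would also need per-platform names (hadoop.dll on Windows) per requirement 2, and would stop at the first name that actually loads.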

> Improve versioning and compatibility support in native library for downstream 
> hadoop-common users.
> --
>
> Key: HADOOP-11127
> URL: https://issues.apache.org/jira/browse/HADOOP-11127
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: native
>Reporter: Chris Nauroth
>Assignee: Alan Burlison
> Attachments: HADOOP-11064.003.patch, proposal.01.txt
>
>
> There is no compatibility policy enforced on the JNI function signatures 
> implemented in the native library.  This library typically is deployed to all 
> nodes in a cluster, built from a specific source code version.  However, 
> downstream applications that want to run in that cluster might choose to 
> bundle a hadoop-common jar at a different version.  Since there is no 
> compatibility policy, this can cause link errors at runtime when the native 
> function signatures expected by hadoop-common.jar do not exist in 
> libhadoop.so/hadoop.dll.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12587) Hadoop AuthToken refuses to work without a maxinactive attribute in issued token

2015-12-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15061211#comment-15061211
 ] 

Hadoop QA commented on HADOOP-12587:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | {color:red} docker {color} | {color:red} 0m 1s {color} 
| {color:red} Docker failed to build yetus/hadoop:5d9212c. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HADOOP-12587 |
| GITHUB PR | https://github.com/apache/hadoop/pull/48 |
| Powered by | Apache Yetus 0.2.0-SNAPSHOT   http://yetus.apache.org |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/8261/console |


This message was automatically generated.



> Hadoop AuthToken refuses to work without a maxinactive attribute in issued 
> token
> 
>
> Key: HADOOP-12587
> URL: https://issues.apache.org/jira/browse/HADOOP-12587
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.8.0
> Environment: OSX heimdal kerberos client against Linux KDC -talking 
> to a Hadoop 2.6.0 cluster
>Reporter: Steve Loughran
>Assignee: Benoy Antony
>Priority: Blocker
> Attachments: HADOOP-12587-001.patch
>
>
> If you don't have a max-inactive attribute in the auth token returned from 
> the web site, AuthToken will raise an exception. This stops callers without 
> this token being able to submit jobs to a secure Hadoop 2.6 YARN cluster with 
> timeline server enabled. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12615) Fix NPE in MiniKMS.start()

2015-12-16 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12615?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-12615:
-
Attachment: HADOOP-12615.007.patch

Rev07: removed the change in Server.java. Now throws an exception if the class 
loader of the current thread is null.

Thanks @Zhe for the review!

> Fix NPE in MiniKMS.start()
> --
>
> Key: HADOOP-12615
> URL: https://issues.apache.org/jira/browse/HADOOP-12615
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms, test
>Affects Versions: 3.0.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Minor
>  Labels: jenkins, supportability, test
> Attachments: HADOOP-12615.001.patch, HADOOP-12615.002.patch, 
> HADOOP-12615.003.patch, HADOOP-12615.004.patch, HADOOP-12615.005.patch, 
> HADOOP-12615.006.patch, HADOOP-12615.007.patch
>
>
> Sometimes, the KMS resource file cannot be loaded. When this happens, an 
> InputStream variable will be null, which subsequently throws an NPE.
> This is a supportability JIRA that makes the error message more explicit and 
> explains why the NPE is thrown, ultimately helping us understand why the 
> resource files cannot be loaded.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12608) Fix error message in WASB in connecting through Anonymous Credential codepath

2015-12-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15061325#comment-15061325
 ] 

Hadoop QA commented on HADOOP-12608:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
51s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 20s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 20s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
10s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 26s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
33s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 14s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
19s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 15s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 15s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 18s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 18s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
10s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 25s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
42s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 17s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 26s 
{color} | {color:green} hadoop-azure in the patch passed with JDK v1.8.0_66. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 34s 
{color} | {color:green} hadoop-azure in the patch passed with JDK v1.7.0_91. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
26s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 18m 58s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12778137/HADOOP-12608.005.patch
 |
| JIRA Issue | HADOOP-12608 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 17a47a2eb635 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 3c0adac |
| findbugs | v3.0.0 |
| JDK v1.7.0_91  Test Results | 

[jira] [Commented] (HADOOP-12273) releasedocmaker.py fails with stacktrace if --project option is not specified

2015-12-16 Thread Kai Sasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12273?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15061458#comment-15061458
 ] 

Kai Sasaki commented on HADOOP-12273:
-

[~aw] Which branch did you commit this to? I cannot find this fix on trunk or branch-2.
{code}
$ git checkout trunk
$ git log --grep=HADOOP-12111

$ git checkout branch-2
$ git log --grep=HADOOP-12111

{code}

> releasedocmaker.py fails with stacktrace if --project option is not specified
> -
>
> Key: HADOOP-12273
> URL: https://issues.apache.org/jira/browse/HADOOP-12273
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: yetus
>Reporter: Kengo Seki
>Assignee: Kengo Seki
>Priority: Trivial
> Fix For: HADOOP-12111
>
> Attachments: HADOOP-12273.HADOOP-12111.00.patch, 
> HADOOP-12273.HADOOP-12111.01.patch
>
>
> It should show its usage instead. 
> {code}
> [sekikn@mobile hadoop]$ dev-support/releasedocmaker.py --version 3.0.0
> Traceback (most recent call last):
>   File "dev-support/releasedocmaker.py", line 580, in 
> main()
>   File "dev-support/releasedocmaker.py", line 424, in main
> title=projects[0]
> TypeError: 'NoneType' object has no attribute '__getitem__'
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12356) CPU usage statistics on Windows

2015-12-16 Thread Inigo Goiri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15061384#comment-15061384
 ] 

Inigo Goiri commented on HADOOP-12356:
--

{{SysInfo#getCpuUsage}} returns a number between 0 and 100%. The 
{{ResourceUtilization}} returned by {{NodeResourceMonitorImpl}} should contain 
the number of used VCores. To return VCores, we need to take the CPU usage 
(0-100%) from {{SysInfo#getCpuUsage}}, multiply it by the total number of 
cores, and divide by 100, right?

Right now in trunk, this is broken for Linux because when the system is fully 
utilized, it will return 100, while it should return 4 if it has 4 cores.
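The percent-to-VCores conversion described above can be written as a one-line helper. This is a minimal sketch of the arithmetic, not code from {{NodeResourceMonitorImpl}}; the class and method names are hypothetical.

```java
public class VCoresFromCpuUsage {
  // Convert the 0-100% figure returned by SysInfo#getCpuUsage into the
  // number of used VCores: usage% * numCores / 100.  On a fully utilized
  // 4-core box (usage 100%) this yields 4.0, matching the expected behavior.
  public static float usedVCores(float cpuUsagePercent, int numCores) {
    return cpuUsagePercent * numCores / 100f;
  }
}
```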

> CPU usage statistics on Windows
> ---
>
> Key: HADOOP-12356
> URL: https://issues.apache.org/jira/browse/HADOOP-12356
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
> Environment: CPU: Intel Xeon
> OS: Windows server
>Reporter: Yunqi Zhang
>Assignee: Yunqi Zhang
>  Labels: easyfix, newbie, patch
> Attachments: 0001-Correct-the-CPU-usage-calcualtion.patch, 
> 0001-Correct-the-CPU-usage-calcualtion.patch, HADOOP-12356-v3.patch
>
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> The CPU usage information on Windows is computed incorrectly. The proposed 
> patch fixes the issue and unifies the interface with Linux.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12356) CPU usage statistics on Windows

2015-12-16 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15061185#comment-15061185
 ] 

Chris Nauroth commented on HADOOP-12356:


Hello [~yunqi] and [~elgoiri].  The change in {{SysInfoWindows}} looks good to 
me, but it's not clear to me why it was necessary to change 
{{NodeResourceMonitorImpl}}.  The existing code there calls into 
{{SysInfo#getCpuUsage}}, so I expect the change in 
{{SysInfoWindows#getCpuUsage}} is sufficient.  In fact, it seems like the 
{{NodeResourceMonitorImpl}} change would break Linux, which was already 
accounting for number of processors in {{SysInfoLinux#getCpuUsage}}.

> CPU usage statistics on Windows
> ---
>
> Key: HADOOP-12356
> URL: https://issues.apache.org/jira/browse/HADOOP-12356
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
> Environment: CPU: Intel Xeon
> OS: Windows server
>Reporter: Yunqi Zhang
>Assignee: Yunqi Zhang
>  Labels: easyfix, newbie, patch
> Attachments: 0001-Correct-the-CPU-usage-calcualtion.patch, 
> 0001-Correct-the-CPU-usage-calcualtion.patch, HADOOP-12356-v3.patch
>
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> The CPU usage information on Windows is computed incorrectly. The proposed 
> patch fixes the issue and unifies the interface with Linux.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-12653) Client.java can get "Address already in use" when using kerberos and attempting to bind to any port on the local IP address

2015-12-16 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HADOOP-12653:
-

 Summary: Client.java can get "Address already in use" when using 
kerberos and attempting to bind to any port on the local IP address
 Key: HADOOP-12653
 URL: https://issues.apache.org/jira/browse/HADOOP-12653
 Project: Hadoop Common
  Issue Type: Bug
  Components: net
Affects Versions: 2.4.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe


Client.java can get "Address already in use" when using kerberos and attempting 
to bind to any port on the local IP address.  It appears to be caused by the 
host running out of ports in the ephemeral range.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12615) Fix NPE in MiniKMS.start()

2015-12-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15061425#comment-15061425
 ] 

Hadoop QA commented on HADOOP-12615:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
26s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 50s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 43s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
20s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 40s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
39s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
39s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 17s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 31s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 33s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 8m 33s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 45s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 9m 45s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
23s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 49s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
41s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
25s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 27s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 39s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 29s 
{color} | {color:green} hadoop-minikdc in the patch passed with JDK v1.8.0_66. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 38s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_66. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 1m 36s {color} 
| {color:red} hadoop-kms in the patch failed with JDK v1.8.0_66. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 34s 
{color} | {color:green} hadoop-minikdc in the patch passed with JDK v1.7.0_91. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 5s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_91. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 41s 
{color} | {color:green} hadoop-kms in the patch passed with JDK v1.7.0_91. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
25s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 83m 56s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK 

[jira] [Commented] (HADOOP-12652) Error message in Shell.checkIsBashSupported is misleading

2015-12-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15061232#comment-15061232
 ] 

Hadoop QA commented on HADOOP-12652:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
40s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 4s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 50s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
21s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 3s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
48s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s 
{color} | {color:green} trunk passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 3s 
{color} | {color:green} trunk passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
38s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 59s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 59s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 42s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 8m 42s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
15s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 0s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
59s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s 
{color} | {color:green} the patch passed with JDK v1.8.0_66 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s 
{color} | {color:green} the patch passed with JDK v1.7.0_91 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 6m 40s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_66. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 27s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_91. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
23s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 69m 12s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_66 Failed junit tests | hadoop.fs.shell.TestCopyPreserveFlag |
|   | hadoop.fs.TestSymlinkLocalFSFileSystem |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:0ca8df7 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12778125/HADOOP-12652.001.patch
 |
| JIRA Issue | HADOOP-12652 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux dcddedf79ace 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 

[jira] [Commented] (HADOOP-12653) Client.java can get "Address already in use" when using kerberos and attempting to bind to any port on the local IP address

2015-12-16 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12653?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15061231#comment-15061231
 ] 

Colin Patrick McCabe commented on HADOOP-12653:
---

The code that's having the problem is here:
{code}
/*
 * Bind the socket to the host specified in the principal name of the
 * client, to ensure Server matching address of the client connection
 * to host name in principal passed.
 */
UserGroupInformation ticket = remoteId.getTicket();
if (ticket != null && ticket.hasKerberosCredentials()) {
  KerberosInfo krbInfo = 
remoteId.getProtocol().getAnnotation(KerberosInfo.class);
  if (krbInfo != null && krbInfo.clientPrincipal() != null) {
String host = 
  SecurityUtil.getHostFromPrincipal(remoteId.getTicket().getUserName());

// If host name is a valid local address then bind socket to it
InetAddress localAddr = NetUtils.getLocalInetAddress(host);
if (localAddr != null) {
  this.socket.bind(new InetSocketAddress(localAddr, 0));  <=== HERE
}
  }
{code}
You can see that this is binding to port 0, so the usual explanations for 
getting "address already in use" are not relevant here.

There is a discussion here: 
https://idea.popcount.org/2014-04-03-bind-before-connect/

It's kind of a confusing issue, but it boils down to:
* Every TCP connection is identified by a unique 4-tuple of (src ip, src port, 
dst ip, dst port).
* Calling {{bind-then-connect}} imposes restrictions on what src port can be 
that simply calling {{connect}} does not.  Specifically {{bind}} has to choose 
a port without knowing what dst ip and dst port will be, meaning that it has to 
be more conservative to ensure global uniqueness.

I think using {{SO_REUSEADDR}} can help here.  It's a bit confusing since that 
also opens us up to getting {{EADDRNOTAVAIL}}.  If I'm reading this right, 
though, that error code would only happen in the rare case where two threads 
happened to get into the critical section between bind and connect at the same 
time AND choose the same source port.  We could either retry in that case or 
ignore it and rely on higher-level retry mechanisms to kick in.
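The {{SO_REUSEADDR}} idea above amounts to setting the option before the bind-to-port-0 call, which relaxes the kernel's local-port uniqueness check in the bind-then-connect pattern. A minimal sketch, assuming loopback is available; the class name is hypothetical and this is not Hadoop's actual Client code:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class ReuseAddrDemo {
  // Create an unconnected socket, enable SO_REUSEADDR, then bind to an
  // ephemeral port (port 0) on loopback -- the same order Client.java
  // would need: the option must be set before bind() to take effect.
  public static Socket boundSocket() {
    try {
      Socket s = new Socket();
      s.setReuseAddress(true);                        // before bind()
      s.bind(new InetSocketAddress("127.0.0.1", 0));  // kernel picks port
      return s;
    } catch (IOException e) {
      throw new RuntimeException(e);
    }
  }
}
```

As noted above, this trades "address already in use" at bind time for a rare {{EADDRNOTAVAIL}} at connect time if two threads race to the same source port, which a retry can absorb.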

> Client.java can get "Address already in use" when using kerberos and 
> attempting to bind to any port on the local IP address
> ---
>
> Key: HADOOP-12653
> URL: https://issues.apache.org/jira/browse/HADOOP-12653
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: net
>Affects Versions: 2.4.0
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
>
> Client.java can get "Address already in use" when using kerberos and 
> attempting to bind to any port on the local IP address.  It appears to be 
> caused by the host running out of ports in the ephemeral range.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12587) Hadoop AuthToken refuses to work without a maxinactive attribute in issued token

2015-12-16 Thread Benoy Antony (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12587?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15061372#comment-15061372
 ] 

Benoy Antony commented on HADOOP-12587:
---

Not sure what happened here. I can apply the patch and build fine on my local 
machine.

> Hadoop AuthToken refuses to work without a maxinactive attribute in issued 
> token
> 
>
> Key: HADOOP-12587
> URL: https://issues.apache.org/jira/browse/HADOOP-12587
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.8.0
> Environment: OSX heimdal kerberos client against Linux KDC -talking 
> to a Hadoop 2.6.0 cluster
>Reporter: Steve Loughran
>Assignee: Benoy Antony
>Priority: Blocker
> Attachments: HADOOP-12587-001.patch
>
>
> If you don't have a max-inactive attribute in the auth token returned from 
> the web site, AuthToken will raise an exception. This stops callers without 
> this token being able to submit jobs to a secure Hadoop 2.6 YARN cluster with 
> timeline server enabled. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12356) CPU usage statistics on Windows

2015-12-16 Thread Inigo Goiri (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15061391#comment-15061391
 ] 

Inigo Goiri commented on HADOOP-12356:
--

{code:title=SysInfoLinux.java|borderStyle=solid}
  @Override
  public float getCpuUsage() {
readProcStatFile();
float overallCpuUsage = cpuTimeTracker.getCpuTrackerUsagePercent();
if (overallCpuUsage != CpuTimeTracker.UNAVAILABLE) {
  overallCpuUsage = overallCpuUsage / getNumProcessors();
}
return overallCpuUsage;
  }
{code}

This shows that it will take the summation of all the CPU percentages and 
divide it by the number of cores to make a number between 0 and 100.

> CPU usage statistics on Windows
> ---
>
> Key: HADOOP-12356
> URL: https://issues.apache.org/jira/browse/HADOOP-12356
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
> Environment: CPU: Intel Xeon
> OS: Windows server
>Reporter: Yunqi Zhang
>Assignee: Yunqi Zhang
>  Labels: easyfix, newbie, patch
> Attachments: 0001-Correct-the-CPU-usage-calcualtion.patch, 
> 0001-Correct-the-CPU-usage-calcualtion.patch, HADOOP-12356-v3.patch
>
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> The CPU usage information on Windows is computed incorrectly. The proposed 
> patch fixes the issue and unifies the interface with Linux.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12646) NPE thrown at KMS startup

2015-12-16 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12646?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15059857#comment-15059857
 ] 

Steve Loughran commented on HADOOP-12646:
-

Can you tag this with the Hadoop version you are seeing it with? Ideally, see 
whether you can replicate it on branch-2 or trunk from git.

> NPE thrown at KMS startup
> -
>
> Key: HADOOP-12646
> URL: https://issues.apache.org/jira/browse/HADOOP-12646
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms
>Reporter: Archana T
>Assignee: Archana T
>
> NPE thrown while starting KMS --
> ERROR: Hadoop KMS could not be started
> REASON: java.lang.NullPointerException
> Stacktrace:
> ---
> java.lang.NullPointerException
> at 
> org.apache.hadoop.security.ProviderUtils.unnestUri(ProviderUtils.java:35)
> at 
> org.apache.hadoop.crypto.key.JavaKeyStoreProvider.(JavaKeyStoreProvider.java:134)
> at 
> org.apache.hadoop.crypto.key.JavaKeyStoreProvider.(JavaKeyStoreProvider.java:91)
> at 
> org.apache.hadoop.crypto.key.JavaKeyStoreProvider$Factory.createProvider(JavaKeyStoreProvider.java:669)
> at 
> org.apache.hadoop.crypto.key.KeyProviderFactory.get(KeyProviderFactory.java:95)
> at 
> org.apache.hadoop.crypto.key.kms.server.KMSWebApp.contextInitialized(KMSWebApp.java:167)
> at 
> org.apache.catalina.core.StandardContext.listenerStart(StandardContext.java:5017)
> at 
> org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5531)
> at 
> org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:150)
> at 
> org.apache.catalina.core.ContainerBase.addChildInternal(ContainerBase.java:901)
> at 
> org.apache.catalina.core.ContainerBase.addChild(ContainerBase.java:877)
> at 
> org.apache.catalina.core.StandardHost.addChild(StandardHost.java:652)
> at 
> org.apache.catalina.startup.HostConfig.deployDirectory(HostConfig.java:1263)
> at 
> org.apache.catalina.startup.HostConfig$DeployDirectory.run(HostConfig.java:1948)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)





[jira] [Created] (HADOOP-12648) Not able to compile hadoop source code on windows

2015-12-16 Thread Pardeep (JIRA)
Pardeep created HADOOP-12648:


 Summary: Not able to compile hadoop source code on windows
 Key: HADOOP-12648
 URL: https://issues.apache.org/jira/browse/HADOOP-12648
 Project: Hadoop Common
  Issue Type: Wish
  Components: build
Affects Versions: 2.6.2
 Environment: WIndow 7 32 bit 
Maven 3.3.9
Protoc 2.5.0
Cmake 3.3.2
Zlib 1.2.7
Cygwin

Reporter: Pardeep


I have added the paths as below:

cmake_path =C:\cmake
FINDBUGS_HOME=C:\FINDBUGS_HOME
HADOOP_HOME=C:\HOOO\hadoop-2.6.2-src
path=C:\JAVA\bin
ZLIB_HOME=C:\zlib-1.2.7
path 
=C:\oraclexe\app\oracle\product\11.2.0\server\bin;D:\Forms\bin;D:\Reports\bin;D:\oracle\ora92\bin;C:\Program
 Files\Oracle\jre\1.3.1\bin;C:\Program 
Files\Oracle\jre\1.1.8\bin;D:\Workflow\bin;C:\Program Files\Intel\iCLS 
Client\;%SystemRoot%\system32;%SystemRoot%;%SystemRoot%\System32\Wbem;%SYSTEMROOT%\System32\WindowsPowerShell\v1.0\;C:\Program
 Files\Intel\Intel(R) Management Engine Components\DAL;C:\Program 
Files\Intel\Intel(R) Management Engine Components\IPT;C:\Program 
Files\WIDCOMM\Bluetooth Software\;C:\Program Files\Intel\WiFi\bin\;C:\Program 
Files\Common Files\Intel\WirelessCommon\;D:\Forms\jdk\bin;C:\Program 
Files\Intel\OpenCL SDK\2.0\bin\x86;D:\Reports\jdk\bin;C:\Program 
Files\TortoiseSVN\bin;c:\cygwin\bin;%M2_HOME%\bin;C:\protobuf;C/Windows/Microsoft.NET/Framework/v4.0.30319;C:\Program
 Files\Microsoft Windows Performance 
Toolkit\;C:\msysgit\Git\cmd;C:\msysgit\bin\;C:\msysgit\mingw\bin\;C:\cmake;C:\FINDBUGS_HOME;C:\zlib-1.2.7

Please let me know if anything is wrong or if I need to install any other software.





[jira] [Resolved] (HADOOP-12648) Not able to compile hadoop source code on windows

2015-12-16 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12648?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula resolved HADOOP-12648.
---
Resolution: Not A Problem

> Not able to compile hadoop source code on windows
> -
>
> Key: HADOOP-12648
> URL: https://issues.apache.org/jira/browse/HADOOP-12648
> Project: Hadoop Common
>  Issue Type: Wish
>  Components: build
>Affects Versions: 2.6.2
> Environment: WIndow 7 32 bit 
> Maven 3.3.9
> Protoc 2.5.0
> Cmake 3.3.2
> Zlib 1.2.7
> Cygwin
>Reporter: Pardeep
>
> I haved added the path as per below :
> cmake_path =C:\cmake
> FINDBUGS_HOME=C:\FINDBUGS_HOME
> HADOOP_HOME=C:\HOOO\hadoop-2.6.2-src
> path=C:\JAVA\bin
> ZLIB_HOME=C:\zlib-1.2.7
> path 
> =C:\oraclexe\app\oracle\product\11.2.0\server\bin;D:\Forms\bin;D:\Reports\bin;D:\oracle\ora92\bin;C:\Program
>  Files\Oracle\jre\1.3.1\bin;C:\Program 
> Files\Oracle\jre\1.1.8\bin;D:\Workflow\bin;C:\Program Files\Intel\iCLS 
> Client\;%SystemRoot%\system32;%SystemRoot%;%SystemRoot%\System32\Wbem;%SYSTEMROOT%\System32\WindowsPowerShell\v1.0\;C:\Program
>  Files\Intel\Intel(R) Management Engine Components\DAL;C:\Program 
> Files\Intel\Intel(R) Management Engine Components\IPT;C:\Program 
> Files\WIDCOMM\Bluetooth Software\;C:\Program Files\Intel\WiFi\bin\;C:\Program 
> Files\Common Files\Intel\WirelessCommon\;D:\Forms\jdk\bin;C:\Program 
> Files\Intel\OpenCL SDK\2.0\bin\x86;D:\Reports\jdk\bin;C:\Program 
> Files\TortoiseSVN\bin;c:\cygwin\bin;%M2_HOME%\bin;C:\protobuf;C/Windows/Microsoft.NET/Framework/v4.0.30319;C:\Program
>  Files\Microsoft Windows Performance 
> Toolkit\;C:\msysgit\Git\cmd;C:\msysgit\bin\;C:\msysgit\mingw\bin\;C:\cmake;C:\FINDBUGS_HOME;C:\zlib-1.2.7
> Please let me know if anything is wrong or need to install any other s/w





[jira] [Commented] (HADOOP-12649) Improve UGI diagnostics and failure handling

2015-12-16 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15059982#comment-15059982
 ] 

Allen Wittenauer commented on HADOOP-12649:
---

There is nothing better than waking up to find some magic, undocumented env var 
like HADOOP_JAAS_DEBUG.

> Improve UGI diagnostics and failure handling
> 
>
> Key: HADOOP-12649
> URL: https://issues.apache.org/jira/browse/HADOOP-12649
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.7.1
> Environment: Kerberos
>Reporter: Steve Loughran
>
> Sometimes —apparently— some people cannot get kerberos to work.
> The ability to diagnose problems here is hampered by some aspects of UGI
> # the only way to turn on JAAS debug information is through an env var, not 
> within the JVM
> # failures are potentially underlogged
> # exceptions raised are generic IOEs, so can't be trapped and filtered
> # failure handling on the TGT renewer thread is nonexistent
> # the code is barely-readable, underdocumented mess.





[jira] [Commented] (HADOOP-12649) Improve UGI diagnostics and failure handling

2015-12-16 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15059985#comment-15059985
 ] 

Steve Loughran commented on HADOOP-12649:
-

SLIDER-1027 covers a kerberos diagnostics command line entry point I'm adding in 
slider; it has to go into the hadoop.security package to be able to force 
keytab renewal.

It's a bit limited in what it can debug; there's not enough information for 
diagnostics, and when things like the renewer thread exit, there is no obvious 
way to determine that fact. At the very least, there should be some boolean we 
can probe to see if the thread is running.

> Improve UGI diagnostics and failure handling
> 
>
> Key: HADOOP-12649
> URL: https://issues.apache.org/jira/browse/HADOOP-12649
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.7.1
> Environment: Kerberos
>Reporter: Steve Loughran
>
> Sometimes —apparently— some people cannot get kerberos to work.
> The ability to diagnose problems here is hampered by some aspects of UGI
> # the only way to turn on JAAS debug information is through an env var, not 
> within the JVM
> # failures are potentially underlogged
> # exceptions raised are generic IOEs, so can't be trapped and filtered
> # failure handling on the TGT renewer thread is nonexistent
> # the code is barely-readable, underdocumented mess.





[jira] [Commented] (HADOOP-12649) Improve UGI diagnostics and failure handling

2015-12-16 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15059989#comment-15059989
 ] 

Steve Loughran commented on HADOOP-12649:
-

Except for the secret sysprops needed for JRE debugging, like 
{{sun.security.spnego.debug}}.

See also: 
[https://steveloughran.gitbooks.io/kerberos_and_hadoop/content/sections/secrets.html]

> Improve UGI diagnostics and failure handling
> 
>
> Key: HADOOP-12649
> URL: https://issues.apache.org/jira/browse/HADOOP-12649
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.7.1
> Environment: Kerberos
>Reporter: Steve Loughran
>
> Sometimes —apparently— some people cannot get kerberos to work.
> The ability to diagnose problems here is hampered by some aspects of UGI
> # the only way to turn on JAAS debug information is through an env var, not 
> within the JVM
> # failures are potentially underlogged
> # exceptions raised are generic IOEs, so can't be trapped and filtered
> # failure handling on the TGT renewer thread is nonexistent
> # the code is barely-readable, underdocumented mess.





[jira] [Commented] (HADOOP-12649) Improve UGI diagnostics and failure handling

2015-12-16 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15059990#comment-15059990
 ] 

Allen Wittenauer commented on HADOOP-12649:
---

Opened  HADOOP-12650 to basically audit the entire code base for all of the 
getenvs in the Java code. 

> Improve UGI diagnostics and failure handling
> 
>
> Key: HADOOP-12649
> URL: https://issues.apache.org/jira/browse/HADOOP-12649
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.7.1
> Environment: Kerberos
>Reporter: Steve Loughran
>
> Sometimes —apparently— some people cannot get kerberos to work.
> The ability to diagnose problems here is hampered by some aspects of UGI
> # the only way to turn on JAAS debug information is through an env var, not 
> within the JVM
> # failures are potentially underlogged
> # exceptions raised are generic IOEs, so can't be trapped and filtered
> # failure handling on the TGT renewer thread is nonexistent
> # the code is barely-readable, underdocumented mess.





[jira] [Created] (HADOOP-12650) Document all of the secret env vars

2015-12-16 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HADOOP-12650:
-

 Summary: Document all of the secret env vars
 Key: HADOOP-12650
 URL: https://issues.apache.org/jira/browse/HADOOP-12650
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
Priority: Blocker


Over the years, developers have added all kinds of magical environment 
variables in the Java code without any concern or thought about either a) 
documenting them or b) whether they are already used by something else.  We 
need to update at least hadoop-env.sh to contain a list of these env vars so 
that end users know that they are either private/unsafe and/or how they can be 
used.

Just one of many examples: HADOOP_JAAS_DEBUG.





[jira] [Updated] (HADOOP-12642) Update documentation to cover fs.s3.buffer.dir enhancements

2015-12-16 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-12642:

Affects Version/s: 2.6.0
 Priority: Minor  (was: Major)
Fix Version/s: (was: 2.6.0)
  Component/s: fs/s3

> Update documentation to cover fs.s3.buffer.dir enhancements
> ---
>
> Key: HADOOP-12642
> URL: https://issues.apache.org/jira/browse/HADOOP-12642
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: documentation, fs/s3
>Affects Versions: 2.6.0
>Reporter: Jason Archip
>Priority: Minor
>
> Could you please update the documentation at 
> https://hadoop.apache.org/docs/stable/hadoop-aws/tools/hadoop-aws/index.html 
> to include the options for the updated fs.s3.buffer.dir
> Please let me know if there is a different place to put my request
> Thanks,
> Jason Archip





[jira] [Updated] (HADOOP-12642) Update documentation to cover fs.s3.buffer.dir enhancements

2015-12-16 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-12642:

Parent Issue: HADOOP-11694  (was: HADOOP-10610)

> Update documentation to cover fs.s3.buffer.dir enhancements
> ---
>
> Key: HADOOP-12642
> URL: https://issues.apache.org/jira/browse/HADOOP-12642
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: documentation, fs/s3
>Affects Versions: 2.6.0
>Reporter: Jason Archip
>
> Could you please update the documentation at 
> https://hadoop.apache.org/docs/stable/hadoop-aws/tools/hadoop-aws/index.html 
> to include the options for the updated fs.s3.buffer.dir
> Please let me know if there is a different place to put my request
> Thanks,
> Jason Archip





[jira] [Commented] (HADOOP-12649) Improve UGI diagnostics and failure handling

2015-12-16 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15059963#comment-15059963
 ] 

Steve Loughran commented on HADOOP-12649:
-

If you can't renew a ticket because you were kinited in and the ticket has 
expired, the renewer thread exits with nothing but a warning. It doesn't even 
print the stack trace of the nested exception.

{code}
2015-12-16 12:57:44,005 [TGT Renewer for stevel@COTHAM] WARN  
security.UserGroupInformation (run(914)) - Exception encountered while running 
the renewal command. Aborting renew thread. ExitCodeException exitCode=1: 
kinit: krb5_get_kdc_cred: Error from KDC: TKT_EXPIRED
{code}

A near-silent failure is not always what you want. There is nothing to prevent 
a renewal-failure action from being provided to this thread, allowing an 
application-level action to be performed (maybe even a retry).
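A sketch of what such a renewal-failure hook could look like. The class, interface, and method names here are hypothetical; the real UGI renewer thread has no such callback, which is the point of this issue.

```java
public class RenewerSketch {
  /** Hypothetical callback invoked when TGT renewal fails, instead of the
   *  renewer thread silently exiting with a one-line warning. */
  public interface RenewalFailureHandler {
    void onRenewalFailure(Exception cause);
  }

  private final RenewalFailureHandler handler;

  public RenewerSketch(RenewalFailureHandler handler) {
    this.handler = handler;
  }

  // Simulate one renewal attempt; on failure, surface the full stack trace
  // and hand the exception to the application-level handler.
  public boolean attemptRenewal(Runnable renewCommand) {
    try {
      renewCommand.run();
      return true;
    } catch (Exception e) {
      e.printStackTrace();         // don't swallow the nested exception
      handler.onRenewalFailure(e); // let the application react, e.g. retry
      return false;
    }
  }
}
```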


> Improve UGI diagnostics and failure handling
> 
>
> Key: HADOOP-12649
> URL: https://issues.apache.org/jira/browse/HADOOP-12649
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.7.1
> Environment: Kerberos
>Reporter: Steve Loughran
>
> Sometimes —apparently— some people cannot get kerberos to work.
> The ability to diagnose problems here is hampered by some aspects of UGI
> # the only way to turn on JAAS debug information is through an env var, not 
> within the JVM
> # failures are potentially underlogged
> # exceptions raised are generic IOEs, so can't be trapped and filtered
> # failure handling on the TGT renewer thread is nonexistent
> # the code is barely-readable, underdocumented mess.





[jira] [Commented] (HADOOP-12649) Improve UGI diagnostics and failure handling

2015-12-16 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15059976#comment-15059976
 ] 

Steve Loughran commented on HADOOP-12649:
-

Example: the only way to debug JAAS internals is to set the env var 
{{HADOOP_JAAS_DEBUG}}. It is therefore impossible to enable this from inside 
the JVM.

Better: provide a method to turn this on, and/or hook it up to the log level of 
UGI itself. That is, if the UGI log is at debug, turn JAAS debugging on automatically.
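A minimal sketch of that hook-up, assuming the caller already knows whether UGI-level debug logging is on (the class and method names are hypothetical; the {{sun.security.*}} properties are the JRE debug switches discussed in this thread):

```java
public class JaasDebugSketch {
  // If UGI-level debug logging is enabled, propagate that to the JRE's
  // Kerberos/SPNEGO debug system properties instead of requiring an env var.
  public static void enableJaasDebugIf(boolean ugiDebugEnabled) {
    if (ugiDebugEnabled) {
      System.setProperty("sun.security.krb5.debug", "true");
      System.setProperty("sun.security.spnego.debug", "true");
    }
  }

  public static void main(String[] args) {
    enableJaasDebugIf(true);
    System.out.println(System.getProperty("sun.security.krb5.debug"));
  }
}
```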

> Improve UGI diagnostics and failure handling
> 
>
> Key: HADOOP-12649
> URL: https://issues.apache.org/jira/browse/HADOOP-12649
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.7.1
> Environment: Kerberos
>Reporter: Steve Loughran
>
> Sometimes —apparently— some people cannot get kerberos to work.
> The ability to diagnose problems here is hampered by some aspects of UGI
> # the only way to turn on JAAS debug information is through an env var, not 
> within the JVM
> # failures are potentially underlogged
> # exceptions raised are generic IOEs, so can't be trapped and filtered
> # failure handling on the TGT renewer thread is nonexistent
> # the code is barely-readable, underdocumented mess.





[jira] [Commented] (HADOOP-12648) Not able to compile hadoop source code on windows

2015-12-16 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12648?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15059975#comment-15059975
 ] 

Brahma Reddy Battula commented on HADOOP-12648:
---

[~PardeepJangid] thanks for reporting. JIRA is for tracking issues; please 
post your queries to the [user mailing 
list|https://hadoop.apache.org/mailing_lists.html].



> Not able to compile hadoop source code on windows
> -
>
> Key: HADOOP-12648
> URL: https://issues.apache.org/jira/browse/HADOOP-12648
> Project: Hadoop Common
>  Issue Type: Wish
>  Components: build
>Affects Versions: 2.6.2
> Environment: WIndow 7 32 bit 
> Maven 3.3.9
> Protoc 2.5.0
> Cmake 3.3.2
> Zlib 1.2.7
> Cygwin
>Reporter: Pardeep
>
> I haved added the path as per below :
> cmake_path =C:\cmake
> FINDBUGS_HOME=C:\FINDBUGS_HOME
> HADOOP_HOME=C:\HOOO\hadoop-2.6.2-src
> path=C:\JAVA\bin
> ZLIB_HOME=C:\zlib-1.2.7
> path 
> =C:\oraclexe\app\oracle\product\11.2.0\server\bin;D:\Forms\bin;D:\Reports\bin;D:\oracle\ora92\bin;C:\Program
>  Files\Oracle\jre\1.3.1\bin;C:\Program 
> Files\Oracle\jre\1.1.8\bin;D:\Workflow\bin;C:\Program Files\Intel\iCLS 
> Client\;%SystemRoot%\system32;%SystemRoot%;%SystemRoot%\System32\Wbem;%SYSTEMROOT%\System32\WindowsPowerShell\v1.0\;C:\Program
>  Files\Intel\Intel(R) Management Engine Components\DAL;C:\Program 
> Files\Intel\Intel(R) Management Engine Components\IPT;C:\Program 
> Files\WIDCOMM\Bluetooth Software\;C:\Program Files\Intel\WiFi\bin\;C:\Program 
> Files\Common Files\Intel\WirelessCommon\;D:\Forms\jdk\bin;C:\Program 
> Files\Intel\OpenCL SDK\2.0\bin\x86;D:\Reports\jdk\bin;C:\Program 
> Files\TortoiseSVN\bin;c:\cygwin\bin;%M2_HOME%\bin;C:\protobuf;C/Windows/Microsoft.NET/Framework/v4.0.30319;C:\Program
>  Files\Microsoft Windows Performance 
> Toolkit\;C:\msysgit\Git\cmd;C:\msysgit\bin\;C:\msysgit\mingw\bin\;C:\cmake;C:\FINDBUGS_HOME;C:\zlib-1.2.7
> Please let me know if anything is wrong or need to install any other s/w





[jira] [Created] (HADOOP-12649) Improve UGI diagnostics and failure handling

2015-12-16 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-12649:
---

 Summary: Improve UGI diagnostics and failure handling
 Key: HADOOP-12649
 URL: https://issues.apache.org/jira/browse/HADOOP-12649
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 2.7.1
 Environment: Kerberos
Reporter: Steve Loughran


Sometimes —apparently— some people cannot get kerberos to work.

The ability to diagnose problems here is hampered by some aspects of UGI
# the only way to turn on JAAS debug information is through an env var, not 
within the JVM
# failures are potentially underlogged
# exceptions raised are generic IOEs, so can't be trapped and filtered
# failure handling on the TGT renewer thread is nonexistent
# the code is a barely-readable, underdocumented mess.






[jira] [Commented] (HADOOP-12426) Add Entry point for Kerberos health check

2015-12-16 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12426?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15060048#comment-15060048
 ] 

Steve Loughran commented on HADOOP-12426:
-

The UGI improvements of HADOOP-12649 are a foundation for this

> Add Entry point for Kerberos health check
> -
>
> Key: HADOOP-12426
> URL: https://issues.apache.org/jira/browse/HADOOP-12426
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: security
>Affects Versions: 3.0.0
>Reporter: Steve Loughran
>Priority: Minor
>
> If we had a little command line entry point for testing kerberos settings, 
> including some automated diagnostics checks, we could simplify fielding the 
> client-side support calls.
> Specifically
> * check JRE for having java crypto extensions at full key length.
> * network checks: do you know your own name?
> * Is the user kinited in?
> * if a tgt is specified, does it exist?
> * are hadoop security options consistent?





[jira] [Commented] (HADOOP-12649) Improve UGI diagnostics and failure handling

2015-12-16 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15060045#comment-15060045
 ] 

Steve Loughran commented on HADOOP-12649:
-

{{loginUserFromKeytabAndReturnUGI}} could perhaps check its parameters 
—especially principal— for being non-null; and that the keytab exists and is 
non-empty. It could then fail with more useful messages than "login failure for 
user null"
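The parameter checks suggested above could look like this sketch. {{LoginChecksSketch}} and its method are hypothetical; they only illustrate the kind of pre-flight validation {{loginUserFromKeytabAndReturnUGI}} could do before attempting the actual login.

```java
import java.io.File;
import java.io.IOException;

public class LoginChecksSketch {
  // Hypothetical pre-flight validation for a keytab login, failing with
  // specific messages rather than "login failure for user null".
  public static void validate(String principal, String keytabPath)
      throws IOException {
    if (principal == null || principal.isEmpty()) {
      throw new IOException("No principal supplied for keytab login");
    }
    File keytab = new File(keytabPath);
    if (!keytab.exists()) {
      throw new IOException("Keytab not found: " + keytab.getAbsolutePath());
    }
    if (keytab.length() == 0) {
      throw new IOException("Keytab is empty: " + keytab.getAbsolutePath());
    }
  }
}
```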

> Improve UGI diagnostics and failure handling
> 
>
> Key: HADOOP-12649
> URL: https://issues.apache.org/jira/browse/HADOOP-12649
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.7.1
> Environment: Kerberos
>Reporter: Steve Loughran
>
> Sometimes —apparently— some people cannot get kerberos to work.
> The ability to diagnose problems here is hampered by some aspects of UGI
> # the only way to turn on JAAS debug information is through an env var, not 
> within the JVM
> # failures are potentially underlogged
> # exceptions raised are generic IOEs, so can't be trapped and filtered
> # failure handling on the TGT renewer thread is nonexistent
> # the code is barely-readable, underdocumented mess.





[jira] [Commented] (HADOOP-12426) Add Entry point for Kerberos health check

2015-12-16 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12426?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15060049#comment-15060049
 ] 

Steve Loughran commented on HADOOP-12426:
-

SLIDER-1027 is the first implementation of this; it's in the hadoop security 
package and uses only hadoop operations, as a precursor to migration. I just 
need it without waiting for a future hadoop release.

> Add Entry point for Kerberos health check
> -
>
> Key: HADOOP-12426
> URL: https://issues.apache.org/jira/browse/HADOOP-12426
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: security
>Affects Versions: 3.0.0
>Reporter: Steve Loughran
>Priority: Minor
>
> If we had a little command line entry point for testing kerberos settings, 
> including some automated diagnostics checks, we could simplify fielding the 
> client-side support calls.
> Specifically
> * check JRE for having java crypto extensions at full key length.
> * network checks: do you know your own name?
> * Is the user kinited in?
> * if a tgt is specified, does it exist?
> * are hadoop security options consistent?





[jira] [Created] (HADOOP-12651) Replace dev-support with wrappers to Yetus

2015-12-16 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HADOOP-12651:
-

 Summary: Replace dev-support with wrappers to Yetus
 Key: HADOOP-12651
 URL: https://issues.apache.org/jira/browse/HADOOP-12651
 Project: Hadoop Common
  Issue Type: New Feature
  Components: scripts
Affects Versions: 3.0.0
Reporter: Allen Wittenauer


Now that Yetus has had a release, we should rip out the components that make it 
up from dev-support and replace them with wrappers.  The wrappers should:
* default to a sane version
* allow for version overrides via an env var
* download into patchprocess
* execute with the given parameters

Marking this as an incompatible change, since we should also remove the 
filename extensions and move these into a bin directory for better 
maintainability going forward.





[jira] [Commented] (HADOOP-12192) update releasedocmaker commands

2015-12-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15060060#comment-15060060
 ] 

Hudson commented on HADOOP-12192:
-

FAILURE: Integrated in Hadoop-trunk-Commit #8974 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8974/])
HADOOP-12192. update releasedocmaker command line (aw) (aw: rev 
607473e1d047ccd2a2c3804ae94e04f133af9cc2)
* hadoop-common-project/hadoop-common/pom.xml


> update releasedocmaker commands
> ---
>
> Key: HADOOP-12192
> URL: https://issues.apache.org/jira/browse/HADOOP-12192
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Blocker
> Fix For: 3.0.0
>
> Attachments: HADOOP-12192.00.patch, HADOOP-12192.01.patch
>
>
> If HADOOP-12135 gets committed and Hadoop switches to use 
> Yetus/Yetus-compatible, then the pom.xml that runs releasedocmaker will need 
> to get updated as well.





[jira] [Commented] (HADOOP-12650) Document all of the secret env vars

2015-12-16 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15060088#comment-15060088
 ] 

Steve Loughran commented on HADOOP-12650:
-

From the slider kdiags code, these are the ones from UGI alone:
{code}
new String[]{
  "HADOOP_JAAS_DEBUG",
  KRB5_CCNAME,
  HADOOP_USER_NAME,
  HADOOP_PROXY_USER,
  HADOOP_TOKEN_FILE_LOCATION,
}) {
{code}
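A diagnostics loop over those variables could look like the following sketch. The class and method are hypothetical, and the array spells out the likely values of the constants in the snippet above (e.g. the standard Kerberos credential-cache variable is {{KRB5CCNAME}}); the actual constant values in the slider code are an assumption here.

```java
public class EnvDumpSketch {
  // Assumed expansion of the constants referenced in the snippet above.
  static final String[] UGI_ENV_VARS = {
    "HADOOP_JAAS_DEBUG",
    "KRB5CCNAME",
    "HADOOP_USER_NAME",
    "HADOOP_PROXY_USER",
    "HADOOP_TOKEN_FILE_LOCATION",
  };

  // Render each variable as "NAME=value" or "NAME is unset".
  public static String describe(String name, String value) {
    return value == null ? name + " is unset" : name + "=" + value;
  }

  public static void main(String[] args) {
    for (String name : UGI_ENV_VARS) {
      System.out.println(describe(name, System.getenv(name)));
    }
  }
}
```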

> Document all of the secret env vars
> ---
>
> Key: HADOOP-12650
> URL: https://issues.apache.org/jira/browse/HADOOP-12650
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Priority: Blocker
>
> Over the years, developers have added all kinds of magical environment 
> variables in the Java code without any concern or thought about either a) 
> documenting them or b) whether they are already used by something else.  We 
> need to update at least hadoop-env.sh to contain a list of these env vars so 
> that end users know that they are either private/unsafe and/or how they can 
> be used.
> Just one of many examples: HADOOP_JAAS_DEBUG.





[jira] [Updated] (HADOOP-12192) update releasedocmaker commands

2015-12-16 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12192?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-12192:
--
   Resolution: Fixed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

Thanks! Committed to trunk

> update releasedocmaker commands
> ---
>
> Key: HADOOP-12192
> URL: https://issues.apache.org/jira/browse/HADOOP-12192
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Blocker
> Fix For: 3.0.0
>
> Attachments: HADOOP-12192.00.patch, HADOOP-12192.01.patch
>
>
> If HADOOP-12135 gets committed and Hadoop switches to use 
> Yetus/Yetus-compatible, then the pom.xml that runs releasedocmaker will need 
> to get updated as well.





[jira] [Commented] (HADOOP-12649) Improve UGI diagnostics and failure handling

2015-12-16 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15060156#comment-15060156
 ] 

Steve Loughran commented on HADOOP-12649:
-

ZOOKEEPER-2344 would benefit from some of this


> Improve UGI diagnostics and failure handling
> 
>
> Key: HADOOP-12649
> URL: https://issues.apache.org/jira/browse/HADOOP-12649
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.7.1
> Environment: Kerberos
>Reporter: Steve Loughran
>
> Sometimes —apparently— some people cannot get kerberos to work.
> The ability to diagnose problems here is hampered by some aspects of UGI
> # the only way to turn on JAAS debug information is through an env var, not 
> within the JVM
> # failures are potentially underlogged
> # exceptions raised are generic IOEs, so can't be trapped and filtered
> # failure handling on the TGT renewer thread is nonexistent
> # the code is barely-readable, underdocumented mess.





[jira] [Commented] (HADOOP-12650) Document all of the secret env vars

2015-12-16 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15060168#comment-15060168
 ] 

Allen Wittenauer commented on HADOOP-12650:
---

Luckily, KRB5_CCNAME is actually in the Kerberos man pages. :)

> Document all of the secret env vars
> ---
>
> Key: HADOOP-12650
> URL: https://issues.apache.org/jira/browse/HADOOP-12650
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: scripts
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Priority: Blocker
>
> Over the years, developers have added all kinds of magical environment 
> variables in the Java code without any concern or thought about either a) 
> documenting them or b) whether they are already used by something else.  We 
> need to update at least hadoop-env.sh to contain a list of these env vars so 
> that end users know that they are either private/unsafe and/or how they can 
> be used.
> Just one of many examples: HADOOP_JAAS_DEBUG.





[jira] [Updated] (HADOOP-12642) Update documentation to cover fs.s3.buffer.dir enhancements

2015-12-16 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12642?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-12642:

Summary: Update documentation to cover fs.s3.buffer.dir enhancements  (was: 
Update documentation to reflect these changes )

> Update documentation to cover fs.s3.buffer.dir enhancements
> ---
>
> Key: HADOOP-12642
> URL: https://issues.apache.org/jira/browse/HADOOP-12642
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Jason Archip
> Fix For: 2.6.0
>
>
> Could you please update the documentation at 
> https://hadoop.apache.org/docs/stable/hadoop-aws/tools/hadoop-aws/index.html 
> to include the options for the updated fs.s3.buffer.dir
> Please let me know if there is a different place to put my request
> Thanks,
> Jason Archip





[jira] [Commented] (HADOOP-12649) Improve UGI diagnostics and failure handling

2015-12-16 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15060187#comment-15060187
 ] 

Steve Loughran commented on HADOOP-12649:
-

Some access (package-scoped at least) to User and its LoginContext would help 
debug that.

> Improve UGI diagnostics and failure handling
> 
>
> Key: HADOOP-12649
> URL: https://issues.apache.org/jira/browse/HADOOP-12649
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.7.1
> Environment: Kerberos
>Reporter: Steve Loughran
>
> Sometimes, apparently, some people cannot get Kerberos to work.
> The ability to diagnose problems here is hampered by some aspects of UGI:
> # the only way to turn on JAAS debug information is through an env var, not 
> within the JVM
> # failures are potentially underlogged
> # exceptions raised are generic IOEs, so can't be trapped and filtered
> # failure handling on the TGT renewer thread is nonexistent
> # the code is a barely readable, underdocumented mess.
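Point 1 in the list above refers to the environment-variable route. A minimal sketch of what turning on the debug knobs from outside the JVM looks like today; HADOOP_JAAS_DEBUG is the Hadoop-side switch, and the -D properties are the stock JDK Kerberos tracing flags, not Hadoop additions:

```shell
# Hadoop's UGI reads this from the environment to enable JAAS debug output;
# there is no equivalent in-JVM switch, which is the complaint above.
export HADOOP_JAAS_DEBUG=true

# The JDK's own Kerberos/SPNEGO tracing can be passed via HADOOP_OPTS.
export HADOOP_OPTS="-Dsun.security.krb5.debug=true -Dsun.security.spnego.debug=true"

# Any client command will now log the GSS/Kerberos exchange to stderr.
hadoop fs -ls /
```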



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12649) Improve UGI diagnostics and failure handling

2015-12-16 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15060197#comment-15060197
 ] 

Steve Loughran commented on HADOOP-12649:
-

The {{isLoginKeytabBased}} and {{isLoginTicketBased}} calls could be exposed 
on UGI instances; currently they are static and hard-coded to the login user.

> Improve UGI diagnostics and failure handling
> 
>
> Key: HADOOP-12649
> URL: https://issues.apache.org/jira/browse/HADOOP-12649
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.7.1
> Environment: Kerberos
>Reporter: Steve Loughran
>
> Sometimes, apparently, some people cannot get Kerberos to work.
> The ability to diagnose problems here is hampered by some aspects of UGI:
> # the only way to turn on JAAS debug information is through an env var, not 
> within the JVM
> # failures are potentially underlogged
> # exceptions raised are generic IOEs, so can't be trapped and filtered
> # failure handling on the TGT renewer thread is nonexistent
> # the code is a barely readable, underdocumented mess.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12647) Allowing to use -Drequire.isal but without -Disal.prefix and -Disal.lib to build ISA-L support

2015-12-16 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15059720#comment-15059720
 ] 

Kai Zheng commented on HADOOP-12647:


Tested the changes as follows:
* Case 1. Build with the default ISA-L installation
** Installed ISA-L to /usr/lib, the default folder the library uses
** Built Hadoop with the -Drequire.isal option only; {{mvn -Pnative 
package -Drequire.isal -DskipTests -Pdist}} passed
* Case 2. Build with a customized ISA-L installation
** Installed ISA-L to a custom folder, e.g. /tmp/isal
** Built Hadoop against that installation; {{mvn -Pnative package 
-Drequire.isal -Disal.prefix=/tmp/isal -DskipTests -Pdist}} passed
* Case 3. Build the tar dist package with a customized ISA-L installation
** Installed ISA-L to a custom folder, e.g. /tmp/isal
** Built Hadoop against that installation; {{mvn -Pnative package 
-Drequire.isal -Disal.lib=/tmp/isal -Dbundle.isal -DskipTests -Pdist -Dtar}} 
passed
** Checked that the resulting tar package is valid and contains the ISA-L 
library files
In all cases, {{hadoop checknative}} passed.
Negative cases were also tested.

> Allowing to use -Drequire.isal but without -Disal.prefix and -Disal.lib to 
> build ISA-L support
> --
>
> Key: HADOOP-12647
> URL: https://issues.apache.org/jira/browse/HADOOP-12647
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 3.0.0
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Attachments: HADOOP-12647-v1.patch
>
>
> It was found that the build fails when using -Drequire.isal alone (without 
> the -Disal.prefix and -Disal.lib options) to build ISA-L support against the 
> default library installation. The cause is that the mvn executor doesn't 
> know where to find the dynamic .so file.
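A sketch of the two invocations being compared; the commands and paths mirror the test matrix in the comment above, and the intended post-patch behaviour (default path found without extra flags) is an assumption from this issue's description:

```shell
# Default ISA-L install places libisal.so under /usr/lib; with the fix the
# build is expected to locate it with -Drequire.isal alone.
mvn -Pnative package -Drequire.isal -DskipTests -Pdist

# A custom install location still has to be pointed at explicitly.
mvn -Pnative package -Drequire.isal -Disal.prefix=/tmp/isal -DskipTests -Pdist

# Sanity check that the native ISA-L support was actually built in.
hadoop checknative
```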



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)