[jira] [Commented] (HADOOP-11878) FileContext.java # fixRelativePart should check for not null for a more informative exception

2015-09-11 Thread Yi Zhou (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14740292#comment-14740292
 ] 

Yi Zhou commented on HADOOP-11878:
--

Hi [~brahmareddy]
I have also come across this issue. Is there a workaround to fix it without 
applying the patch? Thanks in advance!

> FileContext.java # fixRelativePart should check for not null for a more 
> informative exception
> -
>
> Key: HADOOP-11878
> URL: https://issues.apache.org/jira/browse/HADOOP-11878
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
> Fix For: 2.8.0
>
> Attachments: HADOOP-11878-002.patch, HADOOP-11878-003.patch, 
> HADOOP-11878-004.patch, HADOOP-11878.patch
>
>
> The following occurs when a job fails and the deletion service tries to 
> delete the log files:
> 2015-04-27 14:56:17,113 INFO 
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: Deleting 
> absolute path : null
> 2015-04-27 14:56:17,113 ERROR 
> org.apache.hadoop.yarn.server.nodemanager.DeletionService: Exception during 
> execution of task in DeletionService
> java.lang.NullPointerException
> at 
> org.apache.hadoop.fs.FileContext.fixRelativePart(FileContext.java:274)
> at org.apache.hadoop.fs.FileContext.delete(FileContext.java:761)
> at 
> org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.deleteAsUser(DefaultContainerExecutor.java:457)
> at 
> org.apache.hadoop.yarn.server.nodemanager.DeletionService$FileDeletionTask.run(DeletionService.java:293)
> at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
> at 
> java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
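The missing check can be sketched as follows. This is an illustrative stand-in, not the committed HADOOP-11878 patch: paths are modeled as plain strings rather than `org.apache.hadoop.fs.Path`, and `IllegalArgumentException` approximates whatever exception type the patch actually throws.

```java
public class Main {
    // Sketch: validate the path before touching it, so callers get an
    // informative message instead of a bare NullPointerException deep
    // inside FileContext.fixRelativePart.
    static String fixRelativePart(String path, String workingDir) {
        if (path == null) {
            throw new IllegalArgumentException("path cannot be null");
        }
        // absolute paths pass through; relative ones resolve against workingDir
        return path.startsWith("/") ? path : workingDir + "/" + path;
    }

    public static void main(String[] args) {
        System.out.println(fixRelativePart("logs/app_01", "/var/log/yarn"));
    }
}
```

With this guard in place, the DeletionService scenario above would surface a clear "path cannot be null" error rather than an anonymous NPE at line 274.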



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12397) Incomplete comment for test-patch compile_cycle function

2015-09-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14740262#comment-14740262
 ] 

Hadoop QA commented on HADOOP-12397:


(!) A patch to the files used for the QA process has been detected. 
Re-executing against the patched versions to perform further tests. 
The console is at 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7636/console in case of 
problems.

> Incomplete comment for test-patch compile_cycle function
> 
>
> Key: HADOOP-12397
> URL: https://issues.apache.org/jira/browse/HADOOP-12397
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: yetus
>Affects Versions: HADOOP-12111
>Reporter: Kengo Seki
>Assignee: Jagadesh Kiran N
>Priority: Trivial
>  Labels: newbie
> Attachments: HADOOP-12397.HADOOP-12111.00.patch, 
> HADOOP-12397.HADOOP-12111.01.patch
>
>
> Its comment says:
> {code}
> ## @description  This will callout to _precompile, compile, and _postcompile
> {code}
> but it also calls _rebuild.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12400) Wrong comment for scaladoc_rebuild function in test-patch scala plugin

2015-09-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14740261#comment-14740261
 ] 

Hadoop QA commented on HADOOP-12400:


(!) A patch to the files used for the QA process has been detected. 
Re-executing against the patched versions to perform further tests. 
The console is at 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7635/console in case of 
problems.

> Wrong comment for scaladoc_rebuild function in test-patch scala plugin
> --
>
> Key: HADOOP-12400
> URL: https://issues.apache.org/jira/browse/HADOOP-12400
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: yetus
>Affects Versions: HADOOP-12111
>Reporter: Kengo Seki
>Assignee: Jagadesh Kiran N
>Priority: Trivial
>  Labels: newbie
> Attachments: HADOOP-12400.HADOOP-12111.00.patch, 
> HADOOP-12400.HADOOP-12111.01.patch, HADOOP-12400.HADOOP-12111.02.patch
>
>
> {code}
>  62 ## @description  Count and compare the number of JavaDoc warnings pre- 
> and post- patch
>  63 ## @audience private
>  64 ## @stabilityevolving
>  65 ## @replaceable  no
>  66 ## @return   0 on success
>  67 ## @return   1 on failure
>  68 function scaladoc_rebuild
> {code}
> s/JavaDoc/ScalaDoc/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12397) Incomplete comment for test-patch compile_cycle function

2015-09-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14740265#comment-14740265
 ] 

Hadoop QA commented on HADOOP-12397:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} precommit patch detected. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 
9s {color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
18s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 0m 32s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12755320/HADOOP-12397.HADOOP-12111.01.patch
 |
| JIRA Issue | HADOOP-12397 |
| git revision | HADOOP-12111 / 1e2eeb0 |
| Optional Tests | asflicense unit  shellcheck  |
| uname | Linux asf905.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/patchprocess/dev-support-test/personality/hadoop.sh
 |
| Default Java | 1.7.0_55 |
| Multi-JDK versions |  /home/jenkins/tools/java/jdk1.8.0:1.8.0 
/home/jenkins/tools/java/jdk1.7.0_55:1.7.0_55 |
| shellcheck | v0.3.3 (This is an old version that has serious bugs. Consider 
upgrading.) |
| JDK v1.7.0_55  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7636/testReport/ |
| Max memory used | 48MB |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7636/console |


This message was automatically generated.



> Incomplete comment for test-patch compile_cycle function
> 
>
> Key: HADOOP-12397
> URL: https://issues.apache.org/jira/browse/HADOOP-12397
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: yetus
>Affects Versions: HADOOP-12111
>Reporter: Kengo Seki
>Assignee: Jagadesh Kiran N
>Priority: Trivial
>  Labels: newbie
> Attachments: HADOOP-12397.HADOOP-12111.00.patch, 
> HADOOP-12397.HADOOP-12111.01.patch
>
>
> Its comment says:
> {code}
> ## @description  This will callout to _precompile, compile, and _postcompile
> {code}
> but it also calls _rebuild.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12400) Wrong comment for scaladoc_rebuild function in test-patch scala plugin

2015-09-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12400?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14740263#comment-14740263
 ] 

Hadoop QA commented on HADOOP-12400:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} precommit patch detected. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 
7s {color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
16s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 0m 27s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12755321/HADOOP-12400.HADOOP-12111.02.patch
 |
| JIRA Issue | HADOOP-12400 |
| git revision | HADOOP-12111 / 1e2eeb0 |
| Optional Tests | asflicense unit  shellcheck  |
| uname | Linux asf903.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/patchprocess/dev-support-test/personality/hadoop.sh
 |
| Default Java | 1.7.0_55 |
| Multi-JDK versions |  /home/jenkins/tools/java/jdk1.8.0:1.8.0 
/home/jenkins/tools/java/jdk1.7.0_55:1.7.0_55 |
| shellcheck | v0.3.3 (This is an old version that has serious bugs. Consider 
upgrading.) |
| JDK v1.7.0_55  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7635/testReport/ |
| Max memory used | 48MB |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7635/console |


This message was automatically generated.



> Wrong comment for scaladoc_rebuild function in test-patch scala plugin
> --
>
> Key: HADOOP-12400
> URL: https://issues.apache.org/jira/browse/HADOOP-12400
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: yetus
>Affects Versions: HADOOP-12111
>Reporter: Kengo Seki
>Assignee: Jagadesh Kiran N
>Priority: Trivial
>  Labels: newbie
> Attachments: HADOOP-12400.HADOOP-12111.00.patch, 
> HADOOP-12400.HADOOP-12111.01.patch, HADOOP-12400.HADOOP-12111.02.patch
>
>
> {code}
>  62 ## @description  Count and compare the number of JavaDoc warnings pre- 
> and post- patch
>  63 ## @audience private
>  64 ## @stabilityevolving
>  65 ## @replaceable  no
>  66 ## @return   0 on success
>  67 ## @return   1 on failure
>  68 function scaladoc_rebuild
> {code}
> s/JavaDoc/ScalaDoc/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12344) validateSocketPathSecurity0 message could be better

2015-09-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14740270#comment-14740270
 ] 

Hadoop QA commented on HADOOP-12344:


\\
\\
| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  17m 27s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   7m 56s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m  6s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 24s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   1m 12s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  1s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 29s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   1m 55s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | common tests |  23m  4s | Tests passed in 
hadoop-common. |
| | |  64m 10s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12755306/HADOOP-12344.004.patch 
|
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / f103a70 |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7633/artifact/patchprocess/testrun_hadoop-common.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7633/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf902.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7633/console |


This message was automatically generated.

> validateSocketPathSecurity0 message could be better
> ---
>
> Key: HADOOP-12344
> URL: https://issues.apache.org/jira/browse/HADOOP-12344
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: net
>Reporter: Casey Brotherton
>Assignee: Casey Brotherton
>Priority: Trivial
> Attachments: HADOOP-12344.001.patch, HADOOP-12344.002.patch, 
> HADOOP-12344.003.patch, HADOOP-12344.004.patch, HADOOP-12344.patch
>
>
> When a socket path does not have the correct permissions, an error is thrown.
> That error just has the failing component of the path and not the entire path 
> of the socket.
> The entire path of the socket could be printed out to allow for a direct 
> check of the permissions of the entire path.
> {code}
> java.io.IOException: the path component: '/' is world-writable.  Its 
> permissions are 0077.  Please fix this or select a different socket path.
>   at 
> org.apache.hadoop.net.unix.DomainSocket.validateSocketPathSecurity0(Native 
> Method)
>   at 
> org.apache.hadoop.net.unix.DomainSocket.bindAndListen(DomainSocket.java:189)
> ...
> {code}
> The error message could also provide the socket path:
> {code}
> java.io.IOException: the path component: '/' is world-writable.  Its 
> permissions are 0077.  Please fix this or select a different socket path than 
> '/var/run/hdfs-sockets/dn'
> {code}
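The message construction proposed above can be sketched in plain Java; the method name `worldWritableMessage` is hypothetical, and the real check lives in native code (`validateSocketPathSecurity0`), so the mode is passed in here purely for illustration.

```java
public class Main {
    // Sketch of the improved message: include the full socket path alongside
    // the failing component, mirroring the wording the issue proposes.
    static String worldWritableMessage(String component, int mode, String socketPath) {
        return String.format(
            "the path component: '%s' is world-writable.  Its permissions are 0%o.  "
            + "Please fix this or select a different socket path than '%s'",
            component, mode, socketPath);
    }

    public static void main(String[] args) {
        System.out.println(
            worldWritableMessage("/", 077, "/var/run/hdfs-sockets/dn"));
    }
}
```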



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10941) Proxy user verification NPEs if remote host is unresolvable

2015-09-11 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14741143#comment-14741143
 ] 

Steve Loughran commented on HADOOP-10941:
-

Looking at the code, I can't see how a null address comes in either; when the 
server is set up, if the socket has no address, it switches to "*Unknown*" ... 
otherwise the inet address value is used, which appears to be the numeric 
address.

But it has happened, hasn't it? This could be a sign of something seriously 
wrong.

> Proxy user verification NPEs if remote host is unresolvable
> ---
>
> Key: HADOOP-10941
> URL: https://issues.apache.org/jira/browse/HADOOP-10941
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc, security
>Affects Versions: 2.5.0, 3.0.0
>Reporter: Daryn Sharp
>Assignee: Benoy Antony
>Priority: Critical
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-10941.patch
>
>
> A null is passed to the impersonation providers for the remote address if it 
> is unresolvable.  {{DefaultImpersonationProvider}} will NPE, ipc will close 
> the connection immediately (correct behavior for such unexpected exceptions), 
> and the client fails on {{EOFException}}.
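A defensive variant of the check can be sketched as below. Everything here is illustrative: `checkProxyHost` is a hypothetical method, and `SecurityException` stands in for Hadoop's `AuthorizationException`.

```java
public class Main {
    // Sketch: reject (rather than NPE on) a null remote address when
    // verifying a proxy user, turning an unresolvable host into a clean
    // authorization failure.
    static void checkProxyHost(String proxyUser, String remoteAddr, String allowedHost) {
        if (remoteAddr == null) {
            throw new SecurityException("Unauthorized connection for super-user "
                + proxyUser + " from unresolvable host");
        }
        if (!remoteAddr.equals(allowedHost)) {
            throw new SecurityException("Unauthorized connection for super-user "
                + proxyUser + " from host " + remoteAddr);
        }
    }

    static boolean allowed(String proxyUser, String remoteAddr, String allowedHost) {
        try {
            checkProxyHost(proxyUser, remoteAddr, allowedHost);
            return true;
        } catch (SecurityException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // an unresolvable (null) host is denied without an NPE
        System.out.println(allowed("oozie", null, "10.0.0.1"));
    }
}
```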



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12284) UserGroupInformation doAs can throw misleading exception

2015-09-11 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-12284:

Status: Open  (was: Patch Available)

> UserGroupInformation doAs can throw misleading exception
> 
>
> Key: HADOOP-12284
> URL: https://issues.apache.org/jira/browse/HADOOP-12284
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Aaron Dossett
>Assignee: Aaron Dossett
>Priority: Trivial
> Attachments: HADOOP-12284.example, HADOOP-12284.patch
>
>
> If doAs() catches a PrivilegedActionException it extracts the underlying 
> cause through getCause and then re-throws an exception based on the class of 
> the Cause.  If getCause returns null, this is how it gets re-thrown:
> {code}
> else {
>   throw new UndeclaredThrowableException(cause);
> }
> {code}
> If cause == null that seems misleading. I have seen actual instances where 
> cause is null, so this isn't just a theoretical concern.
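A guarded rethrow of the kind the issue asks for could look like the sketch below; the helper name `wrap` is illustrative, not the actual UserGroupInformation code. The idea is that when `getCause()` is null, wrapping the `PrivilegedActionException` itself keeps the resulting exception informative.

```java
import java.lang.reflect.UndeclaredThrowableException;
import java.security.PrivilegedActionException;

public class Main {
    // Sketch: never construct UndeclaredThrowableException around a null
    // cause; fall back to wrapping the PAE so the stack trace survives.
    static RuntimeException wrap(PrivilegedActionException pae) {
        Throwable cause = pae.getCause();
        if (cause instanceof RuntimeException) {
            return (RuntimeException) cause;
        }
        return new UndeclaredThrowableException(cause != null ? cause : pae);
    }

    public static void main(String[] args) {
        RuntimeException e = wrap(new PrivilegedActionException(null));
        System.out.println(e.getCause().getClass().getSimpleName());
    }
}
```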



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12284) UserGroupInformation doAs can throw misleading exception

2015-09-11 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-12284:

Attachment: HADOOP-12284-002.patch

LGTM; this is the patch I'll commit if jenkins is happy

# chopped the line so that checkstyle wouldn't complain
# added logging of the PAE to the log.Debug() clause above. That way, if 
someone is logging, they'll see what happened

> UserGroupInformation doAs can throw misleading exception
> 
>
> Key: HADOOP-12284
> URL: https://issues.apache.org/jira/browse/HADOOP-12284
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Aaron Dossett
>Assignee: Aaron Dossett
>Priority: Trivial
> Attachments: HADOOP-12284-002.patch, HADOOP-12284.example, 
> HADOOP-12284.patch
>
>
> If doAs() catches a PrivilegedActionException it extracts the underlying 
> cause through getCause and then re-throws an exception based on the class of 
> the Cause.  If getCause returns null, this is how it gets re-thrown:
> {code}
> else {
>   throw new UndeclaredThrowableException(cause);
> }
> {code}
> If cause == null that seems misleading. I have seen actual instances where 
> cause is null, so this isn't just a theoretical concern.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10829) Iteration on CredentialProviderFactory.serviceLoader is thread-unsafe

2015-09-11 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-10829:

Affects Version/s: 2.6.0
 Target Version/s: 2.7.2
   Status: Patch Available  (was: Open)

> Iteration on CredentialProviderFactory.serviceLoader  is thread-unsafe
> --
>
> Key: HADOOP-10829
> URL: https://issues.apache.org/jira/browse/HADOOP-10829
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Benoy Antony
>Assignee: Benoy Antony
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-10829.patch, HADOOP-10829.patch
>
>
> CredentialProviderFactory uses the _ServiceLoader_ framework to load 
> _CredentialProviderFactory_ implementations:
> {code}
>   private static final ServiceLoader<CredentialProviderFactory> serviceLoader =
>       ServiceLoader.load(CredentialProviderFactory.class);
> {code}
> The _ServiceLoader_ framework does lazy initialization of services, which 
> makes it thread-unsafe. If accessed from multiple threads, it is better to 
> synchronize the access.
> Similar synchronization has been done while loading compression codec 
> providers via HADOOP-8406. 
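The synchronization pattern referenced here can be sketched as below; `Runnable` stands in for `CredentialProviderFactory` so the example is self-contained, and the lock-on-the-loader choice is an illustration, not the exact HADOOP-8406 code.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.ServiceLoader;

public class Main {
    // A shared ServiceLoader populates its provider list lazily during
    // iteration, which is not thread-safe.
    private static final ServiceLoader<Runnable> serviceLoader =
        ServiceLoader.load(Runnable.class);

    // Sketch of the fix: guard every iteration with a lock so concurrent
    // callers cannot interleave with the lazy lookup.
    static List<Runnable> providers() {
        List<Runnable> found = new ArrayList<>();
        synchronized (serviceLoader) {
            for (Runnable provider : serviceLoader) {
                found.add(provider);
            }
        }
        return found;
    }

    public static void main(String[] args) {
        // no Runnable providers are registered in this standalone example
        System.out.println(providers().size());
    }
}
```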



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11764) Hadoop should have the option to use directory other than tmp for extracting and loading leveldbjni

2015-09-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14741170#comment-14741170
 ] 

Hudson commented on HADOOP-11764:
-

SUCCESS: Integrated in Ambari-branch-2.1 #518 (See 
[https://builds.apache.org/job/Ambari-branch-2.1/518/])
HADOOP-11764. [ HADOOP-11764] NodeManager should use directory other than tmp 
for extracting and loading leveldbjni (aonishuk) (aonishuk: 
http://git-wip-us.apache.org/repos/asf?p=ambari.git=commit=d9f600e23d42544a3d8f67e2da75dc3db98bc555)
* 
ambari-server/src/main/resources/stacks/HDP/2.2/services/YARN/configuration-mapred/mapred-env.xml
* 
ambari-server/src/main/resources/stacks/HDP/2.0.6/hooks/before-ANY/scripts/shared_initialization.py
* 
ambari-server/src/main/resources/common-services/YARN/2.1.0.2.0/package/scripts/params_linux.py
* 
ambari-server/src/main/resources/stacks/HDP/2.3/services/YARN/configuration/yarn-env.xml
* 
ambari-server/src/main/resources/common-services/YARN/2.1.0.2.0/configuration/yarn-env.xml
* ambari-server/src/test/python/stacks/2.0.6/hooks/before-ANY/test_before_any.py
* 
ambari-server/src/main/resources/stacks/HDP/2.1/services/YARN/configuration/yarn-env.xml
* 
ambari-server/src/main/resources/stacks/HDP/2.0.6/hooks/before-ANY/scripts/params.py


> Hadoop should have the option to use directory other than tmp for extracting 
> and loading leveldbjni
> ---
>
> Key: HADOOP-11764
> URL: https://issues.apache.org/jira/browse/HADOOP-11764
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Anubhav Dhoot
>Assignee: Anubhav Dhoot
> Attachments: YARN-3331.001.patch, YARN-3331.002.patch
>
>
> /tmp can be required to be noexec in many environments. This causes a 
> problem when the NodeManager tries to load the leveldbjni library, which can 
> get unpacked and executed from /tmp.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10633) use Time#monotonicNow to avoid system clock reset

2015-09-11 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14741174#comment-14741174
 ] 

Steve Loughran commented on HADOOP-10633:
-

Actually, it'd have to be a -1 on using monotonic now for tokens. If I set the 
time on a machine back, I'd expect new tokens to have an expiry time relative 
to the new time; use monotonic time and they'd be valid for longer than you 
expect.

> use Time#monotonicNow to avoid system clock reset
> -
>
> Key: HADOOP-10633
> URL: https://issues.apache.org/jira/browse/HADOOP-10633
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: io, security
>Affects Versions: 3.0.0
>Reporter: Liang Xie
>Assignee: Liang Xie
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-10633.txt
>
>
> let's replace System#currentTimeMillis with Time#monotonicNow
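The distinction raised in the comment above can be sketched with plain JDK calls; `monotonicNow` here mirrors what Hadoop's `Time#monotonicNow` does (a `System.nanoTime`-based millisecond clock), while the `tokenExpiry` helper is a hypothetical illustration of where wall-clock time remains the right choice.

```java
public class Main {
    // Monotonic time: immune to system clock resets, suitable for measuring
    // elapsed durations and timeouts within one process.
    static long monotonicNow() {
        return System.nanoTime() / 1_000_000L; // milliseconds
    }

    // Token expiry: deliberately wall-clock, because the stamp is compared
    // on other machines and should track the (possibly reset) real time.
    static long tokenExpiry(long lifetimeMillis) {
        return System.currentTimeMillis() + lifetimeMillis;
    }

    public static void main(String[] args) throws InterruptedException {
        long start = monotonicNow();
        Thread.sleep(5);
        System.out.println(monotonicNow() - start >= 5);
    }
}
```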



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12399) Wrong help messages in some test-patch plugins

2015-09-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14741216#comment-14741216
 ] 

Hadoop QA commented on HADOOP-12399:


(!) A patch to the files used for the QA process has been detected. 
Re-executing against the patched versions to perform further tests. 
The console is at 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7643/console in case of 
problems.

> Wrong help messages in some test-patch plugins
> --
>
> Key: HADOOP-12399
> URL: https://issues.apache.org/jira/browse/HADOOP-12399
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: yetus
>Affects Versions: HADOOP-12111
>Reporter: Kengo Seki
>Assignee: Jagadesh Kiran N
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-12399.HADOOP-12111.00.patch
>
>
> dev-support/personality/bigtop.sh:
> {code}
>  32 function bigtop_usage
>  33 {
>  34   echo "Bigtop specific:"
>  35   echo "--bigtop-puppetsetup=[false|true]   execute the bigtop dev setup 
> (needs sudo to root)"
>  36 }
> {code}
> s/bigtop-puppetsetup/bigtop-puppet/.
> dev-support/test-patch.d/gradle.sh:
> {code}
>  21 function gradle_usage
>  22 {
>  23   echo "gradle specific:"
>  24   echo "--gradle-cmd=The 'gradle' command to use (default 
> 'gradle')"
>  25   echo "--gradlew-cmd=The 'gradle' command to use (default 
> 'basedir/gradlew')"
>  26 }
> {code}
> s/'gradle' command/'gradlew' command/ for the latter.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12399) Wrong help messages in some test-patch plugins

2015-09-11 Thread Jagadesh Kiran N (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12399?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jagadesh Kiran N updated HADOOP-12399:
--
Status: Patch Available  (was: Open)

> Wrong help messages in some test-patch plugins
> --
>
> Key: HADOOP-12399
> URL: https://issues.apache.org/jira/browse/HADOOP-12399
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: yetus
>Affects Versions: HADOOP-12111
>Reporter: Kengo Seki
>Assignee: Jagadesh Kiran N
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-12399.HADOOP-12111.00.patch
>
>
> dev-support/personality/bigtop.sh:
> {code}
>  32 function bigtop_usage
>  33 {
>  34   echo "Bigtop specific:"
>  35   echo "--bigtop-puppetsetup=[false|true]   execute the bigtop dev setup 
> (needs sudo to root)"
>  36 }
> {code}
> s/bigtop-puppetsetup/bigtop-puppet/.
> dev-support/test-patch.d/gradle.sh:
> {code}
>  21 function gradle_usage
>  22 {
>  23   echo "gradle specific:"
>  24   echo "--gradle-cmd=The 'gradle' command to use (default 
> 'gradle')"
>  25   echo "--gradlew-cmd=The 'gradle' command to use (default 
> 'basedir/gradlew')"
>  26 }
> {code}
> s/'gradle' command/'gradlew' command/ for the latter.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12399) Wrong help messages in some test-patch plugins

2015-09-11 Thread Jagadesh Kiran N (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12399?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jagadesh Kiran N updated HADOOP-12399:
--
Attachment: HADOOP-12399.HADOOP-12111.00.patch

Updated the patch [~sekikn] ,please review the same

> Wrong help messages in some test-patch plugins
> --
>
> Key: HADOOP-12399
> URL: https://issues.apache.org/jira/browse/HADOOP-12399
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: yetus
>Affects Versions: HADOOP-12111
>Reporter: Kengo Seki
>Assignee: Jagadesh Kiran N
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-12399.HADOOP-12111.00.patch
>
>
> dev-support/personality/bigtop.sh:
> {code}
>  32 function bigtop_usage
>  33 {
>  34   echo "Bigtop specific:"
>  35   echo "--bigtop-puppetsetup=[false|true]   execute the bigtop dev setup 
> (needs sudo to root)"
>  36 }
> {code}
> s/bigtop-puppetsetup/bigtop-puppet/.
> dev-support/test-patch.d/gradle.sh:
> {code}
>  21 function gradle_usage
>  22 {
>  23   echo "gradle specific:"
>  24   echo "--gradle-cmd=The 'gradle' command to use (default 
> 'gradle')"
>  25   echo "--gradlew-cmd=The 'gradle' command to use (default 
> 'basedir/gradlew')"
>  26 }
> {code}
> s/'gradle' command/'gradlew' command/ for the latter.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12403) Enable multiple writes in flight for HBase WAL writing backed by WASB

2015-09-11 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14741083#comment-14741083
 ] 

stack commented on HADOOP-12403:


bq. The latest HBase WAL write model (HBASE-8755) uses multiple AsyncSyncer 
threads to sync data to HDFS.

It would be preferable if we did not have to do this against the HDFS client. A 
single thread doing syncs back-to-back would be ideal, but experiment showed 
that 5 threads each running a sync seems to be optimal (throughput-wise) for 
setting up a syncing pipeline. Need to dig in as to why 5, and why this is 
needed at all. Just FYI.

> Enable multiple writes in flight for HBase WAL writing backed by WASB
> -
>
> Key: HADOOP-12403
> URL: https://issues.apache.org/jira/browse/HADOOP-12403
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: azure
>Reporter: Duo Xu
>Assignee: Duo Xu
> Attachments: HADOOP-12403.01.patch, HADOOP-12403.02.patch, 
> HADOOP-12403.03.patch
>
>
> Azure HDI HBase clusters use Azure blob storage as the file system. We found 
> that the bottleneck was writing to the write-ahead log (WAL). The latest 
> HBase WAL write model (HBASE-8755) uses multiple AsyncSyncer threads to sync 
> data to HDFS. However, our WASB driver is still based on a single-thread 
> model, so when the sync threads call into the WASB layer, only one thread at 
> a time is allowed to send data to Azure storage. This jira introduces a new 
> write model in the WASB layer to allow multiple writes in parallel.
> 1. Since we use a page blob for the WAL, this will cause "holes" in the page 
> blob, as every write starts on a new page. We use the first two bytes of 
> every page to record the actual data size of the current page.
> 2. When reading WAL, we need to know the actual size of the WAL. This should 
> be the sum of the number represented by the first two bytes of every page. 
> However looping over every page to get the size will be very slow, 
> considering normal WAL size is 128MB and each page is 512 bytes. So during 
> writing, every time a write succeeds, a metadata of the blob called 
> "total_data_uploaded" will be updated.
> 3. Although we allow multiple writes in flight, we need to make sure the sync 
> threads which call into WASB layers return in order. Reading HBase source 
> code FSHLog.java, we find that every sync request is associated with a 
> transaction id. If the sync succeeds, all the transactions prior to this 
> transaction id are assumed to be in Azure Storage. We use a queue to store 
> the sync requests and make sure they return to HBase layer in order.
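The page layout from points 1 and 2 can be sketched as below. The two-byte length prefix is from the description; the byte order and helper names are assumptions for illustration, not the actual WASB driver code.

```java
import java.nio.ByteBuffer;

public class Main {
    static final int PAGE_SIZE = 512;

    // Point 1: each 512-byte page starts with a two-byte length recording how
    // much real data follows, so partially-filled pages ("holes") are readable.
    static byte[] writePage(byte[] data) {
        if (data.length > PAGE_SIZE - 2) {
            throw new IllegalArgumentException("payload too large for one page");
        }
        ByteBuffer page = ByteBuffer.allocate(PAGE_SIZE);
        page.putShort((short) data.length);
        page.put(data);
        return page.array();
    }

    // Point 2: summing these prefixes over every page yields the WAL's true
    // size, which is why the patch caches the running total in blob metadata
    // instead of scanning ~256K pages of a 128MB WAL.
    static int pageDataSize(byte[] page) {
        return ByteBuffer.wrap(page).getShort();
    }

    public static void main(String[] args) {
        byte[] page = writePage("wal-entry".getBytes());
        System.out.println(page.length + " " + pageDataSize(page));
    }
}
```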



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12284) UserGroupInformation doAs can throw misleading exception

2015-09-11 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-12284:

Target Version/s: 2.8.0
  Status: Patch Available  (was: Open)

> UserGroupInformation doAs can throw misleading exception
> 
>
> Key: HADOOP-12284
> URL: https://issues.apache.org/jira/browse/HADOOP-12284
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Aaron Dossett
>Assignee: Aaron Dossett
>Priority: Trivial
> Attachments: HADOOP-12284-002.patch, HADOOP-12284.example, 
> HADOOP-12284.patch
>
>
> If doAs() catches a PrivilegedActionException it extracts the underlying 
> cause through getCause and then re-throws an exception based on the class of 
> the cause. If getCause returns null, this is how it gets re-thrown:
> {code}
> } else {
>   throw new UndeclaredThrowableException(cause);
> }
> {code}
> If cause == null, that seems misleading. I have seen actual instances where 
> cause is null, so this isn't just a theoretical concern.
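One plausible shape of the fix is to check getCause() for null before re-throwing, so the original PrivilegedActionException stays in the chain instead of a null cause. This is a hypothetical sketch of that idea, not the actual UserGroupInformation code; the helper name is invented.

```java
import java.lang.reflect.UndeclaredThrowableException;
import java.security.PrivilegedActionException;

// Sketch: map a PrivilegedActionException to the exception that
// should be re-thrown, handling the cause == null case explicitly.
final class DoAsRethrow {
    static RuntimeException toRethrow(PrivilegedActionException pae) {
        Throwable cause = pae.getCause();
        if (cause == null) {
            // Wrapping a null cause hides what failed; keep the
            // original exception so the stack trace stays informative.
            return new UndeclaredThrowableException(pae,
                "PrivilegedActionException with no underlying cause");
        }
        if (cause instanceof RuntimeException) {
            return (RuntimeException) cause;
        }
        return new UndeclaredThrowableException(cause);
    }
}
```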





[jira] [Commented] (HADOOP-11180) Fix warning of "token.Token: Cannot find class for token kind kms-dt" for KMS when running jobs on Encryption zones

2015-09-11 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14741146#comment-14741146
 ] 

Steve Loughran commented on HADOOP-11180:
-

There's a risk here that downgrading the logs is going to hide fundamental 
problems with other tokens; this is a dangerous patch.

> Fix warning of "token.Token: Cannot find class for token kind kms-dt" for KMS 
> when running jobs on Encryption zones
> ---
>
> Key: HADOOP-11180
> URL: https://issues.apache.org/jira/browse/HADOOP-11180
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms, security
>Affects Versions: 2.6.0
>Reporter: Yi Liu
>Assignee: Yi Liu
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-11180.001.patch
>
>
> This issue is produced when running MapReduce job and encryption zones are 
> configured.
> {quote}
> 14/10/09 05:06:02 INFO security.TokenCache: Got dt for 
> hdfs://hnode1.sh.intel.com:9000; Kind: HDFS_DELEGATION_TOKEN, Service: 
> 10.239.47.8:9000, Ident: (HDFS_DELEGATION_TOKEN token 21 for user)
> 14/10/09 05:06:02 WARN token.Token: Cannot find class for token kind kms-dt
> 14/10/09 05:06:02 INFO security.TokenCache: Got dt for 
> hdfs://hnode1.sh.intel.com:9000; Kind: kms-dt, Service: 10.239.47.8:16000, 
> Ident: 00 04 75 73 65 72 04 79 61 72 6e 00 8a 01 48 f1 8e 85 07 8a 01 49 15 
> 9b 09 07 04 02
> 14/10/09 05:06:03 INFO input.FileInputFormat: Total input paths to process : 1
> 14/10/09 05:06:03 INFO mapreduce.JobSubmitter: number of splits:1
> 14/10/09 05:06:03 INFO mapreduce.JobSubmitter: Submitting tokens for job: 
> job_141272197_0004
> 14/10/09 05:06:03 WARN token.Token: Cannot find class for token kind kms-dt
> 14/10/09 05:06:03 WARN token.Token: Cannot find class for token kind kms-dt
> {quote}





[jira] [Commented] (HADOOP-12324) Better exception reporting in SaslPlainServer

2015-09-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14741199#comment-14741199
 ] 

Hudson commented on HADOOP-12324:
-

FAILURE: Integrated in Hadoop-Yarn-trunk #1110 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/1110/])
HADOOP-12324. Better exception reporting in SaslPlainServer.   (Mike Yoder via 
stevel) (stevel: rev ca0827a86235dbc4d7e00cc8426ebff9fcc2d421)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SaslPlainServer.java
* hadoop-common-project/hadoop-common/CHANGES.txt


> Better exception reporting in SaslPlainServer
> -
>
> Key: HADOOP-12324
> URL: https://issues.apache.org/jira/browse/HADOOP-12324
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.8.0
>Reporter: Mike Yoder
>Assignee: Mike Yoder
>Priority: Minor
> Attachments: HADOOP-12324.000.patch
>
>
> This is a follow up from HADOOP-12318.  The review comment from 
> [~ste...@apache.org]:
> {quote}
> -1. It's critical to use Exception.toString() and not .getMessage(), as some 
> exceptions (NPE) don't have messages.
> {quote}
> This is the promised follow-up Jira.
> CC: [~atm]





[jira] [Commented] (HADOOP-7891) KerberosName method typo and log warning when rules are set

2015-09-11 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14741203#comment-14741203
 ] 

Steve Loughran commented on HADOOP-7891:


Are we confident that the method is *never* called by external code? If not, 
the old method should be retained as deprecated, while the rest of the patch 
goes through.
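The compatibility pattern suggested here can be sketched as follows: keep the misspelled method as a deprecated alias that delegates to the corrected name, so any external callers keep compiling. This is an illustrative sketch with invented class names, not the actual KerberosName patch.

```java
// Sketch: rename hasRulesBeenSet() to haveRulesBeenSet() while
// keeping the old spelling as a deprecated delegating alias.
class KerberosRulesCompat {
    private static volatile String rules;

    static void setRules(String r) { rules = r; }

    static boolean haveRulesBeenSet() { return rules != null; }

    /** @deprecated use {@link #haveRulesBeenSet()} instead. */
    @Deprecated
    static boolean hasRulesBeenSet() { return haveRulesBeenSet(); }
}
```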

> KerberosName method typo and log warning when rules are set
> ---
>
> Key: HADOOP-7891
> URL: https://issues.apache.org/jira/browse/HADOOP-7891
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 0.23.1
>Reporter: Alejandro Abdelnur
>Assignee: Alejandro Abdelnur
>Priority: Minor
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-7891.patch, HADOOP-7891_1.patch
>
>
> The method hasRulesBeenSet() should be named haveRulesBeenSet().
> If the rules are not set during UGI initialization because they have already 
> been set, a warning should be logged, along the following lines:
>   "Not setting kerberos name mappings defined in 
> hadoop.security.auth_to_local because name mappings are already set"





[jira] [Commented] (HADOOP-12324) Better exception reporting in SaslPlainServer

2015-09-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14741265#comment-14741265
 ] 

Hudson commented on HADOOP-12324:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2320 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2320/])
HADOOP-12324. Better exception reporting in SaslPlainServer.   (Mike Yoder via 
stevel) (stevel: rev ca0827a86235dbc4d7e00cc8426ebff9fcc2d421)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SaslPlainServer.java


> Better exception reporting in SaslPlainServer
> -
>
> Key: HADOOP-12324
> URL: https://issues.apache.org/jira/browse/HADOOP-12324
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.8.0
>Reporter: Mike Yoder
>Assignee: Mike Yoder
>Priority: Minor
> Attachments: HADOOP-12324.000.patch
>
>
> This is a follow up from HADOOP-12318.  The review comment from 
> [~ste...@apache.org]:
> {quote}
> -1. It's critical to use Exception.toString() and not .getMessage(), as some 
> exceptions (NPE) don't have messages.
> {quote}
> This is the promised follow-up Jira.
> CC: [~atm]





[jira] [Commented] (HADOOP-12324) Better exception reporting in SaslPlainServer

2015-09-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14741138#comment-14741138
 ] 

Hudson commented on HADOOP-12324:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #372 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/372/])
HADOOP-12324. Better exception reporting in SaslPlainServer.   (Mike Yoder via 
stevel) (stevel: rev ca0827a86235dbc4d7e00cc8426ebff9fcc2d421)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SaslPlainServer.java
* hadoop-common-project/hadoop-common/CHANGES.txt


> Better exception reporting in SaslPlainServer
> -
>
> Key: HADOOP-12324
> URL: https://issues.apache.org/jira/browse/HADOOP-12324
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.8.0
>Reporter: Mike Yoder
>Assignee: Mike Yoder
>Priority: Minor
> Attachments: HADOOP-12324.000.patch
>
>
> This is a follow up from HADOOP-12318.  The review comment from 
> [~ste...@apache.org]:
> {quote}
> -1. It's critical to use Exception.toString() and not .getMessage(), as some 
> exceptions (NPE) don't have messages.
> {quote}
> This is the promised follow-up Jira.
> CC: [~atm]





[jira] [Updated] (HADOOP-10829) Iteration on CredentialProviderFactory.serviceLoader is thread-unsafe

2015-09-11 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-10829:

Status: Open  (was: Patch Available)

> Iteration on CredentialProviderFactory.serviceLoader  is thread-unsafe
> --
>
> Key: HADOOP-10829
> URL: https://issues.apache.org/jira/browse/HADOOP-10829
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Benoy Antony
>Assignee: Benoy Antony
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-10829.patch, HADOOP-10829.patch
>
>
> CredentialProviderFactory uses the _ServiceLoader_ framework to load 
> _CredentialProviderFactory_ implementations:
> {code}
>   private static final ServiceLoader<CredentialProviderFactory> serviceLoader =
>       ServiceLoader.load(CredentialProviderFactory.class);
> {code}
> The _ServiceLoader_ framework initializes services lazily, which makes it 
> thread-unsafe. If accessed from multiple threads, it is better to synchronize 
> the access.
> Similar synchronization has been done while loading compression codec 
> providers via HADOOP-8406. 
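The synchronization pattern described above (mirroring what HADOOP-8406 did for compression codec providers) can be sketched generically. This is an illustrative sketch, not the actual patch; the wrapper class name is invented.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.ServiceLoader;

// Sketch: guard iteration over a lazily-initializing ServiceLoader
// with a lock so concurrent callers cannot corrupt its state.
final class ProviderRegistry<T> {
    private final ServiceLoader<T> loader;

    ProviderRegistry(Class<T> service) {
        this.loader = ServiceLoader.load(service);
    }

    // ServiceLoader instantiates providers lazily during iteration,
    // which is not thread-safe; synchronize the whole traversal and
    // return a snapshot that callers can iterate freely.
    List<T> providers() {
        List<T> result = new ArrayList<>();
        synchronized (loader) {
            for (T p : loader) {
                result.add(p);
            }
        }
        return result;
    }
}
```

Returning a snapshot also keeps the lock hold time short: callers never iterate the loader itself.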





[jira] [Commented] (HADOOP-12324) Better exception reporting in SaslPlainServer

2015-09-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14741182#comment-14741182
 ] 

Hudson commented on HADOOP-12324:
-

SUCCESS: Integrated in Hadoop-Yarn-trunk-Java8 #378 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/378/])
HADOOP-12324. Better exception reporting in SaslPlainServer.   (Mike Yoder via 
stevel) (stevel: rev ca0827a86235dbc4d7e00cc8426ebff9fcc2d421)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SaslPlainServer.java


> Better exception reporting in SaslPlainServer
> -
>
> Key: HADOOP-12324
> URL: https://issues.apache.org/jira/browse/HADOOP-12324
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.8.0
>Reporter: Mike Yoder
>Assignee: Mike Yoder
>Priority: Minor
> Attachments: HADOOP-12324.000.patch
>
>
> This is a follow up from HADOOP-12318.  The review comment from 
> [~ste...@apache.org]:
> {quote}
> -1. It's critical to use Exception.toString() and not .getMessage(), as some 
> exceptions (NPE) don't have messages.
> {quote}
> This is the promised follow-up Jira.
> CC: [~atm]





[jira] [Commented] (HADOOP-12389) allow self-impersonation

2015-09-11 Thread Masatake Iwasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14740387#comment-14740387
 ] 

Masatake Iwasaki commented on HADOOP-12389:
---

bq. I hit it several times while playing around with WebHDFS on an unsecure 
cluster.

If {{ProxyUsers#authorize}} is called when it should not be, that is a bug and 
should be fixed, rather than changing the authorization scheme.


> allow self-impersonation
> 
>
> Key: HADOOP-12389
> URL: https://issues.apache.org/jira/browse/HADOOP-12389
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>
> This is kind of dumb:
> org.apache.hadoop.security.authorize.AuthorizationException: User: aw is not 
> allowed to impersonate aw
> Users should be able to impersonate themselves in secure and non-secure cases 
> automatically, for free.
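The short-circuit this report asks for can be sketched as a single check before the proxy-user ACL lookup: if the effective user equals the real authenticated user, skip authorization entirely. This is a hypothetical illustration, not Hadoop's actual ProxyUsers code; the names are invented.

```java
// Sketch: treat self-impersonation as a no-op that never consults
// the hadoop.proxyuser.* ACLs.
final class SelfImpersonation {
    static boolean needsProxyAuthorization(String realUser,
                                           String effectiveUser) {
        // A user "impersonating" themselves changes nothing, so allow
        // it in secure and non-secure deployments without ACL checks.
        return !realUser.equals(effectiveUser);
    }
}
```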





[jira] [Commented] (HADOOP-11180) Fix warning of "token.Token: Cannot find class for token kind kms-dt" for KMS when running jobs on Encryption zones

2015-09-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14741342#comment-14741342
 ] 

Hadoop QA commented on HADOOP-11180:


\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  17m  8s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   7m 52s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m  9s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 25s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   1m  7s | There were no new checkstyle 
issues. |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 28s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 34s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   1m 52s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:red}-1{color} | common tests |  22m 40s | Tests failed in 
hadoop-common. |
| | |  63m 18s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.ipc.TestSaslRPC |
|   | hadoop.security.token.delegation.web.TestWebDelegationToken |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12673838/HADOOP-11180.001.patch 
|
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 15a557f |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7641/artifact/patchprocess/testrun_hadoop-common.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7641/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf907.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7641/console |


This message was automatically generated.

> Fix warning of "token.Token: Cannot find class for token kind kms-dt" for KMS 
> when running jobs on Encryption zones
> ---
>
> Key: HADOOP-11180
> URL: https://issues.apache.org/jira/browse/HADOOP-11180
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: kms, security
>Affects Versions: 2.6.0
>Reporter: Yi Liu
>Assignee: Yi Liu
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-11180.001.patch
>
>
> This issue is produced when running MapReduce job and encryption zones are 
> configured.
> {quote}
> 14/10/09 05:06:02 INFO security.TokenCache: Got dt for 
> hdfs://hnode1.sh.intel.com:9000; Kind: HDFS_DELEGATION_TOKEN, Service: 
> 10.239.47.8:9000, Ident: (HDFS_DELEGATION_TOKEN token 21 for user)
> 14/10/09 05:06:02 WARN token.Token: Cannot find class for token kind kms-dt
> 14/10/09 05:06:02 INFO security.TokenCache: Got dt for 
> hdfs://hnode1.sh.intel.com:9000; Kind: kms-dt, Service: 10.239.47.8:16000, 
> Ident: 00 04 75 73 65 72 04 79 61 72 6e 00 8a 01 48 f1 8e 85 07 8a 01 49 15 
> 9b 09 07 04 02
> 14/10/09 05:06:03 INFO input.FileInputFormat: Total input paths to process : 1
> 14/10/09 05:06:03 INFO mapreduce.JobSubmitter: number of splits:1
> 14/10/09 05:06:03 INFO mapreduce.JobSubmitter: Submitting tokens for job: 
> job_141272197_0004
> 14/10/09 05:06:03 WARN token.Token: Cannot find class for token kind kms-dt
> 14/10/09 05:06:03 WARN token.Token: Cannot find class for token kind kms-dt
> {quote}





[jira] [Commented] (HADOOP-7891) KerberosName method typo and log warning when rules are set

2015-09-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7891?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14741416#comment-14741416
 ] 

Hadoop QA commented on HADOOP-7891:
---

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  18m 38s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 2 new or modified test files. |
| {color:green}+1{color} | javac |   7m 56s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 59s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 24s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   1m 30s | There were no new checkstyle 
issues. |
| {color:red}-1{color} | whitespace |   0m  0s | The patch has 1  line(s) that 
end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   1m 28s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 34s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   3m 23s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | common tests |   5m 18s | Tests passed in 
hadoop-auth. |
| {color:red}-1{color} | common tests |  22m  7s | Tests failed in 
hadoop-common. |
| {color:green}+1{color} | yarn tests |   0m 55s | Tests passed in 
hadoop-yarn-registry. |
| | |  72m 16s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.ipc.TestSaslRPC |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12731385/HADOOP-7891_1.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 15a557f |
| whitespace | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7646/artifact/patchprocess/whitespace.txt
 |
| hadoop-auth test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7646/artifact/patchprocess/testrun_hadoop-auth.txt
 |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7646/artifact/patchprocess/testrun_hadoop-common.txt
 |
| hadoop-yarn-registry test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7646/artifact/patchprocess/testrun_hadoop-yarn-registry.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7646/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf903.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7646/console |



> KerberosName method typo and log warning when rules are set
> ---
>
> Key: HADOOP-7891
> URL: https://issues.apache.org/jira/browse/HADOOP-7891
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 0.23.1
>Reporter: Alejandro Abdelnur
>Assignee: Alejandro Abdelnur
>Priority: Minor
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-7891.patch, HADOOP-7891_1.patch
>
>
> The method hasRulesBeenSet() should be named haveRulesBeenSet().
> If the rules are not set during UGI initialization because they have already 
> been set, a warning should be logged, along the following lines:
>   "Not setting kerberos name mappings defined in 
> hadoop.security.auth_to_local because name mappings are already set"





[jira] [Commented] (HADOOP-10829) Iteration on CredentialProviderFactory.serviceLoader is thread-unsafe

2015-09-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14741370#comment-14741370
 ] 

Hadoop QA commented on HADOOP-10829:


\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  19m 42s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   9m 32s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m 54s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 26s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:green}+1{color} | checkstyle |   1m 11s | There were no new checkstyle 
issues. |
| {color:red}-1{color} | whitespace |   0m  0s | The patch has 1  line(s) that 
end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   1m 33s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   1m 53s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:red}-1{color} | common tests |  23m 16s | Tests failed in 
hadoop-common. |
| | |  69m  3s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.ipc.TestSaslRPC |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12657346/HADOOP-10829.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 15a557f |
| whitespace | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7644/artifact/patchprocess/whitespace.txt
 |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7644/artifact/patchprocess/testrun_hadoop-common.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7644/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf909.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7644/console |



> Iteration on CredentialProviderFactory.serviceLoader  is thread-unsafe
> --
>
> Key: HADOOP-10829
> URL: https://issues.apache.org/jira/browse/HADOOP-10829
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Benoy Antony
>Assignee: Benoy Antony
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-10829.patch, HADOOP-10829.patch
>
>
> CredentialProviderFactory uses the _ServiceLoader_ framework to load 
> _CredentialProviderFactory_ implementations:
> {code}
>   private static final ServiceLoader<CredentialProviderFactory> serviceLoader =
>       ServiceLoader.load(CredentialProviderFactory.class);
> {code}
> The _ServiceLoader_ framework initializes services lazily, which makes it 
> thread-unsafe. If accessed from multiple threads, it is better to synchronize 
> the access.
> Similar synchronization has been done while loading compression codec 
> providers via HADOOP-8406. 





[jira] [Commented] (HADOOP-12284) UserGroupInformation doAs can throw misleading exception

2015-09-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14741343#comment-14741343
 ] 

Hadoop QA commented on HADOOP-12284:


\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  17m 26s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   7m 50s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m 45s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 22s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   1m  6s | The applied patch generated  1 
new checkstyle issues (total was 108, now 109). |
| {color:green}+1{color} | whitespace |   0m  0s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 28s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 32s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   1m 55s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:red}-1{color} | common tests |  22m 13s | Tests failed in 
hadoop-common. |
| | |  63m 40s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.ipc.TestSaslRPC |
|   | hadoop.fs.TestFsShellCopy |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12755445/HADOOP-12284-002.patch 
|
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 15a557f |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HADOOP-Build/7640/artifact/patchprocess/diffcheckstylehadoop-common.txt
 |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7640/artifact/patchprocess/testrun_hadoop-common.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7640/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf902.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7640/console |



> UserGroupInformation doAs can throw misleading exception
> 
>
> Key: HADOOP-12284
> URL: https://issues.apache.org/jira/browse/HADOOP-12284
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Aaron Dossett
>Assignee: Aaron Dossett
>Priority: Trivial
> Attachments: HADOOP-12284-002.patch, HADOOP-12284.example, 
> HADOOP-12284.patch
>
>
> If doAs() catches a PrivilegedActionException it extracts the underlying 
> cause through getCause and then re-throws an exception based on the class of 
> the cause. If getCause returns null, this is how it gets re-thrown:
> {code}
> } else {
>   throw new UndeclaredThrowableException(cause);
> }
> {code}
> If cause == null, that seems misleading. I have seen actual instances where 
> cause is null, so this isn't just a theoretical concern.





[jira] [Commented] (HADOOP-11404) Clarify the "expected client Kerberos principal is null" authorization message

2015-09-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11404?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14741356#comment-14741356
 ] 

Hadoop QA commented on HADOOP-11404:


\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  17m 19s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:red}-1{color} | tests included |   0m  0s | The patch doesn't appear 
to include any new or modified tests.  Please justify why no new tests are 
needed for this patch. Also please list what manual steps were performed to 
verify this patch. |
| {color:green}+1{color} | javac |   7m 57s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |  10m  9s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 24s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   1m  5s | The applied patch generated  3 
new checkstyle issues (total was 29, now 32). |
| {color:red}-1{color} | whitespace |   0m  0s | The patch has 1  line(s) that 
end in whitespace. Use git apply --whitespace=fix. |
| {color:green}+1{color} | install |   1m 29s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 34s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   1m 52s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:red}-1{color} | common tests |  22m 11s | Tests failed in 
hadoop-common. |
| | |  63m  5s | |
\\
\\
|| Reason || Tests ||
| Failed unit tests | hadoop.ipc.TestSaslRPC |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12687105/HADOOP-11404.001.patch 
|
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 15a557f |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HADOOP-Build/7642/artifact/patchprocess/diffcheckstylehadoop-common.txt
 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7642/artifact/patchprocess/whitespace.txt
 |
| hadoop-common test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7642/artifact/patchprocess/testrun_hadoop-common.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7642/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf900.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7642/console |



> Clarify the "expected client Kerberos principal is null" authorization message
> --
>
> Key: HADOOP-11404
> URL: https://issues.apache.org/jira/browse/HADOOP-11404
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.2.0
>Reporter: Stephen Chu
>Assignee: Stephen Chu
>Priority: Minor
>  Labels: BB2015-05-TBR, supportability
> Attachments: HADOOP-11404.001.patch
>
>
> In {{ServiceAuthorizationManager#authorize}}, we throw an 
> {{AuthorizationException}} with message "expected client Kerberos principal 
> is null" when authorization fails.
> However, this is a confusing log message, because it leads users to believe 
> there was a Kerberos authentication problem, when in fact the user could 
> have authenticated successfully.
> {code}
> if((clientPrincipal != null && !clientPrincipal.equals(user.getUserName())) 
> || 
>acls.length != 2  || !acls[0].isUserAllowed(user) || 
> acls[1].isUserAllowed(user)) {
>   AUDITLOG.warn(AUTHZ_FAILED_FOR + user + " for protocol=" + protocol
>   + ", expected client Kerberos principal is " + clientPrincipal);
>   throw new AuthorizationException("User " + user + 
>   " is not authorized for protocol " + protocol + 
>   ", expected client Kerberos principal is " + clientPrincipal);
> }
> AUDITLOG.info(AUTHZ_SUCCESSFUL_FOR + user + " for protocol="+protocol);
> {code}
> In the above code, if clientPrincipal is null, then the user authenticated 
> successfully but was denied by a configured ACL; it is not a Kerberos issue. 
> We should improve the log message to state this.
> Thanks to [~tlipcon] for finding this and proposing a fix.
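A clearer denial message along the lines the issue proposes could distinguish the ACL-denial case (clientPrincipal == null) from a genuine principal mismatch. This is a hypothetical sketch of such a message builder, not the actual ServiceAuthorizationManager patch; the class and method names are invented.

```java
// Sketch: build an authorization-denied message that does not
// mention a "null" Kerberos principal when the denial came from
// a configured ACL rather than a Kerberos problem.
final class AuthzMessages {
    static String denialMessage(String user, String protocol,
                                String clientPrincipal) {
        if (clientPrincipal == null) {
            return "User " + user + " is not authorized for protocol "
                + protocol
                + ": denied by configured ACL (authentication succeeded)";
        }
        return "User " + user + " is not authorized for protocol "
            + protocol + ", expected client Kerberos principal is "
            + clientPrincipal;
    }
}
```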





[jira] [Created] (HADOOP-12405) Expose NN RPC via HTTP / HTTPS

2015-09-11 Thread Haohui Mai (JIRA)
Haohui Mai created HADOOP-12405:
---

 Summary: Expose NN RPC via HTTP / HTTPS
 Key: HADOOP-12405
 URL: https://issues.apache.org/jira/browse/HADOOP-12405
 Project: Hadoop Common
  Issue Type: New Feature
Reporter: Haohui Mai


WebHDFS needs to expose NN RPC calls to allow users to access HDFS via HTTP / 
HTTPS.

The current approach is to add REST APIs into WebHDFS one by one, manually. 
This requires significant effort from a maintainability point of view: we have 
found that WebHDFS consistently lags behind, and the REST RPC stubs are hard 
to maintain.

There is a lot of value in automatically exposing the NN RPC in an HTTP / 
HTTPS-friendly way.





[jira] [Commented] (HADOOP-12348) MetricsSystemImpl creates MetricsSourceAdapter with wrong time unit parameter.

2015-09-11 Thread Robert Kanter (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14741354#comment-14741354
 ] 

Robert Kanter commented on HADOOP-12348:


[~zxu], it looks like this doesn't apply cleanly to branch-2.  Can you take a 
look and make a branch-2 version of the patch?

> MetricsSystemImpl creates MetricsSourceAdapter with wrong time unit parameter.
> --
>
> Key: HADOOP-12348
> URL: https://issues.apache.org/jira/browse/HADOOP-12348
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: metrics
>Reporter: zhihai xu
>Assignee: zhihai xu
> Attachments: HADOOP-12348.000.patch, HADOOP-12348.001.patch
>
>
> MetricsSystemImpl creates MetricsSourceAdapter with the wrong time unit 
> parameter: MetricsSourceAdapter expects jmxCacheTTL in milliseconds, but 
> MetricsSystemImpl passes a value in seconds to the MetricsSourceAdapter 
> constructor.
> {code}
> jmxCacheTS = Time.now() + jmxCacheTTL;
>   /**
>* Current system time.  Do not use this to calculate a duration or interval
>* to sleep, because it will be broken by settimeofday.  Instead, use
>* monotonicNow.
>* @return current time in msec.
>*/
>   public static long now() {
> return System.currentTimeMillis();
>   }
> {code}
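
Since {{Time.now()}} returns milliseconds, the TTL added to it must also be in milliseconds. A minimal sketch of the required conversion (class and method names invented, not the committed patch):

```java
import java.util.concurrent.TimeUnit;

// Hypothetical sketch (names invented): convert the configured TTL from
// seconds into the milliseconds MetricsSourceAdapter expects before handing
// it to the constructor, instead of passing the raw seconds value.
public class JmxTtl {
    static long ttlSecondsToMillis(long jmxCacheTtlSeconds) {
        return TimeUnit.SECONDS.toMillis(jmxCacheTtlSeconds);
    }

    public static void main(String[] args) {
        // A 10-second TTL must reach the adapter as 10000 ms, because
        // jmxCacheTS = Time.now() + jmxCacheTTL adds milliseconds.
        System.out.println(ttlSecondsToMillis(10));
    }
}
```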





[jira] [Commented] (HADOOP-12348) MetricsSystemImpl creates MetricsSourceAdapter with wrong time unit parameter.

2015-09-11 Thread zhihai xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14741563#comment-14741563
 ] 

zhihai xu commented on HADOOP-12348:


[~rkanter], yes, you are right. I attached the corresponding patch 
HADOOP-12348.branch-2.patch for branch-2. Thanks!

> MetricsSystemImpl creates MetricsSourceAdapter with wrong time unit parameter.
> --
>
> Key: HADOOP-12348
> URL: https://issues.apache.org/jira/browse/HADOOP-12348
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: metrics
>Reporter: zhihai xu
>Assignee: zhihai xu
> Attachments: HADOOP-12348.000.patch, HADOOP-12348.001.patch, 
> HADOOP-12348.branch-2.patch





[jira] [Updated] (HADOOP-12348) MetricsSystemImpl creates MetricsSourceAdapter with wrong time unit parameter.

2015-09-11 Thread Robert Kanter (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Kanter updated HADOOP-12348:
---
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Thanks Zhihai.  Committed to trunk and branch-2!

> MetricsSystemImpl creates MetricsSourceAdapter with wrong time unit parameter.
> --
>
> Key: HADOOP-12348
> URL: https://issues.apache.org/jira/browse/HADOOP-12348
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: metrics
>Reporter: zhihai xu
>Assignee: zhihai xu
> Attachments: HADOOP-12348.000.patch, HADOOP-12348.001.patch, 
> HADOOP-12348.branch-2.patch





[jira] [Updated] (HADOOP-12348) MetricsSystemImpl creates MetricsSourceAdapter with wrong time unit parameter.

2015-09-11 Thread Robert Kanter (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Kanter updated HADOOP-12348:
---
Fix Version/s: 2.8.0

> MetricsSystemImpl creates MetricsSourceAdapter with wrong time unit parameter.
> --
>
> Key: HADOOP-12348
> URL: https://issues.apache.org/jira/browse/HADOOP-12348
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: metrics
>Reporter: zhihai xu
>Assignee: zhihai xu
> Fix For: 2.8.0
>
> Attachments: HADOOP-12348.000.patch, HADOOP-12348.001.patch, 
> HADOOP-12348.branch-2.patch





[jira] [Commented] (HADOOP-12348) MetricsSystemImpl creates MetricsSourceAdapter with wrong time unit parameter.

2015-09-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14741716#comment-14741716
 ] 

Hudson commented on HADOOP-12348:
-

FAILURE: Integrated in Hadoop-trunk-Commit #8438 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8438/])
HADOOP-12348. MetricsSystemImpl creates MetricsSourceAdapter with wrong time 
unit parameter. (zxu via rkanter) (rkanter: rev 
9538af0e1a7428b8787afa8d5e0b82c1e04adca7)
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/metrics2/impl/TestMetricsSystemImpl.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/impl/MetricsSystemImpl.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/impl/MetricsSourceAdapter.java


> MetricsSystemImpl creates MetricsSourceAdapter with wrong time unit parameter.
> --
>
> Key: HADOOP-12348
> URL: https://issues.apache.org/jira/browse/HADOOP-12348
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: metrics
>Reporter: zhihai xu
>Assignee: zhihai xu
> Fix For: 2.8.0
>
> Attachments: HADOOP-12348.000.patch, HADOOP-12348.001.patch, 
> HADOOP-12348.branch-2.patch





[jira] [Commented] (HADOOP-12348) MetricsSystemImpl creates MetricsSourceAdapter with wrong time unit parameter.

2015-09-11 Thread zhihai xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14741718#comment-14741718
 ] 

zhihai xu commented on HADOOP-12348:


Thanks, Robert, for reviewing and committing the patch!

> MetricsSystemImpl creates MetricsSourceAdapter with wrong time unit parameter.
> --
>
> Key: HADOOP-12348
> URL: https://issues.apache.org/jira/browse/HADOOP-12348
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: metrics
>Reporter: zhihai xu
>Assignee: zhihai xu
> Fix For: 2.8.0
>
> Attachments: HADOOP-12348.000.patch, HADOOP-12348.001.patch, 
> HADOOP-12348.branch-2.patch





[jira] [Updated] (HADOOP-12348) MetricsSystemImpl creates MetricsSourceAdapter with wrong time unit parameter.

2015-09-11 Thread zhihai xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12348?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhihai xu updated HADOOP-12348:
---
Attachment: HADOOP-12348.branch-2.patch

> MetricsSystemImpl creates MetricsSourceAdapter with wrong time unit parameter.
> --
>
> Key: HADOOP-12348
> URL: https://issues.apache.org/jira/browse/HADOOP-12348
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: metrics
>Reporter: zhihai xu
>Assignee: zhihai xu
> Attachments: HADOOP-12348.000.patch, HADOOP-12348.001.patch, 
> HADOOP-12348.branch-2.patch





[jira] [Commented] (HADOOP-10420) Add support to Swift-FS to support tempAuth

2015-09-11 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14740804#comment-14740804
 ] 

Steve Loughran commented on HADOOP-10420:
-

-1

fails {{TestSwiftRestClient}}

Failed tests: 
  TestSwiftRestClient.testUsernameAuthException:137 Expected authentication 
failed exception. but got 
AccessToken{id='AUTH_tk8c9403b42264465bacb57b2d4adc535c', tenant=null, 
expires='null'}
  TestSwiftRestClient.testApikeyAuthException:160 Expected authentication 
failed exception. but got 
AccessToken{id='AUTH_tk8c9403b42264465bacb57b2d4adc535c', tenant=null, 
expires='null'}

These are both expected to fail auth, yet the caller ends up connecting. 
Presumably it's the different auth mechanism: the test needs to set things up 
so that auth fails for tempAuth as well as for the other mechanisms.



> Add support to Swift-FS to support tempAuth
> ---
>
> Key: HADOOP-10420
> URL: https://issues.apache.org/jira/browse/HADOOP-10420
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs, fs/swift, tools
>Affects Versions: 2.3.0
>Reporter: Jinghui Wang
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-10420-002.patch, HADOOP-10420-003.patch, 
> HADOOP-10420-004.patch, HADOOP-10420-005.patch, HADOOP-10420-006.patch, 
> HADOOP-10420-007.patch, HADOOP-10420-008.patch, HADOOP-10420-009.patch, 
> HADOOP-10420-010.patch, HADOOP-10420-011.patch, HADOOP-10420-012.patch, 
> HADOOP-10420.patch
>
>
> Currently, hadoop-openstack Swift FS supports keystone authentication. The 
> attached patch adds support for tempAuth. Users will be able to configure 
> which authentication to use.





[jira] [Updated] (HADOOP-10420) Add support to Swift-FS to support tempAuth

2015-09-11 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-10420:

Attachment: HADOOP-10420-013.patch

Patch -013. This is the one for the failing tests.

Diff from -012.
# cut back on the number of whitespace/spacing deltas
# print tokens when {{TestSwiftRestClient}} assertions fail

> Add support to Swift-FS to support tempAuth
> ---
>
> Key: HADOOP-10420
> URL: https://issues.apache.org/jira/browse/HADOOP-10420
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs, fs/swift, tools
>Affects Versions: 2.3.0
>Reporter: Jinghui Wang
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-10420-002.patch, HADOOP-10420-003.patch, 
> HADOOP-10420-004.patch, HADOOP-10420-005.patch, HADOOP-10420-006.patch, 
> HADOOP-10420-007.patch, HADOOP-10420-008.patch, HADOOP-10420-009.patch, 
> HADOOP-10420-010.patch, HADOOP-10420-011.patch, HADOOP-10420-012.patch, 
> HADOOP-10420-013.patch, HADOOP-10420.patch





[jira] [Commented] (HADOOP-10420) Add support to Swift-FS to support tempAuth

2015-09-11 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14740821#comment-14740821
 ] 

Steve Loughran commented on HADOOP-10420:
-

I should add that the test failures were against IBM SoftLayer; the other 
Swift endpoints are happy, so this patch doesn't appear to cause any 
regressions.

> Add support to Swift-FS to support tempAuth
> ---
>
> Key: HADOOP-10420
> URL: https://issues.apache.org/jira/browse/HADOOP-10420
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs, fs/swift, tools
>Affects Versions: 2.3.0
>Reporter: Jinghui Wang
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-10420-002.patch, HADOOP-10420-003.patch, 
> HADOOP-10420-004.patch, HADOOP-10420-005.patch, HADOOP-10420-006.patch, 
> HADOOP-10420-007.patch, HADOOP-10420-008.patch, HADOOP-10420-009.patch, 
> HADOOP-10420-010.patch, HADOOP-10420-011.patch, HADOOP-10420-012.patch, 
> HADOOP-10420-013.patch, HADOOP-10420.patch





[jira] [Commented] (HADOOP-12399) Wrong help messages in some test-patch plugins

2015-09-11 Thread Jagadesh Kiran N (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14740822#comment-14740822
 ] 

Jagadesh Kiran N commented on HADOOP-12399:
---

[~sekikn], updating the comments as below; please review:

a. "--bigtop-puppetsetup=[false|true] execute the bigtop dev setup (needs sudo 
to root)" ==> "--bigtop-puppetsetup=[false|true] execute the bigtop puppet 
setup (needs sudo to root)"
b. "The 'gradle' command to use (default 'basedir/gradlew')" ==> "The 
'gradlew' command to use (default 'basedir/gradlew')"

> Wrong help messages in some test-patch plugins
> --
>
> Key: HADOOP-12399
> URL: https://issues.apache.org/jira/browse/HADOOP-12399
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: yetus
>Affects Versions: HADOOP-12111
>Reporter: Kengo Seki
>Assignee: Jagadesh Kiran N
>Priority: Minor
>  Labels: newbie
>
> dev-support/personality/bigtop.sh:
> {code}
>  32 function bigtop_usage
>  33 {
>  34   echo "Bigtop specific:"
>  35   echo "--bigtop-puppetsetup=[false|true]   execute the bigtop dev setup 
> (needs sudo to root)"
>  36 }
> {code}
> s/bigtop-puppetsetup/bigtop-puppet/.
> dev-support/test-patch.d/gradle.sh:
> {code}
>  21 function gradle_usage
>  22 {
>  23   echo "gradle specific:"
>  24   echo "--gradle-cmd=The 'gradle' command to use (default 
> 'gradle')"
>  25   echo "--gradlew-cmd=The 'gradle' command to use (default 
> 'basedir/gradlew')"
>  26 }
> {code}
> s/'gradle' command/'gradlew' command/ for the latter.





[jira] [Updated] (HADOOP-6412) Add ability to retrigger hudson with special token in comment

2015-09-11 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-6412:
---
Component/s: (was: build)
 yetus

> Add ability to retrigger hudson with special token in comment
> -
>
> Key: HADOOP-6412
> URL: https://issues.apache.org/jira/browse/HADOOP-6412
> Project: Hadoop Common
>  Issue Type: Wish
>  Components: yetus
>Reporter: Todd Lipcon
>Priority: Minor
>
> It would be great if you could put some magic token (eg @HudsonTrigger) in 
> the body of any JIRA comment in order to get the QA bot to rerun on the 
> latest patch. Currently you have to toggle Patch Available status, which 
> generates a lot of needless email.





[jira] [Updated] (HADOOP-8787) KerberosAuthenticationHandler should include missing property names in configuration

2015-09-11 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-8787:
---
Status: Open  (was: Patch Available)

> KerberosAuthenticationHandler should include missing property names in 
> configuration
> 
>
> Key: HADOOP-8787
> URL: https://issues.apache.org/jira/browse/HADOOP-8787
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.0.1-alpha, 1.0.3, 3.0.0
>Reporter: Todd Lipcon
>Assignee: Ted Malaska
>Priority: Minor
>  Labels: BB2015-05-TBR, newbie
> Attachments: HADOOP-8787-0.patch, HADOOP-8787-1.patch, 
> HADOOP-8787-2.patch, HADOOP-8787-3.patch
>
>
> Currently, if the spnego keytab is missing from the configuration, the user 
> gets an error like: "javax.servlet.ServletException: Principal not defined in 
> configuration". This should be augmented to actually show the configuration 
> variable which is missing. Otherwise it is hard for a user to know what to 
> fix.
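
The improvement described above can be sketched as follows. This is a hedged illustration only: the helper and property names here are invented, not the actual KerberosAuthenticationHandler code or its configuration keys:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch (helper and property names invented): include the name
// of the missing property in the error, so the user knows what to configure.
public class ConfigCheck {
    static String require(Map<String, String> conf, String key) {
        String value = conf.get(key);
        if (value == null || value.trim().isEmpty()) {
            // Naming the property turns a dead-end error into an actionable one.
            throw new IllegalStateException(
                "Principal not defined in configuration: missing property '" + key + "'");
        }
        return value;
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        conf.put("kerberos.keytab", "/etc/krb5.keytab");
        System.out.println(require(conf, "kerberos.keytab"));
        try {
            require(conf, "kerberos.principal");
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());  // names the missing property
        }
    }
}
```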





[jira] [Updated] (HADOOP-8787) KerberosAuthenticationHandler should include missing property names in configuration

2015-09-11 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-8787:
---
Status: Patch Available  (was: Open)

> KerberosAuthenticationHandler should include missing property names in 
> configuration
> 
>
> Key: HADOOP-8787
> URL: https://issues.apache.org/jira/browse/HADOOP-8787
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.0.1-alpha, 1.0.3, 3.0.0
>Reporter: Todd Lipcon
>Assignee: Ted Malaska
>Priority: Minor
>  Labels: BB2015-05-TBR, newbie
> Attachments: HADOOP-8787-0.patch, HADOOP-8787-1.patch, 
> HADOOP-8787-2.patch, HADOOP-8787-3.patch





[jira] [Commented] (HADOOP-8787) KerberosAuthenticationHandler should include missing property names in configuration

2015-09-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14740815#comment-14740815
 ] 

Hadoop QA commented on HADOOP-8787:
---

\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch |   0m  0s | The patch command could not apply 
the patch during dryrun. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12544989/HADOOP-8787-3.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 486d5cb |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7637/console |



> KerberosAuthenticationHandler should include missing property names in 
> configuration
> 
>
> Key: HADOOP-8787
> URL: https://issues.apache.org/jira/browse/HADOOP-8787
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 1.0.3, 2.0.1-alpha, 3.0.0
>Reporter: Todd Lipcon
>Assignee: Ted Malaska
>Priority: Minor
>  Labels: BB2015-05-TBR, newbie
> Attachments: HADOOP-8787-0.patch, HADOOP-8787-1.patch, 
> HADOOP-8787-2.patch, HADOOP-8787-3.patch





[jira] [Commented] (HADOOP-12324) Better exception reporting in SaslPlainServer

2015-09-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14741259#comment-14741259
 ] 

Hudson commented on HADOOP-12324:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #358 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/358/])
HADOOP-12324. Better exception reporting in SaslPlainServer.   (Mike Yoder via 
stevel) (stevel: rev ca0827a86235dbc4d7e00cc8426ebff9fcc2d421)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SaslPlainServer.java


> Better exception reporting in SaslPlainServer
> -
>
> Key: HADOOP-12324
> URL: https://issues.apache.org/jira/browse/HADOOP-12324
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.8.0
>Reporter: Mike Yoder
>Assignee: Mike Yoder
>Priority: Minor
> Attachments: HADOOP-12324.000.patch
>
>
> This is a follow up from HADOOP-12318.  The review comment from 
> [~ste...@apache.org]:
> {quote}
> -1. It's critical to use Exception.toString() and not .getMessage(), as some 
> exceptions (NPE) don't have messages.
> {quote}
> This is the promised follow-up Jira.
> CC: [~atm]





[jira] [Updated] (HADOOP-12403) Enable multiple writes in flight for HBase WAL writing backed by WASB

2015-09-11 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-12403:

Component/s: (was: tools)
 azure

> Enable multiple writes in flight for HBase WAL writing backed by WASB
> -
>
> Key: HADOOP-12403
> URL: https://issues.apache.org/jira/browse/HADOOP-12403
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: azure
>Reporter: Duo Xu
>Assignee: Duo Xu
> Attachments: HADOOP-12403.01.patch, HADOOP-12403.02.patch, 
> HADOOP-12403.03.patch
>
>
> Azure HDI HBase clusters use Azure blob storage as the file system. We found 
> that the bottleneck was writing to the write-ahead log (WAL). The latest 
> HBase WAL write model (HBASE-8755) uses multiple AsyncSyncer threads to sync 
> data to HDFS. However, our WASB driver is still based on a single-thread 
> model, so when the sync threads call into the WASB layer, only one thread at 
> a time is allowed to send data to Azure storage. This jira introduces a new 
> write model in the WASB layer that allows multiple writes in parallel.
> 1. Since we use a page blob for the WAL, this causes "holes" in the page 
> blob, as every write starts on a new page. We use the first two bytes of 
> every page to record the actual data size of the current page.
> 2. When reading the WAL, we need to know its actual size. This is the sum of 
> the numbers stored in the first two bytes of every page, but looping over 
> every page to compute it would be very slow, considering a normal WAL is 
> 128MB and each page is 512 bytes. So during writing, every time a write 
> succeeds, a blob metadata entry called "total_data_uploaded" is updated.
> 3. Although we allow multiple writes in flight, we need to make sure the 
> sync threads that call into the WASB layer return in order. Reading the 
> HBase source code (FSHLog.java), we find that every sync request is 
> associated with a transaction id; if a sync succeeds, all transactions prior 
> to that id are assumed to be in Azure Storage. We use a queue to store the 
> sync requests and make sure they return to the HBase layer in order.
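
The page layout from point 1 can be sketched as follows. This is a hypothetical illustration of the described scheme, not the WASB driver code; the class and constant names are invented:

```java
import java.nio.ByteBuffer;

// Hypothetical sketch of the page layout described in point 1 (names
// invented): every 512-byte page carries its payload length in the first
// two bytes, so a reader can skip the unused tail ("hole") of each page.
public class PageCodec {
    static final int PAGE_SIZE = 512;
    static final int HEADER_BYTES = 2;

    static byte[] encodePage(byte[] data) {
        if (data.length > PAGE_SIZE - HEADER_BYTES) {
            throw new IllegalArgumentException("payload does not fit in one page");
        }
        ByteBuffer page = ByteBuffer.allocate(PAGE_SIZE);
        page.putShort((short) data.length);  // first two bytes: actual data size
        page.put(data);                      // the rest of the page stays zeroed
        return page.array();
    }

    static byte[] decodePage(byte[] page) {
        ByteBuffer buf = ByteBuffer.wrap(page);
        int len = buf.getShort();            // recover the payload length
        byte[] out = new byte[len];
        buf.get(out);
        return out;
    }

    public static void main(String[] args) {
        byte[] page = encodePage("wal-entry".getBytes());
        System.out.println(page.length);                   // always 512
        System.out.println(new String(decodePage(page)));  // wal-entry
    }
}
```

Summing these per-page lengths reproduces the total WAL size, which is why the "total_data_uploaded" metadata entry in point 2 avoids the per-page scan.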





[jira] [Updated] (HADOOP-12398) filefilter function in test-patch flink personality is never called

2015-09-11 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-12398:
--
   Resolution: Fixed
Fix Version/s: HADOOP-12111
   Status: Resolved  (was: Patch Available)

+1 committing.

Thanks!

> filefilter function in test-patch flink personality is never called
> ---
>
> Key: HADOOP-12398
> URL: https://issues.apache.org/jira/browse/HADOOP-12398
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: yetus
>Affects Versions: HADOOP-12111
>Reporter: Kengo Seki
>Assignee: Jagadesh Kiran N
>  Labels: newbie
> Fix For: HADOOP-12111
>
> Attachments: HADOOP-12398.HADOOP-12111.00.patch, 
> HADOOP-12398.HADOOP-12111.01.patch
>
>
> Wrong function name.
> {code}
>  28 function fliblib_filefilter
>  29 {
>  30   local filename=$1
>  31 
>  32   if [[ ${filename} =~ \.java$
>  33 || ${filename} =~ \.scala$
>  34 || ${filename} =~ pom.xml$ ]]; then
>  35 add_test flinklib
>  36   fi
>  37 }
> {code}





[jira] [Updated] (HADOOP-12400) Wrong comment for scaladoc_rebuild function in test-patch scala plugin

2015-09-11 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12400?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-12400:
--
   Resolution: Fixed
Fix Version/s: HADOOP-12111
   Status: Resolved  (was: Patch Available)

+1 committed

Thanks!

> Wrong comment for scaladoc_rebuild function in test-patch scala plugin
> --
>
> Key: HADOOP-12400
> URL: https://issues.apache.org/jira/browse/HADOOP-12400
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: yetus
>Affects Versions: HADOOP-12111
>Reporter: Kengo Seki
>Assignee: Jagadesh Kiran N
>Priority: Trivial
>  Labels: newbie
> Fix For: HADOOP-12111
>
> Attachments: HADOOP-12400.HADOOP-12111.00.patch, 
> HADOOP-12400.HADOOP-12111.01.patch, HADOOP-12400.HADOOP-12111.02.patch
>
>
> {code}
>  62 ## @description  Count and compare the number of JavaDoc warnings pre- 
> and post- patch
>  63 ## @audience private
>  64 ## @stabilityevolving
>  65 ## @replaceable  no
>  66 ## @return   0 on success
>  67 ## @return   1 on failure
>  68 function scaladoc_rebuild
> {code}
> s/JavaDoc/ScalaDoc/





[jira] [Commented] (HADOOP-12348) MetricsSystemImpl creates MetricsSourceAdapter with wrong time unit parameter.

2015-09-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14741843#comment-14741843
 ] 

Hudson commented on HADOOP-12348:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk-Java8 #360 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/360/])
HADOOP-12348. MetricsSystemImpl creates MetricsSourceAdapter with wrong time 
unit parameter. (zxu via rkanter) (rkanter: rev 
9538af0e1a7428b8787afa8d5e0b82c1e04adca7)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/impl/MetricsSystemImpl.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/impl/MetricsSourceAdapter.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/metrics2/impl/TestMetricsSystemImpl.java


> MetricsSystemImpl creates MetricsSourceAdapter with wrong time unit parameter.
> --
>
> Key: HADOOP-12348
> URL: https://issues.apache.org/jira/browse/HADOOP-12348
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: metrics
>Reporter: zhihai xu
>Assignee: zhihai xu
> Fix For: 2.8.0
>
> Attachments: HADOOP-12348.000.patch, HADOOP-12348.001.patch, 
> HADOOP-12348.branch-2.patch





[jira] [Commented] (HADOOP-12348) MetricsSystemImpl creates MetricsSourceAdapter with wrong time unit parameter.

2015-09-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14741892#comment-14741892
 ] 

Hudson commented on HADOOP-12348:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk #2300 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2300/])
HADOOP-12348. MetricsSystemImpl creates MetricsSourceAdapter with wrong time 
unit parameter. (zxu via rkanter) (rkanter: rev 
9538af0e1a7428b8787afa8d5e0b82c1e04adca7)
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/metrics2/impl/TestMetricsSystemImpl.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/impl/MetricsSystemImpl.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/impl/MetricsSourceAdapter.java


> MetricsSystemImpl creates MetricsSourceAdapter with wrong time unit parameter.
> --
>
> Key: HADOOP-12348
> URL: https://issues.apache.org/jira/browse/HADOOP-12348
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: metrics
>Reporter: zhihai xu
>Assignee: zhihai xu
> Fix For: 2.8.0
>
> Attachments: HADOOP-12348.000.patch, HADOOP-12348.001.patch, 
> HADOOP-12348.branch-2.patch
>
>
> MetricsSystemImpl creates MetricsSourceAdapter with wrong time unit 
> parameter. MetricsSourceAdapter expects time unit millisecond  for 
> jmxCacheTTL but MetricsSystemImpl  passes time unit second to 
> MetricsSourceAdapter constructor.
> {code}
> jmxCacheTS = Time.now() + jmxCacheTTL;
>   /**
>* Current system time.  Do not use this to calculate a duration or interval
>* to sleep, because it will be broken by settimeofday.  Instead, use
>* monotonicNow.
>* @return current time in msec.
>*/
>   public static long now() {
> return System.currentTimeMillis();
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12397) Incomplete comment for test-patch compile_cycle function

2015-09-11 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14741803#comment-14741803
 ] 

Allen Wittenauer commented on HADOOP-12397:
---

bq. Allen, the architecture and advanced document seem to mention the _rebuild 
phase already. Do you have something to worry about?

Just my memory. ;)

+1 committing this.

Thanks!

> Incomplete comment for test-patch compile_cycle function
> 
>
> Key: HADOOP-12397
> URL: https://issues.apache.org/jira/browse/HADOOP-12397
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: yetus
>Affects Versions: HADOOP-12111
>Reporter: Kengo Seki
>Assignee: Jagadesh Kiran N
>Priority: Trivial
>  Labels: newbie
> Attachments: HADOOP-12397.HADOOP-12111.00.patch, 
> HADOOP-12397.HADOOP-12111.01.patch
>
>
> Its comment says:
> {code}
> ## @description  This will callout to _precompile, compile, and _postcompile
> {code}
> but it calls _rebuild also.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12397) Incomplete comment for test-patch compile_cycle function

2015-09-11 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-12397:
--
   Resolution: Fixed
Fix Version/s: HADOOP-12111
   Status: Resolved  (was: Patch Available)

> Incomplete comment for test-patch compile_cycle function
> 
>
> Key: HADOOP-12397
> URL: https://issues.apache.org/jira/browse/HADOOP-12397
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: yetus
>Affects Versions: HADOOP-12111
>Reporter: Kengo Seki
>Assignee: Jagadesh Kiran N
>Priority: Trivial
>  Labels: newbie
> Fix For: HADOOP-12111
>
> Attachments: HADOOP-12397.HADOOP-12111.00.patch, 
> HADOOP-12397.HADOOP-12111.01.patch
>
>
> Its comment says:
> {code}
> ## @description  This will callout to _precompile, compile, and _postcompile
> {code}
> but it calls _rebuild also.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12348) MetricsSystemImpl creates MetricsSourceAdapter with wrong time unit parameter.

2015-09-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14741820#comment-14741820
 ] 

Hudson commented on HADOOP-12348:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #375 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/375/])
HADOOP-12348. MetricsSystemImpl creates MetricsSourceAdapter with wrong time 
unit parameter. (zxu via rkanter) (rkanter: rev 
9538af0e1a7428b8787afa8d5e0b82c1e04adca7)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/impl/MetricsSourceAdapter.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/metrics2/impl/TestMetricsSystemImpl.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/impl/MetricsSystemImpl.java
* hadoop-common-project/hadoop-common/CHANGES.txt


> MetricsSystemImpl creates MetricsSourceAdapter with wrong time unit parameter.
> --
>
> Key: HADOOP-12348
> URL: https://issues.apache.org/jira/browse/HADOOP-12348
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: metrics
>Reporter: zhihai xu
>Assignee: zhihai xu
> Fix For: 2.8.0
>
> Attachments: HADOOP-12348.000.patch, HADOOP-12348.001.patch, 
> HADOOP-12348.branch-2.patch
>
>
> MetricsSystemImpl creates MetricsSourceAdapter with wrong time unit 
> parameter. MetricsSourceAdapter expects time unit millisecond  for 
> jmxCacheTTL but MetricsSystemImpl  passes time unit second to 
> MetricsSourceAdapter constructor.
> {code}
> jmxCacheTS = Time.now() + jmxCacheTTL;
>   /**
>* Current system time.  Do not use this to calculate a duration or interval
>* to sleep, because it will be broken by settimeofday.  Instead, use
>* monotonicNow.
>* @return current time in msec.
>*/
>   public static long now() {
> return System.currentTimeMillis();
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12348) MetricsSystemImpl creates MetricsSourceAdapter with wrong time unit parameter.

2015-09-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14741760#comment-14741760
 ] 

Hudson commented on HADOOP-12348:
-

SUCCESS: Integrated in Hadoop-Yarn-trunk-Java8 #381 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/381/])
HADOOP-12348. MetricsSystemImpl creates MetricsSourceAdapter with wrong time 
unit parameter. (zxu via rkanter) (rkanter: rev 
9538af0e1a7428b8787afa8d5e0b82c1e04adca7)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/metrics2/impl/TestMetricsSystemImpl.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/impl/MetricsSystemImpl.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/impl/MetricsSourceAdapter.java


> MetricsSystemImpl creates MetricsSourceAdapter with wrong time unit parameter.
> --
>
> Key: HADOOP-12348
> URL: https://issues.apache.org/jira/browse/HADOOP-12348
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: metrics
>Reporter: zhihai xu
>Assignee: zhihai xu
> Fix For: 2.8.0
>
> Attachments: HADOOP-12348.000.patch, HADOOP-12348.001.patch, 
> HADOOP-12348.branch-2.patch
>
>
> MetricsSystemImpl creates MetricsSourceAdapter with wrong time unit 
> parameter. MetricsSourceAdapter expects time unit millisecond  for 
> jmxCacheTTL but MetricsSystemImpl  passes time unit second to 
> MetricsSourceAdapter constructor.
> {code}
> jmxCacheTS = Time.now() + jmxCacheTTL;
>   /**
>* Current system time.  Do not use this to calculate a duration or interval
>* to sleep, because it will be broken by settimeofday.  Instead, use
>* monotonicNow.
>* @return current time in msec.
>*/
>   public static long now() {
> return System.currentTimeMillis();
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12348) MetricsSystemImpl creates MetricsSourceAdapter with wrong time unit parameter.

2015-09-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14741887#comment-14741887
 ] 

Hudson commented on HADOOP-12348:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2323 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2323/])
HADOOP-12348. MetricsSystemImpl creates MetricsSourceAdapter with wrong time 
unit parameter. (zxu via rkanter) (rkanter: rev 
9538af0e1a7428b8787afa8d5e0b82c1e04adca7)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/impl/MetricsSystemImpl.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/impl/MetricsSourceAdapter.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/metrics2/impl/TestMetricsSystemImpl.java


> MetricsSystemImpl creates MetricsSourceAdapter with wrong time unit parameter.
> --
>
> Key: HADOOP-12348
> URL: https://issues.apache.org/jira/browse/HADOOP-12348
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: metrics
>Reporter: zhihai xu
>Assignee: zhihai xu
> Fix For: 2.8.0
>
> Attachments: HADOOP-12348.000.patch, HADOOP-12348.001.patch, 
> HADOOP-12348.branch-2.patch
>
>
> MetricsSystemImpl creates MetricsSourceAdapter with wrong time unit 
> parameter. MetricsSourceAdapter expects time unit millisecond  for 
> jmxCacheTTL but MetricsSystemImpl  passes time unit second to 
> MetricsSourceAdapter constructor.
> {code}
> jmxCacheTS = Time.now() + jmxCacheTTL;
>   /**
>* Current system time.  Do not use this to calculate a duration or interval
>* to sleep, because it will be broken by settimeofday.  Instead, use
>* monotonicNow.
>* @return current time in msec.
>*/
>   public static long now() {
> return System.currentTimeMillis();
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12385) include nested stack trace in SaslRpcClient.getServerToken()

2015-09-11 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12385?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14741742#comment-14741742
 ] 

Chris Nauroth commented on HADOOP-12385:


Hi [~ste...@apache.org].  This looks good.  I have a minor nitpick on these 
lines:

{code}
+  LOG.debug("tokens aren't supported for this protocol or user doesn't 
have one");
   return null; // tokens aren't supported or user doesn't have one
{code}

{code}
+  LOG.debug("client isn't using kerberos");
   return null; // client isn't using kerberos
{code}

{code}
+  LOG.debug("protocol doesn't use kerberos");
   return null; // protocol doesn't use kerberos
{code}

Now that you've added these log statements, the code is self-documenting, and 
the comments are redundant.  Would you please remove the comments?

> include nested stack trace in SaslRpcClient.getServerToken()
> 
>
> Key: HADOOP-12385
> URL: https://issues.apache.org/jira/browse/HADOOP-12385
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.7.1
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-12385-001.patch, HADOOP-12385-002.patch
>
>
> The {{SaslRpcClient.getServerToken()}} method loses the stack trace when an 
> attempt to instantiate a {{TokenSelector}} fails. It should include it in the 
> generated exception.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12348) MetricsSystemImpl creates MetricsSourceAdapter with wrong time unit parameter.

2015-09-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12348?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14741795#comment-14741795
 ] 

Hudson commented on HADOOP-12348:
-

SUCCESS: Integrated in Hadoop-Yarn-trunk #1113 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/1113/])
HADOOP-12348. MetricsSystemImpl creates MetricsSourceAdapter with wrong time 
unit parameter. (zxu via rkanter) (rkanter: rev 
9538af0e1a7428b8787afa8d5e0b82c1e04adca7)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/impl/MetricsSystemImpl.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/metrics2/impl/TestMetricsSystemImpl.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/metrics2/impl/MetricsSourceAdapter.java
* hadoop-common-project/hadoop-common/CHANGES.txt


> MetricsSystemImpl creates MetricsSourceAdapter with wrong time unit parameter.
> --
>
> Key: HADOOP-12348
> URL: https://issues.apache.org/jira/browse/HADOOP-12348
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: metrics
>Reporter: zhihai xu
>Assignee: zhihai xu
> Fix For: 2.8.0
>
> Attachments: HADOOP-12348.000.patch, HADOOP-12348.001.patch, 
> HADOOP-12348.branch-2.patch
>
>
> MetricsSystemImpl creates MetricsSourceAdapter with wrong time unit 
> parameter. MetricsSourceAdapter expects time unit millisecond  for 
> jmxCacheTTL but MetricsSystemImpl  passes time unit second to 
> MetricsSourceAdapter constructor.
> {code}
> jmxCacheTS = Time.now() + jmxCacheTTL;
>   /**
>* Current system time.  Do not use this to calculate a duration or interval
>* to sleep, because it will be broken by settimeofday.  Instead, use
>* monotonicNow.
>* @return current time in msec.
>*/
>   public static long now() {
> return System.currentTimeMillis();
>   }
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12399) Wrong help messages in some test-patch plugins

2015-09-11 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14741810#comment-14741810
 ] 

Allen Wittenauer commented on HADOOP-12399:
---

{code}
echo "--bigtop-puppetsetup=[false|true]
{code} 

isn't correct.

It needs to match the code in bigtop_parse_args.

> Wrong help messages in some test-patch plugins
> --
>
> Key: HADOOP-12399
> URL: https://issues.apache.org/jira/browse/HADOOP-12399
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: yetus
>Affects Versions: HADOOP-12111
>Reporter: Kengo Seki
>Assignee: Jagadesh Kiran N
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-12399.HADOOP-12111.00.patch
>
>
> dev-support/personality/bigtop.sh:
> {code}
>  32 function bigtop_usage
>  33 {
>  34   echo "Bigtop specific:"
>  35   echo "--bigtop-puppetsetup=[false|true]   execute the bigtop dev setup 
> (needs sudo to root)"
>  36 }
> {code}
> s/bigtop-puppetsetup/bigtop-puppet/.
> dev-support/test-patch.d/gradle.sh:
> {code}
>  21 function gradle_usage
>  22 {
>  23   echo "gradle specific:"
>  24   echo "--gradle-cmd=The 'gradle' command to use (default 
> 'gradle')"
>  25   echo "--gradlew-cmd=The 'gradle' command to use (default 
> 'basedir/gradlew')"
>  26 }
> {code}
> s/'gradle' command/'gradlew' command/ for the latter.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11838) Externalize AuthToken interface

2015-09-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14740929#comment-14740929
 ] 

Hadoop QA commented on HADOOP-11838:


\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | patch |   0m  0s | The patch command could not apply 
the patch during dryrun. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12725725/HADOOP-11838-v1.patch |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 486d5cb |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7639/console |


This message was automatically generated.

> Externalize AuthToken interface
> ---
>
> Key: HADOOP-11838
> URL: https://issues.apache.org/jira/browse/HADOOP-11838
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: security
>Reporter: Kai Zheng
>Assignee: Kai Zheng
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-11838-v1.patch
>
>
> This suggests refactoring existing {{AuthToken}} utility class and 
> externalizing a generic {{AuthToken}} API.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12057) swiftfs rename on partitioned file attempts to consolidate partitions

2015-09-11 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14740933#comment-14740933
 ] 

Steve Loughran commented on HADOOP-12057:
-

-1; tests are failing, at least locally, and repeatedly.

{code}
  
TestSwiftFileSystemContract>FileSystemContractBaseTest.testRenameDirectoryMoveToExistingDirectory:417->FileSystemContractBaseTest.rename:495
 Source exists expected:<false> but was:<true>
  
TestSwiftFileSystemContract>FileSystemContractBaseTest.testRenameDirectoryAsExistingDirectory:449->FileSystemContractBaseTest.rename:495
 Source exists expected:<false> but was:<true>
{code}

That is, after the rename the source directories are still there.



> swiftfs rename on partitioned file attempts to consolidate partitions
> -
>
> Key: HADOOP-12057
> URL: https://issues.apache.org/jira/browse/HADOOP-12057
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/swift
>Reporter: David Dobbins
>Assignee: David Dobbins
> Attachments: HADOOP-12057-006.patch, HADOOP-12057-008.patch, 
> HADOOP-12057.007.patch, HADOOP-12057.patch, HADOOP-12057.patch, 
> HADOOP-12057.patch, HADOOP-12057.patch, HADOOP-12057.patch
>
>
> In the swift filesystem for openstack, a rename operation on a partitioned 
> file uses the swift COPY operation, which attempts to consolidate all of the 
> partitions into a single object.  This causes the rename to fail when the 
> total size of all the partitions exceeds the maximum object size for swift.  
> Since partitioned files are primarily created to allow a file to exceed the 
> maximum object size, this bug makes writing to swift extremely unreliable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12324) Better exception reporting in SaslPlainServer

2015-09-11 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14740944#comment-14740944
 ] 

Steve Loughran commented on HADOOP-12324:
-

+1, committed!

> Better exception reporting in SaslPlainServer
> -
>
> Key: HADOOP-12324
> URL: https://issues.apache.org/jira/browse/HADOOP-12324
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.8.0
>Reporter: Mike Yoder
>Assignee: Mike Yoder
>Priority: Minor
> Attachments: HADOOP-12324.000.patch
>
>
> This is a follow up from HADOOP-12318.  The review comment from 
> [~ste...@apache.org]:
> {quote}
> -1. It's critical to use Exception.toString() and not .getMessage(), as some 
> exceptions (NPE) don't have messages.
> {quote}
> This is the promised follow-up Jira.
> CC: [~atm]
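The review comment is easy to demonstrate with a small standalone example (a sketch, not the SaslPlainServer code itself): an NPE's getMessage() is null, while toString() always includes the class name.

```java
// Sketch: why Exception.toString() beats getMessage() when reporting a
// wrapped cause: some exceptions, e.g. NullPointerException, carry no message.
public class ExceptionReportSketch {

    /** Never returns null: "ClassName" or "ClassName: message". */
    static String describe(Throwable t) {
        return t.toString();
    }

    public static void main(String[] args) {
        Throwable npe = new NullPointerException();
        System.out.println("getMessage(): " + npe.getMessage()); // prints "getMessage(): null"
        System.out.println("describe():   " + describe(npe));
    }
}
```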



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12389) allow self-impersonation

2015-09-11 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14740946#comment-14740946
 ] 

Allen Wittenauer commented on HADOOP-12389:
---

bq. If ProxyUsers#authorize is called when it should not be, it is a bug and 
should be fixed rather than changing authorization scheme.

There is no difference between a user authenticating as themselves and a user 
authenticating as themselves and requesting to proxy to their own account, 
especially if we include the clause that a proxy config entry mustn't already 
exist.

As to a bug, I'd love to go back to pre-2.5.0 NN HTTP handling before 
everything got massively screwed up despite us (a year later) still dealing 
with the fallout, but I don't see that happening.  But again: it's sort of 
irrelevant.  There's no real difference here from a security perspective.

> allow self-impersonation
> 
>
> Key: HADOOP-12389
> URL: https://issues.apache.org/jira/browse/HADOOP-12389
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>
> This is kind of dumb:
> org.apache.hadoop.security.authorize.AuthorizationException: User: aw is not 
> allowed to impersonate aw
> Users should be able to impersonate themselves in secure and non-secure cases 
> automatically, for free.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10420) Add support to Swift-FS to support tempAuth

2015-09-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10420?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14740897#comment-14740897
 ] 

Hadoop QA commented on HADOOP-10420:


\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | pre-patch |  16m  0s | Pre-patch trunk compilation is 
healthy. |
| {color:green}+1{color} | @author |   0m  0s | The patch does not contain any 
@author tags. |
| {color:green}+1{color} | tests included |   0m  0s | The patch appears to 
include 1 new or modified test files. |
| {color:green}+1{color} | javac |   7m 51s | There were no new javac warning 
messages. |
| {color:green}+1{color} | javadoc |   9m 52s | There were no new javadoc 
warning messages. |
| {color:green}+1{color} | release audit |   0m 25s | The applied patch does 
not increase the total number of release audit warnings. |
| {color:red}-1{color} | checkstyle |   0m 19s | The applied patch generated  1 
new checkstyle issues (total was 276, now 277). |
| {color:green}+1{color} | whitespace |   0m  2s | The patch has no lines that 
end in whitespace. |
| {color:green}+1{color} | install |   1m 28s | mvn install still works. |
| {color:green}+1{color} | eclipse:eclipse |   0m 33s | The patch built with 
eclipse:eclipse. |
| {color:green}+1{color} | findbugs |   0m 45s | The patch does not introduce 
any new Findbugs (version 3.0.0) warnings. |
| {color:green}+1{color} | tools/hadoop tests |   0m 14s | Tests passed in 
hadoop-openstack. |
| | |  37m 33s | |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12755406/HADOOP-10420-013.patch 
|
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 486d5cb |
| checkstyle |  
https://builds.apache.org/job/PreCommit-HADOOP-Build/7638/artifact/patchprocess/diffcheckstylehadoop-openstack.txt
 |
| hadoop-openstack test log | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7638/artifact/patchprocess/testrun_hadoop-openstack.txt
 |
| Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7638/testReport/ |
| Java | 1.7.0_55 |
| uname | Linux asf906.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7638/console |


This message was automatically generated.

> Add support to Swift-FS to support tempAuth
> ---
>
> Key: HADOOP-10420
> URL: https://issues.apache.org/jira/browse/HADOOP-10420
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs, fs/swift, tools
>Affects Versions: 2.3.0
>Reporter: Jinghui Wang
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-10420-002.patch, HADOOP-10420-003.patch, 
> HADOOP-10420-004.patch, HADOOP-10420-005.patch, HADOOP-10420-006.patch, 
> HADOOP-10420-007.patch, HADOOP-10420-008.patch, HADOOP-10420-009.patch, 
> HADOOP-10420-010.patch, HADOOP-10420-011.patch, HADOOP-10420-012.patch, 
> HADOOP-10420-013.patch, HADOOP-10420.patch
>
>
> Currently, hadoop-openstack Swift FS supports keystone authentication. The 
> attached patch adds support for tempAuth. Users will be able to configure 
> which authentication to use.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11838) Externalize AuthToken interface

2015-09-11 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14740919#comment-14740919
 ] 

Steve Loughran commented on HADOOP-11838:
-

# What's the benefit/rationale for this?
# `This class currently is only used internally, so it's safe.` That's always 
a bit dangerous. Why not leave it where it is?

> Externalize AuthToken interface
> ---
>
> Key: HADOOP-11838
> URL: https://issues.apache.org/jira/browse/HADOOP-11838
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: security
>Reporter: Kai Zheng
>Assignee: Kai Zheng
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-11838-v1.patch
>
>
> This suggests refactoring existing {{AuthToken}} utility class and 
> externalizing a generic {{AuthToken}} API.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12324) Better exception reporting in SaslPlainServer

2015-09-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14740954#comment-14740954
 ] 

Hudson commented on HADOOP-12324:
-

FAILURE: Integrated in Hadoop-trunk-Commit #8434 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/8434/])
HADOOP-12324. Better exception reporting in SaslPlainServer.   (Mike Yoder via 
stevel) (stevel: rev ca0827a86235dbc4d7e00cc8426ebff9fcc2d421)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SaslPlainServer.java
* hadoop-common-project/hadoop-common/CHANGES.txt


> Better exception reporting in SaslPlainServer
> -
>
> Key: HADOOP-12324
> URL: https://issues.apache.org/jira/browse/HADOOP-12324
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.8.0
>Reporter: Mike Yoder
>Assignee: Mike Yoder
>Priority: Minor
> Attachments: HADOOP-12324.000.patch
>
>
> This is a follow up from HADOOP-12318.  The review comment from 
> [~ste...@apache.org]:
> {quote}
> -1. It's critical to use Exception.toString() and not .getMessage(), as some 
> exceptions (NPE) don't have messages.
> {quote}
> This is the promised follow-up Jira.
> CC: [~atm]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11404) Clarify the "expected client Kerberos principal is null" authorization message

2015-09-11 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-11404:

Status: Open  (was: Patch Available)

> Clarify the "expected client Kerberos principal is null" authorization message
> --
>
> Key: HADOOP-11404
> URL: https://issues.apache.org/jira/browse/HADOOP-11404
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.2.0
>Reporter: Stephen Chu
>Assignee: Stephen Chu
>Priority: Minor
>  Labels: BB2015-05-TBR, supportability
> Attachments: HADOOP-11404.001.patch
>
>
> In {{ServiceAuthorizationManager#authorize}}, we throw an 
> {{AuthorizationException}} with message "expected client Kerberos principal 
> is null" when authorization fails.
> However, this is a confusing log message, because it leads users to believe 
> there was a Kerberos authentication problem, when in fact the user could 
> have authenticated successfully.
> {code}
> if((clientPrincipal != null && !clientPrincipal.equals(user.getUserName())) 
> || 
>acls.length != 2  || !acls[0].isUserAllowed(user) || 
> acls[1].isUserAllowed(user)) {
>   AUDITLOG.warn(AUTHZ_FAILED_FOR + user + " for protocol=" + protocol
>   + ", expected client Kerberos principal is " + clientPrincipal);
>   throw new AuthorizationException("User " + user + 
>   " is not authorized for protocol " + protocol + 
>   ", expected client Kerberos principal is " + clientPrincipal);
> }
> AUDITLOG.info(AUTHZ_SUCCESSFUL_FOR + user + " for protocol="+protocol);
> {code}
> In the above code, if clientPrincipal is null, then the user is authenticated 
> successfully but denied by a configured ACL, not a Kerberos issue. We should 
> improve this log message to state this.
> Thanks to [~tlipcon] for finding this and proposing a fix.
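One way the message could be improved (illustrative wording only; the actual fix is in the attached patch): branch on whether clientPrincipal is null, so a denial by a configured ACL is not reported as a Kerberos problem.

```java
// Sketch: pick a denial message that distinguishes "denied by configured ACL"
// from a genuine Kerberos principal mismatch. Names are illustrative, not the
// ones in ServiceAuthorizationManager.
public class AuthzMessageSketch {

    static String denialMessage(String user, String protocol, String clientPrincipal) {
        if (clientPrincipal == null) {
            // Authentication succeeded; the configured ACL rejected the user.
            return "User " + user + " is not authorized for protocol " + protocol
                + " (denied by configured ACL)";
        }
        return "User " + user + " is not authorized for protocol " + protocol
            + ", expected client Kerberos principal is " + clientPrincipal;
    }

    public static void main(String[] args) {
        System.out.println(denialMessage("alice", "ClientProtocol", null));
    }
}
```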



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11404) Clarify the "expected client Kerberos principal is null" authorization message

2015-09-11 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11404?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-11404:

Status: Patch Available  (was: Open)

+1

submitting a patch to make sure it takes and I'll commit it if all is well

> Clarify the "expected client Kerberos principal is null" authorization message
> --
>
> Key: HADOOP-11404
> URL: https://issues.apache.org/jira/browse/HADOOP-11404
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.2.0
>Reporter: Stephen Chu
>Assignee: Stephen Chu
>Priority: Minor
>  Labels: BB2015-05-TBR, supportability
> Attachments: HADOOP-11404.001.patch
>
>
> In {{ServiceAuthorizationManager#authorize}}, we throw an 
> {{AuthorizationException}} with message "expected client Kerberos principal 
> is null" when authorization fails.
> However, this is a confusing log message, because it leads users to believe 
> there was a Kerberos authentication problem, when in fact the user could 
> have authenticated successfully.
> {code}
> if((clientPrincipal != null && !clientPrincipal.equals(user.getUserName())) 
> || 
>acls.length != 2  || !acls[0].isUserAllowed(user) || 
> acls[1].isUserAllowed(user)) {
>   AUDITLOG.warn(AUTHZ_FAILED_FOR + user + " for protocol=" + protocol
>   + ", expected client Kerberos principal is " + clientPrincipal);
>   throw new AuthorizationException("User " + user + 
>   " is not authorized for protocol " + protocol + 
>   ", expected client Kerberos principal is " + clientPrincipal);
> }
> AUDITLOG.info(AUTHZ_SUCCESSFUL_FOR + user + " for protocol="+protocol);
> {code}
> In the above code, if clientPrincipal is null, then the user is authenticated 
> successfully but denied by a configured ACL, not a Kerberos issue. We should 
> improve this log message to state this.
> Thanks to [~tlipcon] for finding this and proposing a fix.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10633) use Time#monotonicNow to avoid system clock reset

2015-09-11 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14741168#comment-14741168
 ] 

Steve Loughran commented on HADOOP-10633:
-

Just reviewed this: I agree with Andrew. {{AbstractService}} is only 
recording the events for forwarding to others, so it doesn't need to change; 
it "probably" doesn't suffer from the clock-reset problem. I don't know about 
the tokens.

Andrew: if you are happy with the NativeIO one, why not just check that bit in?

> use Time#monotonicNow to avoid system clock reset
> -
>
> Key: HADOOP-10633
> URL: https://issues.apache.org/jira/browse/HADOOP-10633
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: io, security
>Affects Versions: 3.0.0
>Reporter: Liang Xie
>Assignee: Liang Xie
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-10633.txt
>
>
> let's replace System#currentTimeMillis with Time#monotonicNow
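The rationale, sketched with plain JDK calls (Hadoop's {{Time#monotonicNow}} is built on {{System.nanoTime()}}): a monotonic clock keeps measured intervals valid even if the wall clock is reset.

```java
// Sketch: measure an elapsed interval with a monotonic clock.
// System.nanoTime() is unaffected by settimeofday, which is exactly why
// Time#monotonicNow is preferred over System.currentTimeMillis() for durations.
public class MonotonicSketch {

    static long monotonicNowMillis() {
        return System.nanoTime() / 1_000_000L; // same idea as Time#monotonicNow
    }

    public static void main(String[] args) throws InterruptedException {
        long start = monotonicNowMillis();
        Thread.sleep(20);
        long elapsedMs = monotonicNowMillis() - start;
        // With a monotonic clock this can never be negative, even across a
        // wall-clock reset; with currentTimeMillis() it could be.
        System.out.println("elapsed ms: " + elapsedMs);
    }
}
```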



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12399) Wrong help messages in some test-patch plugins

2015-09-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14741217#comment-14741217
 ] 

Hadoop QA commented on HADOOP-12399:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} precommit patch detected. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green} 0m 
6s {color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
15s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 0m 26s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12755437/HADOOP-12399.HADOOP-12111.00.patch
 |
| JIRA Issue | HADOOP-12399 |
| git revision | HADOOP-12111 / 1e2eeb0 |
| Optional Tests | asflicense unit  shellcheck  |
| uname | Linux asf900.gq1.ygridcore.net 3.13.0-36-lowlatency #63-Ubuntu SMP 
PREEMPT Wed Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build@2/patchprocess/dev-support-test/personality/hadoop.sh
 |
| Default Java | 1.7.0_55 |
| Multi-JDK versions |  /home/jenkins/tools/java/jdk1.8.0:1.8.0 
/home/jenkins/tools/java/jdk1.7.0_55:1.7.0_55 |
| shellcheck | v0.3.3 (This is an old version that has serious bugs. Consider 
upgrading.) |
| JDK v1.7.0_55  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7643/testReport/ |
| Max memory used | 48MB |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7643/console |


This message was automatically generated.



> Wrong help messages in some test-patch plugins
> --
>
> Key: HADOOP-12399
> URL: https://issues.apache.org/jira/browse/HADOOP-12399
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: yetus
>Affects Versions: HADOOP-12111
>Reporter: Kengo Seki
>Assignee: Jagadesh Kiran N
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-12399.HADOOP-12111.00.patch
>
>
> dev-support/personality/bigtop.sh:
> {code}
>  32 function bigtop_usage
>  33 {
>  34   echo "Bigtop specific:"
>  35   echo "--bigtop-puppetsetup=[false|true]   execute the bigtop dev setup 
> (needs sudo to root)"
>  36 }
> {code}
> s/bigtop-puppetsetup/bigtop-puppet/.
> dev-support/test-patch.d/gradle.sh:
> {code}
>  21 function gradle_usage
>  22 {
>  23   echo "gradle specific:"
>  24   echo "--gradle-cmd=The 'gradle' command to use (default 
> 'gradle')"
>  25   echo "--gradlew-cmd=The 'gradle' command to use (default 
> 'basedir/gradlew')"
>  26 }
> {code}
> s/'gradle' command/'gradlew' command/ for the latter.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12324) Better exception reporting in SaslPlainServer

2015-09-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14741239#comment-14741239
 ] 

Hudson commented on HADOOP-12324:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk #2297 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2297/])
HADOOP-12324. Better exception reporting in SaslPlainServer.   (Mike Yoder via 
stevel) (stevel: rev ca0827a86235dbc4d7e00cc8426ebff9fcc2d421)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/SaslPlainServer.java


> Better exception reporting in SaslPlainServer
> -
>
> Key: HADOOP-12324
> URL: https://issues.apache.org/jira/browse/HADOOP-12324
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.8.0
>Reporter: Mike Yoder
>Assignee: Mike Yoder
>Priority: Minor
> Attachments: HADOOP-12324.000.patch
>
>
> This is a follow up from HADOOP-12318.  The review comment from 
> [~ste...@apache.org]:
> {quote}
> -1. It's critical to use Exception.toString() and not .getMessage(), as some 
> exceptions (NPE) don't have messages.
> {quote}
> This is the promised follow-up Jira.
> CC: [~atm]
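The point in the quoted review comment can be seen with a tiny standalone demo (not Hadoop code): a bare {{NullPointerException}} has no message, so wrapping {{getMessage()}} into an error report yields the literal string "null", while {{toString()}} always includes at least the exception class name:

```java
// Demonstrates why Exception.toString() is safer than getMessage() when
// building error text: some exceptions (e.g. NPE) carry no message at all.
public class ExceptionToStringDemo {
    public static void main(String[] args) {
        Exception npe = new NullPointerException();
        // getMessage() is null for a bare NPE, making wrapped error text useless:
        System.out.println("getMessage: " + npe.getMessage());
        // toString() always yields at least the class name:
        System.out.println("toString:   " + npe.toString());
    }
}
```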



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10633) use Time#monotonicNow to avoid system clock reset

2015-09-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10633?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14741270#comment-14741270
 ] 

Hadoop QA commented on HADOOP-10633:


\\
\\
| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | patch |   0m  1s | The patch file was not named 
according to hadoop's naming conventions. Please see 
https://wiki.apache.org/hadoop/HowToContribute for instructions. |
| {color:red}-1{color} | patch |   0m  1s | The patch command could not apply 
the patch during dryrun. |
\\
\\
|| Subsystem || Report/Notes ||
| Patch URL | 
http://issues.apache.org/jira/secure/attachment/12647069/HADOOP-10633.txt |
| Optional Tests | javadoc javac unit findbugs checkstyle |
| git revision | trunk / 15a557f |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/7645/console |


This message was automatically generated.

> use Time#monotonicNow to avoid system clock reset
> -
>
> Key: HADOOP-10633
> URL: https://issues.apache.org/jira/browse/HADOOP-10633
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: io, security
>Affects Versions: 3.0.0
>Reporter: Liang Xie
>Assignee: Liang Xie
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-10633.txt
>
>
> let's replace System#currentTimeMillis with Time#monotonicNow



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)