[jira] [Updated] (HADOOP-10606) NodeManager cannot launch container when using RawLocalFileSystem for fs.file.impl

2016-04-11 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10606?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HADOOP-10606:
--
Assignee: (was: Brahma Reddy Battula)

> NodeManager cannot launch container when using RawLocalFileSystem for 
> fs.file.impl
> --
>
> Key: HADOOP-10606
> URL: https://issues.apache.org/jira/browse/HADOOP-10606
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs, io, util
>Affects Versions: 2.4.0
> Environment: The environment does not matter for this issue, but I 
> use Windows 8 64-bit and OpenJDK 7.
>Reporter: BoYang
>Priority: Critical
>   Original Estimate: 72h
>  Remaining Estimate: 72h
>
> The node manager failed to launch a container when I set fs.file.impl to 
> org.apache.hadoop.fs.RawLocalFileSystem in core-site.xml.
> The log is:
> WARN ContainersLauncher #11 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch
>  - Failed to launch container.
> java.lang.ClassCastException: org.apache.hadoop.fs.RawLocalFileSystem cannot 
> be cast to org.apache.hadoop.fs.LocalFileSystem
>   at org.apache.hadoop.fs.FileSystem.getLocal(FileSystem.java:339)
>   at 
> org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.confChanged(LocalDirAllocator.java:270)
>   at 
> org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:344)
>   at 
> org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:150)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.LocalDirsHandlerService.getLogPathForWrite(LocalDirsHandlerService.java:307)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:185)
>   at 
> org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:81)
>   at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:166)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
>   at java.lang.Thread.run(Thread.java:722)
> The issue is in 
> hadoop-common-project\hadoop-common\src\main\java\org\apache\hadoop\fs\LocalDirAllocator.java.
>  It invokes FileSystem.getLocal(), which casts the FileSystem to 
> LocalFileSystem and fails when the FileSystem object is a RawLocalFileSystem 
> (RawLocalFileSystem is not a subclass of LocalFileSystem).
>   public static LocalFileSystem getLocal(Configuration conf)
> throws IOException {
> return (LocalFileSystem)get(LocalFileSystem.NAME, conf);
>   }
> The fix for LocalDirAllocator.java seems to be to invoke FileSystem.get() 
> instead of FileSystem.getLocal()?
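A minimal sketch of the direction the reporter suggests, assuming the caller 
only needs the FileSystem interface rather than LocalFileSystem specifically; 
the class and method names below are illustrative, not a committed fix:

{code}
import java.io.IOException;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class LocalFsLookupSketch {
  // FileSystem.getLocal(conf) casts its result to LocalFileSystem, which
  // throws ClassCastException when fs.file.impl is RawLocalFileSystem.
  // Looking the filesystem up by URI avoids the cast and returns whichever
  // local implementation the configuration provides.
  public static FileSystem getLocalNoCast(Configuration conf)
      throws IOException {
    return FileSystem.get(URI.create("file:///"), conf);
  }
}
{code}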



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-13011) Clearly Document the Password Details for Keystore-based Credential Providers

2016-04-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13011?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15236535#comment-15236535
 ] 

Hadoop QA commented on HADOOP-13011:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
4s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 58s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 56s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
19s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 9m 43s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:fbe3e86 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12798179/HADOOP-13011-003.patch
 |
| JIRA Issue | HADOOP-13011 |
| Optional Tests |  asflicense  mvnsite  |
| uname | Linux 30ab3ac1acfe 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 44bbc50 |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9061/console |
| Powered by | Apache Yetus 0.2.0   http://yetus.apache.org |


This message was automatically generated.



> Clearly Document the Password Details for Keystore-based Credential Providers
> -
>
> Key: HADOOP-13011
> URL: https://issues.apache.org/jira/browse/HADOOP-13011
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Larry McCay
>Assignee: Larry McCay
> Fix For: 2.8.0
>
> Attachments: HADOOP-13011-001.patch, HADOOP-13011-002.patch, 
> HADOOP-13011-003.patch
>
>
> HADOOP-12942 discusses the non-obvious use of a default password for 
> the keystores of keystore-based credential providers. This patch adds 
> documentation to CredentialProviderAPI.md that describes the different 
> types of credential providers available and the password management details 
> of the keystore-based ones.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-13011) Clearly Document the Password Details for Keystore-based Credential Providers

2016-04-11 Thread Larry McCay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13011?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Larry McCay updated HADOOP-13011:
-
Status: Patch Available  (was: Open)

> Clearly Document the Password Details for Keystore-based Credential Providers
> -
>
> Key: HADOOP-13011
> URL: https://issues.apache.org/jira/browse/HADOOP-13011
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Larry McCay
>Assignee: Larry McCay
> Fix For: 2.8.0
>
> Attachments: HADOOP-13011-001.patch, HADOOP-13011-002.patch, 
> HADOOP-13011-003.patch
>
>
> HADOOP-12942 discusses the non-obvious use of a default password for 
> the keystores of keystore-based credential providers. This patch adds 
> documentation to CredentialProviderAPI.md that describes the different 
> types of credential providers available and the password management details 
> of the keystore-based ones.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-13011) Clearly Document the Password Details for Keystore-based Credential Providers

2016-04-11 Thread Larry McCay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13011?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Larry McCay updated HADOOP-13011:
-
Attachment: HADOOP-13011-003.patch

Based on an offline conversation with [~yoderme], we agreed to add more language 
around default passwords and the ability to write tooling.

> Clearly Document the Password Details for Keystore-based Credential Providers
> -
>
> Key: HADOOP-13011
> URL: https://issues.apache.org/jira/browse/HADOOP-13011
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Larry McCay
>Assignee: Larry McCay
> Fix For: 2.8.0
>
> Attachments: HADOOP-13011-001.patch, HADOOP-13011-002.patch, 
> HADOOP-13011-003.patch
>
>
> HADOOP-12942 discusses the non-obvious use of a default password for 
> the keystores of keystore-based credential providers. This patch adds 
> documentation to CredentialProviderAPI.md that describes the different 
> types of credential providers available and the password management details 
> of the keystore-based ones.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-13011) Clearly Document the Password Details for Keystore-based Credential Providers

2016-04-11 Thread Larry McCay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13011?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Larry McCay updated HADOOP-13011:
-
Status: Open  (was: Patch Available)

> Clearly Document the Password Details for Keystore-based Credential Providers
> -
>
> Key: HADOOP-13011
> URL: https://issues.apache.org/jira/browse/HADOOP-13011
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: documentation
>Reporter: Larry McCay
>Assignee: Larry McCay
> Fix For: 2.8.0
>
> Attachments: HADOOP-13011-001.patch, HADOOP-13011-002.patch
>
>
> HADOOP-12942 discusses the non-obvious use of a default password for 
> the keystores of keystore-based credential providers. This patch adds 
> documentation to CredentialProviderAPI.md that describes the different 
> types of credential providers available and the password management details 
> of the keystore-based ones.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12892) fix/rewrite create-release

2016-04-11 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12892?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15236415#comment-15236415
 ] 

Andrew Wang commented on HADOOP-12892:
--

Thanks for the rev, Allen; sorry for not getting to this sooner. Some review 
comments:

* Typo "relase" in help text and the docker imgname
* Shall we change "patchprocess" to "releaseprocess" or some such for LOGDIR?
* --asfrelease sets NATIVE and SIGN, so the "--asfrelease requires --sign" 
validation will never trigger
* Should we even support building on Darwin? I'd rather it just abort for 
unsupported platforms. Is windows supported for that matter?
* The docker run command will pass along --dockercache, which then prints a 
spurious error message:

{noformat}
$ docker run -i -t --privileged -v /home/andrew/.gnupg:/home/andrew/.gnupg -v 
/home/andrew/dev/apache/trunk/patchprocess:/home/andrew/dev/apache/trunk/patchprocess
 -v 
/home/andrew/dev/apache/trunk/target/artifacts:/home/andrew/dev/apache/trunk/target/artifacts
 -v /home/andrew/dev/apache/trunk:/build/source -u andrew -w /build/source 
hadoop/createrelase:3.0.0-SNAPSHOT_norc 
/build/source/dev-support/bin/create-release --mvncache=/maven --asfrelease 
--dockercache --indocker
ERROR: docker mode not enabled. Disabling dockercache.
{noformat}

* DOCKERAN isn't set anywhere, remove this check?
* MVNCACHE is ignored when --dockercache is specified without --indocker, 
right? That would be a good validation to add.
* Nit: some extra newlines at the end of makearelease
* When I did --asfrelease with --docker, it hung without output waiting for my 
GPG passphrase. I typed in my PW a few times, but it kept prompting, possibly 
because there wasn't a GPG agent running. Ended up Ctrl-D'ing it.
* Related to the previous point, it'd be nice to be able to do --sign 
independently of the build. That means we should filter out ".asc" and ".mds" 
files in signartifacts in case it's invoked twice, though. As a test, I moved 
signartifacts out of its if statement, which worked.

> fix/rewrite create-release
> --
>
> Key: HADOOP-12892
> URL: https://issues.apache.org/jira/browse/HADOOP-12892
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Blocker
> Attachments: HADOOP-12892.00.patch, HADOOP-12892.01.patch
>
>
> create-release needs some major surgery.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HADOOP-12973) make DU pluggable

2016-04-11 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15236389#comment-15236389
 ] 

Colin Patrick McCabe edited comment on HADOOP-12973 at 4/12/16 1:13 AM:


Cool.  Thanks, [~eclark].

Hmm... the TestDU failure looks related.  +1 pending a fix for that unit test.


was (Author: cmccabe):
Cool.  Thanks, [~eclark].  +1

> make DU pluggable
> -
>
> Key: HADOOP-12973
> URL: https://issues.apache.org/jira/browse/HADOOP-12973
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Attachments: HADOOP-12973v0.patch, HADOOP-12973v1.patch, 
> HADOOP-12973v10.patch, HADOOP-12973v11.patch, HADOOP-12973v12.patch, 
> HADOOP-12973v2.patch, HADOOP-12973v3.patch, HADOOP-12973v5.patch, 
> HADOOP-12973v6.patch, HADOOP-12973v7.patch, HADOOP-12973v8.patch, 
> HADOOP-12973v9.patch
>
>
> If people are concerned about replacing the call to DU, then an easy first 
> step is to make it pluggable. Then it's possible to replace it with something 
> else while leaving the default alone.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HADOOP-12973) make DU pluggable

2016-04-11 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15236389#comment-15236389
 ] 

Colin Patrick McCabe edited comment on HADOOP-12973 at 4/12/16 1:11 AM:


Cool.  Thanks, [~eclark].  +1


was (Author: cmccabe):
Cool.  Thanks, [~eclark].  +1 pending jenkins.

> make DU pluggable
> -
>
> Key: HADOOP-12973
> URL: https://issues.apache.org/jira/browse/HADOOP-12973
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Attachments: HADOOP-12973v0.patch, HADOOP-12973v1.patch, 
> HADOOP-12973v10.patch, HADOOP-12973v11.patch, HADOOP-12973v12.patch, 
> HADOOP-12973v2.patch, HADOOP-12973v3.patch, HADOOP-12973v5.patch, 
> HADOOP-12973v6.patch, HADOOP-12973v7.patch, HADOOP-12973v8.patch, 
> HADOOP-12973v9.patch
>
>
> If people are concerned about replacing the call to DU, then an easy first 
> step is to make it pluggable. Then it's possible to replace it with something 
> else while leaving the default alone.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12973) make DU pluggable

2016-04-11 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15236389#comment-15236389
 ] 

Colin Patrick McCabe commented on HADOOP-12973:
---

Cool.  Thanks, [~eclark].  +1 pending jenkins.

> make DU pluggable
> -
>
> Key: HADOOP-12973
> URL: https://issues.apache.org/jira/browse/HADOOP-12973
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Attachments: HADOOP-12973v0.patch, HADOOP-12973v1.patch, 
> HADOOP-12973v10.patch, HADOOP-12973v11.patch, HADOOP-12973v12.patch, 
> HADOOP-12973v2.patch, HADOOP-12973v3.patch, HADOOP-12973v5.patch, 
> HADOOP-12973v6.patch, HADOOP-12973v7.patch, HADOOP-12973v8.patch, 
> HADOOP-12973v9.patch
>
>
> If people are concerned about replacing the call to DU, then an easy first 
> step is to make it pluggable. Then it's possible to replace it with something 
> else while leaving the default alone.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-13017) Implementations of IOStream.read(buffer, offset, bytes) to exit 0 if bytes==0

2016-04-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15236297#comment-15236297
 ] 

Hadoop QA commented on HADOOP-13017:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 11m 39s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 58s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
56s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 5s 
{color} | {color:green} trunk passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 43s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
7s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 58s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
53s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 6s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 32s 
{color} | {color:green} trunk passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 47s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
25s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 48s 
{color} | {color:green} the patch passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 48s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 57s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 57s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
7s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 54s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
52s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
54s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 35s 
{color} | {color:green} the patch passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 47s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 21s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.8.0_77. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 8m 17s 
{color} | {color:green} hadoop-distcp in the patch passed with JDK v1.8.0_77. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 12s 
{color} | {color:green} hadoop-openstack in the patch passed with JDK 
v1.8.0_77. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 12s 
{color} | {color:green} hadoop-aws in the patch passed with JDK v1.8.0_77. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 39s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} 

[jira] [Updated] (HADOOP-12993) Change ShutdownHookManger complete shutdown log from INFO to DEBUG

2016-04-11 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HADOOP-12993:

Fix Version/s: 2.8.0

> Change ShutdownHookManger complete shutdown log from INFO to DEBUG 
> ---
>
> Key: HADOOP-12993
> URL: https://issues.apache.org/jira/browse/HADOOP-12993
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Minor
> Fix For: 2.8.0
>
> Attachments: HADOOP-12993.00.patch
>
>
> "INFO util.ShutdownHookManager: ShutdownHookManger complete shutdown." should 
> be "DEBUG".



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12993) Change ShutdownHookManger complete shutdown log from INFO to DEBUG

2016-04-11 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HADOOP-12993:

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Thanks [~arpitagarwal] and [~templedf] for the review. I've committed the patch 
to trunk, branch-2 and branch-2.8.

> Change ShutdownHookManger complete shutdown log from INFO to DEBUG 
> ---
>
> Key: HADOOP-12993
> URL: https://issues.apache.org/jira/browse/HADOOP-12993
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Minor
> Attachments: HADOOP-12993.00.patch
>
>
> "INFO util.ShutdownHookManager: ShutdownHookManger complete shutdown." should 
> be "DEBUG".



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12993) Change ShutdownHookManger complete shutdown log from INFO to DEBUG

2016-04-11 Thread Xiaoyu Yao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiaoyu Yao updated HADOOP-12993:

Hadoop Flags: Reviewed

> Change ShutdownHookManger complete shutdown log from INFO to DEBUG 
> ---
>
> Key: HADOOP-12993
> URL: https://issues.apache.org/jira/browse/HADOOP-12993
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Minor
> Attachments: HADOOP-12993.00.patch
>
>
> "INFO util.ShutdownHookManager: ShutdownHookManger complete shutdown." should 
> be "DEBUG".



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12993) Change ShutdownHookManger complete shutdown log from INFO to DEBUG

2016-04-11 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12993?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15236189#comment-15236189
 ] 

Arpit Agarwal commented on HADOOP-12993:


+1

> Change ShutdownHookManger complete shutdown log from INFO to DEBUG 
> ---
>
> Key: HADOOP-12993
> URL: https://issues.apache.org/jira/browse/HADOOP-12993
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Xiaoyu Yao
>Assignee: Xiaoyu Yao
>Priority: Minor
> Attachments: HADOOP-12993.00.patch
>
>
> "INFO util.ShutdownHookManager: ShutdownHookManger complete shutdown." should 
> be "DEBUG".



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-13018) Make Kdiag fail fast if hadoop.token.files points to non-existent file

2016-04-11 Thread Ravi Prakash (JIRA)
Ravi Prakash created HADOOP-13018:
-

 Summary: Make Kdiag fail fast if hadoop.token.files points to 
non-existent file
 Key: HADOOP-13018
 URL: https://issues.apache.org/jira/browse/HADOOP-13018
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 3.0.0
Reporter: Ravi Prakash






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-13018) Make Kdiag fail fast if hadoop.token.files points to non-existent file

2016-04-11 Thread Ravi Prakash (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13018?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Prakash updated HADOOP-13018:
--
Description: Steve proposed that KDiag should fail fast to help debug the 
case where hadoop.token.files points to a file that is not found. This JIRA is 
to effect that.

> Make Kdiag fail fast if hadoop.token.files points to non-existent file
> --
>
> Key: HADOOP-13018
> URL: https://issues.apache.org/jira/browse/HADOOP-13018
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Ravi Prakash
>
> Steve proposed that KDiag should fail fast to help debug the case where 
> hadoop.token.files points to a file that is not found. This JIRA is to effect 
> that.
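A minimal sketch of the requested fail-fast behaviour; the property name 
hadoop.token.files comes from the issue, while the class, the method, and the 
assumption that the value is a comma-separated list are illustrative:

{code}
import java.io.File;
import java.io.FileNotFoundException;

public final class TokenFileCheckSketch {
  // Fail immediately with a clear message if any listed path does not
  // exist, rather than failing later in a harder-to-debug place.
  public static void verifyTokenFiles(String tokenFiles)
      throws FileNotFoundException {
    if (tokenFiles == null || tokenFiles.isEmpty()) {
      return; // property unset: nothing to verify
    }
    for (String path : tokenFiles.split(",")) {
      File f = new File(path.trim());
      if (!f.exists()) {
        throw new FileNotFoundException(
            "hadoop.token.files points to a non-existent file: " + f);
      }
    }
  }
}
{code}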



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12973) make DU pluggable

2016-04-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15236134#comment-15236134
 ] 

Hadoop QA commented on HADOOP-12973:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 21s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 27s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 8m 
21s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 19s 
{color} | {color:green} trunk passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 52s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
14s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 2m 18s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
38s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 
47s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 59s 
{color} | {color:green} trunk passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 49s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 14s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
26s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 53s 
{color} | {color:green} the patch passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 53s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 48s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 48s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red} 1m 6s 
{color} | {color:red} root: patch generated 10 new + 21 unchanged - 1 fixed = 
31 total (was 22) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 48s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
28s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 5s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 1s 
{color} | {color:green} the patch passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 54s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 4s {color} | 
{color:red} hadoop-common in the patch failed with JDK v1.8.0_77. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 67m 56s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.8.0_77. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 7m 10s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.7.0_95. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 73m 19s {color} 
| {color:red} hadoop-hdfs in the patch failed with JDK v1.7.0_95. {color} |
| {color:red}-1{color} | {color:red} asflicense {color} | {color:red} 0m 27s 
{color} | {color:red} Patch generated 1 ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 221m 11s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.8.0_77

[jira] [Assigned] (HADOOP-13017) Implementations of IOStream.read(buffer, offset, bytes) to exit 0 if bytes==0

2016-04-11 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13017?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran reassigned HADOOP-13017:
---

Assignee: Steve Loughran

> Implementations of IOStream.read(buffer, offset, bytes) to exit 0 if bytes==0
> -
>
> Key: HADOOP-13017
> URL: https://issues.apache.org/jira/browse/HADOOP-13017
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: io
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HDFS-13017-001.patch
>
>
> HDFS-10277 showed that HDFS was returning -1 on read(buf[], 0, 0) when there 
> was no data left in the stream; Java IO says 
> bq. If {{len}} is zero, then no bytes are read and {{0}} is returned; 
> otherwise, there is an attempt to read at least one byte.
> Review the implementations of {{IOStream.read(buffer, offset, bytes)}} and, 
> where necessary and considered safe, add a fast exit if the length is 0.
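A minimal sketch of such a fast exit, using a hypothetical stream class to 
illustrate the java.io contract quoted above:

{code}
import java.io.IOException;
import java.io.InputStream;

public abstract class ZeroLengthReadSketch extends InputStream {
  // Return 0 for a zero-length request before any end-of-stream check, so
  // read(buf, off, 0) at EOF yields 0 rather than -1, as java.io requires.
  @Override
  public int read(byte[] buffer, int offset, int length) throws IOException {
    if (length == 0) {
      return 0;
    }
    return readInternal(buffer, offset, length);
  }

  // The stream's real read logic; returns -1 only at end of stream.
  protected abstract int readInternal(byte[] buffer, int offset, int length)
      throws IOException;
}
{code}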



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-13017) Implementations of IOStream.read(buffer, offset, bytes) to exit 0 if bytes==0

2016-04-11 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13017?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13017:

Status: Patch Available  (was: Open)

> Implementations of IOStream.read(buffer, offset, bytes) to exit 0 if bytes==0
> -
>
> Key: HADOOP-13017
> URL: https://issues.apache.org/jira/browse/HADOOP-13017
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: io
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
> Attachments: HDFS-13017-001.patch
>
>
> HDFS-10277 showed that HDFS was returning -1 on read(buf[], 0, 0) when there 
> was no data left in the stream; Java IO says 
> bq. If {{len}} is zero, then no bytes are read and {{0}} is returned; 
> otherwise, there is an attempt to read at least one byte.
> Review the implementations of {{IOStream.read(buffer, offset, bytes)}} and, 
> where necessary and considered safe, add a fast exit if the length is 0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-13017) Implementations of IOStream.read(buffer, offset, bytes) to exit 0 if bytes==0

2016-04-11 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13017?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13017:

Attachment: HDFS-13017-001.patch

> Implementations of IOStream.read(buffer, offset, bytes) to exit 0 if bytes==0
> -
>
> Key: HADOOP-13017
> URL: https://issues.apache.org/jira/browse/HADOOP-13017
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: io
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
> Attachments: HDFS-13017-001.patch
>
>
> HDFS-10277 showed that HDFS was returning -1 on read(buf[], 0, 0) when there 
> was no data left in the stream; Java IO says 
> bq. If {{len}} is zero, then no bytes are read and {{0}} is returned; 
> otherwise, there is an attempt to read at least one byte.
> Review the implementations of {{IOStream.read(buffer, offset, bytes)}} and, 
> where necessary and considered safe, add a fast exit if the length is 0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-13017) Implementations of IOStream.read(buffer, offset, bytes) to exit 0 if bytes==0

2016-04-11 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13017?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13017:

Summary: Implementations of IOStream.read(buffer, offset, bytes) to exit 0 
if bytes==0  (was: verify implementations of IOStream.read(buffer, offset, 
bytes) fail fast if the bytes==0)

> Implementations of IOStream.read(buffer, offset, bytes) to exit 0 if bytes==0
> -
>
> Key: HADOOP-13017
> URL: https://issues.apache.org/jira/browse/HADOOP-13017
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: io
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>
> HDFS-10277 showed that HDFS was returning -1 on read(buf[], 0, 0) when there 
> was no data left in the stream; Java IO says 
> bq. If {{len}} is zero, then no bytes are read and {{0}} is returned; 
> otherwise, there is an attempt to read at least one byte.
> Review the implementations of {{IOStream.read(buffer, offset, bytes)}} and, 
> where necessary and considered safe, add a fast exit if the length is 0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-13017) verify implementations of IOStream.read(buffer, offset, bytes) fail fast if the bytes==0

2016-04-11 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13017?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13017:

Description: 
HDFS-10277 showed that HDFS was returning -1 on read(buf[], 0, 0) when there 
was no data left in the stream; Java IO says 

bq. If {{len}} is zero, then no bytes are read and {{0}} is returned; 
otherwise, there is an attempt to read at least one byte.

Review the implementations of {{IOStream.read(buffer, offset, bytes)}} and, 
where necessary and considered safe, add a fast exit if the length is 0.

  was:
HDFS-10277 showed that HDFS was returning -1 on read(buf[], 0, 0) when there 
was no data left in the stream; Java IO says 

bq. If {{len}} is zero, then no bytes are read and {{0}} is returned; 
otherwise, there is an attempt to read at least one byte.

Review the implementations of {{IOStream.read(buffer, offset, bytes)}} and 


> verify implementations of IOStream.read(buffer, offset, bytes) fail fast if 
> the bytes==0
> 
>
> Key: HADOOP-13017
> URL: https://issues.apache.org/jira/browse/HADOOP-13017
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: io
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>
> HDFS-10277 showed that HDFS was returning -1 on read(buf[], 0, 0) when there 
> was no data left in the stream; Java IO says 
> bq. If {{len}} is zero, then no bytes are read and {{0}} is returned; 
> otherwise, there is an attempt to read at least one byte.
> Review the implementations of {{IOStream.read(buffer, offset, bytes)}} and, 
> where necessary and considered safe, add a fast exit if the length is 0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-13017) verify implementations of IOStream.read(buffer, offset, bytes) fail fast if the bytes==0

2016-04-11 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13017?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13017:

Description: 
HDFS-10277 showed that HDFS was returning -1 on read(buf[], 0, 0) when there 
was no data left in the stream; Java IO says 

bq. If {{len}} is zero, then no bytes are read and {{0}} is returned; 
otherwise, there is an attempt to read at least one byte.

Review the implementations of {{IOStream.read(buffer, offset, bytes)}} and 

  was:
HDFS-10277 showed that HDFS was returning -1 on read(buf[], 0, 0) when there 
was no data left in the stream; Java IO says 

bq. If {{len}} is zero, then no bytes are read and {{0}} is returned; 
otherwise, there is an attempt to read at least one byte.

Review the implementations of {{IOStream.read


> verify implementations of IOStream.read(buffer, offset, bytes) fail fast if 
> the bytes==0
> 
>
> Key: HADOOP-13017
> URL: https://issues.apache.org/jira/browse/HADOOP-13017
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: io
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>
> HDFS-10277 showed that HDFS was returning -1 on read(buf[], 0, 0) when there 
> was no data left in the stream; Java IO says 
> bq. If {{len}} is zero, then no bytes are read and {{0}} is returned; 
> otherwise, there is an attempt to read at least one byte.
> Review the implementations of {{IOStream.read(buffer, offset, bytes)}} and 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-13017) verify implementations of IOStream.read(buffer, offset, bytes) fail fast if the bytes==0

2016-04-11 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13017?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13017:

Description: 
HDFS-10277 showed that HDFS was returning -1 on read(buf[], 0, 0) when there 
was no data left in the stream; Java IO says 

bq. If {{len}} is zero, then no bytes are read and {{0}} is returned; 
otherwise, there is an attempt to read at least one byte.

Review the implementations of {{IOStream.read

  was:
HDFS-10277 showed that HDFS was returning -1 on read(buf[], 0, 0) when there 
was no data left in the stream; Java IO says 

bq. If {{len}} is zero, then no bytes are read and {{0}} is returned; 
otherwise, there is an attempt to read at least one byte.

This patch clarifies the FSInputStream specification and fixes 
{{DFSInputStream}} to address this.


> verify implementations of IOStream.read(buffer, offset, bytes) fail fast if 
> the bytes==0
> 
>
> Key: HADOOP-13017
> URL: https://issues.apache.org/jira/browse/HADOOP-13017
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: io
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>
> HDFS-10277 showed that HDFS was returning -1 on read(buf[], 0, 0) when there 
> was no data left in the stream; Java IO says 
> bq. If {{len}} is zero, then no bytes are read and {{0}} is returned; 
> otherwise, there is an attempt to read at least one byte.
> Review the implementations of {{IOStream.read



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-13017) verify implementations of IOStream.read(buffer, offset, bytes) fail fast if the bytes==0

2016-04-11 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-13017:
---

 Summary: verify implementations of IOStream.read(buffer, offset, 
bytes) fail fast if the bytes==0
 Key: HADOOP-13017
 URL: https://issues.apache.org/jira/browse/HADOOP-13017
 Project: Hadoop Common
  Issue Type: Improvement
  Components: io
Affects Versions: 2.8.0
Reporter: Steve Loughran


HDFS-10277 showed that HDFS was returning -1 on read(buf[], 0, 0) when there 
was no data left in the stream; Java IO says 

bq. If {{len}} is zero, then no bytes are read and {{0}} is returned; 
otherwise, there is an attempt to read at least one byte.

This patch clarifies the FSInputStream specification and fixes 
{{DFSInputStream}} to address this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12563) Updated utility to create/modify token files

2016-04-11 Thread Ravi Prakash (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15236011#comment-15236011
 ] 

Ravi Prakash commented on HADOOP-12563:
---

Thanks for the patch and all your work, Matt! Thanks also for your reviews and 
guidance, Steve! I can see that this patch has come a long way.

# Could you please file (and patch) a follow-on JIRA for adding documentation? 
Perhaps here: 
https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/site/markdown/CommandsManual.md
# Do you think it'd be good to create an Enum for the different versions (see 
the sketch after this list)? {code}
if (version == 0) {
  readFields(in);
} else if (version == 1) {
  readProtos(in);
}{code}
# Do you know if there would be any difference between 
{{CredentialsKVProto.newBuilder().setAlias(e.getKey().toString())}} and what 
you have 
({{CredentialsKVProto.newBuilder().setAliasBytes(ByteString.copyFrom(e.getKey().getBytes(),
 0, e.getKey().getLength()))}})? Would one be encoded/decoded differently on 
varying platforms? Looking into the generated code, I see that in one case 
{{alias_}} would be an {{Object}} of type {{String}} vs. {{ByteString}} in the 
other case. I guess the deviation may only arise when the encoding is 
different from UTF-8. Do you know if we should prefer one way over the other?
# Could {{setKindBytes(ByteString.copyFrom(this.getKind().getBytes(), 0, 
this.getKind().getLength()))}} simply be 
{{setKindBytes(ByteString.copyFrom(this.getKind().getBytes()))}}?
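A sketch of the enum suggested in point 2; the version numbers come from the 
snippet above, but the enum name and helper method are assumptions, not part 
of the attached patch:

{code}
import java.io.IOException;

public enum TokenFileVersion {
  WRITABLE(0),  // legacy Java-serialization format, read via readFields(in)
  PROTOBUF(1);  // protobuf-based format, read via readProtos(in)

  private final int value;

  TokenFileVersion(int value) {
    this.value = value;
  }

  // Map the version number read from the file back to a named constant.
  public static TokenFileVersion fromValue(int version) throws IOException {
    for (TokenFileVersion v : values()) {
      if (v.value == version) {
        return v;
      }
    }
    throw new IOException("Unknown token file version: " + version);
  }
}
{code}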


> Updated utility to create/modify token files
> 
>
> Key: HADOOP-12563
> URL: https://issues.apache.org/jira/browse/HADOOP-12563
> Project: Hadoop Common
>  Issue Type: New Feature
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Matthew Paduano
> Attachments: HADOOP-12563.01.patch, HADOOP-12563.02.patch, 
> HADOOP-12563.03.patch, HADOOP-12563.04.patch, HADOOP-12563.05.patch, 
> HADOOP-12563.06.patch, HADOOP-12563.07.patch, HADOOP-12563.07.patch, 
> HADOOP-12563.08.patch, HADOOP-12563.09.patch, dtutil-test-out, 
> dtutil_diff_07_08, example_dtutil_commands_and_output.txt, 
> generalized_token_case.pdf
>
>
> hdfs fetchdt is missing some critical features and is geared almost 
> exclusively towards HDFS operations. Additionally, the token files that are 
> created use Java serialization, which is hard or impossible to deal with in 
> other languages. It should be replaced with a better utility in common that 
> can read/write protobuf-based token files, has enough flexibility to be used 
> with other services, and offers key functionality such as append and rename. 
> The old version of the file format should still be supported for backward 
> compatibility, but will be effectively deprecated.
> A follow-on JIRA will deprecate fetchdt.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12893) Verify LICENSE.txt and NOTICE.txt

2016-04-11 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15235923#comment-15235923
 ] 

Allen Wittenauer commented on HADOOP-12893:
---

When HADOOP-10956 was added, the contents of YARN's custom LICENSE.txt and 
NOTICE.txt weren't merged into the master ones. This is effectively another 
victim of the "oh, one day we'll split the sub-projects" madness that sweeps 
through the PMC every so often, since we had four of these files instead of 
just one.

> Verify LICENSE.txt and NOTICE.txt
> -
>
> Key: HADOOP-12893
> URL: https://issues.apache.org/jira/browse/HADOOP-12893
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0, 3.0.0, 2.7.3, 2.6.5
>Reporter: Allen Wittenauer
>Priority: Blocker
>
> We have many bundled dependencies in both the source and the binary artifacts 
> that are not in LICENSE.txt and NOTICE.txt.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12406) AbstractMapWritable.readFields throws ClassNotFoundException with custom writables

2016-04-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12406?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15235792#comment-15235792
 ] 

Hudson commented on HADOOP-12406:
-

FAILURE: Integrated in Hadoop-trunk-Commit #9593 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/9593/])
HADOOP-12406. Fixed AbstractMapWritable.readFields to use the thread's 
(vinodkv: rev 069c6c62def4a0f94382e9f149581d8e22f6d31c)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/AbstractMapWritable.java


> AbstractMapWritable.readFields throws ClassNotFoundException with custom 
> writables
> --
>
> Key: HADOOP-12406
> URL: https://issues.apache.org/jira/browse/HADOOP-12406
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Affects Versions: 2.7.1
> Environment: Ubuntu Linux 14.04 LTS amd64
>Reporter: Nadeem Douba
>Assignee: Nadeem Douba
>Priority: Blocker
> Fix For: 2.7.3
>
> Attachments: HADOOP-12406.1.patch, HADOOP-12406.patch
>
>
> Note: I am not an expert at Java, class loaders, or Hadoop. I am just a 
> hacker. My solution might be entirely wrong.
> AbstractMapWritable.readFields throws a ClassNotFoundException when reading 
> custom writables. Debugging the job with remote debugging in IntelliJ 
> revealed that the class loader used by Class.forName() is different 
> from the thread's current context class loader 
> (Thread.currentThread().getContextClassLoader()). The class path for the 
> system class loader does not include the libraries of the job jar; however, 
> the class path for the context class loader does. The proposed patch changes 
> the class loading mechanism in readFields to use the thread's context class 
> loader instead of the system's default class loader.
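A minimal sketch of the class-loading change described; the actual patch 
modifies AbstractMapWritable.readFields, and the helper below only 
illustrates the technique:

{code}
import java.io.DataInput;
import java.io.IOException;

public final class ContextClassLoaderSketch {
  // Resolve a class name recorded in the stream using the thread's context
  // class loader, whose class path includes the job jar's libraries, rather
  // than the loader a bare Class.forName(name) would use.
  public static Class<?> loadWritableClass(DataInput in) throws IOException {
    String className = in.readUTF();
    try {
      ClassLoader loader = Thread.currentThread().getContextClassLoader();
      return Class.forName(className, true, loader);
    } catch (ClassNotFoundException e) {
      throw new IOException("Cannot load class " + className, e);
    }
  }
}
{code}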



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12893) Verify LICENSE.txt and NOTICE.txt

2016-04-11 Thread Billie Rinaldi (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15235778#comment-15235778
 ] 

Billie Rinaldi commented on HADOOP-12893:
-

I looked into L&N changes needed for leveldbjni-all-1.8.jar in YARN-1704. The 
files that I changed no longer exist, so I'm not sure what happened to those, 
but someone could reuse the content.

> Verify LICENSE.txt and NOTICE.txt
> -
>
> Key: HADOOP-12893
> URL: https://issues.apache.org/jira/browse/HADOOP-12893
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.8.0, 3.0.0, 2.7.3, 2.6.5
>Reporter: Allen Wittenauer
>Priority: Blocker
>
> We have many bundled dependencies in both the source and the binary artifacts 
> that are not in LICENSE.txt and NOTICE.txt.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12406) AbstractMapWritable.readFields throws ClassNotFoundException with custom writables

2016-04-11 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12406?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated HADOOP-12406:
-
   Resolution: Fixed
Fix Version/s: 2.7.3
   Status: Resolved  (was: Patch Available)

Committed this to trunk, branch-2 and branch-2.7. Thanks [~ndouba]!

Forgot to mention that I've tested that the nutch code fails without the patch 
and passes with it.

> AbstractMapWritable.readFields throws ClassNotFoundException with custom 
> writables
> --
>
> Key: HADOOP-12406
> URL: https://issues.apache.org/jira/browse/HADOOP-12406
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Affects Versions: 2.7.1
> Environment: Ubuntu Linux 14.04 LTS amd64
>Reporter: Nadeem Douba
>Assignee: Nadeem Douba
>Priority: Blocker
> Fix For: 2.7.3
>
> Attachments: HADOOP-12406.1.patch, HADOOP-12406.patch
>
>
> Note: I am not an expert at Java, class loaders, or Hadoop. I am just a 
> hacker. My solution might be entirely wrong.
> AbstractMapWritable.readFields throws a ClassNotFoundException when reading 
> custom writables. Debugging the job with remote debugging in IntelliJ 
> revealed that the class loader used by Class.forName() is different 
> from the thread's current context class loader 
> (Thread.currentThread().getContextClassLoader()). The class path for the 
> system class loader does not include the libraries of the job jar; however, 
> the class path for the context class loader does. The proposed patch changes 
> the class loading mechanism in readFields to use the thread's context class 
> loader instead of the system's default class loader.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12406) AbstractMapWritable.readFields throws ClassNotFoundException with custom writables

2016-04-11 Thread Vinod Kumar Vavilapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12406?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15235733#comment-15235733
 ] 

Vinod Kumar Vavilapalli commented on HADOOP-12406:
--

The TestReloadingX509TrustManager failure is not related; I'll see if there is 
an existing JIRA.

Checking this in now...

> AbstractMapWritable.readFields throws ClassNotFoundException with custom 
> writables
> --
>
> Key: HADOOP-12406
> URL: https://issues.apache.org/jira/browse/HADOOP-12406
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Affects Versions: 2.7.1
> Environment: Ubuntu Linux 14.04 LTS amd64
>Reporter: Nadeem Douba
>Assignee: Nadeem Douba
>Priority: Blocker
> Attachments: HADOOP-12406.1.patch, HADOOP-12406.patch
>
>
> Note: I am not an expert at Java, class loaders, or Hadoop. I am just a 
> hacker. My solution might be entirely wrong.
> AbstractMapWritable.readFields throws a ClassNotFoundException when reading 
> custom writables. Debugging the job with remote debugging in IntelliJ 
> revealed that the class loader used by Class.forName() is different 
> from the thread's current context class loader 
> (Thread.currentThread().getContextClassLoader()). The class path for the 
> system class loader does not include the libraries of the job jar; however, 
> the class path for the context class loader does. The proposed patch changes 
> the class loading mechanism in readFields to use the thread's context class 
> loader instead of the system's default class loader.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12973) make DU pluggable

2016-04-11 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12973?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HADOOP-12973:
---
Attachment: HADOOP-12973v12.patch

* Created CachingGetUsedSpace, which does all the threading.
* Used IOUtils#cleanup.
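A hedged sketch of what the plug point might look like; the interface shape 
below is a guess based on the comment above and the issue description, not 
the contents of the v12 patch:

{code}
import java.io.IOException;

// A minimal plug point for disk-usage measurement: the default
// implementation can keep shelling out to du, while deployments concerned
// about its cost can supply an alternative via configuration.
public interface GetUsedSpace {
  // Number of bytes used under the monitored directory.
  long getUsed() throws IOException;
}
{code}

A CachingGetUsedSpace, as named in the comment, would then implement this by 
refreshing the value on a background thread so callers never block on a du run.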

> make DU pluggable
> -
>
> Key: HADOOP-12973
> URL: https://issues.apache.org/jira/browse/HADOOP-12973
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Attachments: HADOOP-12973v0.patch, HADOOP-12973v1.patch, 
> HADOOP-12973v10.patch, HADOOP-12973v11.patch, HADOOP-12973v12.patch, 
> HADOOP-12973v2.patch, HADOOP-12973v3.patch, HADOOP-12973v5.patch, 
> HADOOP-12973v6.patch, HADOOP-12973v7.patch, HADOOP-12973v8.patch, 
> HADOOP-12973v9.patch
>
>
> If people are concerned about replacing the call to DU, then an easy first 
> step is to make it pluggable. Then it's possible to replace it with something 
> else while leaving the default alone.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-13012) yetus-wrapper should fail sooner when download fails

2016-04-11 Thread Steven Wong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13012?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steven Wong updated HADOOP-13012:
-
Affects Version/s: 2.8.0

> yetus-wrapper should fail sooner when download fails
> 
>
> Key: HADOOP-13012
> URL: https://issues.apache.org/jira/browse/HADOOP-13012
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: yetus
>Affects Versions: 2.8.0
>Reporter: Steven Wong
>Assignee: Steven Wong
>Priority: Minor
>  Labels: easyfix
> Attachments: HADOOP-13012.0.patch
>
>
> When yetus-wrapper cannot download the Yetus tarball (because the download 
> server is down, for example), it currently fails during the later gunzip 
> step. It would be better if it failed right away during the download (curl) step.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-13016) reinstate hadoop-hdfs as dependency of hadoop-client, create hadoop-lean-client for minimal deployments

2016-04-11 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15235596#comment-15235596
 ] 

Steve Loughran commented on HADOOP-13016:
-

w.r.t. clients, I expect {{hdfs-client}} to always be the lean HDFS client. I'm 
discussing leaving {{hadoop-client}} overweight and having a general 
{{hadoop-lean-client}} without any dependencies on apache ds, jets3t, junit, 
hadoop-hdfs (full), etc.: anything that really isn't needed in the clients.

Regarding your points:

1. OK, it was just the moved s3n thing I was thinking of. But as you note, it's 
been like that for a while.
2. aws already has jets3t in that hadoop-client CP (because s3n was there).
3. I concur; I was only thinking of aws in fat-client. But I accept your 
arguments.

If you look at aws, the reason for a separate hadoop-aws JAR was actually to 
put in new stuff (s3a, etc.) without adding more dependencies to the 
hadoop-client POM, with the plan being to leave hadoop-client alone, with s3, 
s3n and jets3t. I think at some point someone got overenthusiastic and moved 
those FS classes without realising the rationale for leaving them where they 
were.


> reinstate hadoop-hdfs as dependency of hadoop-client, create 
> hadoop-lean-client for minimal deployments
> ---
>
> Key: HADOOP-13016
> URL: https://issues.apache.org/jira/browse/HADOOP-13016
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>
> the split of hadoop-hdfs and hadoop-hdfs-client is breaking code of mine 
> whose builds declared a dependency on hadoop-client and expected all of HDFS 
> to make it in.
> I'm finding this first, because I'm building and testing downstream code 
> against branch-2; I find myself having to explicitly declare a dependency on 
> hadoop-hdfs to make things work again.
> We've also seen problems downstream (e.g. spark) where the move of s3n 
> classes to hadoop-aws has broken code which expects it to be there.
> At the same time, I see the merits in a lean, low-dependency client, which 
> hadoop-client and its dependencies is not today.
> I propose
> # reinstate hadoop-hdfs as dependency of hadoop-client
> # add hadoop-aws as a dependency of hadoop-client —but excluding adding any 
> amazon-aws JARs.
> # create hadoop-lean-client for minimal deployments, stripping out all 
> extraneous dependencies,
> # for hadoop-lean-client, have a compatibility statement of "we will strip 
> out anything we can from this, even over point releases". That is, anything 
> that can be dropped in future, will.
> This will give downstream projects a choice: the old POM with everything, the 
> lean POM for new apps.
> And, by reinstating hadoop-hdfs, things will build again.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12994) Specify PositionedReadable, add contract tests, fix problems

2016-04-11 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12994?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15235460#comment-15235460
 ] 

Steve Loughran commented on HADOOP-12994:
-

Can't have Jenkins unhappy. I thought I'd been checking things locally, but 
clearly not...


> Specify PositionedReadable, add contract tests, fix problems
> 
>
> Key: HADOOP-12994
> URL: https://issues.apache.org/jira/browse/HADOOP-12994
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Fix For: 2.8.0
>
> Attachments: HADOOP-12994-001.patch, HADOOP-12994-002.patch, 
> HADOOP-12994-003.patch, HADOOP-12994-004.patch, HADOOP-12994-005.patch, 
> HADOOP-12994-006.patch
>
>
> Some work on S3a has shown that there aren't tests catching regressions in 
> readFully; reviewing the documentation shows that its specification could be 
> improved.
> # review the spec
> # review the implementations
> # add tests (proposed: to the seek contract; streams which support seek 
> should support positioned readable)
> # fix code, where it differs significantly from HDFS or LocalFS



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12751) While using kerberos Hadoop incorrectly assumes names with '@' to be non-simple

2016-04-11 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15235440#comment-15235440
 ] 

Steve Loughran commented on HADOOP-12751:
-

Yes, I mean the existing pattern check will take place unless explicitly 
disabled.

> While using kerberos Hadoop incorrectly assumes names with '@' to be 
> non-simple
> ---
>
> Key: HADOOP-12751
> URL: https://issues.apache.org/jira/browse/HADOOP-12751
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.2
> Environment: kerberos
>Reporter: Bolke de Bruin
>Assignee: Bolke de Bruin
>Priority: Critical
>  Labels: kerberos
> Attachments: 0001-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0001-Remove-check-for-user-name-characters-and.patch, 
> 0002-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0003-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0004-HADOOP-12751-leave-user-validation-to-os.patch
>
>
> In the scenario of a trust between two directories, e.g. FreeIPA (ipa.local) 
> and Active Directory (ad.local), users can be made available on the OS level 
> by something like sssd. The trusted users will be of the form 'user@ad.local' 
> while other users will not contain the domain. Executing 'id -Gn 
> user@ad.local' will successfully return the groups the user belongs to if 
> configured correctly.
> However, Hadoop assumes that user names containing '@' cannot be correct. 
> This code is in KerberosName.java and seems to be a validator of whether the 
> 'auth_to_local' rules are applied correctly.
> In my opinion this should be removed, changed to a different kind of check, 
> or maybe logged as a warning while still proceeding, as the current behavior 
> limits integration possibilities with other standard tools.
> Workarounds are difficult to apply (e.g. having system tools rewrite the name 
> to something like user_ad_local) due to downstream consequences.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-13016) reinstate hadoop-hdfs as dependency of hadoop-client, create hadoop-lean-client for minimal deployments

2016-04-11 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15235234#comment-15235234
 ] 

Allen Wittenauer commented on HADOOP-13016:
---


a) We should fix hadoop-hdfs-client to actually be the lean client rather than 
building Yet Another Client jar.  "Do I use hdfs-client or hdfs-lean-client?"

b) Adding aws is a huge can of worms and I'm very much against it:

  1. It's a slippery slope of file system support; everyone and their dog is 
going to want their custom fs in it.  Either they all get it or none do.
  2. It means moving aws back into the main classpath again, which means also 
getting extra dependencies that not all clusters want or need.
  3. You can't have a lean client with "minimal" components AND have aws 
support.  It's completely contradictory as to purpose.

AWS shouldn't have been moved like it was in branch-2, but the damage is done.  
It wasn't the first massively incompatible change in branch-2 and won't be the 
last given the track record.

> reinstate hadoop-hdfs as dependency of hadoop-client, create 
> hadoop-lean-client for minimal deployments
> ---
>
> Key: HADOOP-13016
> URL: https://issues.apache.org/jira/browse/HADOOP-13016
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>
> the split of hadoop-hdfs and hadoop-hdfs-client is breaking code of mine 
> whose builds declared a dependency on hadoop-client and expected all of HDFS 
> to make it in.
> I'm finding this first, because I'm building and testing downstream code 
> against branch-2; I find myself having to explicitly declare a dependency on 
> hadoop-hdfs to make things work again.
> We've also seen problems downstream (e.g. spark) where the move of s3n 
> classes to hadoop-aws has broken code which expects it to be there.
> At the same time, I see the merits in a lean, low-dependency client, which 
> hadoop-client and its dependencies are not today.
> I propose
> # reinstate hadoop-hdfs as dependency of hadoop-client
> # add hadoop-aws as a dependency of hadoop-client —but excluding adding any 
> amazon-aws JARs.
> # create hadoop-lean-client for minimal deployments, stripping out all 
> extraneous dependencies,
> # for hadoop-lean-client, have a compatibility statement of "we will strip 
> out anything we can from this, even over point releases". That is, anything 
> that can be dropped in future, will.
> This will give downstream projects a choice: the old POM with everything, the 
> lean POM for new apps.
> And, by reinstating hadoop-hdfs, things will build again.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12822) Change "Metrics intern cache overflow" log level from WARN to INFO

2016-04-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12822?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15235145#comment-15235145
 ] 

Hadoop QA commented on HADOOP-12822:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 10m 59s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
2s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 52s 
{color} | {color:green} trunk passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 43s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
21s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 0s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
15s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
39s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 53s 
{color} | {color:green} trunk passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
41s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 5m 48s 
{color} | {color:green} the patch passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 5m 48s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 45s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 45s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
21s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 58s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
57s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 52s 
{color} | {color:green} the patch passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 5s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 7m 10s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.8.0_77. 
{color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 6m 52s {color} 
| {color:red} hadoop-common in the patch failed with JDK v1.7.0_95. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
24s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 70m 7s {color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| JDK v1.7.0_95 Failed junit tests | 
hadoop.security.ssl.TestReloadingX509TrustManager |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:fbe3e86 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12798030/HADOOP-12822.patch |
| JIRA Issue | HADOOP-12822 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux fc1f1581d22a 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86

[jira] [Updated] (HADOOP-12822) Change "Metrics intern cache overflow" log level from WARN to INFO

2016-04-11 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-12822:
--
Assignee: Andras Bokor
  Status: Patch Available  (was: Open)

[~ajisakaa] What do you mean here? I changed two log levels. Is that what you 
meant? Please check the patch.

> Change "Metrics intern cache overflow" log level from WARN to INFO
> --
>
> Key: HADOOP-12822
> URL: https://issues.apache.org/jira/browse/HADOOP-12822
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: metrics
>Reporter: Akira AJISAKA
>Assignee: Andras Bokor
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-12822.patch
>
>
> Interns.java outputs a "Metrics intern cache overflow" WARN log for metrics 
> info/tags when the cache reaches the hard-coded limit and the oldest entry is 
> discarded for the first time. I'm thinking this log level can be changed to 
> INFO because
> * there is no problem when the oldest entry is removed: if the metrics 
> info/tag is not in the cache, it is simply created again.
> * we cannot configure the maximum size of the cache, so there is no way to 
> stop the WARN log.
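
For context, a minimal sketch of the behaviour under discussion, with hypothetical names and an illustrative limit (this is not the actual Interns.java code): a bounded intern cache that evicts its eldest entry and logs once when the first eviction happens.

{code:java}
import java.util.LinkedHashMap;
import java.util.Map;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

class BoundedInternCache<T> {
  private static final Logger LOG =
      LoggerFactory.getLogger(BoundedInternCache.class);
  private static final int MAX_SIZE = 2000;   // illustrative hard-coded limit

  private boolean overflowLogged = false;
  private final Map<T, T> cache =
      new LinkedHashMap<T, T>(16, 0.75f, true) {
        @Override
        protected boolean removeEldestEntry(Map.Entry<T, T> eldest) {
          boolean evict = size() > MAX_SIZE;
          if (evict && !overflowLogged) {
            overflowLogged = true;
            // Eviction is harmless: the entry is simply re-created on the
            // next lookup. That is the argument for INFO rather than WARN.
            LOG.info("Metrics intern cache overflow at {} entries", size());
          }
          return evict;
        }
      };

  synchronized T intern(T value) {
    T cached = cache.get(value);
    if (cached != null) {
      return cached;
    }
    cache.put(value, value);
    return value;
  }
}
{code}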



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12822) Change "Metrics intern cache overflow" log level from WARN to INFO

2016-04-11 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12822?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-12822:
--
Attachment: HADOOP-12822.patch

> Change "Metrics intern cache overflow" log level from WARN to INFO
> --
>
> Key: HADOOP-12822
> URL: https://issues.apache.org/jira/browse/HADOOP-12822
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: metrics
>Reporter: Akira AJISAKA
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-12822.patch
>
>
> Interns.java outputs a "Metrics intern cache overflow" WARN log for metrics 
> info/tags when the cache reaches the hard-coded limit and the oldest entry is 
> discarded for the first time. I'm thinking this log level can be changed to 
> INFO because
> * there is no problem when the oldest entry is removed: if the metrics 
> info/tag is not in the cache, it is simply created again.
> * we cannot configure the maximum size of the cache, so there is no way to 
> stop the WARN log.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12994) Specify PositionedReadable, add contract tests, fix problems

2016-04-11 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12994?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15235054#comment-15235054
 ] 

Brahma Reddy Battula commented on HADOOP-12994:
---

The following test fails after this went in. Since Jenkins did not run on the 
hdfs project, it was not shown in the QA report.

{noformat}
java.lang.AssertionError: expected:<0> but was:<-1>
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:555)
at org.junit.Assert.assertEquals(Assert.java:542)
at 
org.apache.hadoop.fs.contract.AbstractContractSeekTest.testReadFullyZeroByteFile(AbstractContractSeekTest.java:373)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
at java.lang.reflect.Method.invoke(Unknown Source)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at 
org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
{noformat}
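
For reference, a minimal sketch of what the failing assertion appears to check (the filesystem and file setup are assumed, not taken from the contract test itself): a positioned read of zero bytes must return 0 even on a zero-byte file, while -1 is reserved for a genuine EOF on a non-empty read.

{code:java}
import static org.junit.Assert.assertEquals;

import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ZeroByteReadSketch {

  // 'fs' and 'zeroByteFile' are assumed to be provided by the test harness.
  static void checkZeroByteRead(FileSystem fs, Path zeroByteFile)
      throws Exception {
    try (FSDataInputStream in = fs.open(zeroByteFile)) {
      // PositionedReadable.read(position, buffer, offset, length) with
      // length == 0 should return 0 on any stream, even an empty file;
      // returning -1 here is what the assertion above is rejecting.
      int read = in.read(0L, new byte[1], 0, 0);
      assertEquals(0, read);
    }
  }
}
{code}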


> Specify PositionedReadable, add contract tests, fix problems
> 
>
> Key: HADOOP-12994
> URL: https://issues.apache.org/jira/browse/HADOOP-12994
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Fix For: 2.8.0
>
> Attachments: HADOOP-12994-001.patch, HADOOP-12994-002.patch, 
> HADOOP-12994-003.patch, HADOOP-12994-004.patch, HADOOP-12994-005.patch, 
> HADOOP-12994-006.patch
>
>
> Some work on S3a has shown that there aren't tests catching regressions in 
> readFully; reviewing the documentation shows that its specification could be 
> improved.
> # review the spec
> # review the implementations
> # add tests (proposed: to the seek contract; streams which support seek 
> should support positioned readable)
> # fix code, where it differs significantly from HDFS or LocalFS



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-13016) reinstate hadoop-hdfs as dependency of hadoop-client, create hadoop-lean-client for minimal deployments

2016-04-11 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-13016:
---

 Summary: reinstate hadoop-hdfs as dependency of hadoop-client, 
create hadoop-lean-client for minimal deployments
 Key: HADOOP-13016
 URL: https://issues.apache.org/jira/browse/HADOOP-13016
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: 2.8.0
Reporter: Steve Loughran


the split of hadoop-hdfs and hadoop-hdfs-client is breaking code of mine whose 
builds declared a dependency on hadoop-client and expected all of HDFS to make 
it in.

I'm finding this first, because I'm building and testing downstream code 
against branch-2; I find myself having to explicitly declare a dependency on 
hadoop-hdfs to make things work again.

We've also seen problems downstream (e.g. spark) where the move of s3n classes 
to hadoop-aws has broken code which expects it to be there.

At the same time, I see the merits in a lean, low-dependency client, which 
hadoop-client and its dependencies are not today.

I propose

# reinstate hadoop-hdfs as dependency of hadoop-client
# add hadoop-aws as a dependency of hadoop-client —but excluding adding any 
amazon-aws JARs.
# create hadoop-lean-client for minimal deployments, stripping out all 
extraneous dependencies,
# for hadoop-lean-client, have a compatibility statement of "we will strip out 
anything we can from this, even over point releases". That is, anything that 
can be dropped in future, will.

This will give downstream projects a choice: the old POM with everything, the 
lean POM for new apps.

And, by reinstating hadoop-hdfs, things will build again.




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12751) While using kerberos Hadoop incorrectly assumes names with '@' to be non-simple

2016-04-11 Thread Bolke de Bruin (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15234821#comment-15234821
 ] 

Bolke de Bruin commented on HADOOP-12751:
-

[~steve_l] I assume that you mean "make it configurable"? That's fine with me 
and I will update the patch to do so.

> While using kerberos Hadoop incorrectly assumes names with '@' to be 
> non-simple
> ---
>
> Key: HADOOP-12751
> URL: https://issues.apache.org/jira/browse/HADOOP-12751
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.2
> Environment: kerberos
>Reporter: Bolke de Bruin
>Assignee: Bolke de Bruin
>Priority: Critical
>  Labels: kerberos
> Attachments: 0001-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0001-Remove-check-for-user-name-characters-and.patch, 
> 0002-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0003-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0004-HADOOP-12751-leave-user-validation-to-os.patch
>
>
> In the scenario of a trust between two directories, e.g. FreeIPA (ipa.local) 
> and Active Directory (ad.local), users can be made available on the OS level 
> by something like sssd. The trusted users will be of the form 'user@ad.local' 
> while other users will not contain the domain. Executing 'id -Gn 
> user@ad.local' will successfully return the groups the user belongs to if 
> configured correctly.
> However, Hadoop assumes that user names containing '@' cannot be correct. 
> This code is in KerberosName.java and seems to be a validator of whether the 
> 'auth_to_local' rules are applied correctly.
> In my opinion this should be removed, changed to a different kind of check, 
> or maybe logged as a warning while still proceeding, as the current behavior 
> limits integration possibilities with other standard tools.
> Workarounds are difficult to apply (e.g. having system tools rewrite the name 
> to something like user_ad_local) due to downstream consequences.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12751) While using kerberos Hadoop incorrectly assumes names with '@' to be non-simple

2016-04-11 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15234814#comment-15234814
 ] 

Steve Loughran commented on HADOOP-12751:
-

One issue with OS login is that it is inevitably going to fail with a "GSSAPI 
Unknown Exception". The pattern check not only fails fast, it fails 
meaningfully, which is useful when people are trying to debug it.

I think we should retain that check, giving people the option of disabling it 
if there are problems, either as a regexp or a simple "use standard check" 
pattern.
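
A minimal sketch of that compromise, with a hypothetical configuration key and default pattern (neither is an actual Hadoop property): keep the fail-fast check by default, but let deployments with OS-resolved names like 'user@ad.local' relax or disable it.

{code:java}
import java.util.regex.Pattern;
import org.apache.hadoop.conf.Configuration;

class SimpleNameValidator {
  // Hypothetical key and default; illustrative only.
  static final String NAME_PATTERN_KEY =
      "hadoop.security.auth_to_local.name.pattern";
  static final String DEFAULT_PATTERN = "[A-Za-z0-9_][A-Za-z0-9_-]*";

  private final Pattern pattern;   // null means the check is disabled

  SimpleNameValidator(Configuration conf) {
    String p = conf.get(NAME_PATTERN_KEY, DEFAULT_PATTERN);
    this.pattern = p.isEmpty() ? null : Pattern.compile(p);
  }

  void validate(String shortName) {
    if (pattern != null && !pattern.matcher(shortName).matches()) {
      // Failing here is deliberate: a clear message at translation time is
      // far easier to debug than a later "GSSAPI Unknown Exception" from
      // the OS login path.
      throw new IllegalArgumentException(
          "Non-simple name " + shortName + " after auth_to_local rules");
    }
  }
}
{code}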

> While using kerberos Hadoop incorrectly assumes names with '@' to be 
> non-simple
> ---
>
> Key: HADOOP-12751
> URL: https://issues.apache.org/jira/browse/HADOOP-12751
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.2
> Environment: kerberos
>Reporter: Bolke de Bruin
>Assignee: Bolke de Bruin
>Priority: Critical
>  Labels: kerberos
> Attachments: 0001-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0001-Remove-check-for-user-name-characters-and.patch, 
> 0002-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0003-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0004-HADOOP-12751-leave-user-validation-to-os.patch
>
>
> In the scenario of a trust between two directories, e.g. FreeIPA (ipa.local) 
> and Active Directory (ad.local), users can be made available on the OS level 
> by something like sssd. The trusted users will be of the form 'user@ad.local' 
> while other users will not contain the domain. Executing 'id -Gn 
> user@ad.local' will successfully return the groups the user belongs to if 
> configured correctly.
> However, Hadoop assumes that user names containing '@' cannot be correct. 
> This code is in KerberosName.java and seems to be a validator of whether the 
> 'auth_to_local' rules are applied correctly.
> In my opinion this should be removed, changed to a different kind of check, 
> or maybe logged as a warning while still proceeding, as the current behavior 
> limits integration possibilities with other standard tools.
> Workarounds are difficult to apply (e.g. having system tools rewrite the name 
> to something like user_ad_local) due to downstream consequences.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-12751) While using kerberos Hadoop incorrectly assumes names with '@' to be non-simple

2016-04-11 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-12751:

Labels: kerberos  (was: kerberos stevel-to-review)

> While using kerberos Hadoop incorrectly assumes names with '@' to be 
> non-simple
> ---
>
> Key: HADOOP-12751
> URL: https://issues.apache.org/jira/browse/HADOOP-12751
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.7.2
> Environment: kerberos
>Reporter: Bolke de Bruin
>Assignee: Bolke de Bruin
>Priority: Critical
>  Labels: kerberos
> Attachments: 0001-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0001-Remove-check-for-user-name-characters-and.patch, 
> 0002-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0003-HADOOP-12751-leave-user-validation-to-os.patch, 
> 0004-HADOOP-12751-leave-user-validation-to-os.patch
>
>
> In the scenario of a trust between two directories, e.g. FreeIPA (ipa.local) 
> and Active Directory (ad.local), users can be made available on the OS level 
> by something like sssd. The trusted users will be of the form 'user@ad.local' 
> while other users will not contain the domain. Executing 'id -Gn 
> user@ad.local' will successfully return the groups the user belongs to if 
> configured correctly.
> However, Hadoop assumes that user names containing '@' cannot be correct. 
> This code is in KerberosName.java and seems to be a validator of whether the 
> 'auth_to_local' rules are applied correctly.
> In my opinion this should be removed, changed to a different kind of check, 
> or maybe logged as a warning while still proceeding, as the current behavior 
> limits integration possibilities with other standard tools.
> Workarounds are difficult to apply (e.g. having system tools rewrite the name 
> to something like user_ad_local) due to downstream consequences.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-12406) AbstractMapWritable.readFields throws ClassNotFoundException with custom writables

2016-04-11 Thread Markus Jelsma (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12406?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15234772#comment-15234772
 ] 

Markus Jelsma commented on HADOOP-12406:


Hi Vinod and Nadeem, thanks for taking care of this. At Apache Nutch, we're 
looking forward to 2.7.3!

> AbstractMapWritable.readFields throws ClassNotFoundException with custom 
> writables
> --
>
> Key: HADOOP-12406
> URL: https://issues.apache.org/jira/browse/HADOOP-12406
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Affects Versions: 2.7.1
> Environment: Ubuntu Linux 14.04 LTS amd64
>Reporter: Nadeem Douba
>Assignee: Nadeem Douba
>Priority: Blocker
> Attachments: HADOOP-12406.1.patch, HADOOP-12406.patch
>
>
> Note: I am not an expert at Java, class loaders, or Hadoop. I am just a 
> hacker. My solution might be entirely wrong.
> AbstractMapWritable.readFields throws a ClassNotFoundException when reading 
> custom writables. Debugging the job using remote debugging in IntelliJ 
> revealed that the class loader being used in Class.forName() is different 
> than that used by the Thread's current context 
> (Thread.currentThread().getContextClassLoader()). The class path for the 
> system class loader does not include the libraries of the job jar. However, 
> the class path for the context class loader does. The proposed patch changes 
> the class loading mechanism in readFields to use the Thread's context class 
> loader instead of the system's default class loader.
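
A minimal sketch of the class-loading change the description outlines (the exact shape of the fix is an assumption): resolve the class through the thread's context class loader, which includes the job jar's libraries, instead of the loader that defined the framework class.

{code:java}
class WritableClassLoading {

  // Class.forName(name) alone resolves against the defining class loader,
  // whose class path does not include the job jar -- hence the
  // ClassNotFoundException for custom writables.
  static Class<?> loadWritableClass(String className)
      throws ClassNotFoundException {
    ClassLoader loader = Thread.currentThread().getContextClassLoader();
    if (loader == null) {
      // Fall back to this class's own loader if no context loader is set.
      loader = WritableClassLoading.class.getClassLoader();
    }
    return Class.forName(className, true, loader);
  }
}
{code}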



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-13010) Refactor raw erasure coders

2016-04-11 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15234768#comment-15234768
 ] 

Kai Zheng commented on HADOOP-13010:


Hi [~cmccabe], would you help review this and check whether your comments in 
HADOOP-11540 are well addressed in this refactoring? Thanks!

> Refactor raw erasure coders
> ---
>
> Key: HADOOP-13010
> URL: https://issues.apache.org/jira/browse/HADOOP-13010
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Fix For: 3.0.0
>
> Attachments: HADOOP-13010-v1.patch, HADOOP-13010-v2.patch
>
>
> This will refactor raw erasure coders according to some comments received so 
> far.
> * As discussed in HADOOP-11540 and suggested by [~cmccabe], better not to 
> rely on class inheritance to reuse the code; instead it can be moved to some 
> utility.
> * As suggested by [~jingzhao] quite some time ago, better to have a 
> state holder to keep some checking results for later reuse during an 
> encode/decode call.
> This would not yet get rid of some inheritance levels, as doing so isn't 
> clear for the moment and also incurs a big impact. I do hope the end result 
> of this refactoring will make all the levels clearer and easier to follow.
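
A minimal sketch of those two ideas, with hypothetical names (not the actual patch): shared logic lives in a static utility instead of a base class, and the per-call validation results go into a state holder for reuse.

{code:java}
import java.nio.ByteBuffer;

// Shared logic as a utility rather than an inherited base-class method.
final class CoderUtil {
  private CoderUtil() {}

  // Validation shared by all coders, computed once per encode/decode call.
  static CodingState checkParameters(ByteBuffer[] inputs) {
    if (inputs.length == 0) {
      throw new IllegalArgumentException("No input buffers");
    }
    int encodeLength = inputs[0].remaining();
    for (ByteBuffer b : inputs) {
      if (b.remaining() != encodeLength) {
        throw new IllegalArgumentException("Input buffers of unequal length");
      }
    }
    return new CodingState(encodeLength, inputs[0].isDirect());
  }
}

// Holds the checking results so later steps of the same call can reuse them
// instead of re-validating the buffers.
final class CodingState {
  final int encodeLength;
  final boolean usingDirectBuffers;

  CodingState(int encodeLength, boolean usingDirectBuffers) {
    this.encodeLength = encodeLength;
    this.usingDirectBuffers = usingDirectBuffers;
  }
}
{code}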



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11540) Raw Reed-Solomon coder using Intel ISA-L library

2016-04-11 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11540?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HADOOP-11540:
---
Labels: hdfs-ec-3.0-must-do  (was: )

> Raw Reed-Solomon coder using Intel ISA-L library
> 
>
> Key: HADOOP-11540
> URL: https://issues.apache.org/jira/browse/HADOOP-11540
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: HDFS-7285
>Reporter: Zhe Zhang
>Assignee: Kai Zheng
>  Labels: hdfs-ec-3.0-must-do
> Attachments: HADOOP-11540-initial.patch, HADOOP-11540-v1.patch, 
> HADOOP-11540-v10.patch, HADOOP-11540-v2.patch, HADOOP-11540-v4.patch, 
> HADOOP-11540-v5.patch, HADOOP-11540-v6.patch, HADOOP-11540-v7.patch, 
> HADOOP-11540-v8.patch, HADOOP-11540-v9.patch, 
> HADOOP-11540-with-11996-codes.patch, Native Erasure Coder Performance - Intel 
> ISAL-v1.pdf
>
>
> This is to provide RS codec implementation using Intel ISA-L library for 
> encoding and decoding.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10672) Add support for pushing metrics to OpenTSDB

2016-04-11 Thread zhangyubiao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10672?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhangyubiao updated HADOOP-10672:
-
Status: In Progress  (was: Patch Available)

> Add support for pushing metrics to OpenTSDB
> ---
>
> Key: HADOOP-10672
> URL: https://issues.apache.org/jira/browse/HADOOP-10672
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: metrics
>Affects Versions: 0.21.0
>Reporter: Kamaldeep Singh
>Assignee: zhangyubiao
>Priority: Minor
> Attachments: HADOOP-10672-v1.patch, HADOOP-10672-v2.patch, 
> HADOOP-10672-v3.patch, HADOOP-10672-v4.patch, HADOOP-10672-v5.patch, 
> HADOOP-10672.patch
>
>
> We wish to add support for pushing metrics to OpenTSDB from Hadoop. 
> Code and instructions are at https://github.com/eBay/hadoop-tsdb-connector
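
For readers unfamiliar with the metrics2 plumbing, a minimal sketch of what such a sink involves (the class name, the 'url' property, and the JSON assembly are assumptions of this sketch; the MetricsSink callbacks and OpenTSDB's public /api/put endpoint are the real interfaces):

{code:java}
import java.io.IOException;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import org.apache.commons.configuration.SubsetConfiguration;
import org.apache.hadoop.metrics2.AbstractMetric;
import org.apache.hadoop.metrics2.MetricsRecord;
import org.apache.hadoop.metrics2.MetricsSink;
import org.apache.hadoop.metrics2.MetricsTag;

public class OpenTsdbSink implements MetricsSink {
  private String endpoint;

  @Override
  public void init(SubsetConfiguration conf) {
    // Sink properties come from hadoop-metrics2.properties.
    endpoint = conf.getString("url", "http://localhost:4242/api/put");
  }

  @Override
  public void putMetrics(MetricsRecord record) {
    // OpenTSDB requires at least one tag per data point.
    StringBuilder tags = new StringBuilder("\"source\":\"hadoop\"");
    for (MetricsTag tag : record.tags()) {
      if (tag.value() != null) {
        tags.append(String.format(",\"%s\":\"%s\"", tag.name(), tag.value()));
      }
    }
    for (AbstractMetric metric : record.metrics()) {
      // /api/put accepts millisecond timestamps in recent OpenTSDB versions.
      String json = String.format(
          "{\"metric\":\"%s.%s\",\"timestamp\":%d,\"value\":%s,\"tags\":{%s}}",
          record.name(), metric.name(), record.timestamp(),
          metric.value(), tags);
      try {
        post(json);
      } catch (IOException e) {
        // A production sink would buffer and retry; a sketch drops the sample.
      }
    }
  }

  private void post(String json) throws IOException {
    HttpURLConnection conn =
        (HttpURLConnection) new URL(endpoint).openConnection();
    conn.setRequestMethod("POST");
    conn.setRequestProperty("Content-Type", "application/json");
    conn.setDoOutput(true);
    try (OutputStream out = conn.getOutputStream()) {
      out.write(json.getBytes("UTF-8"));
    }
    conn.getResponseCode();   // complete the exchange
    conn.disconnect();
  }

  @Override
  public void flush() {
    // Nothing buffered in this sketch.
  }
}
{code}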



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-13010) Refactor raw erasure coders

2016-04-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15234695#comment-15234695
 ] 

Hadoop QA commented on HADOOP-13010:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 19s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 11m 
33s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 12m 
53s {color} | {color:green} trunk passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 9m 30s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
23s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 1m 11s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
16s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 3s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 5s 
{color} | {color:green} trunk passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 4s 
{color} | {color:green} trunk passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
43s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 8m 16s 
{color} | {color:green} the patch passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 8m 16s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 7m 25s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 7m 25s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
21s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 57s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} Patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
50s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 2s 
{color} | {color:green} the patch passed with JDK v1.8.0_77 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 1m 7s 
{color} | {color:green} the patch passed with JDK v1.7.0_95 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 10m 1s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.8.0_77. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 9m 34s 
{color} | {color:green} hadoop-common in the patch passed with JDK v1.7.0_95. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
23s {color} | {color:green} Patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 83m 21s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:fbe3e86 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12797980/HADOOP-13010-v2.patch 
|
| JIRA Issue | HADOOP-13010 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 879fee56ee27 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 1ff27f9 |
| Default Java | 1.7.0_95 |
| Multi-JDK versions |  /usr/lib/jvm/j

[jira] [Created] (HADOOP-13015) Implement kinit command execution facility in Java by leveraging Apache Kerby

2016-04-11 Thread Jiajia Li (JIRA)
Jiajia Li created HADOOP-13015:
--

 Summary: Implement kinit command execution facility in Java by 
leveraging Apache Kerby
 Key: HADOOP-13015
 URL: https://issues.apache.org/jira/browse/HADOOP-13015
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Jiajia Li
Assignee: Jiajia Li


The kinit command is configured and invoked for obtaining and renewing tickets 
in Hadoop. There would be some benefits to implementing the facility by 
leveraging the Kerby KrbClient API:
1. It resolves the hard dependency on the MIT Kerberos client package;
2. It is pure Java, with no need to configure and set up any runtime 
environment, making it easy to support on various platforms and easy to 
maintain and test.

This proposes to make the change with a prototype patch for review and 
discussion. Your feedback is welcome. Thanks.
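
A minimal sketch of the idea, assuming the Apache Kerby client API roughly as follows (treat the exact constructor and method signatures as assumptions rather than verified usage):

{code:java}
import java.io.File;
import org.apache.kerby.kerberos.kerb.KrbException;
import org.apache.kerby.kerberos.kerb.client.KrbClient;
import org.apache.kerby.kerberos.kerb.type.ticket.TgtTicket;

public class JavaKinitSketch {
  public static void main(String[] args) throws KrbException {
    // Point the client at the directory holding krb5.conf.
    KrbClient krbClient = new KrbClient(new File("/etc/"));
    krbClient.init();

    // Obtain a TGT with a password, replacing 'kinit user@EXAMPLE.COM',
    // without any dependency on the MIT Kerberos binaries.
    TgtTicket tgt = krbClient.requestTgt("user@EXAMPLE.COM", "password");
    System.out.println("Obtained TGT for " + tgt.getClientPrincipal());
  }
}
{code}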




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-13014) Add the MiniKdcService in Hadoop

2016-04-11 Thread Jiajia Li (JIRA)
Jiajia Li created HADOOP-13014:
--

 Summary: Add the MiniKdcService in Hadoop
 Key: HADOOP-13014
 URL: https://issues.apache.org/jira/browse/HADOOP-13014
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Jiajia Li
Assignee: Jiajia Li


As discussed in HADOOP-12911, Steve advised "Perhaps we could have a MiniKDC 
service, which the existing MiniKDC code instantiated on its existing 
lifecycle."
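
A minimal sketch of what such a wrapper could look like (the service class itself is hypothetical; MiniKdc and AbstractService are the existing Hadoop pieces):

{code:java}
import java.io.File;
import java.util.Properties;
import org.apache.hadoop.minikdc.MiniKdc;
import org.apache.hadoop.service.AbstractService;

public class MiniKdcService extends AbstractService {
  private final File workDir;
  private MiniKdc kdc;

  public MiniKdcService(File workDir) {
    super("MiniKdcService");
    this.workDir = workDir;
  }

  @Override
  protected void serviceStart() throws Exception {
    // Bring the KDC up as part of the standard service lifecycle.
    Properties kdcConf = MiniKdc.createConf();
    kdc = new MiniKdc(kdcConf, workDir);
    kdc.start();
  }

  @Override
  protected void serviceStop() throws Exception {
    if (kdc != null) {
      kdc.stop();
    }
  }

  public MiniKdc getKdc() {
    return kdc;
  }
}
{code}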



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-13010) Refactor raw erasure coders

2016-04-11 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HADOOP-13010:
---
Attachment: HADOOP-13010-v2.patch

Updated the patch to address the Jenkins findings.

> Refactor raw erasure coders
> ---
>
> Key: HADOOP-13010
> URL: https://issues.apache.org/jira/browse/HADOOP-13010
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Kai Zheng
>Assignee: Kai Zheng
> Fix For: 3.0.0
>
> Attachments: HADOOP-13010-v1.patch, HADOOP-13010-v2.patch
>
>
> This will refactor raw erasure coders according to some comments received so 
> far.
> * As discussed in HADOOP-11540 and suggested by [~cmccabe], better not to 
> rely on class inheritance to reuse the code; instead it can be moved to some 
> utility.
> * As suggested by [~jingzhao] quite some time ago, better to have a 
> state holder to keep some checking results for later reuse during an 
> encode/decode call.
> This would not yet get rid of some inheritance levels, as doing so isn't 
> clear for the moment and also incurs a big impact. I do hope the end result 
> of this refactoring will make all the levels clearer and easier to follow.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)