Re: [DISCUSS] About the details of JDK-8 support

2015-10-14 Thread Steve Loughran

> On 13 Oct 2015, at 17:32, Haohui Mai  wrote:
> 
> Just to echo Steve's idea -- if we're seriously considering supporting
> JDK 8, maybe the first thing to do is to set up the jenkins to run
> with JDK 8? I'm happy to help. Does anyone know who I can talk to if I
> need to play around with all the Jenkins knob?

Jenkins is building with Java 7 and 8. All that's needed is to turn off the 
Java 7 build, which I will happily do. The POM can be changed to set the 
minimum JVM version -though that's most likely to be visible to people building 
locally, as you'll need to make sure that you have access to Java 7 and Java 8 
JVMs if you want to build and test for both.
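For illustration, a minimal sketch of pinning the minimum JVM in a POM with the 
maven-enforcer-plugin -the plugin placement and version range here are 
assumptions, not the actual Hadoop POM:

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-enforcer-plugin</artifactId>
  <executions>
    <execution>
      <id>enforce-java</id>
      <goals><goal>enforce</goal></goals>
      <configuration>
        <rules>
          <!-- fail the build on any JVM older than Java 8 -->
          <requireJavaVersion>
            <version>[1.8,)</version>
          </requireJavaVersion>
        </rules>
      </configuration>
    </execution>
  </executions>
</plugin>
```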

Jenkins-wise, the big issue is one I've mentioned before: the builds are 
failing and not enough people are caring

https://builds.apache.org/view/H-L/view/Hadoop/job/Hadoop-Hdfs-trunk-Java8/488/

Please, let's fix this



[jira] [Created] (HADOOP-12478) Shell.getWinUtilsPath() has been renamed Shell.getWinutilsPath()

2015-10-14 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-12478:
---

 Summary: Shell.getWinUtilsPath()  has been renamed 
Shell.getWinutilsPath() 
 Key: HADOOP-12478
 URL: https://issues.apache.org/jira/browse/HADOOP-12478
 Project: Hadoop Common
  Issue Type: Bug
  Components: util
Affects Versions: 2.8.0
Reporter: Steve Loughran
Assignee: Steve Loughran
Priority: Critical


The shell enhancements of HADOOP-10775 managed to unintentionally rename 
{{Shell.getWinUtilsPath()}} to {{Shell.getWinutilsPath()}}. This didn't crop up 
in Hadoop's own code, but it has broken some of my stuff downstream -stuff in 
Groovy which the IDE didn't pick up as a regression.

I'll fix things by renaming the method to its original name, and changing the 
name of {{getWinutilsFile()}} to match.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


hadoop-hdfs-client splitoff is going to break code

2015-10-14 Thread Steve Loughran
just an FYI, the split off of hadoop hdfs into client and server is going to 
break things.

I know that because my code is broken: DFSConfigKeys is off the path, and 
HdfsConfiguration, the class I've been loading to force pickup of hdfs-site.xml, 
is missing.

This is because the hadoop-client POM now depends on hadoop-hdfs-client, not 
hadoop-hdfs, so the things I'm referencing are gone. I'm particularly sad about 
DFSConfigKeys, as everybody uses it as the one hard-coded resource of HDFS 
constants; HDFS-6566, which covers making this public, has been sitting around 
for a year.

I'm fixing my build by explicitly adding a hadoop-hdfs dependency.
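For anyone hit by the same breakage, the workaround described above is a 
one-line POM addition -the version property name here is illustrative:

```xml
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-hdfs</artifactId>
  <version>${hadoop.version}</version>
</dependency>
```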

Any application which used stuff which has now been declared server-side isn't 
going to compile any more, which does appear to break the compatibility 
guidelines we've adopted, specifically "The hadoop-client artifact (maven 
groupId:artifactId) stays compatible within a major release"

http://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/Compatibility.html#Build_artifacts


We need to do one of

1. Agree that this change is considered acceptable according to policy, and 
mark it as incompatible in hdfs/CHANGES.TXT.
2. Change the POMs to add both hdfs-client and the hdfs server in hadoop-client 
-with downstream users free to exclude the server code.

We unintentionally caused similar grief with the move of the s3n clients to 
hadoop-aws, HADOOP-11074 -something we should have picked up and -1'd. This 
time we know the problem's going to arise, so let's explicitly make a decision 
and share it with our users.

-steve


[jira] [Created] (HADOOP-12475) Replace guava Cache with ConcurrentHashMap for caching Connection in ipc Client

2015-10-14 Thread Walter Su (JIRA)
Walter Su created HADOOP-12475:
--

 Summary: Replace guava Cache with ConcurrentHashMap for caching 
Connection in ipc Client
 Key: HADOOP-12475
 URL: https://issues.apache.org/jira/browse/HADOOP-12475
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Walter Su
Assignee: Walter Su


Quoting [~daryn] from HADOOP-11772:
{quote}
CacheBuilder is obscenely expensive for a concurrent map, and it requires 
generating unnecessary garbage even just to look up a key. Replace it with 
ConcurrentHashMap.

I identified this issue because it impaired my own perf testing under load. The 
slowdown isn't just the sync. It's the expense of Connection's ctor stalling 
other connections. The expense of ConnectionId#equals causes delays. 
Synch'ing on connections causes unfair contention unlike a sync'ed method. 
Concurrency simply hides this.
{quote}

BTW, guava Cache is heavyweight. Per a local test, ConcurrentHashMap has better 
overall performance.
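A minimal sketch of the proposed change -caching connections in a 
ConcurrentHashMap via computeIfAbsent instead of a guava CacheBuilder. The 
ConnectionId and Connection classes below are simplified stand-ins, not the 
real org.apache.hadoop.ipc types:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.atomic.AtomicInteger;

class ConnectionCache {
    // counts ctor invocations, to show each key is constructed at most once
    static final AtomicInteger CTOR_CALLS = new AtomicInteger();

    static final class ConnectionId {
        final String host; final int port;
        ConnectionId(String host, int port) { this.host = host; this.port = port; }
        @Override public boolean equals(Object o) {
            if (!(o instanceof ConnectionId)) return false;
            ConnectionId c = (ConnectionId) o;
            return port == c.port && host.equals(c.host);
        }
        @Override public int hashCode() { return 31 * host.hashCode() + port; }
    }

    static final class Connection {
        final ConnectionId id;
        Connection(ConnectionId id) { this.id = id; CTOR_CALLS.incrementAndGet(); }
    }

    private final ConcurrentMap<ConnectionId, Connection> connections =
        new ConcurrentHashMap<>();

    // lock-free lookup on the hit path; computeIfAbsent serializes only
    // the first construction for a given key
    Connection getConnection(ConnectionId id) {
        return connections.computeIfAbsent(id, Connection::new);
    }
}
```

Note this sketch assumes JDK 8's computeIfAbsent, which fits the parallel 
JDK-8 discussion on this list.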





[jira] [Created] (HADOOP-12476) TestDNS.testLookupWithoutHostsFallback

2015-10-14 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-12476:
---

 Summary: TestDNS.testLookupWithoutHostsFallback
 Key: HADOOP-12476
 URL: https://issues.apache.org/jira/browse/HADOOP-12476
 Project: Hadoop Common
  Issue Type: Bug
  Components: net, test
Affects Versions: 3.0.0
Reporter: Steve Loughran
Assignee: Steve Loughran


Presumably triggered by HADOOP-12449, one of the Jenkins patch runs has failed 
in {{TestDNS.testLookupWithoutHostsFallback}}.





Build failed in Jenkins: Hadoop-Common-trunk #1847

2015-10-14 Thread Apache Jenkins Server
See 

Changes:

[aw] HADOOP-12364. Deleting pid file after stop is causing the daemons to

--
[...truncated 5398 lines...]
Running org.apache.hadoop.io.TestTextNonUTF8
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.167 sec - in 
org.apache.hadoop.io.TestTextNonUTF8
Running org.apache.hadoop.io.TestSequenceFileSerialization
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.839 sec - in 
org.apache.hadoop.io.TestSequenceFileSerialization
Running org.apache.hadoop.security.TestNetgroupCache
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.078 sec - in 
org.apache.hadoop.security.TestNetgroupCache
Running org.apache.hadoop.security.TestUserFromEnv
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.553 sec - in 
org.apache.hadoop.security.TestUserFromEnv
Running org.apache.hadoop.security.TestGroupsCaching
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.849 sec - in 
org.apache.hadoop.security.TestGroupsCaching
Running org.apache.hadoop.security.ssl.TestSSLFactory
Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.787 sec - in 
org.apache.hadoop.security.ssl.TestSSLFactory
Running org.apache.hadoop.security.ssl.TestReloadingX509TrustManager
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.523 sec - in 
org.apache.hadoop.security.ssl.TestReloadingX509TrustManager
Running org.apache.hadoop.security.TestUGILoginFromKeytab
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.546 sec - in 
org.apache.hadoop.security.TestUGILoginFromKeytab
Running org.apache.hadoop.security.TestUserGroupInformation
Tests run: 34, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.649 sec - in 
org.apache.hadoop.security.TestUserGroupInformation
Running org.apache.hadoop.security.TestUGIWithExternalKdc
Tests run: 1, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.083 sec - in 
org.apache.hadoop.security.TestUGIWithExternalKdc
Running org.apache.hadoop.security.TestProxyUserFromEnv
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.556 sec - in 
org.apache.hadoop.security.TestProxyUserFromEnv
Running org.apache.hadoop.security.authorize.TestProxyUsers
Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.918 sec - in 
org.apache.hadoop.security.authorize.TestProxyUsers
Running org.apache.hadoop.security.authorize.TestProxyServers
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.428 sec - in 
org.apache.hadoop.security.authorize.TestProxyServers
Running org.apache.hadoop.security.authorize.TestServiceAuthorization
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.228 sec - in 
org.apache.hadoop.security.authorize.TestServiceAuthorization
Running org.apache.hadoop.security.authorize.TestAccessControlList
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.95 sec - in 
org.apache.hadoop.security.authorize.TestAccessControlList
Running org.apache.hadoop.security.alias.TestCredShell
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.228 sec - in 
org.apache.hadoop.security.alias.TestCredShell
Running org.apache.hadoop.security.alias.TestCredentialProviderFactory
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.295 sec - in 
org.apache.hadoop.security.alias.TestCredentialProviderFactory
Running org.apache.hadoop.security.alias.TestCredentialProvider
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.085 sec - in 
org.apache.hadoop.security.alias.TestCredentialProvider
Running org.apache.hadoop.security.TestAuthenticationFilter
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.578 sec - in 
org.apache.hadoop.security.TestAuthenticationFilter
Running org.apache.hadoop.security.TestLdapGroupsMapping
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.421 sec - in 
org.apache.hadoop.security.TestLdapGroupsMapping
Running org.apache.hadoop.security.token.TestToken
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.498 sec - in 
org.apache.hadoop.security.token.TestToken
Running org.apache.hadoop.security.token.delegation.TestDelegationToken
Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 31.542 sec - 
in org.apache.hadoop.security.token.delegation.TestDelegationToken
Running 
org.apache.hadoop.security.token.delegation.web.TestDelegationTokenAuthenticationHandlerWithMocks
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.422 sec - in 
org.apache.hadoop.security.token.delegation.web.TestDelegationTokenAuthenticationHandlerWithMocks
Running 
org.apache.hadoop.security.token.delegation.web.TestDelegationTokenManager
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.873 sec - in 
org.apache.hadoop.security.token.delegation.web.TestDelegationTokenManager
Running 

Build failed in Jenkins: Hadoop-Common-trunk #1848

2015-10-14 Thread Apache Jenkins Server
See 

Changes:

[aw] fix CHANGES.txt name typo

[stevel] HADOOP-12478. Shell.getWinUtilsPath() has been renamed

--
[...truncated 8524 lines...]
  [javadoc] Loading source files for package org.apache.hadoop.log...
  [javadoc] Loading source files for package org.apache.hadoop.log.metrics...
  [javadoc] Loading source files for package org.apache.hadoop.metrics...
  [javadoc] Loading source files for package 
org.apache.hadoop.metrics.ganglia...
  [javadoc] Loading source files for package org.apache.hadoop.metrics.jvm...
  [javadoc] Loading source files for package org.apache.hadoop.metrics.spi...
  [javadoc] Loading source files for package org.apache.hadoop.metrics.util...
  [javadoc] Loading source files for package org.apache.hadoop.metrics2...
  [javadoc] Loading source files for package 
org.apache.hadoop.metrics2.annotation...
  [javadoc] Loading source files for package 
org.apache.hadoop.metrics2.filter...
  [javadoc] Loading source files for package org.apache.hadoop.metrics2.impl...
  [javadoc] Loading source files for package org.apache.hadoop.metrics2.lib...
  [javadoc] Loading source files for package org.apache.hadoop.metrics2.sink...
  [javadoc] Loading source files for package 
org.apache.hadoop.metrics2.sink.ganglia...
  [javadoc] Loading source files for package 
org.apache.hadoop.metrics2.source...
  [javadoc] Loading source files for package org.apache.hadoop.metrics2.util...
  [javadoc] Loading source files for package org.apache.hadoop.net...
  [javadoc] Loading source files for package org.apache.hadoop.net.unix...
  [javadoc] Loading source files for package org.apache.hadoop.security...
  [javadoc] Loading source files for package org.apache.hadoop.security.alias...
  [javadoc] Loading source files for package 
org.apache.hadoop.security.authorize...
  [javadoc] Loading source files for package 
org.apache.hadoop.security.protocolPB...
  [javadoc] Loading source files for package org.apache.hadoop.security.ssl...
  [javadoc] Loading source files for package org.apache.hadoop.security.token...
  [javadoc] Loading source files for package 
org.apache.hadoop.security.token.delegation...
  [javadoc] Loading source files for package 
org.apache.hadoop.security.token.delegation.web...
  [javadoc] Loading source files for package org.apache.hadoop.service...
  [javadoc] Loading source files for package org.apache.hadoop.tools...
  [javadoc] Loading source files for package 
org.apache.hadoop.tools.protocolPB...
  [javadoc] Loading source files for package org.apache.hadoop.tracing...
  [javadoc] Loading source files for package org.apache.hadoop.util...
  [javadoc] Loading source files for package org.apache.hadoop.util.bloom...
  [javadoc] Loading source files for package org.apache.hadoop.util.curator...
  [javadoc] Loading source files for package org.apache.hadoop.util.hash...
  [javadoc] Constructing Javadoc information...
  [javadoc] 
:25:
 warning: Unsafe is internal proprietary API and may be removed in a future 
release
  [javadoc] import sun.misc.Unsafe;
  [javadoc]^
  [javadoc] 
:46:
 warning: Unsafe is internal proprietary API and may be removed in a future 
release
  [javadoc] import sun.misc.Unsafe;
  [javadoc]^
  [javadoc] 
:54:
 warning: ResolverConfiguration is internal proprietary API and may be removed 
in a future release
  [javadoc] import sun.net.dns.ResolverConfiguration;
  [javadoc]   ^
  [javadoc] 
:55:
 warning: IPAddressUtil is internal proprietary API and may be removed in a 
future release
  [javadoc] import sun.net.util.IPAddressUtil;
  [javadoc]^
  [javadoc] 
:21:
 warning: Signal is internal proprietary API and may be removed in a future 
release
  [javadoc] import sun.misc.Signal;
  [javadoc]^
  [javadoc] 
:22:
 warning: SignalHandler is internal proprietary API and may be removed in a 
future release
  [javadoc] import sun.misc.SignalHandler;
  [javadoc]^
  [javadoc] 

Re: hadoop-hdfs-client splitoff is going to break code

2015-10-14 Thread Haohui Mai
Option 2 sounds good to me. It might make sense to make hadoop-client
depend directly on hadoop-hdfs?


Haohui
On Wed, Oct 14, 2015 at 10:56 AM larry mccay  wrote:

> Interesting...
>
> As long as #2 provides full backward compatibility and the ability to
> explicitly exclude the server dependencies that seems the best way to go.
> That would get my non-binding +1.
> :)
>
> Perhaps we could add another artifact called hadoop-thin-client that would
> not be backward compatible at some point?
>
> On Wed, Oct 14, 2015 at 1:36 PM, Steve Loughran 
> wrote:
>
> > just an FYI, the split off of hadoop hdfs into client and server is going
> > to break things.
> >
> > I know that, as my code is broken; DFSConfigKeys off the path,
> > HdfsConfiguration, the class I've been loading to force pickup of
> > hdfs-site.xml -all missing.
> >
> > This is because hadoop-client  POM now depends on hadoop-hdfs-client, not
> > hadoop-hdfs, so the things I'm referencing are gone. I'm particularly sad
> > about DfsConfigKeys, as everybody uses it as the one hard-coded resource
> of
> > HDFS constants, HDFS-6566 covering the issue of making this public,
> > something that's been sitting around for a year.
> >
> > I'm fixing my build by explicitly adding a hadoop-hdfs dependency.
> >
> > Any application which used stuff which has now been declared server-side
> > isn't going to compile any more, which does appear to break the
> > compatibility guidelines we've adopted, specifically "The hadoop-client
> > artifact (maven groupId:artifactId) stays compatible within a major
> release"
> >
> >
> >
> http://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/Compatibility.html#Build_artifacts
> >
> >
> > We need to do one of
> >
> > 1. agree that this change, is considered acceptable according to policy,
> > and mark it as incompatible in hdfs/CHANGES.TXT
> > 2. Change the POMs to add both hdfs-client and -hdfs server in
> > hadoop-client -with downstream users free to exclude the server code
> >
> > We unintentionally caused similar grief with the move of the s3n clients
> > to hadoop-aws , HADOOP-11074 -something we should have picked up and
> -1'd.
> > This time we know the problems going to arise, so lets explicitly make a
> > decision this time, and share it with our users.
> >
> > -steve
> >
>


Re: hadoop-hdfs-client splitoff is going to break code

2015-10-14 Thread Ted Yu
+1 on option 2.

On Wed, Oct 14, 2015 at 10:56 AM, larry mccay  wrote:

> Interesting...
>
> As long as #2 provides full backward compatibility and the ability to
> explicitly exclude the server dependencies that seems the best way to go.
> That would get my non-binding +1.
> :)
>
> Perhaps we could add another artifact called hadoop-thin-client that would
> not be backward compatible at some point?
>
> On Wed, Oct 14, 2015 at 1:36 PM, Steve Loughran 
> wrote:
>
> > just an FYI, the split off of hadoop hdfs into client and server is going
> > to break things.
> >
> > I know that, as my code is broken; DFSConfigKeys off the path,
> > HdfsConfiguration, the class I've been loading to force pickup of
> > hdfs-site.xml -all missing.
> >
> > This is because hadoop-client  POM now depends on hadoop-hdfs-client, not
> > hadoop-hdfs, so the things I'm referencing are gone. I'm particularly sad
> > about DfsConfigKeys, as everybody uses it as the one hard-coded resource
> of
> > HDFS constants, HDFS-6566 covering the issue of making this public,
> > something that's been sitting around for a year.
> >
> > I'm fixing my build by explicitly adding a hadoop-hdfs dependency.
> >
> > Any application which used stuff which has now been declared server-side
> > isn't going to compile any more, which does appear to break the
> > compatibility guidelines we've adopted, specifically "The hadoop-client
> > artifact (maven groupId:artifactId) stays compatible within a major
> release"
> >
> >
> >
> http://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/Compatibility.html#Build_artifacts
> >
> >
> > We need to do one of
> >
> > 1. agree that this change, is considered acceptable according to policy,
> > and mark it as incompatible in hdfs/CHANGES.TXT
> > 2. Change the POMs to add both hdfs-client and -hdfs server in
> > hadoop-client -with downstream users free to exclude the server code
> >
> > We unintentionally caused similar grief with the move of the s3n clients
> > to hadoop-aws , HADOOP-11074 -something we should have picked up and
> -1'd.
> > This time we know the problems going to arise, so lets explicitly make a
> > decision this time, and share it with our users.
> >
> > -steve
> >
>


Re: hadoop-hdfs-client splitoff is going to break code

2015-10-14 Thread Mingliang Liu
The jira tracking this issue is: https://issues.apache.org/jira/browse/HDFS-9241

 +1 on option 2

I think it makes sense to make hadoop-client directly depend on hadoop-hdfs 
(which itself depends on hadoop-hdfs-client).

Ciao,

Mingliang Liu
Member of Technical Staff - HDFS,
Hortonworks Inc.
m...@hortonworks.com



> On Oct 14, 2015, at 10:36 AM, Steve Loughran  wrote:
> 
> just an FYI, the split off of hadoop hdfs into client and server is going to 
> break things.
> 
> I know that, as my code is broken; DFSConfigKeys off the path, 
> HdfsConfiguration, the class I've been loading to force pickup of 
> hdfs-site.xml -all missing.
> 
> This is because hadoop-client  POM now depends on hadoop-hdfs-client, not 
> hadoop-hdfs, so the things I'm referencing are gone. I'm particularly sad 
> about DfsConfigKeys, as everybody uses it as the one hard-coded resource of 
> HDFS constants, HDFS-6566 covering the issue of making this public, something 
> that's been sitting around for a year.
> 
> I'm fixing my build by explicitly adding a hadoop-hdfs dependency.
> 
> Any application which used stuff which has now been declared server-side 
> isn't going to compile any more, which does appear to break the compatibility 
> guidelines we've adopted, specifically "The hadoop-client artifact (maven 
> groupId:artifactId) stays compatible within a major release"
> 
> http://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/Compatibility.html#Build_artifacts
> 
> 
> We need to do one of
> 
> 1. agree that this change, is considered acceptable according to policy, and 
> mark it as incompatible in hdfs/CHANGES.TXT
> 2. Change the POMs to add both hdfs-client and -hdfs server in hadoop-client 
> -with downstream users free to exclude the server code
> 
> We unintentionally caused similar grief with the move of the s3n clients to 
> hadoop-aws , HADOOP-11074 -something we should have picked up and -1'd. This 
> time we know the problems going to arise, so lets explicitly make a decision 
> this time, and share it with our users.
> 
> -steve



[jira] [Created] (HADOOP-12479) ProtocMojo does not log the reason for a protoc compilation failure.

2015-10-14 Thread Chris Nauroth (JIRA)
Chris Nauroth created HADOOP-12479:
--

 Summary: ProtocMojo does not log the reason for a protoc 
compilation failure.
 Key: HADOOP-12479
 URL: https://issues.apache.org/jira/browse/HADOOP-12479
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Reporter: Chris Nauroth
Assignee: Chris Nauroth
Priority: Minor


If protoc fails with a compilation error in the proto files, our Maven plugin 
won't print the details.  The only way to figure it out is to rerun the 
{{protoc}} command manually outside the Hadoop build.  This is because our 
{{ProtocMojo}} only captures stdout from the {{protoc}} command, and 
compilation errors get written to stderr.
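A likely shape for the fix -a sketch, not the actual ProtocMojo code- is to 
merge stderr into the captured stream via ProcessBuilder.redirectErrorStream, 
so compilation errors reach the Maven log:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;
import java.util.List;

class CommandRunner {
    /** Runs a command and returns its combined stdout+stderr output. */
    static String runCapturingAll(List<String> command)
            throws IOException, InterruptedException {
        ProcessBuilder pb = new ProcessBuilder(command);
        pb.redirectErrorStream(true);   // fold stderr into the stdout stream
        Process p = pb.start();
        StringBuilder out = new StringBuilder();
        try (BufferedReader r = new BufferedReader(
                new InputStreamReader(p.getInputStream(), StandardCharsets.UTF_8))) {
            String line;
            while ((line = r.readLine()) != null) {
                out.append(line).append('\n');
            }
        }
        p.waitFor();
        return out.toString();
    }
}
```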





Jenkins build is back to normal : Hadoop-Common-trunk #1850

2015-10-14 Thread Apache Jenkins Server
See 



Build failed in Jenkins: Hadoop-Common-trunk #1849

2015-10-14 Thread Apache Jenkins Server
See 

Changes:

[lei] HDFS-9238. Update TestFileCreation.testLeaseExpireHardLimit() to avoid

--
[...truncated 3859 lines...]
Generating 

Generating 

Generating 

Generating 

Generating 

Generating 

Generating 

Generating 

Generating 

Building index for all the packages and classes...
Generating 

Generating 

Generating 

Building index for all classes...
Generating 

Generating 

Generating 

Generating 

Generating 

[INFO] Building jar: 

[INFO] 
[INFO] 
[INFO] Building Apache Hadoop MiniKDC 3.0.0-SNAPSHOT
[INFO] 
[INFO] 
[INFO] --- maven-clean-plugin:2.5:clean (default-clean) @ hadoop-minikdc ---
[INFO] Deleting 

[INFO] 
[INFO] --- findbugs-maven-plugin:3.0.0:findbugs (default-cli) @ hadoop-minikdc 
---
[INFO] 
[INFO] --- maven-checkstyle-plugin:2.15:checkstyle (default-cli) @ 
hadoop-minikdc ---
[INFO] There are 10 errors reported by Checkstyle 6.6 with 
checkstyle/checkstyle.xml ruleset.
[WARNING] Unable to locate Source XRef to link to - DISABLED
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-minikdc ---
[INFO] Executing tasks

main:
[mkdir] Created dir: 

[INFO] Executed tasks
[INFO] 
[INFO] --- maven-resources-plugin:2.6:resources (default-resources) @ 
hadoop-minikdc ---
[INFO] Using 'UTF-8' encoding to copy filtered resources.
[INFO] Copying 3 resources
[INFO] 
[INFO] --- maven-compiler-plugin:3.1:compile (default-compile) @ hadoop-minikdc 
---
[INFO] Changes detected - recompiling the module!
[INFO] Compiling 2 source files to 

[INFO] 
[INFO] --- maven-resources-plugin:2.6:testResources (default-testResources) @ 
hadoop-minikdc 

Re: [DISCUSS] About the details of JDK-8 support

2015-10-14 Thread Robert Kanter
The only problem with trying to get the JDK 8 trunk builds green (or blue I
guess) is that it's like trying to hit a moving target because of how many
new commits keep coming in.  I was looking at fixing these a while ago, and
managed to at least make them compile and fixed (or worked with others to
fix) some of the unit tests.  I've been really busy on other tasks and
haven't had time to continue working on this in quite a while though.

Currently, it looks like Common is still mostly green, Yarn is having a
build failure with checkstyle, MR has between 1 and 10 test failures, and
HDFS has between 3 and 10 test failures.

I think it's going to be difficult to get these green, and to keep them
green, unless we get more buy in from everyone on new commits being tested
against JDK 8.  Otherwise, it's too hard to keep up with the number of
commits coming in, even if we do get it green.  Perhaps we could have
test-patch also run the patch against JDK 8?


- Robert

On Wed, Oct 14, 2015 at 8:27 AM, Steve Loughran 
wrote:

>
> > On 13 Oct 2015, at 17:32, Haohui Mai  wrote:
> >
> > Just to echo Steve's idea -- if we're seriously considering supporting
> > JDK 8, maybe the first thing to do is to set up the jenkins to run
> > with JDK 8? I'm happy to help. Does anyone know who I can talk to if I
> > need to play around with all the Jenkins knob?
>
> Jenkins is building with Java 7 and 8. All that's needed is to turn off
> the Java 7 build, which I will happily do. The POM can be changed to set
> the minimum JVM version -though that's most likely to be visible to people
> building locally, as you'll need to make sure that you have access to Java
> 7 and Java 8 JVMs if you want to build and test for both.
>
> Jenkins-wise, the big issue is one I've mentioned before: the builds are
> failing and not enough people are caring
>
>
> https://builds.apache.org/view/H-L/view/Hadoop/job/Hadoop-Hdfs-trunk-Java8/488/
>
> Please, let's fix this
>
>


[jira] [Created] (HADOOP-12477) Add test to make sure hadoop-minicluster provides enough dependencies

2015-10-14 Thread Masatake Iwasaki (JIRA)
Masatake Iwasaki created HADOOP-12477:
-

 Summary: Add test to make sure hadoop-minicluster provides enough 
dependencies
 Key: HADOOP-12477
 URL: https://issues.apache.org/jira/browse/HADOOP-12477
 Project: Hadoop Common
  Issue Type: Improvement
  Components: test
Reporter: Masatake Iwasaki
Assignee: Masatake Iwasaki


Dependencies of a test-jar are not transitive by design (MNG-1378). If the mini 
cluster depends on artifacts that are depended on only by test-jars, downstream 
projects using the mini cluster need to add those dependencies themselves in 
addition to hadoop-minicluster. It would be kind to downstream projects to make 
sure hadoop-minicluster provides enough dependencies.
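The non-transitivity is why downstream POMs end up declaring test-jar 
dependencies explicitly; a hedged illustration -the artifact and version 
property here are placeholders, not a specific missing dependency:

```xml
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-common</artifactId>
  <version>${hadoop.version}</version>
  <!-- test-jar dependencies must be declared directly;
       they are never pulled in transitively -->
  <type>test-jar</type>
  <scope>test</scope>
</dependency>
```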





Build failed in Jenkins: Hadoop-common-trunk-Java8 #545

2015-10-14 Thread Apache Jenkins Server
See 

Changes:

[rohithsharmaks] YARN-4250. NPE in AppSchedulingInfo#isRequestLabelChanged. 
(Brahma Reddy

--
[...truncated 5789 lines...]
Running org.apache.hadoop.util.TestVersionUtil
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.184 sec - in 
org.apache.hadoop.util.TestVersionUtil
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestProtoUtil
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.233 sec - in 
org.apache.hadoop.util.TestProtoUtil
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestLightWeightGSet
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.168 sec - in 
org.apache.hadoop.util.TestLightWeightGSet
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestGSet
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.666 sec - in 
org.apache.hadoop.util.TestGSet
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestStringInterner
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.108 sec - in 
org.apache.hadoop.util.TestStringInterner
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestZKUtil
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.115 sec - in 
org.apache.hadoop.util.TestZKUtil
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestStringUtils
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.27 sec - in 
org.apache.hadoop.util.TestStringUtils
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestFindClass
Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.572 sec - in 
org.apache.hadoop.util.TestFindClass
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestGenericOptionsParser
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.799 sec - in 
org.apache.hadoop.util.TestGenericOptionsParser
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestRunJar
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.408 sec - in 
org.apache.hadoop.util.TestRunJar
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestSysInfoLinux
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.22 sec - in 
org.apache.hadoop.util.TestSysInfoLinux
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestDirectBufferPool
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.135 sec - in 
org.apache.hadoop.util.TestDirectBufferPool
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestFileBasedIPList
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.166 sec - in 
org.apache.hadoop.util.TestFileBasedIPList
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestIndexedSort
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.633 sec - in 
org.apache.hadoop.util.TestIndexedSort
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestIdentityHashStore
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.182 sec - in 
org.apache.hadoop.util.TestIdentityHashStore
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestMachineList
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.614 sec - in 
org.apache.hadoop.util.TestMachineList
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestWinUtils
Tests run: 11, Failures: 0, Errors: 0, Skipped: 11, Time elapsed: 0.209 sec - 
in org.apache.hadoop.util.TestWinUtils
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.hash.TestHash
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.333 sec - in 

Re: [DISCUSS] About the details of JDK-8 support

2015-10-14 Thread Allen Wittenauer

If people want, I could set up a cut of yetus master to run the jenkins 
test-patch.  (multiple maven repos, docker support, multijdk support, … ) Yetus 
would get some real-world testing out of it and hadoop common-dev could stop 
spinning in circles over some of the same issues month after month. ;)


On Oct 14, 2015, at 3:05 PM, Robert Kanter  wrote:

> The only problem with trying to get the JDK 8 trunk builds green (or blue I
> guess) is that it's like trying to hit a moving target because of how many
> new commits keep coming in.  I was looking at fixing these a while ago, and
> managed to at least make them compile and fixed (or worked with others to
> fix) some of the unit tests.  I've been really busy on other tasks and
> haven't had time to continue working on this in quite a while though.
> 
> Currently, it looks like Common is still green mostly, Yarn is having a
> build failure with checkstyle, MR has between 1 and 10 test failures, and
> HDFS had between 3 and 10 test failures.
> 
> I think it's going to be difficult to get these green, and to keep them
> green, unless we get more buy in from everyone on new commits being tested
> against JDK 8.  Otherwise, it's too hard to keep up with the number of
> commits coming in, even if we do get it green.  Perhaps we could have
> test-patch also run the patch against JDK 8?
> 
> 
> - Robert
> 
> On Wed, Oct 14, 2015 at 8:27 AM, Steve Loughran 
> wrote:
> 
>> 
>>> On 13 Oct 2015, at 17:32, Haohui Mai  wrote:
>>> 
>>> Just to echo Steve's idea -- if we're seriously considering supporting
>>> JDK 8, maybe the first thing to do is to set up the jenkins to run
>>> with JDK 8? I'm happy to help. Does anyone know who I can talk to if I
>>> need to play around with all the Jenkins knobs?
>> 
>> Jenkins is building with Java 7 and 8. All that's needed is to turn off
>> the Java 7 build, which I will happily do. The POM can be changed to set
>> the minimum JVM version, though that's most likely to be visible to people
>> building locally, as you'll need to make sure that you have access to Java
>> 7 and Java 8 JVMs if you want to build and test for both.
>> 
>> Jenkins-wise, the big issue is one I've mentioned before: the builds are
>> failing and not enough people are caring
>> 
>> 
>> https://builds.apache.org/view/H-L/view/Hadoop/job/Hadoop-Hdfs-trunk-Java8/488/
>> 
>> Please, let's fix this
>> 
>> 
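[Editor's note: Steve's suggestion above, that "the POM can be changed to set the minimum JVM version," could be sketched roughly as follows. This is only an illustrative fragment; the exact location in Hadoop's build, and the use of the enforcer plugin rather than compiler properties alone, are assumptions.]

```xml
<!-- Hypothetical sketch: declaring a minimum JDK in a top-level pom.xml. -->
<properties>
  <!-- compile for the new minimum language level -->
  <maven.compiler.source>1.8</maven.compiler.source>
  <maven.compiler.target>1.8</maven.compiler.target>
</properties>
<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-enforcer-plugin</artifactId>
      <executions>
        <execution>
          <id>enforce-minimum-jdk</id>
          <goals><goal>enforce</goal></goals>
          <configuration>
            <rules>
              <!-- fail the build fast if run on an older JVM -->
              <requireJavaVersion>
                <version>[1.8,)</version>
              </requireJavaVersion>
            </rules>
          </configuration>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>
```

With a rule like this, someone building locally on a Java 7 JVM would get an immediate enforcer failure rather than a late compile or test error, which matches Steve's point that the change is mostly visible to local builders.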



Build failed in Jenkins: Hadoop-common-trunk-Java8 #544

2015-10-14 Thread Apache Jenkins Server
See 

Changes:

[vvasudev] YARN-4253. Standardize on using PrivilegedOperationExecutor for all

[vvasudev] YARN-4252. Log container-executor invocation details when exit code 
is

[vvasudev] YARN-4255. container-executor does not clean up docker operation 
command

--
[...truncated 5789 lines...]
Running org.apache.hadoop.util.TestVersionUtil
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.174 sec - in 
org.apache.hadoop.util.TestVersionUtil
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestProtoUtil
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.235 sec - in 
org.apache.hadoop.util.TestProtoUtil
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestLightWeightGSet
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.167 sec - in 
org.apache.hadoop.util.TestLightWeightGSet
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestGSet
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.701 sec - in 
org.apache.hadoop.util.TestGSet
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestStringInterner
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.115 sec - in 
org.apache.hadoop.util.TestStringInterner
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestZKUtil
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.119 sec - in 
org.apache.hadoop.util.TestZKUtil
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestStringUtils
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.274 sec - in 
org.apache.hadoop.util.TestStringUtils
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestFindClass
Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.574 sec - in 
org.apache.hadoop.util.TestFindClass
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestGenericOptionsParser
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.777 sec - in 
org.apache.hadoop.util.TestGenericOptionsParser
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestRunJar
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.405 sec - in 
org.apache.hadoop.util.TestRunJar
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestSysInfoLinux
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.276 sec - in 
org.apache.hadoop.util.TestSysInfoLinux
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestDirectBufferPool
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.14 sec - in 
org.apache.hadoop.util.TestDirectBufferPool
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestFileBasedIPList
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.191 sec - in 
org.apache.hadoop.util.TestFileBasedIPList
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestIndexedSort
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.722 sec - in 
org.apache.hadoop.util.TestIndexedSort
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestIdentityHashStore
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.178 sec - in 
org.apache.hadoop.util.TestIdentityHashStore
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestMachineList
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.725 sec - in 
org.apache.hadoop.util.TestMachineList
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestWinUtils
Tests run: 11, Failures: 0, Errors: 0, Skipped: 11, Time elapsed: 0.2 sec - in 
org.apache.hadoop.util.TestWinUtils
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option 
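[Editor's note: the "ignoring option MaxPermSize" warning repeated throughout these Java 8 logs is benign but noisy: JDK 8 removed the permanent generation, so the flag is ignored by every forked test JVM. A hedged sketch of how the surefire fork arguments could be updated follows; the heap sizes and the assumption that the flag lives in a surefire `argLine` are illustrative, not taken from Hadoop's actual POMs.]

```xml
<!-- Sketch only: JDK 8 replaced PermGen with Metaspace, so
     -XX:MaxPermSize is ignored.  The Metaspace equivalent would be
     something like this in the surefire configuration. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <!-- was: <argLine>... -XX:MaxPermSize=768m</argLine> -->
    <argLine>-Xmx4096m -XX:MaxMetaspaceSize=768m</argLine>
  </configuration>
</plugin>
```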

Jenkins build is back to normal : Hadoop-common-trunk-Java8 #546

2015-10-14 Thread Apache Jenkins Server
See 



Re: hadoop-hdfs-client splitoff is going to break code

2015-10-14 Thread larry mccay
Interesting...

As long as #2 provides full backward compatibility and the ability to
explicitly exclude the server dependencies, that seems the best way to go.
That would get my non-binding +1.
:)

Perhaps, at some point, we could add another artifact called hadoop-thin-client
that would not be backward compatible?

On Wed, Oct 14, 2015 at 1:36 PM, Steve Loughran 
wrote:

> just an FYI, the split off of hadoop hdfs into client and server is going
> to break things.
>
> I know that, as my code is broken: DFSConfigKeys off the path,
> HdfsConfiguration, the class I've been loading to force pickup of
> hdfs-site.xml, all missing.
>
> This is because the hadoop-client POM now depends on hadoop-hdfs-client, not
> hadoop-hdfs, so the things I'm referencing are gone. I'm particularly sad
> about DFSConfigKeys, as everybody uses it as the one hard-coded resource of
> HDFS constants; HDFS-6566, covering the issue of making this public, has
> been sitting around for a year.
>
> I'm fixing my build by explicitly adding a hadoop-hdfs dependency.
>
> Any application which used stuff which has now been declared server-side
> isn't going to compile any more, which does appear to break the
> compatibility guidelines we've adopted, specifically "The hadoop-client
> artifact (maven groupId:artifactId) stays compatible within a major release"
>
>
> http://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/Compatibility.html#Build_artifacts
>
>
> We need to do one of
>
> 1. agree that this change is considered acceptable according to policy,
> and mark it as incompatible in hdfs/CHANGES.TXT
> 2. change the POMs to add both hadoop-hdfs-client and hadoop-hdfs in
> hadoop-client, with downstream users free to exclude the server code
>
> We unintentionally caused similar grief with the move of the s3n clients
> to hadoop-aws, HADOOP-11074, something we should have picked up and -1'd.
> This time we know the problems are going to arise, so let's explicitly make a
> decision this time, and share it with our users.
>
> -steve
>
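[Editor's note: the workaround Steve mentions ("explicitly adding a hadoop-hdfs dependency"), and the exclusion escape hatch implied by option 2, could look roughly like the fragment below in a downstream project's pom.xml. The version string is illustrative only.]

```xml
<!-- Illustrative sketch of restoring the server-side classes a downstream
     build lost when hadoop-client switched to depending on
     hadoop-hdfs-client instead of hadoop-hdfs. -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-client</artifactId>
  <version>2.8.0-SNAPSHOT</version>
</dependency>
<!-- explicitly pull the full HDFS artifact back in, so classes such as
     DFSConfigKeys and HdfsConfiguration stay on the compile classpath -->
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-hdfs</artifactId>
  <version>2.8.0-SNAPSHOT</version>
</dependency>
```

Under option 2, where hadoop-client depended on both artifacts, a client-only consumer could instead add an `<exclusions>` element on hadoop-client to drop hadoop-hdfs, so both old and new downstream builds would keep working.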


Jenkins build is back to normal : Hadoop-common-trunk-Java8 #550

2015-10-14 Thread Apache Jenkins Server
See 



Build failed in Jenkins: Hadoop-common-trunk-Java8 #549

2015-10-14 Thread Apache Jenkins Server
See 

Changes:

[xyao] HDFS-9210. Fix some misuse of %n in VolumeScanner#printStats.

[xyao] Revert "HDFS-9210. Fix some misuse of %n in VolumeScanner#printStats.

--
[...truncated 5800 lines...]
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.TestSequenceFileSync
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.743 sec - in 
org.apache.hadoop.io.TestSequenceFileSync
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.retry.TestFailoverProxy
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 13.325 sec - in 
org.apache.hadoop.io.retry.TestFailoverProxy
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.retry.TestDefaultRetryPolicy
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.331 sec - in 
org.apache.hadoop.io.retry.TestDefaultRetryPolicy
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.retry.TestRetryProxy
Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.19 sec - in 
org.apache.hadoop.io.retry.TestRetryProxy
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.TestDefaultStringifier
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.343 sec - in 
org.apache.hadoop.io.TestDefaultStringifier
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.TestBloomMapFile
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.243 sec - in 
org.apache.hadoop.io.TestBloomMapFile
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.TestBytesWritable
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.167 sec - in 
org.apache.hadoop.io.TestBytesWritable
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.erasurecode.rawcoder.TestXORRawCoder
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.169 sec - in 
org.apache.hadoop.io.erasurecode.rawcoder.TestXORRawCoder
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.erasurecode.rawcoder.TestRSRawCoder
Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.247 sec - in 
org.apache.hadoop.io.erasurecode.rawcoder.TestRSRawCoder
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.erasurecode.TestECSchema
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.073 sec - in 
org.apache.hadoop.io.erasurecode.TestECSchema
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.erasurecode.coder.TestXORCoder
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.168 sec - in 
org.apache.hadoop.io.erasurecode.coder.TestXORCoder
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.erasurecode.coder.TestRSErasureCoder
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.621 sec - in 
org.apache.hadoop.io.erasurecode.coder.TestRSErasureCoder
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.TestWritableUtils
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.134 sec - in 
org.apache.hadoop.io.TestWritableUtils
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.TestBooleanWritable
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.177 sec - in 
org.apache.hadoop.io.TestBooleanWritable
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.TestDataByteBuffers
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.403 sec - in 
org.apache.hadoop.io.TestDataByteBuffers
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.io.TestVersionedWritable
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.153 sec - in 
org.apache.hadoop.io.TestVersionedWritable
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running 

Build failed in Jenkins: Hadoop-Common-trunk #1852

2015-10-14 Thread Apache Jenkins Server
See 

Changes:

[lei] HDFS-9188. Make block corruption related tests FsDataset-agnostic. (lei)

--
[...truncated 5398 lines...]
Running org.apache.hadoop.io.TestTextNonUTF8
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.172 sec - in 
org.apache.hadoop.io.TestTextNonUTF8
Running org.apache.hadoop.io.TestSequenceFileSerialization
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.844 sec - in 
org.apache.hadoop.io.TestSequenceFileSerialization
Running org.apache.hadoop.security.TestNetgroupCache
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.081 sec - in 
org.apache.hadoop.security.TestNetgroupCache
Running org.apache.hadoop.security.TestUserFromEnv
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.566 sec - in 
org.apache.hadoop.security.TestUserFromEnv
Running org.apache.hadoop.security.TestGroupsCaching
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.852 sec - in 
org.apache.hadoop.security.TestGroupsCaching
Running org.apache.hadoop.security.ssl.TestSSLFactory
Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.846 sec - in 
org.apache.hadoop.security.ssl.TestSSLFactory
Running org.apache.hadoop.security.ssl.TestReloadingX509TrustManager
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.492 sec - in 
org.apache.hadoop.security.ssl.TestReloadingX509TrustManager
Running org.apache.hadoop.security.TestUGILoginFromKeytab
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.529 sec - in 
org.apache.hadoop.security.TestUGILoginFromKeytab
Running org.apache.hadoop.security.TestUserGroupInformation
Tests run: 34, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.711 sec - in 
org.apache.hadoop.security.TestUserGroupInformation
Running org.apache.hadoop.security.TestUGIWithExternalKdc
Tests run: 1, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.079 sec - in 
org.apache.hadoop.security.TestUGIWithExternalKdc
Running org.apache.hadoop.security.TestProxyUserFromEnv
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.576 sec - in 
org.apache.hadoop.security.TestProxyUserFromEnv
Running org.apache.hadoop.security.authorize.TestProxyUsers
Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.944 sec - in 
org.apache.hadoop.security.authorize.TestProxyUsers
Running org.apache.hadoop.security.authorize.TestProxyServers
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.436 sec - in 
org.apache.hadoop.security.authorize.TestProxyServers
Running org.apache.hadoop.security.authorize.TestServiceAuthorization
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.191 sec - in 
org.apache.hadoop.security.authorize.TestServiceAuthorization
Running org.apache.hadoop.security.authorize.TestAccessControlList
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.877 sec - in 
org.apache.hadoop.security.authorize.TestAccessControlList
Running org.apache.hadoop.security.alias.TestCredShell
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.24 sec - in 
org.apache.hadoop.security.alias.TestCredShell
Running org.apache.hadoop.security.alias.TestCredentialProviderFactory
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.363 sec - in 
org.apache.hadoop.security.alias.TestCredentialProviderFactory
Running org.apache.hadoop.security.alias.TestCredentialProvider
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.088 sec - in 
org.apache.hadoop.security.alias.TestCredentialProvider
Running org.apache.hadoop.security.TestAuthenticationFilter
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.58 sec - in 
org.apache.hadoop.security.TestAuthenticationFilter
Running org.apache.hadoop.security.TestLdapGroupsMapping
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.44 sec - in 
org.apache.hadoop.security.TestLdapGroupsMapping
Running org.apache.hadoop.security.token.TestToken
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.496 sec - in 
org.apache.hadoop.security.token.TestToken
Running org.apache.hadoop.security.token.delegation.TestDelegationToken
Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 30.748 sec - 
in org.apache.hadoop.security.token.delegation.TestDelegationToken
Running 
org.apache.hadoop.security.token.delegation.web.TestDelegationTokenAuthenticationHandlerWithMocks
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.438 sec - in 
org.apache.hadoop.security.token.delegation.web.TestDelegationTokenAuthenticationHandlerWithMocks
Running 
org.apache.hadoop.security.token.delegation.web.TestDelegationTokenManager
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.904 sec - in 
org.apache.hadoop.security.token.delegation.web.TestDelegationTokenManager
Running