Re: [DISCUSS] About the details of JDK-8 support

2015-10-07 Thread Elliott Clark
On Mon, Oct 5, 2015 at 5:35 PM, Tsuyoshi Ozawa  wrote:

> Do you have any concern about this? I’ve not
> tested with HBase yet.
>

We've been running JDK 8u60 in production with Hadoop 2.6.X and HBase for
quite a while. Everything has been very stable for us. We're running and
compiling with jdk8.

We had to turn off yarn.nodemanager.vmem-check-enabled; otherwise MR jobs
didn't do too well.
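
For reference, the setting Elliott mentions is a yarn-site.xml property; turning it off looks like this (shown as an illustration only — disabling the virtual-memory check is a workaround, not a general recommendation):

```xml
<!-- yarn-site.xml: disable the NodeManager's virtual-memory limit check -->
<property>
  <name>yarn.nodemanager.vmem-check-enabled</name>
  <value>false</value>
</property>
```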

I'd be +1 on dropping JDK7 support. However, as a downstream developer, it
would be very weird for that to happen in anything but a major release.


Re: [DISCUSS] About the details of JDK-8 support

2015-10-07 Thread Andrew Wang
>
> > On 7 Oct 2015, at 17:23, Andrew Wang  wrote:
> >
> > We've been supporting JDK8 as a runtime for CDH5 for a while now (meaning
> > the full stack including HBase), so I agree that we're good there.
> >
>
>
> with Kerberos on?
>
Yea, I haven't been that involved with our internal JDK validation
efforts, but I know there have been assorted JDK8 bugs related to
Kerberos. Our latest docs currently recommend 1.8.0_40 or above:

http://www.cloudera.com/content/cloudera/en/documentation/core/latest/topics/cdh_ig_req_supported_versions.html#concept_pdd_kzf_vp_unique_1


Build failed in Jenkins: Hadoop-common-trunk-Java8 #510

2015-10-07 Thread Apache Jenkins Server
See 

Changes:

[aajisaka] HADOOP-12465. Incorrect javadoc in WritableUtils.java. Contributed by

[lei] HDFS-9176. Fix TestDirectoryScanner#testThrottling often fails. (Daniel

[jing9] HDFS-9209. Erasure coding: Add apache license header in

--
[...truncated 5749 lines...]
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.viewfs.TestChRootedFs
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.269 sec - in 
org.apache.hadoop.fs.viewfs.TestChRootedFs
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.viewfs.TestFcMainOperationsLocalFs
Tests run: 60, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.303 sec - in 
org.apache.hadoop.fs.viewfs.TestFcMainOperationsLocalFs
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.TestFSMainOperationsLocalFileSystem
Tests run: 49, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.324 sec - in 
org.apache.hadoop.fs.TestFSMainOperationsLocalFileSystem
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.TestFcLocalFsPermission
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.756 sec - in 
org.apache.hadoop.fs.TestFcLocalFsPermission
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.TestAfsCheckPath
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.184 sec - in 
org.apache.hadoop.fs.TestAfsCheckPath
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.TestFileUtil
Tests run: 27, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 7.59 sec - in 
org.apache.hadoop.fs.TestFileUtil
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.TestGlobPattern
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.093 sec - in 
org.apache.hadoop.fs.TestGlobPattern
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.TestDFVariations
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.27 sec - in 
org.apache.hadoop.fs.TestDFVariations
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.TestBlockLocation
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.103 sec - in 
org.apache.hadoop.fs.TestBlockLocation
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.shell.TestCopy
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.938 sec - in 
org.apache.hadoop.fs.shell.TestCopy
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.shell.TestCopyPreserveFlag
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.818 sec - in 
org.apache.hadoop.fs.shell.TestCopyPreserveFlag
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.shell.TestPathExceptions
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.097 sec - in 
org.apache.hadoop.fs.shell.TestPathExceptions
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.shell.TestAclCommands
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.779 sec - in 
org.apache.hadoop.fs.shell.TestAclCommands
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.shell.TestLs
Tests run: 33, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.264 sec - in 
org.apache.hadoop.fs.shell.TestLs
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.shell.TestTextCommand
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.017 sec - in 
org.apache.hadoop.fs.shell.TestTextCommand
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.shell.TestMove
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.807 sec - in 
org.apache.hadoop.fs.shell.TestMove
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.shell.TestPathData
Tests run: 11, Failures: 

Re: DomainSocket issues on Solaris

2015-10-07 Thread Alan Burlison

On 07/10/15 18:53, Colin P. McCabe wrote:


I think you could come up with a select/poll solution while using the
old function signatures.  A 4-byte int is more than enough information
to pass in, given that you can use it as an index into a table in the
C code.


I have thought about that, but a simple table would not work very well. 
It would have to be potentially quite large and would be sparsely 
populated. It would really have to be some sort of map and would most 
likely have to be implemented in C. However it is done, it becomes a 
Solaris-only maintenance burden. Yes, it's possible, but it seemed 
distinctly undesirable.



 There are also a lot of other solutions to this problem, as
I pointed out earlier.  For example, you dismissed the timer-wheel
suggestion because of a detail of a unit test, but we could easily
change the test.


Unfortunately there are somewhere around 100 test failures that I think 
are related to the socket timeout issue, which is why I focussed on it.



Anyway, changing the function signatures in the way you described is
certainly reasonable and I wouldn't object to it.  It is probably the
most natural solution.


That's the conclusion I came to, but I fully understand there has to be 
a solution for the Java/JNI versioning issue as well.



Does that sound acceptable? If so I can draft up a proposal for native
library version and platform naming, library search locations etc.


Yes, I think it would be good to make some progress on HADOOP-11127.
We have been putting off the issue for too long.


Even if I put together a solution for DomainSocket that doesn't need 
changes to the JNI interface, I'm almost certain that subsequent work 
will hit the same issue. I'd rather spend the time up front and come up 
with a once-and-for-all solution; I think overall that will work out to 
be less effort and certainly less risky.


I'll draft up a proposal and attach it to HADOOP-11127.

Thanks,

--
Alan Burlison
--


Update BUILDING.txt instructions for Eclipse

2015-10-07 Thread Andrew Wang
Hi all,

I happened to see this on the Maven list; it looks like the Maven Eclipse
plugin (aka "eclipse:eclipse") is being retired.

Our BUILDING.txt still mentions running the "eclipse:eclipse" goal. Has
anyone tried alternative ways of importing Hadoop into Eclipse? Seems like
a good time to update our instructions.

I've switched over to IDEA so this doesn't really affect me, but I'm happy
to test and update the Eclipse instructions if someone provides them.

Best,
Andrew

-- Forwarded message --
From: Robert Scholte 
Date: Wed, Oct 7, 2015 at 11:06 AM
Subject: [RESULT] [VOTE] Retire Maven Eclipse Plugin / Donation to Mojohaus
To: Maven Developers List 
Cc: Maven Users List , Maven Project Management
Committee List 


Hi,

The vote has passed with the following result:

+1 (binding): Barrie Treloar, Arnaud Héritier, Tamás Cservenák, Dennis
Lundberg, Hervé BOUTEMY, Olivier Lamy, Karl Heinz Marbaise, Kristian
Rosenvold, Jason van Zyl, Robert Scholte
+1 (non binding): Tibor Digana, Anders Hammar, Michael Osipov, Andreas
Gudian
+0.5 (non binding): Baptiste Mathus

thanks for the huge number of votes!!

I will continue with the steps required to retire this plugin.

Robert

Op Sun, 04 Oct 2015 11:18:55 +0200 schreef Robert Scholte <
rfscho...@apache.org>:

Hi,
>
> during the latest upgrade of the plugin-parent I faced several issues with
> the maven-eclipse-plugin.
> It will take quite some time to fix these issues, but is it worth
> maintaining it here?
> Nowadays the Maven support for Eclipse is good and stable.
> The maven-eclipse-plugin has a lot of integration tests which should be
> rewritten, because it always launches a new Maven fork and it takes ages to
> complete. This simply blocks good continuous integration of the plugins.
> I know there are still some projects which can't use the Maven Integration
> for Eclipse and depend on this plugin, so the sources need to stay available
> for users so they can extend it for their own usage.
>
> I therefore propose that we retire maven-eclipse-plugin for the Apache
> Maven project and donate it to the Mojohaus project.
>
> If this vote is successful I will make one final release of the plugin,
> making
> it clear on the plugin site that it has been retired. After that the
> source code
> will be moved into the "retired" area in Subversion.
>
> The process for retiring a plugin is described here:
> http://maven.apache.org/developers/retirement-plan-plugins.html
>
> The vote is open for 72 hours.
>
> [ ] +1 Yes, it's about time
> [ ] -1 No, because...
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@maven.apache.org
> For additional commands, e-mail: dev-h...@maven.apache.org
>

-
To unsubscribe, e-mail: users-unsubscr...@maven.apache.org
For additional commands, e-mail: users-h...@maven.apache.org


Build failed in Jenkins: Hadoop-common-trunk-Java8 #506

2015-10-07 Thread Apache Jenkins Server
See 

Changes:

[umamahesh] HDFS-9182. Cleanup the findbugs and other issues after HDFS EC 
merged to

--
[...truncated 5671 lines...]
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.792 sec - in 
org.apache.hadoop.fs.contract.rawlocal.TestRawlocalContractMkdir
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running 
org.apache.hadoop.fs.contract.rawlocal.TestRawLocalContractUnderlyingFileBehavior
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.25 sec - in 
org.apache.hadoop.fs.contract.rawlocal.TestRawLocalContractUnderlyingFileBehavior
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.rawlocal.TestRawlocalContractCreate
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.831 sec - in 
org.apache.hadoop.fs.contract.rawlocal.TestRawlocalContractCreate
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.ftp.TestFTPContractCreate
Tests run: 6, Failures: 0, Errors: 0, Skipped: 6, Time elapsed: 0.528 sec - in 
org.apache.hadoop.fs.contract.ftp.TestFTPContractCreate
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.ftp.TestFTPContractDelete
Tests run: 8, Failures: 0, Errors: 0, Skipped: 8, Time elapsed: 0.672 sec - in 
org.apache.hadoop.fs.contract.ftp.TestFTPContractDelete
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.ftp.TestFTPContractRename
Tests run: 6, Failures: 0, Errors: 0, Skipped: 6, Time elapsed: 0.539 sec - in 
org.apache.hadoop.fs.contract.ftp.TestFTPContractRename
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.ftp.TestFTPContractMkdir
Tests run: 5, Failures: 0, Errors: 0, Skipped: 5, Time elapsed: 0.5 sec - in 
org.apache.hadoop.fs.contract.ftp.TestFTPContractMkdir
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.ftp.TestFTPContractOpen
Tests run: 6, Failures: 0, Errors: 0, Skipped: 6, Time elapsed: 0.683 sec - in 
org.apache.hadoop.fs.contract.ftp.TestFTPContractOpen
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.localfs.TestLocalFSContractAppend
Tests run: 5, Failures: 0, Errors: 0, Skipped: 5, Time elapsed: 0.886 sec - in 
org.apache.hadoop.fs.contract.localfs.TestLocalFSContractAppend
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.localfs.TestLocalFSContractGetFileStatus
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.693 sec - in 
org.apache.hadoop.fs.contract.localfs.TestLocalFSContractGetFileStatus
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.localfs.TestLocalFSContractOpen
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.906 sec - in 
org.apache.hadoop.fs.contract.localfs.TestLocalFSContractOpen
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.localfs.TestLocalFSContractLoaded
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.791 sec - in 
org.apache.hadoop.fs.contract.localfs.TestLocalFSContractLoaded
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.localfs.TestLocalFSContractDelete
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.912 sec - in 
org.apache.hadoop.fs.contract.localfs.TestLocalFSContractDelete
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.localfs.TestLocalFSContractMkdir
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.879 sec - in 
org.apache.hadoop.fs.contract.localfs.TestLocalFSContractMkdir
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.fs.contract.localfs.TestLocalFSContractRename
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.931 sec - in 
org.apache.hadoop.fs.contract.localfs.TestLocalFSContractRename
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running 

Re: [DISCUSS] About the details of JDK-8 support

2015-10-07 Thread Masatake Iwasaki

Thanks for the clear summary, Tsuyoshi.

I read some related past discussions.

  https://wiki.apache.org/hadoop/MovingToJdk7and8
  http://search-hadoop.com/m/uOzYtGSiCs1acRnh
  http://search-hadoop.com/m/uOzYthdWJqpGdSZ1

Though there seems to be no consensus yet about when to drop Java 7 support,
it would not be 2.8, for which preparation has already started.
If the work to make the source compatible with Java 8 does not result in
dropping Java 7 support, it would be nice and easy to backport to branch-2.


> we need to upgrade grizzly to 2.2.16 to use
> jersey-test-framework-grizzly2. I’d like to discuss which version we
> will target for this change. Can we do this in branch-2?

At least, the newest Grizzly, Jersey, and asm seem to support Java 7 too,
and HADOOP-11993 may work in branch-2.


Masatake Iwasaki


On 10/6/15 09:35, Tsuyoshi Ozawa wrote:
> Hi commiters and users of Hadoop stack,
>
> I’ll share the current status of JDK-8 support here. We can take a
> two-step approach to support JDK-8 - runtime-level support and
> source-level support.
>
> About runtime-level support, I’ve tested the Hadoop stack with JDK-8 (e.g.
> MapReduce, Spark, Tez, and Flink on YARN and HDFS) for a few months. As
> far as I tested, it works completely well, since JDK-8 doesn’t have
> any incompatibility at the binary level. We can say Hadoop already supports
> the JDK8 runtime. Do you have any concern about this? I’ve not
> tested with HBase yet; I need the help of the HBase community. I think the
> only runtime problem is HADOOP-11364, the default value of the
> container-killer of YARN. After fixing that issue, we can declare the
> support of JDK8 on the Hadoop Wiki to make it clear for users.
> https://wiki.apache.org/hadoop/HadoopJavaVersions
>
> About source-level support, however, we have one big problem - upgrading
> the asm and cglib dependencies. We need to upgrade all libraries which
> depend on asm to support the new bytecode introduced in JDK8[1]. The
> dependencies which use asm are jersey-server in compile and provided
> scope, and cglib in test scope (I checked with the mvn dependency:tree
> command). HADOOP-9613 is addressing the problem.
>
> One complex problem I’ve faced is that Jersey depends on Grizzly - to
> upgrade Jersey to 1.19, which supports JDK8,
> we need to upgrade Grizzly to 2.2.16 to use
> jersey-test-framework-grizzly2. I’d like to discuss which version we
> will target for this change. Can we do this in branch-2? Should we take
> care of HADOOP-11656 and HADOOP-11993 at the same time? I’d also like to
> confirm whether HADOOP-11993 means removing Jersey, which depends on
> asm, or not. I think we can collaborate with the Yetus community here.
>
> Also, another simple problem is that source code cannot be compiled
> because the javadoc format or variable identifiers are illegal (e.g.
> HADOOP-12457, HADOOP-11875). I think this can be solved
> straightforwardly.
>
> Please share any concerns I’ve missed. The opinions of users are also
> welcome :-)
>
> I'd like to go forward with this step by step to make Hadoop user-friendly.
>
> Thanks to Steve, Sean, Allen, Robert, Brahma, Akira, Larry, Allen, Andrew
> Purtell, Tsz-wo Sze, Sethen, and the other folks for doing lots of work
> on JDK-8.
>
> Best regards,
> - Tsuyoshi
>
> [1] http://product.hubspot.com/blog/upgrading-to-java-8-at-scale
> [2] http://ispras.linuxbase.org/index.php/Java_API_Compliance_Checker



Build failed in Jenkins: Hadoop-Common-trunk #1807

2015-10-07 Thread Apache Jenkins Server
See 

Changes:

[jing9] HDFS-9206. Inconsistent default value of

[jing9] HDFS-9196. Fix TestWebHdfsContentLength. Contributed by Masatake

[wheat9] HDFS-9170. Move libhdfs / fuse-dfs / libwebhdfs to hdfs-client.

--
[...truncated 5360 lines...]
Running org.apache.hadoop.io.TestTextNonUTF8
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.165 sec - in 
org.apache.hadoop.io.TestTextNonUTF8
Running org.apache.hadoop.io.TestSequenceFileSerialization
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.089 sec - in 
org.apache.hadoop.io.TestSequenceFileSerialization
Running org.apache.hadoop.security.TestNetgroupCache
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.08 sec - in 
org.apache.hadoop.security.TestNetgroupCache
Running org.apache.hadoop.security.TestUserFromEnv
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.55 sec - in 
org.apache.hadoop.security.TestUserFromEnv
Running org.apache.hadoop.security.TestGroupsCaching
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.833 sec - in 
org.apache.hadoop.security.TestGroupsCaching
Running org.apache.hadoop.security.ssl.TestSSLFactory
Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.593 sec - in 
org.apache.hadoop.security.ssl.TestSSLFactory
Running org.apache.hadoop.security.ssl.TestReloadingX509TrustManager
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.466 sec - in 
org.apache.hadoop.security.ssl.TestReloadingX509TrustManager
Running org.apache.hadoop.security.TestUGILoginFromKeytab
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.456 sec - in 
org.apache.hadoop.security.TestUGILoginFromKeytab
Running org.apache.hadoop.security.TestUserGroupInformation
Tests run: 34, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 3.645 sec - in 
org.apache.hadoop.security.TestUserGroupInformation
Running org.apache.hadoop.security.TestUGIWithExternalKdc
Tests run: 1, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.08 sec - in 
org.apache.hadoop.security.TestUGIWithExternalKdc
Running org.apache.hadoop.security.TestProxyUserFromEnv
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.614 sec - in 
org.apache.hadoop.security.TestProxyUserFromEnv
Running org.apache.hadoop.security.authorize.TestProxyUsers
Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.916 sec - in 
org.apache.hadoop.security.authorize.TestProxyUsers
Running org.apache.hadoop.security.authorize.TestProxyServers
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.476 sec - in 
org.apache.hadoop.security.authorize.TestProxyServers
Running org.apache.hadoop.security.authorize.TestServiceAuthorization
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.198 sec - in 
org.apache.hadoop.security.authorize.TestServiceAuthorization
Running org.apache.hadoop.security.authorize.TestAccessControlList
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.914 sec - in 
org.apache.hadoop.security.authorize.TestAccessControlList
Running org.apache.hadoop.security.alias.TestCredShell
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.238 sec - in 
org.apache.hadoop.security.alias.TestCredShell
Running org.apache.hadoop.security.alias.TestCredentialProviderFactory
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.305 sec - in 
org.apache.hadoop.security.alias.TestCredentialProviderFactory
Running org.apache.hadoop.security.alias.TestCredentialProvider
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.097 sec - in 
org.apache.hadoop.security.alias.TestCredentialProvider
Running org.apache.hadoop.security.TestAuthenticationFilter
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.554 sec - in 
org.apache.hadoop.security.TestAuthenticationFilter
Running org.apache.hadoop.security.TestLdapGroupsMapping
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.388 sec - in 
org.apache.hadoop.security.TestLdapGroupsMapping
Running org.apache.hadoop.security.token.TestToken
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.477 sec - in 
org.apache.hadoop.security.token.TestToken
Running org.apache.hadoop.security.token.delegation.TestDelegationToken
Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 31.527 sec - 
in org.apache.hadoop.security.token.delegation.TestDelegationToken
Running 
org.apache.hadoop.security.token.delegation.web.TestDelegationTokenAuthenticationHandlerWithMocks
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.432 sec - in 
org.apache.hadoop.security.token.delegation.web.TestDelegationTokenAuthenticationHandlerWithMocks
Running 
org.apache.hadoop.security.token.delegation.web.TestDelegationTokenManager
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.857 sec - in 

Build failed in Jenkins: Hadoop-common-trunk-Java8 #507

2015-10-07 Thread Apache Jenkins Server
See 

Changes:

[jing9] HDFS-9206. Inconsistent default value of

[jing9] HDFS-9196. Fix TestWebHdfsContentLength. Contributed by Masatake

[wheat9] HDFS-9170. Move libhdfs / fuse-dfs / libwebhdfs to hdfs-client.

--
[...truncated 5752 lines...]
Running org.apache.hadoop.util.TestVersionUtil
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.186 sec - in 
org.apache.hadoop.util.TestVersionUtil
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestProtoUtil
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.225 sec - in 
org.apache.hadoop.util.TestProtoUtil
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestLightWeightGSet
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.166 sec - in 
org.apache.hadoop.util.TestLightWeightGSet
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestGSet
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.623 sec - in 
org.apache.hadoop.util.TestGSet
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestStringInterner
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.107 sec - in 
org.apache.hadoop.util.TestStringInterner
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestZKUtil
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.127 sec - in 
org.apache.hadoop.util.TestZKUtil
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestStringUtils
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.284 sec - in 
org.apache.hadoop.util.TestStringUtils
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestFindClass
Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.525 sec - in 
org.apache.hadoop.util.TestFindClass
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestGenericOptionsParser
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.755 sec - in 
org.apache.hadoop.util.TestGenericOptionsParser
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestRunJar
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.429 sec - in 
org.apache.hadoop.util.TestRunJar
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestSysInfoLinux
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.218 sec - in 
org.apache.hadoop.util.TestSysInfoLinux
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestDirectBufferPool
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.128 sec - in 
org.apache.hadoop.util.TestDirectBufferPool
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestFileBasedIPList
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.165 sec - in 
org.apache.hadoop.util.TestFileBasedIPList
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestIndexedSort
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.644 sec - in 
org.apache.hadoop.util.TestIndexedSort
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestIdentityHashStore
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.179 sec - in 
org.apache.hadoop.util.TestIdentityHashStore
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestMachineList
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.823 sec - in 
org.apache.hadoop.util.TestMachineList
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestWinUtils
Tests run: 11, Failures: 0, Errors: 0, Skipped: 11, Time elapsed: 0.183 sec - 
in org.apache.hadoop.util.TestWinUtils
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running 

Jenkins build is back to normal : Hadoop-Common-trunk #1806

2015-10-07 Thread Apache Jenkins Server
See 



Build failed in Jenkins: Hadoop-common-trunk-Java8 #511

2015-10-07 Thread Apache Jenkins Server
See 

Changes:

[wang] HDFS-8632. Add InterfaceAudience annotation to the erasure coding

--
[...truncated 5753 lines...]
Running org.apache.hadoop.util.TestVersionUtil
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.183 sec - in 
org.apache.hadoop.util.TestVersionUtil
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestProtoUtil
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.226 sec - in 
org.apache.hadoop.util.TestProtoUtil
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestLightWeightGSet
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.152 sec - in 
org.apache.hadoop.util.TestLightWeightGSet
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestGSet
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.594 sec - in 
org.apache.hadoop.util.TestGSet
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestStringInterner
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.112 sec - in 
org.apache.hadoop.util.TestStringInterner
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestZKUtil
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.115 sec - in 
org.apache.hadoop.util.TestZKUtil
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestStringUtils
Tests run: 14, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.289 sec - in 
org.apache.hadoop.util.TestStringUtils
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestFindClass
Tests run: 16, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.563 sec - in 
org.apache.hadoop.util.TestFindClass
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestGenericOptionsParser
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.764 sec - in 
org.apache.hadoop.util.TestGenericOptionsParser
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestRunJar
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.385 sec - in 
org.apache.hadoop.util.TestRunJar
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestSysInfoLinux
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.241 sec - in 
org.apache.hadoop.util.TestSysInfoLinux
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestDirectBufferPool
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.147 sec - in 
org.apache.hadoop.util.TestDirectBufferPool
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestFileBasedIPList
Tests run: 8, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.169 sec - in 
org.apache.hadoop.util.TestFileBasedIPList
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestIndexedSort
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.717 sec - in 
org.apache.hadoop.util.TestIndexedSort
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestIdentityHashStore
Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.176 sec - in 
org.apache.hadoop.util.TestIdentityHashStore
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestMachineList
Tests run: 13, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.864 sec - in 
org.apache.hadoop.util.TestMachineList
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.TestWinUtils
Tests run: 11, Failures: 0, Errors: 0, Skipped: 11, Time elapsed: 0.188 sec - 
in org.apache.hadoop.util.TestWinUtils
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.util.hash.TestHash
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.318 sec - in 

Build failed in Jenkins: Hadoop-common-trunk-Java8 #512

2015-10-07 Thread Apache Jenkins Server
See 

Changes:

[yliu] HDFS-9137. DeadLock between DataNode#refreshVolumes and

--
[...truncated 5745 lines...]
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.518 sec - in 
org.apache.hadoop.security.TestGroupFallback
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.security.TestProxyUserFromEnv
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.478 sec - in 
org.apache.hadoop.security.TestProxyUserFromEnv
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.security.TestUGILoginFromKeytab
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.108 sec - in 
org.apache.hadoop.security.TestUGILoginFromKeytab
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.security.TestLdapGroupsMappingWithPosixGroup
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.562 sec - in 
org.apache.hadoop.security.TestLdapGroupsMappingWithPosixGroup
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.security.TestShellBasedIdMapping
Tests run: 7, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.537 sec - in 
org.apache.hadoop.security.TestShellBasedIdMapping
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.security.TestDoAsEffectiveUser
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.15 sec - in 
org.apache.hadoop.security.TestDoAsEffectiveUser
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.security.TestUGIWithExternalKdc
Tests run: 1, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 0.08 sec - in 
org.apache.hadoop.security.TestUGIWithExternalKdc
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.security.TestJNIGroupsMapping
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.443 sec - in 
org.apache.hadoop.security.TestJNIGroupsMapping
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.security.TestGroupsCaching
Tests run: 10, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.591 sec - in 
org.apache.hadoop.security.TestGroupsCaching
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.security.alias.TestCredentialProviderFactory
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.03 sec - in 
org.apache.hadoop.security.alias.TestCredentialProviderFactory
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.security.alias.TestCredentialProvider
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.088 sec - in 
org.apache.hadoop.security.alias.TestCredentialProvider
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.security.alias.TestCredShell
Tests run: 9, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.98 sec - in 
org.apache.hadoop.security.alias.TestCredShell
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.security.ssl.TestReloadingX509TrustManager
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.251 sec - in 
org.apache.hadoop.security.ssl.TestReloadingX509TrustManager
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.security.ssl.TestSSLFactory
Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.766 sec - in 
org.apache.hadoop.security.ssl.TestSSLFactory
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.security.TestLdapGroupsMapping
Tests run: 5, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.124 sec - in 
org.apache.hadoop.security.TestLdapGroupsMapping
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.security.TestUserFromEnv
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.465 sec - in 
org.apache.hadoop.security.TestUserFromEnv
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=768m; 
support was removed in 8.0
Running org.apache.hadoop.security.TestCredentials
Tests run: 4, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 

[jira] [Resolved] (HADOOP-7730) Allow TestCLI to be run against a cluster

2015-10-07 Thread Konstantin Boudnik (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7730?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Boudnik resolved HADOOP-7730.

  Resolution: Won't Fix
Release Note: The resolution was done in Bigtop a few years 
ago. Closing this one as irrelevant to the project.
Target Version/s: 1.3.0, 0.22.1  (was: 0.22.1, 1.3.0)

> Allow TestCLI to be run against a cluster
> -
>
> Key: HADOOP-7730
> URL: https://issues.apache.org/jira/browse/HADOOP-7730
> Project: Hadoop Common
>  Issue Type: Test
>  Components: test
>Affects Versions: 0.20.205.0, 0.22.0
>Reporter: Konstantin Boudnik
>Assignee: Konstantin Boudnik
> Attachments: HADOOP-7730.patch, HADOOP-7730.trunk.patch, 
> HADOOP-7730.trunk.patch
>
>
> Use the same CLI test to test cluster bits (see HDFS-1762 for more info)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [DISCUSS] About the details of JDK-8 support

2015-10-07 Thread Steve Loughran

> On 7 Oct 2015, at 07:29, Masatake Iwasaki  wrote:
> 
> Thanks for clear summary, Tsuyoshi.
> 
> I read some related past discussions.
> 
>  https://wiki.apache.org/hadoop/MovingToJdk7and8
>  http://search-hadoop.com/m/uOzYtGSiCs1acRnh
>  http://search-hadoop.com/m/uOzYthdWJqpGdSZ1
> 
> Though there seems to be no consensus yet about when to drop Java 7 support,
> it would not be 2.8, for which preparation has already started.
> If the work to make the source compatible with Java 8 does not result in
> dropping Java 7 support, it would be nice and easy to backport to branch-2.
> 
> 
> > we need to upgrade grizzly to 2.2.16 to use
> > jersey-test-framework-grizzly2. I’d like to discuss which version we
> > will target this change. Can we do this in branch-2?
> 
> At least, the newest grizzly, jersey and asm seem to support Java 7 too,
> and HADOOP-11993 may work in branch-2.
> 

Certainly for trunk, I'm +1 for making the leap. For branch 2, how backwards 
compatible/incompatible is the change? 

I think we'd have to test it downstream; I can use slider & spark as test 
builds locally —YARN apps are the failure points. Someone else would have to 
try HBase.

In that world, we could think of having a short-lived branch-2-java-8 branch, 
which cherry-picks the grizzly changes from trunk, and which we can then use 
for that downstream testing.

> 
> Masatake Iwasaki
> 
> 
> On 10/6/15 09:35, Tsuyoshi Ozawa wrote:
> > Hi commiters and users of Hadoop stack,
> >
> > I’ll share the current status of JDK-8 support here. We can take a
> > two-step approach to support JDK-8 - runtime-level support and
> > source-level support.
> >
> > About runtime-level support, I’ve tested the Hadoop stack with JDK-8, e.g.
> > MapReduce, Spark, Tez, and Flink on YARN and HDFS, for a few months. As
> > far as I have tested, it works completely, since JDK-8 doesn’t have
> > any incompatibility at the binary level. We can say Hadoop already
> > supports the JDK8 runtime. Do you have any concern about this? I’ve not
> > tested with HBase yet; I need the help of the HBase community. I think the
> > only runtime problem is HADOOP-11364, the default value of the
> > container-killer of YARN. After fixing that issue, we can declare
> > support of JDK8 on the Hadoop Wiki to make it clear for users.
> > https://wiki.apache.org/hadoop/HadoopJavaVersions
> >
> > About source-level support, however, we have one big problem - upgrading
> > the asm and cglib dependencies. We need to upgrade all libraries that
> > depend on asm to support the new bytecode introduced in JDK8[1]. The
> > dependencies that use asm are jersey-server in compile and provided
> > scope, and cglib in test scope (I checked with the mvn dependency:tree
> > command). HADOOP-9613 is addressing the problem.
> >
> > One complex problem I’ve faced is that Jersey depends on grizzly - to
> > upgrade jersey to 1.19, which supports JDK8,
> > we need to upgrade grizzly to 2.2.16 to use
> > jersey-test-framework-grizzly2. I’d like to discuss which version we
> > will target for this change. Can we do this in branch-2? Should we take
> > care of HADOOP-11656 and HADOOP-11993 at the same time? I’d also like to
> > confirm whether HADOOP-11993 means removing Jersey, which depends on
> > asm, or not. I think we can collaborate with the Yetus community here.
> >
> > Also, another simple problem is that some source code cannot be compiled
> > because the javadoc format or a variable identifier is illegal (e.g.
> > HADOOP-12457, HADOOP-11875). I think this can be solved
> > straightforwardly.
> >
> > Please share any concern I’ve missed. The opinions of users are also 
> > welcome :-)
> > I'd like to move this forward step by step to make Hadoop user-friendly.
> >
> > Thanks Steve, Sean, Allen, Robert, Brahma, Akira, Larry, Allen, Andrew
> > Purtell, Tsz-wo Sze, Sethen and other guys for having lots works about
> > JDK-8.
> >
> > Best regards,
> > - Tsuyoshi
> >
> > [1] http://product.hubspot.com/blog/upgrading-to-java-8-at-scale
> > [2] http://ispras.linuxbase.org/index.php/Java_API_Compliance_Checker
> 
> 
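As background to the asm point in the quoted mail above: asm must be upgraded because JDK8 emits class files with a new format version (major version 52, vs. 51 for Java 7), which older asm releases reject. A small shell sketch for inspecting that version byte follows; the fabricated header file is purely illustrative, so no compiler is needed:

```shell
# The class-file major version lives in bytes 6-7 of a .class file:
# 52 (0x34) = Java 8, 51 (0x33) = Java 7.
# Fabricate a minimal 8-byte header (magic CAFEBABE + version) for illustration.
printf '\312\376\272\276\000\000\000\064' > /tmp/Header.class
# Read byte 7 as an unsigned decimal and strip od's padding spaces.
MAJOR="$(od -An -tu1 -j7 -N1 /tmp/Header.class | tr -d ' ')"
echo "class file major version: ${MAJOR}"   # prints: class file major version: 52
```

The same `od` one-liner works on a real compiled class, which is a quick way to check what bytecode level a jar was built for.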



Jenkins build is back to normal : Hadoop-Common-trunk #1808

2015-10-07 Thread Apache Jenkins Server
See 



[jira] [Created] (HADOOP-12464) Interrupted client may try to fail-over and retry

2015-10-07 Thread Kihwal Lee (JIRA)
Kihwal Lee created HADOOP-12464:
---

 Summary: Interrupted client may try to fail-over and retry
 Key: HADOOP-12464
 URL: https://issues.apache.org/jira/browse/HADOOP-12464
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Kihwal Lee


When an IPC client is interrupted, it sometimes tries to fail over to a different 
namenode and retry.  We've seen this cause hangs during shutdown. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: DomainSocket issues on Solaris

2015-10-07 Thread Colin P. McCabe
On Wed, Oct 7, 2015 at 9:35 AM, Alan Burlison  wrote:
> On 06/10/2015 10:52, Steve Loughran wrote:
>
>> HADOOP-11127, "Improve versioning and compatibility support in native
>> library for downstream hadoop-common users." says "we need to do
>> better here", which is probably some way of packaging native libs.
>
>
> From that JIRA:
>
>> Colin Patrick McCabe added a comment - 18/Apr/15 00:48
>>
>> I was thinking we:
>> 1. Add the Hadoop release version to libhadoop.so. It's very, very
>> simple and solves a lot of problems here.
>> 2. Remove libhadoop.so and libhdfs.so from the release tarball, since
>> they are CPU and OS-specific and the tarballs are not
>> 3. Schedule some follow-on work to include the native libraries
>> inside jars, as Chris suggested. This will take longer but ultimately
>> be the best solution.
>
>
> And:
>
>> I just spotted one: HADOOP-10027.  A field was removed from the Java
>> layer, which still could get referenced by an older version of the native
>> layer.  A backwards-compatible version of that patch would preserve the
>> old fields in the Java layer.
>
>
> I've been thinking about this and I really don't think the strategy of
> trying to shim old methods and fields back into Hadoop is the correct one.
> The current Java-JNI interactions have been developed in an ad-hoc manner,
> with no formal API definition, and are explicitly Not-An-Interface; as a
> result, no consideration has been given to cross-version stability. A
> compatibility shim approach is neither sustainable nor maintainable even on
> a single platform, and will severely compromise efforts to get Hadoop native
> components working on other platforms.

I agree.

>
> The approach suggested in HADOOP-11127 seems a much better way forward, in
> particular #2 (versioned libhadoop). As pointed out in the JIRA, #1 (freeze
> libhadoop forever) is an obvious non-starter, and #3 (distribute libhadoop
> inside the JAR) is also a non-starter as it will not work cross-platform.
>
> I'm happy to work on HADOOP-10027 and make that a prerequisite for fixing
> the Solaris DomainSocket issues discussed in this thread. I believe it's not
> practical to provide a fix for DomainSocket on Solaris with a 'No JNI
> signature changes' restriction.

I think you could come up with a select/poll solution while using the
old function signatures.  A 4-byte int is more than enough information
to pass in, given that you can use it as an index into a table in the
C code.  There are also a lot of other solutions to this problem, as
I pointed out earlier.  For example, you dismissed the timer wheel
suggestion because of a detail of a unit test, but we could easily
change the test.

Anyway, changing the function signatures in the way you described is
certainly reasonable and I wouldn't object to it.  It is probably the
most natural solution.

>
> Does that sound acceptable? If so I can draft up a proposal for native
> library version and platform naming, library search locations etc.

Yes, I think it would be good to make some progress on HADOOP-11127.
We have been putting off the issue for too long.

best,
Colin

>
>
> Thanks,
>
> --
> Alan Burlison
> --


Re: Local repo sharing for maven builds

2015-10-07 Thread Allen Wittenauer

	yetus-5 was just committed, which does all of this (and more, of course).

On Oct 6, 2015, at 2:35 AM, Steve Loughran  wrote:

> 
>> On 5 Oct 2015, at 19:45, Colin McCabe  wrote:
>> 
>> On Mon, Sep 28, 2015 at 12:52 AM, Steve Loughran  
>> wrote:
>>> 
>>> the jenkins machines are shared across multiple projects; cut the executors 
>>> to 1/node and then everyone's performance drops, including the time to 
>>> complete of all jenkins patches, which is one of the goals.
>> 
>> Hi Steve,
>> 
>> Just to be clear, the proposal wasn't to cut the executors to 1 per
>> node, but to have multiple Docker containers per node (perhaps 3 or 4)
>> and run each executor in an isolated container.  At that point,
>> whatever badness Maven does on the .m2 stops being a problem for
>> concurrently running jobs.
>> 
> 
> I'd missed that bit. Yes, something with a containerized ~//m2 repo gets the 
> isolation without playing with mvn  version fixup
> 
>> I guess I don't feel that strongly about this, but the additional
>> complexity of the other solutions (like running a "find" command in
>> .m2, or changing artifactID) seems like a disadvantage compared to
>> just using multiple containers.  And there may be other race
>> conditions here that we're not aware of... like a TOCTOU between
>> checking for a jar in .m2 and downloading it, for example.  The
>> Dockerized solution skips all those potential failure modes and
>> complexity.
>> 
>> cheers,
>> Colin
>> 
> 
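The per-executor isolation Colin describes in the mail above could look roughly like the following sketch. The image name, mount paths, and Maven flags are assumptions for illustration, not what the Jenkins jobs actually run:

```shell
# Sketch: give each executor its own local Maven repository so concurrent
# builds cannot race on a shared ~/.m2. With Docker, the per-job cache
# directory is bind-mounted into the container as /root/.m2; even without
# Docker, -Dmaven.repo.local alone already gives per-workspace isolation.
WORKSPACE="${WORKSPACE:-$PWD}"
M2_CACHE="${WORKSPACE}/.m2-isolated"   # per-executor, not the shared ~/.m2
mkdir -p "${M2_CACHE}"
DOCKER_CMD="docker run --rm \
  -v ${WORKSPACE}:/build -v ${M2_CACHE}:/root/.m2 -w /build \
  maven:3-jdk-7 mvn -Dmaven.repo.local=/root/.m2/repository clean test"
echo "${DOCKER_CMD}"
```

Because each container sees only its own `/root/.m2`, the TOCTOU races on the local repository that Colin mentions simply cannot occur across jobs.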



Re: [DISCUSS] About the details of JDK-8 support

2015-10-07 Thread Andrew Wang
We've been supporting JDK8 as a runtime for CDH5 for a while now (meaning
the full stack including HBase), so I agree that we're good there.

I'm against dropping JDK7 support though in branch-2. Even bumping
dependency versions scares me, since it often leads to downstream pain. Any
comment about the compatibility of said bump? We need to have very high
confidence if it's targeted for branch-2.

Best,
Andrew

On Wed, Oct 7, 2015 at 2:27 AM, Steve Loughran 
wrote:

>
> > On 7 Oct 2015, at 07:29, Masatake Iwasaki 
> wrote:
> >
> > Thanks for clear summary, Tsuyoshi.
> >
> > I read some related past discussions.
> >
> >  https://wiki.apache.org/hadoop/MovingToJdk7and8
> >  http://search-hadoop.com/m/uOzYtGSiCs1acRnh
> >  http://search-hadoop.com/m/uOzYthdWJqpGdSZ1
> >
> > Though there seems to be no consensus about when to drop java 7 support
> yet,
> > it would not be 2.8 for which the preparation is already started.
> > If the works for making source compatible with java 8 does not result in
> > dropping java 7 support, it would be nice and easy to backport to
> branch-2.
> >
> >
> > > we need to upgrade grizzly to 2.2.16 to use
> > > jersey-test-framework-grizzly2. I’d like to discuss which version we
> > > will target this change. Can we do this in branch-2?
> >
> > At least, the newest grizzly, jersey and asm seem to support Java 7 too
> > and HADOOP-11993 may work in branch-2.
> >
>
> Certainly for trunk, I'm +1 for making the leap. For branch 2, how
> backwards compatible/incompatible is the change?
>
> I think we'd have to test it downstream; I can use slider & spark as test
> builds locally —YARN apps are the failure points. Someone else would have
> to try HBase.
>
> In that world, we could think of having a short-lived branch-2-java-8
> branch, which cherry picks the grizzly changes from trunk, and which we can
> then use for that downstream testing
>
> >
> > Masatake Iwasaki
> >
> >
> > On 10/6/15 09:35, Tsuyoshi Ozawa wrote:
> > > Hi commiters and users of Hadoop stack,
> > >
> > > I’ll share the current status of JDK-8 support here. We can take a
> > > two-step approach to support JDK-8 - runtime-level support and
> > > source-level support.
> > >
> > > About runtime-level support, I’ve tested the Hadoop stack with JDK-8, e.g.
> > > MapReduce, Spark, Tez, Flink on YARN and HDFS, for a few months. As
> > > far as I have tested, it works completely since JDK-8 doesn’t have
> > > any incompatibility at binary level. We can say Hadoop has supported
> > > JDK8 runtime already. Do you have any concern about this? I’ve not
> > > tested with HBase yet. I need help of HBase community. I think only
> > > problem about runtime is HADOOP-11364, the default value of the
> > > container-killer of YARN. After fixing the issue, we can declare the
> > > support of JDK on Hadoop Wiki to make it clear for users.
> > > https://wiki.apache.org/hadoop/HadoopJavaVersions
> > >
> > > About source-level, however, we have one big problem - upgrading
> > > the asm and cglib dependencies. We need to upgrade all libraries which
> > > depend on asm to support the new bytecode introduced in JDK8[1]. The
> > > dependencies which use asm are jersey-server for compile and provided
> > > scope, and cglib for test scope (I checked it with mvn dependency:tree
> > > command). HADOOP-9613 is addressing the problem.
> > >
> > > One complex problem I’ve faced is Jersey depends on grizzly - to
> > > upgrade jersey to 1.19, which supports JDK8,
> > >  we need to upgrade grizzly to 2.2.16 to use
> > > jersey-test-framework-grizzly2. I’d like to discuss which version we
> > > will target for this change. Can we do this in branch-2? Should we take
> > > care of HADOOP-11656 and HADOOP-11993 at the same time? I’d also
> > > confirm whether HADOOP-11993 means to remove Jersey, which depends on
> > > asm, or not. I think we can collaborate with Yetus community here.
> > >
> > > Also, another simple problem is that some source code cannot be compiled
> > > because the javadoc format or a variable identifier is illegal (e.g.
> > > HADOOP-12457, HADOOP-11875). I think this can be solved
> > > straightforwardly.
> > >
> > > Please share any concern I’ve missed. The opinions of users are also
> welcome :-)
> > > I'd like to go forward this step by step to make Hadoop user friendly.
> > >
> > > Thanks Steve, Sean, Allen, Robert, Brahma, Akira, Larry, Allen, Andrew
> > > Purtell, Tsz-wo Sze, Sethen and other guys for having lots works about
> > > JDK-8.
> > >
> > > Best regards,
> > > - Tsuyoshi
> > >
> > > [1] http://product.hubspot.com/blog/upgrading-to-java-8-at-scale
> > > [2] http://ispras.linuxbase.org/index.php/Java_API_Compliance_Checker
> >
> >
>
>


Re: Local repo sharing for maven builds

2015-10-07 Thread sanjay reddy
pls remove me from this group

On Tue, Sep 22, 2015 at 8:26 PM, Steve Loughran 
wrote:

>
> > On 22 Sep 2015, at 12:16, Brahma Reddy Battula <
> brahmareddy.batt...@huawei.com> wrote:
> >
> > After using timestamped jars, the hadoop-hdfs module might still continue to
> > use earlier timestamped jars (correct) and may complete its run. But later
> > modules might refer to updated jars which come from some other build.
>
>
> why?
>
> If I do a build with a forced mvn versions set first,
>
> mvn versions:set -DnewVersion=3.0.0.20120922155143
>
> then maven will go through all the poms and set the version.
>
> the main source of trouble there would be any patch to a pom whose diff
> was close enough to the version value that the patch wouldn't apply
>



-- 
*Regards,*
*Sanju Reddy*
*+91 8977977443*
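The timestamped `mvn versions:set` approach Steve quotes above can be sketched as follows. The base version 3.0.0 matches his example; the point is that each CI build gets a unique version, so concurrent builds never clobber each other's artifacts in a shared local repository:

```shell
# Derive a unique, timestamped Maven version for this build, then print the
# versions:set invocation that would rewrite every pom to use it.
BASE_VERSION="3.0.0"
STAMP="$(date +%Y%m%d%H%M%S)"      # e.g. 20120922155143, as in Steve's mail
NEW_VERSION="${BASE_VERSION}.${STAMP}"
echo "mvn versions:set -DnewVersion=${NEW_VERSION}"
```

As Steve notes, the main risk is that a patch touching a pom near the `<version>` element may no longer apply after the rewrite.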


Re: [DISCUSS] About the details of JDK-8 support

2015-10-07 Thread Steve Loughran

> On 7 Oct 2015, at 17:23, Andrew Wang  wrote:
> 
> We've been supporting JDK8 as a runtime for CDH5 for a while now (meaning
> the full stack including HBase), so I agree that we're good there.
> 


with Kerberos on?

> I'm against dropping JDK7 support though in branch-2.

+1. We're only talking about dependencies here.

> Even bumping
> dependency versions scares me, since it often leads to downstream pain. Any
> comment about the compatibility of said bump? We need to have very high
> confidence if it's targeted for branch-2.
> 

Which is why we need to test things downstream with the bumped jersey/grizzly 
libs before making any commitment to branch-2.


Re: DomainSocket issues on Solaris

2015-10-07 Thread Alan Burlison

On 06/10/2015 10:52, Steve Loughran wrote:


HADOOP-11127, "Improve versioning and compatibility support in native
library for downstream hadoop-common users." says "we need to do
better here", which is probably some way of packaging native libs.


From that JIRA:


Colin Patrick McCabe added a comment - 18/Apr/15 00:48

I was thinking we:
1. Add the Hadoop release version to libhadoop.so. It's very, very
simple and solves a lot of problems here.
2. Remove libhadoop.so and libhdfs.so from the release tarball, since
they are CPU and OS-specific and the tarballs are not
3. Schedule some follow-on work to include the native libraries
inside jars, as Chris suggested. This will take longer but ultimately
be the best solution.


And:


I just spotted one: HADOOP-10027.  A field was removed from the Java
layer, which still could get referenced by an older version of the native
layer.  A backwards-compatible version of that patch would preserve the
old fields in the Java layer.


I've been thinking about this and I really don't think the strategy of 
trying to shim old methods and fields back into Hadoop is the correct 
one.  The current Java-JNI interactions have been developed in an ad-hoc 
manner, with no formal API definition, and are explicitly Not-An-Interface; 
as a result, no consideration has been given to cross-version 
stability. A compatibility shim approach is neither sustainable nor 
maintainable even on a single platform, and will severely compromise 
efforts to get Hadoop native components working on other platforms.


The approach suggested in HADOOP-11127 seems a much better way forward, 
in particular #2 (versioned libhadoop). As pointed out in the JIRA, #1 
(freeze libahdoop forever) is an obvious non-starter, and #3 (distribute 
libahadoop inside the JAR) is also a non-starter as it will not work 
cross-platform.


I'm happy to work on HADOOP-10027 and make that a prerequisite for 
fixing the Solaris DomainSocket issues discussed in this thread. I 
believe it's not practical to provide a fix for DomainSocket on Solaris 
with a 'No JNI signature changes' restriction.


Does that sound acceptable? If so I can draft up a proposal for native 
library version and platform naming, library search locations etc.


Thanks,

--
Alan Burlison
--
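One possible shape for the versioned, platform-qualified library naming Alan proposes is sketched below. Every name here is an illustrative assumption, not a settled convention:

```shell
# Sketch: compute a version- and platform-qualified native library name, so
# that e.g. Linux/x86_64 and Solaris/sparcv9 builds of libhadoop can coexist
# in one install tree, and a mismatched Java/native pair can be detected by
# name at load time.
HADOOP_VERSION="2.8.0"
OS="$(uname -s | tr '[:upper:]' '[:lower:]')"
ARCH="$(uname -m)"
LIBNAME="libhadoop-${HADOOP_VERSION}-${OS}-${ARCH}.so"
echo "${LIBNAME}"
```

The Java side would then probe for the qualified name first and fall back to plain `libhadoop.so`, which is roughly the search-location question Alan's proposal would have to pin down.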