[jira] [Commented] (HADOOP-11216) Improve Openssl library finding

2014-10-28 Thread Yi Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14186439#comment-14186439
 ] 

Yi Liu commented on HADOOP-11216:
-

Thanks Colin for working on this. Certainly I don't mind you taking this :-)
My comments are:
*1.* The existing {{openssl.prefix}}, {{openssl.lib}}, and {{openssl.include}} 
options let people specify a custom openssl location; they become {{-L}} and 
{{-I}} flags when compiling with gcc.  Yes, I agree people can set 
{{CMAKE_LIBRARY_PATH}} and {{CMAKE_INCLUDE_PATH}}.  But snappy is configured 
the same way, so should we make them consistent (keep openssl.\*\*, or remove 
snappy.\*\*)?
*2.* It's good that the patch solves the compile-time issue, but how about 
runtime? A customer's cluster environment may differ from the build 
environment, and the external libraries in Hadoop are currently shared-linked, 
so we still need to find the correct crypto library at runtime.

{quote}
We could depend on libcrypto.so rather than libcrypto.so.1.0.0
{quote}
Agree.
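Depending on the unversioned name, as agreed above, amounts to letting the dynamic linker resolve {{libcrypto.so}} at runtime. As a minimal illustration (not part of the patch), Python's {{ctypes.util.find_library}} performs a comparable unversioned lookup:

```python
# Sketch (illustrative, not from the patch): ask the system which crypto
# library an unversioned lookup resolves to. find_library searches roughly
# the way the dynamic linker does, with no hard-coded version suffix.
import ctypes.util

def locate_crypto():
    # On many Linux systems this returns something like "libcrypto.so.1.0.0";
    # it returns None when no crypto library is visible to the linker.
    return ctypes.util.find_library("crypto")

if __name__ == "__main__":
    print(locate_crypto())
```

This mirrors the point above: the version suffix belongs to the system's library installation, not to the build file.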

 Improve Openssl library finding
 ---

 Key: HADOOP-11216
 URL: https://issues.apache.org/jira/browse/HADOOP-11216
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 2.6.0
Reporter: Yi Liu
Assignee: Colin Patrick McCabe
 Attachments: HADOOP-11216.003.patch


 When we compile Openssl 1.0.0\(x\) or 1.0.1\(x\) with default options, the 
 output lib dir contains {{libcrypto.so.1.0.0}}, so we expect this version 
 suffix in the cmake build file:
 {code}
 SET(STORED_CMAKE_FIND_LIBRARY_SUFFIXES ${CMAKE_FIND_LIBRARY_SUFFIXES})
 set_find_shared_library_version(1.0.0)
 SET(OPENSSL_NAME crypto)
 
 {code}
 If we don't bundle the crypto shared library in the Hadoop distribution, 
 Hadoop will try to find the crypto library on the system path at runtime.
 But in a real linux distribution there may be no {{libcrypto.so.1.0.0}} or 
 {{libcrypto.so}}, even when the system's embedded openssl is 1.0.1\(x\); 
 then we need to make a symbolic link.
 This JIRA is to improve the Openssl library finding.
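The improvement argued for above can be sketched as a loop over candidate sonames, using the first one that loads instead of hard-coding the "1.0.0" suffix. The names and their order below are illustrative assumptions, not the patch's actual list:

```python
# Sketch of a soname fallback: try candidate crypto library names in order
# and return the first one that dlopen can load. The candidate list is an
# illustrative assumption, not taken from the patch.
import ctypes

def load_first(candidates):
    """Return a loaded library handle for the first candidate that opens,
    or None if none of them can be loaded."""
    for name in candidates:
        try:
            return ctypes.CDLL(name)
        except OSError:
            continue  # this soname is absent on this system; try the next
    return None

# Usage: prefer the unversioned name, then fall back to versioned ones.
lib = load_first(["libcrypto.so", "libcrypto.so.1.0.0", "libcrypto.so.10"])
```

With a fallback like this, no manual symbolic link is needed on distributions that only ship a versioned soname.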



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11233) hadoop.security.kms.client.encrypted.key.cache.expiry property spelled wrong in core-default

2014-10-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14186480#comment-14186480
 ] 

Hudson commented on HADOOP-11233:
-

FAILURE: Integrated in Hadoop-trunk-Commit #6364 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/6364/])
HADOOP-11233. hadoop.security.kms.client.encrypted.key.cache.expiry property 
spelled wrong in core-default. (Stephen Chu via yliu) (yliu: rev 
e7859015bcc2cda99a64b1db18baab0e1ca5c155)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-common-project/hadoop-common/src/main/resources/core-default.xml


 hadoop.security.kms.client.encrypted.key.cache.expiry property spelled wrong 
 in core-default
 

 Key: HADOOP-11233
 URL: https://issues.apache.org/jira/browse/HADOOP-11233
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 2.7.0
Reporter: Steve Loughran
Assignee: Stephen Chu
Priority: Minor
 Attachments: HADOOP-11233.1.patch


 There's a spurious {{}} at the start of the kms cache entry:
 {code}
 <property>
   <name>hadoop.security.kms.client.encrypted.key.cache.expiry</name>
   <value>4320</value>
   <description>
     Cache expiry time for a Key, after which the cache Queue for this
     key will be dropped. Default = 12hrs
   </description>
 </property>
 {code}
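A misspelled property name like this can be caught mechanically rather than by eye. As a sketch (hypothetical tooling, not part of the fix), parsing a core-default-style fragment yields the name/value pairs for validation:

```python
# Sketch (hypothetical helper, not part of the fix): extract property
# name/value pairs from a core-default-style XML fragment so that property
# names can be checked programmatically against the expected spellings.
import xml.etree.ElementTree as ET

FRAGMENT = """
<configuration>
  <property>
    <name>hadoop.security.kms.client.encrypted.key.cache.expiry</name>
    <value>4320</value>
  </property>
</configuration>
"""

def read_properties(xml_text):
    """Return a dict mapping each <name> to its <value>."""
    root = ET.fromstring(xml_text)
    return {p.findtext("name"): p.findtext("value")
            for p in root.iter("property")}

props = read_properties(FRAGMENT)
```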





[jira] [Updated] (HADOOP-11233) hadoop.security.kms.client.encrypted.key.cache.expiry property spelled wrong in core-default

2014-10-28 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11233?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liu updated HADOOP-11233:

  Resolution: Fixed
   Fix Version/s: 2.6.0
Target Version/s: 2.6.0
  Status: Resolved  (was: Patch Available)

Committed to trunk, branch-2 and branch-2.6.

 hadoop.security.kms.client.encrypted.key.cache.expiry property spelled wrong 
 in core-default
 

 Key: HADOOP-11233
 URL: https://issues.apache.org/jira/browse/HADOOP-11233
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 2.6.0
Reporter: Steve Loughran
Assignee: Stephen Chu
Priority: Minor
 Fix For: 2.6.0

 Attachments: HADOOP-11233.1.patch







[jira] [Updated] (HADOOP-11233) hadoop.security.kms.client.encrypted.key.cache.expiry property spelled wrong in core-default

2014-10-28 Thread Yi Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11233?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liu updated HADOOP-11233:

Affects Version/s: (was: 2.7.0)
   2.6.0

 hadoop.security.kms.client.encrypted.key.cache.expiry property spelled wrong 
 in core-default
 

 Key: HADOOP-11233
 URL: https://issues.apache.org/jira/browse/HADOOP-11233
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 2.6.0
Reporter: Steve Loughran
Assignee: Stephen Chu
Priority: Minor
 Fix For: 2.6.0

 Attachments: HADOOP-11233.1.patch







[jira] [Commented] (HADOOP-11235) execute maven plugin(compile-protoc) failed

2014-10-28 Thread ccin (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14186552#comment-14186552
 ] 

ccin commented on HADOOP-11235:
---

I think Maven should install this for me. Should I install it separately?

 execute maven plugin(compile-protoc) failed
 ---

 Key: HADOOP-11235
 URL: https://issues.apache.org/jira/browse/HADOOP-11235
 Project: Hadoop Common
  Issue Type: Bug
 Environment: ubuntu 14.04
 jdk 1.7
 eclipse 4.4.1
 m2e 1.5
Reporter: ccin

 [ERROR] Failed to execute goal 
 org.apache.hadoop:hadoop-maven-plugins:2.5.1:protoc (compile-protoc) on 
 project hadoop-common: org.apache.maven.plugin.MojoExecutionException: 
 'protoc --version' did not return a version - [Help 1]
 org.apache.maven.lifecycle.LifecycleExecutionException: Failed to execute 
 goal org.apache.hadoop:hadoop-maven-plugins:2.5.1:protoc (compile-protoc) on 
 project hadoop-common: org.apache.maven.plugin.MojoExecutionException: 
 'protoc --version' did not return a version
   at 
 org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:216)
   at 
 org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:153)
   at 
 org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:145)
   at 
 org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:108)
   at 
 org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:76)
   at 
 org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build(SingleThreadedBuilder.java:51)
   at 
 org.apache.maven.lifecycle.internal.LifecycleStarter.execute(LifecycleStarter.java:116)
   at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:361)
   at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:155)
   at org.apache.maven.cli.MavenCli.execute(MavenCli.java:584)
   at org.apache.maven.cli.MavenCli.doMain(MavenCli.java:213)
   at org.apache.maven.cli.MavenCli.main(MavenCli.java:157)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:606)
   at 
 org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced(Launcher.java:289)
   at 
 org.codehaus.plexus.classworlds.launcher.Launcher.launch(Launcher.java:229)
   at 
 org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode(Launcher.java:415)
   at 
 org.codehaus.plexus.classworlds.launcher.Launcher.main(Launcher.java:356)
 Caused by: org.apache.maven.plugin.MojoExecutionException: 
 org.apache.maven.plugin.MojoExecutionException: 'protoc --version' did not 
 return a version
   at 
 org.apache.hadoop.maven.plugin.protoc.ProtocMojo.execute(ProtocMojo.java:105)
   at 
 org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:133)
   at 
 org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:208)
   ... 19 more
 Caused by: org.apache.maven.plugin.MojoExecutionException: 'protoc --version' 
 did not return a version
   at 
 org.apache.hadoop.maven.plugin.protoc.ProtocMojo.execute(ProtocMojo.java:68)
   ... 21 more
 [ERROR] 
 [ERROR] 
 [ERROR] For more information about the errors and possible solutions, please 
 read the following articles:
 [ERROR] [Help 1] 
 http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException





[jira] [Updated] (HADOOP-11241) TestNMSimulator fails sometimes due to timing issue

2014-10-28 Thread Varun Vasudev (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Vasudev updated HADOOP-11241:
---
Status: Open  (was: Patch Available)

 TestNMSimulator fails sometimes due to timing issue
 ---

 Key: HADOOP-11241
 URL: https://issues.apache.org/jira/browse/HADOOP-11241
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Varun Vasudev
Assignee: Varun Vasudev
 Attachments: apache-yarn-2763.0.patch


 TestNMSimulator fails sometimes due to timing issues. From a failure -
 {noformat}
 2014-10-16 23:21:42,343 INFO  resourcemanager.ResourceTrackerService 
 (ResourceTrackerService.java:registerNodeManager(337)) - NodeManager from 
 node node1(cmPort: 0 httpPort: 80) registered with capability: memory:10240, 
 vCores:10, assigned nodeId node1:0
 2014-10-16 23:21:42,397 ERROR delegation.AbstractDelegationTokenSecretManager 
 (AbstractDelegationTokenSecretManager.java:run(642)) - ExpiredTokenRemover 
 received java.lang.InterruptedException: sleep interrupted
 2014-10-16 23:21:42,400 INFO  rmnode.RMNodeImpl (RMNodeImpl.java:handle(423)) 
 - node1:0 Node Transitioned from NEW to RUNNING
 2014-10-16 23:21:42,404 INFO  fair.FairScheduler 
 (FairScheduler.java:addNode(825)) - Added node node1:0 cluster capacity: 
 memory:10240, vCores:10
 2014-10-16 23:21:42,407 INFO  mortbay.log (Slf4jLog.java:info(67)) - Stopped 
 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:18088
 2014-10-16 23:21:42,409 ERROR delegation.AbstractDelegationTokenSecretManager 
 (AbstractDelegationTokenSecretManager.java:run(642)) - ExpiredTokenRemover 
 received java.lang.InterruptedException: sleep interrupted
 2014-10-16 23:21:42,410 INFO  ipc.Server (Server.java:stop(2437)) - Stopping 
 server on 18032
 2014-10-16 23:21:42,412 INFO  ipc.Server (Server.java:run(706)) - Stopping 
 IPC Server listener on 18032
 2014-10-16 23:21:42,412 INFO  ipc.Server (Server.java:run(832)) - Stopping 
 IPC Server Responder
 {noformat}





[jira] [Moved] (HADOOP-11241) TestNMSimulator fails sometimes due to timing issue

2014-10-28 Thread Varun Vasudev (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Vasudev moved YARN-2763 to HADOOP-11241:
--

Key: HADOOP-11241  (was: YARN-2763)
Project: Hadoop Common  (was: Hadoop YARN)

 TestNMSimulator fails sometimes due to timing issue
 ---

 Key: HADOOP-11241
 URL: https://issues.apache.org/jira/browse/HADOOP-11241
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Varun Vasudev
Assignee: Varun Vasudev
 Attachments: apache-yarn-2763.0.patch







[jira] [Updated] (HADOOP-11241) TestNMSimulator fails sometimes due to timing issue

2014-10-28 Thread Varun Vasudev (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Vasudev updated HADOOP-11241:
---
Status: Patch Available  (was: Open)

 TestNMSimulator fails sometimes due to timing issue
 ---

 Key: HADOOP-11241
 URL: https://issues.apache.org/jira/browse/HADOOP-11241
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Varun Vasudev
Assignee: Varun Vasudev
 Attachments: apache-yarn-2763.0.patch







[jira] [Commented] (HADOOP-11241) TestNMSimulator fails sometimes due to timing issue

2014-10-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14186587#comment-14186587
 ] 

Hadoop QA commented on HADOOP-11241:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org
  against trunk revision 0398db1.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4966//console

This message is automatically generated.

 TestNMSimulator fails sometimes due to timing issue
 ---

 Key: HADOOP-11241
 URL: https://issues.apache.org/jira/browse/HADOOP-11241
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Varun Vasudev
Assignee: Varun Vasudev
 Attachments: apache-yarn-2763.0.patch







[jira] [Commented] (HADOOP-10768) Optimize Hadoop RPC encryption performance

2014-10-28 Thread Yi Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10768?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14186599#comment-14186599
 ] 

Yi Liu commented on HADOOP-10768:
-

I'm working on this. For services with heavy RPC traffic, such as the 
NameNode, performance degrades noticeably when encryption is enabled.
I will show the performance benefits after the patch is ready.

 Optimize Hadoop RPC encryption performance
 --

 Key: HADOOP-10768
 URL: https://issues.apache.org/jira/browse/HADOOP-10768
 Project: Hadoop Common
  Issue Type: Improvement
  Components: performance, security
Affects Versions: 3.0.0
Reporter: Yi Liu
Assignee: Yi Liu

 Hadoop RPC encryption is enabled by setting {{hadoop.rpc.protection}} to 
 privacy. It uses the SASL {{GSSAPI}} and {{DIGEST-MD5}} mechanisms for 
 secure authentication and data protection. Although {{GSSAPI}} supports 
 AES, it does not use AES-NI by default, so the encryption is slow and 
 becomes a bottleneck.
 After discussing with [~atm], [~tucu00] and [~umamaheswararao], we can do the 
 same optimization as in HDFS-6606: use AES-NI for a more than *20x* speedup.
 On the other hand, an RPC message is small, but RPC calls are frequent and 
 there may be many of them on one connection, so we need to set up a benchmark 
 to measure the real improvement and then make a trade-off.
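The benchmarking concern above (per-call overhead on small, frequent messages) can be sketched with a minimal harness. The payload size, call count, and stand-in transform below are illustrative assumptions; the real comparison would swap in the DIGEST-MD5 wrap versus an AES-NI codec:

```python
# Minimal micro-benchmark harness sketch: measure mean per-call cost of a
# pluggable transform over a small RPC-sized payload. The identity transform
# is a baseline stand-in, since the real ciphers live in native code.
import time

def bench(transform, payload=b"x" * 256, calls=10000):
    """Apply `transform` to a small payload `calls` times; return the
    mean cost per call in microseconds."""
    start = time.perf_counter()
    for _ in range(calls):
        transform(payload)
    elapsed = time.perf_counter() - start
    return elapsed / calls * 1e6

# Baseline: no protection applied at all.
identity_cost = bench(lambda b: b)
```

Comparing such per-call numbers across protection modes is what makes the throughput-versus-overhead trade-off concrete.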





[jira] [Updated] (HADOOP-11241) TestNMSimulator fails sometimes due to timing issue

2014-10-28 Thread Varun Vasudev (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Vasudev updated HADOOP-11241:
---
Attachment: (was: apache-yarn-2763.0.patch)

 TestNMSimulator fails sometimes due to timing issue
 ---

 Key: HADOOP-11241
 URL: https://issues.apache.org/jira/browse/HADOOP-11241
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Varun Vasudev
Assignee: Varun Vasudev






[jira] [Updated] (HADOOP-11241) TestNMSimulator fails sometimes due to timing issue

2014-10-28 Thread Varun Vasudev (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Vasudev updated HADOOP-11241:
---
Status: Patch Available  (was: Open)

 TestNMSimulator fails sometimes due to timing issue
 ---

 Key: HADOOP-11241
 URL: https://issues.apache.org/jira/browse/HADOOP-11241
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Varun Vasudev
Assignee: Varun Vasudev
 Attachments: apache-hadoop-11241.0.patch







[jira] [Updated] (HADOOP-11241) TestNMSimulator fails sometimes due to timing issue

2014-10-28 Thread Varun Vasudev (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Vasudev updated HADOOP-11241:
---
Status: Open  (was: Patch Available)

 TestNMSimulator fails sometimes due to timing issue
 ---

 Key: HADOOP-11241
 URL: https://issues.apache.org/jira/browse/HADOOP-11241
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Varun Vasudev
Assignee: Varun Vasudev
 Attachments: apache-hadoop-11241.0.patch







[jira] [Updated] (HADOOP-11241) TestNMSimulator fails sometimes due to timing issue

2014-10-28 Thread Varun Vasudev (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Vasudev updated HADOOP-11241:
---
Attachment: apache-hadoop-11241.0.patch

Renaming patch file

 TestNMSimulator fails sometimes due to timing issue
 ---

 Key: HADOOP-11241
 URL: https://issues.apache.org/jira/browse/HADOOP-11241
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Varun Vasudev
Assignee: Varun Vasudev
 Attachments: apache-hadoop-11241.0.patch







[jira] [Commented] (HADOOP-11235) execute maven plugin(compile-protoc) failed

2014-10-28 Thread André Kelpe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11235?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14186618#comment-14186618
 ] 

André Kelpe commented on HADOOP-11235:
--

Yes, you have to install it yourself: {{sudo apt-get install protobuf-compiler -y}}
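The failure in the report is the plugin running {{protoc --version}} and refusing to build when no version comes back. A rough sketch of that check (the output format {{libprotoc 2.5.0}} is an assumption here, and this is not the Mojo's actual code):

```python
# Sketch of the check ProtocMojo performs: run `protoc --version` and fail
# when no version number can be extracted. The expected output format
# ("libprotoc 2.5.0") is an assumption, not taken from the plugin source.
import re
import subprocess

def parse_protoc_version(output):
    """Pull a dotted version number out of `protoc --version` output,
    or return None when no version is present."""
    match = re.search(r"(\d+(?:\.\d+)+)", output)
    return match.group(1) if match else None

def check_protoc():
    try:
        out = subprocess.run(["protoc", "--version"],
                             capture_output=True, text=True).stdout
    except FileNotFoundError:
        return None  # protoc not installed: the situation in this report
    return parse_protoc_version(out)
```

When `check_protoc()` returns None, the remedy is exactly the one above: install the protobuf compiler so it is on the PATH.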

 execute maven plugin(compile-protoc) failed
 ---

 Key: HADOOP-11235
 URL: https://issues.apache.org/jira/browse/HADOOP-11235
 Project: Hadoop Common
  Issue Type: Bug
 Environment: ubuntu 14.04
 jdk 1.7
 eclipse 4.4.1
 m2e 1.5
Reporter: ccin




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11241) TestNMSimulator fails sometimes due to timing issue

2014-10-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14186620#comment-14186620
 ] 

Hadoop QA commented on HADOOP-11241:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org
  against trunk revision 0398db1.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4967//console

This message is automatically generated.

 TestNMSimulator fails sometimes due to timing issue
 ---

 Key: HADOOP-11241
 URL: https://issues.apache.org/jira/browse/HADOOP-11241
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Varun Vasudev
Assignee: Varun Vasudev
 Attachments: apache-hadoop-11241.0.patch


 TestNMSimulator fails sometimes due to timing issues. From a failure -
 {noformat}
 2014-10-16 23:21:42,343 INFO  resourcemanager.ResourceTrackerService 
 (ResourceTrackerService.java:registerNodeManager(337)) - NodeManager from 
 node node1(cmPort: 0 httpPort: 80) registered with capability: <memory:10240, vCores:10>, assigned nodeId node1:0
 2014-10-16 23:21:42,397 ERROR delegation.AbstractDelegationTokenSecretManager 
 (AbstractDelegationTokenSecretManager.java:run(642)) - ExpiredTokenRemover 
 received java.lang.InterruptedException: sleep interrupted
 2014-10-16 23:21:42,400 INFO  rmnode.RMNodeImpl (RMNodeImpl.java:handle(423)) 
 - node1:0 Node Transitioned from NEW to RUNNING
 2014-10-16 23:21:42,404 INFO  fair.FairScheduler 
 (FairScheduler.java:addNode(825)) - Added node node1:0 cluster capacity: <memory:10240, vCores:10>
 2014-10-16 23:21:42,407 INFO  mortbay.log (Slf4jLog.java:info(67)) - Stopped 
 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:18088
 2014-10-16 23:21:42,409 ERROR delegation.AbstractDelegationTokenSecretManager 
 (AbstractDelegationTokenSecretManager.java:run(642)) - ExpiredTokenRemover 
 received java.lang.InterruptedException: sleep interrupted
 2014-10-16 23:21:42,410 INFO  ipc.Server (Server.java:stop(2437)) - Stopping 
 server on 18032
 2014-10-16 23:21:42,412 INFO  ipc.Server (Server.java:run(706)) - Stopping 
 IPC Server listener on 18032
 2014-10-16 23:21:42,412 INFO  ipc.Server (Server.java:run(832)) - Stopping 
 IPC Server Responder
 {noformat}
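Flaky timing tests like this are usually stabilized by polling for the expected state with a deadline instead of asserting after a fixed sleep. Below is a minimal, hypothetical sketch of that wait-for pattern in shell (the marker file and timeout values are illustrative only, not part of TestNMSimulator):

```shell
#!/usr/bin/env bash
# Poll a condition until it holds or a deadline passes, instead of
# sleeping a fixed amount and hoping the node has already
# transitioned from NEW to RUNNING.
rm -f /tmp/node_running

wait_for() {
  local timeout_secs=$1; shift
  local deadline=$(( $(date +%s) + timeout_secs ))
  until "$@"; do
    if [ "$(date +%s)" -ge "$deadline" ]; then
      return 1    # condition never became true within the deadline
    fi
    sleep 1
  done
}

# Simulate an asynchronous state transition, then wait for it.
( sleep 1; touch /tmp/node_running ) &
wait_for 5 test -f /tmp/node_running && echo "node is RUNNING"
```

The same idea is what polling utilities on the Java test side provide: the assertion runs only once the state is observed or a timeout makes the failure deterministic.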





[jira] [Commented] (HADOOP-11241) TestNMSimulator fails sometimes due to timing issue

2014-10-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14186648#comment-14186648
 ] 

Hadoop QA commented on HADOOP-11241:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org
  against trunk revision 0398db1.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4968//console

This message is automatically generated.

 TestNMSimulator fails sometimes due to timing issue
 ---

 Key: HADOOP-11241
 URL: https://issues.apache.org/jira/browse/HADOOP-11241
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Varun Vasudev
Assignee: Varun Vasudev
 Attachments: apache-hadoop-11241.0.patch


 TestNMSimulator fails sometimes due to timing issues. From a failure -
 {noformat}
 2014-10-16 23:21:42,343 INFO  resourcemanager.ResourceTrackerService 
 (ResourceTrackerService.java:registerNodeManager(337)) - NodeManager from 
 node node1(cmPort: 0 httpPort: 80) registered with capability: <memory:10240, vCores:10>, assigned nodeId node1:0
 2014-10-16 23:21:42,397 ERROR delegation.AbstractDelegationTokenSecretManager 
 (AbstractDelegationTokenSecretManager.java:run(642)) - ExpiredTokenRemover 
 received java.lang.InterruptedException: sleep interrupted
 2014-10-16 23:21:42,400 INFO  rmnode.RMNodeImpl (RMNodeImpl.java:handle(423)) 
 - node1:0 Node Transitioned from NEW to RUNNING
 2014-10-16 23:21:42,404 INFO  fair.FairScheduler 
 (FairScheduler.java:addNode(825)) - Added node node1:0 cluster capacity: <memory:10240, vCores:10>
 2014-10-16 23:21:42,407 INFO  mortbay.log (Slf4jLog.java:info(67)) - Stopped 
 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:18088
 2014-10-16 23:21:42,409 ERROR delegation.AbstractDelegationTokenSecretManager 
 (AbstractDelegationTokenSecretManager.java:run(642)) - ExpiredTokenRemover 
 received java.lang.InterruptedException: sleep interrupted
 2014-10-16 23:21:42,410 INFO  ipc.Server (Server.java:stop(2437)) - Stopping 
 server on 18032
 2014-10-16 23:21:42,412 INFO  ipc.Server (Server.java:run(706)) - Stopping 
 IPC Server listener on 18032
 2014-10-16 23:21:42,412 INFO  ipc.Server (Server.java:run(832)) - Stopping 
 IPC Server Responder
 {noformat}





[jira] [Commented] (HADOOP-11240) Jenkins build seems to be broken by changes in test-patch.sh

2014-10-28 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14186661#comment-14186661
 ] 

Steve Loughran commented on HADOOP-11240:
-

Looking at the git logs, the HADOOP-10926 patch went into trunk at 01:22 GMT; 
the first patch-application failure report from Jenkins arrived at 01:32. 
I have reverted that patch to see what happens.

 Jenkins build seems to be broken by changes in test-patch.sh
 

 Key: HADOOP-11240
 URL: https://issues.apache.org/jira/browse/HADOOP-11240
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Zhijie Shen
Priority: Blocker

 * https://builds.apache.org/job/PreCommit-YARN-Build/5596//console
 * https://builds.apache.org/job/PreCommit-YARN-Build/5595//console
 * https://builds.apache.org/job/PreCommit-YARN-Build/5597//console
 * https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/4981//console
 A couple of Jenkins builds failed for the same reason:
 {code}
 HEAD is now at b0e19c9 HADOOP-10926. Improve test-patch.sh to apply binary 
 diffs (cmccabe)
 Previous HEAD position was b0e19c9... HADOOP-10926. Improve test-patch.sh to 
 apply binary diffs (cmccabe)
 Switched to branch 'trunk'
 Your branch is behind 'origin/trunk' by 17 commits, and can be fast-forwarded.
   (use git pull to update your local branch)
 First, rewinding head to replay your work on top of it...
 Fast-forwarded trunk to b0e19c9d54cecef191b91431f9ca62a76a000f45.
 MAPREDUCE-5933 patch is being downloaded at Tue Oct 28 02:11:12 UTC 2014 from
 http://issues.apache.org/jira/secure/attachment/12677496/MAPREDUCE-5933.patch
 cp: cannot stat '/home/jenkins/buildSupport/lib/*': No such file or directory
 Error: Patch dryrun couldn't detect changes the patch would make. Exiting.
 PATCH APPLICATION FAILED
 {code}
 It seems to have been broken by HADOOP-10926





[jira] [Reopened] (HADOOP-10926) Improve test-patch.sh to apply binary diffs

2014-10-28 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10926?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran reopened HADOOP-10926:
-

HADOOP-11240 reports that YARN patches are failing, and this is the likely 
cause. Reverting the patch and re-opening the JIRA.

 Improve test-patch.sh to apply binary diffs
 ---

 Key: HADOOP-10926
 URL: https://issues.apache.org/jira/browse/HADOOP-10926
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 3.0.0
Reporter: Andrew Wang
Assignee: Colin Patrick McCabe
 Fix For: 3.0.0

 Attachments: HADOOP-10926.001.patch


 The Unix {{patch}} command cannot apply binary diffs as generated via {{git 
 diff --binary}}. This means we cannot get effective test-patch.sh runs when 
 the patch requires adding a binary file.
 We should consider using a different patch method.
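A patch script can tell the two cases apart by sniffing the file: `git diff --binary` marks each binary payload with a literal `GIT binary patch` section, which GNU patch cannot consume and only `git apply` understands. A hedged sketch of such detection follows (function names are illustrative, not the actual smart-apply-patch.sh code):

```shell
#!/usr/bin/env bash
# Decide whether a patch file must be applied with `git apply`
# (binary payload) or can fall back to plain GNU `patch`.
is_git_binary_patch() {
  # `git diff --binary` emits a literal "GIT binary patch" marker
  # before each base85-encoded blob.
  grep -q '^GIT binary patch' "$1"
}

apply_patch() {
  local patch_file=$1
  if is_git_binary_patch "$patch_file"; then
    echo "binary diff detected, using git apply"
    # git apply -p1 "$patch_file"   # requires a git checkout
  else
    echo "text diff, using patch"
    # patch -p0 < "$patch_file"
  fi
}

# Minimal demonstration with fabricated patch headers.
printf 'diff --git a/logo.png b/logo.png\nGIT binary patch\n' > /tmp/bin.patch
printf -- '--- a/foo.txt\n+++ b/foo.txt\n' > /tmp/text.patch
apply_patch /tmp/bin.patch    # prints: binary diff detected, using git apply
apply_patch /tmp/text.patch   # prints: text diff, using patch
```

The actual `git apply` / `patch` invocations are commented out so the sketch runs anywhere; the point is the dispatch on the binary marker.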





[jira] [Commented] (HADOOP-10926) Improve test-patch.sh to apply binary diffs

2014-10-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14186669#comment-14186669
 ] 

Hudson commented on HADOOP-10926:
-

SUCCESS: Integrated in Hadoop-trunk-Commit #6366 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/6366/])
Revert HADOOP-10926. Improve test-patch.sh to apply binary diffs (cmccabe) 
(stevel: rev c9bec46c92cc4df8d3247a3f235c303c8ae94655)
* hadoop-common-project/hadoop-common/CHANGES.txt
* dev-support/smart-apply-patch.sh


 Improve test-patch.sh to apply binary diffs
 ---

 Key: HADOOP-10926
 URL: https://issues.apache.org/jira/browse/HADOOP-10926
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 3.0.0
Reporter: Andrew Wang
Assignee: Colin Patrick McCabe
 Fix For: 3.0.0

 Attachments: HADOOP-10926.001.patch


 The Unix {{patch}} command cannot apply binary diffs as generated via {{git 
 diff --binary}}. This means we cannot get effective test-patch.sh runs when 
 the patch requires adding a binary file.
 We should consider using a different patch method.





[jira] [Commented] (HADOOP-11233) hadoop.security.kms.client.encrypted.key.cache.expiry property spelled wrong in core-default

2014-10-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14186721#comment-14186721
 ] 

Hudson commented on HADOOP-11233:
-

SUCCESS: Integrated in Hadoop-Yarn-trunk #726 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/726/])
HADOOP-11233. hadoop.security.kms.client.encrypted.key.cache.expiry property 
spelled wrong in core-default. (Stephen Chu via yliu) (yliu: rev 
e7859015bcc2cda99a64b1db18baab0e1ca5c155)
* hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt


 hadoop.security.kms.client.encrypted.key.cache.expiry property spelled wrong 
 in core-default
 

 Key: HADOOP-11233
 URL: https://issues.apache.org/jira/browse/HADOOP-11233
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 2.6.0
Reporter: Steve Loughran
Assignee: Stephen Chu
Priority: Minor
 Fix For: 2.6.0

 Attachments: HADOOP-11233.1.patch


 There's a spurious {{}} at the start of the kms cache entry
 {code}
 <property>
   <name>hadoop.security.kms.client.encrypted.key.cache.expiry</name>
   <value>4320</value>
   <description>
     Cache expiry time for a Key, after which the cache Queue for this
     key will be dropped. Default = 12hrs
   </description>
 </property>
 {code}





[jira] [Commented] (HADOOP-11236) NFS: Fix javadoc warning in RpcProgram.java

2014-10-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14186723#comment-14186723
 ] 

Hudson commented on HADOOP-11236:
-

SUCCESS: Integrated in Hadoop-Yarn-trunk #726 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/726/])
HADOOP-11236. NFS: Fix javadoc warning in RpcProgram.java. Contributed by 
Abhiraj Butala. (harsh) (harsh: rev 2429b31656808b0360ed3e77dcacf3c77e842e31)
* 
hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/oncrpc/RpcProgram.java
* hadoop-common-project/hadoop-common/CHANGES.txt


 NFS: Fix javadoc warning in RpcProgram.java
 ---

 Key: HADOOP-11236
 URL: https://issues.apache.org/jira/browse/HADOOP-11236
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Reporter: Abhiraj Butala
Assignee: Abhiraj Butala
Priority: Trivial
 Fix For: 2.7.0

 Attachments: HDFS-6675.patch


 Fix following javadoc warning during hadoop-nfs compilation:
 {code}
 :
 :
 [WARNING] Javadoc Warnings
 [WARNING] 
 /home/abutala/work/hadoop/hadoop-trunk/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/oncrpc/RpcProgram.java:73:
  warning - @param argument DatagramSocket is not a parameter name.
 {code}





[jira] [Commented] (HADOOP-10926) Improve test-patch.sh to apply binary diffs

2014-10-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14186732#comment-14186732
 ] 

Hudson commented on HADOOP-10926:
-

SUCCESS: Integrated in Hadoop-Yarn-trunk #726 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/726/])
HADOOP-10926. Improve test-patch.sh to apply binary diffs (cmccabe) (cmccabe: 
rev b0e19c9d54cecef191b91431f9ca62a76a000f45)
* hadoop-common-project/hadoop-common/CHANGES.txt
* dev-support/smart-apply-patch.sh
Revert HADOOP-10926. Improve test-patch.sh to apply binary diffs (cmccabe) 
(stevel: rev c9bec46c92cc4df8d3247a3f235c303c8ae94655)
* dev-support/smart-apply-patch.sh
* hadoop-common-project/hadoop-common/CHANGES.txt


 Improve test-patch.sh to apply binary diffs
 ---

 Key: HADOOP-10926
 URL: https://issues.apache.org/jira/browse/HADOOP-10926
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 3.0.0
Reporter: Andrew Wang
Assignee: Colin Patrick McCabe
 Fix For: 3.0.0

 Attachments: HADOOP-10926.001.patch


 The Unix {{patch}} command cannot apply binary diffs as generated via {{git 
 diff --binary}}. This means we cannot get effective test-patch.sh runs when 
 the patch requires adding a binary file.
 We should consider using a different patch method.





[jira] [Resolved] (HADOOP-11240) Jenkins build seems to be broken by changes in test-patch.sh

2014-10-28 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11240?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-11240.
-
   Resolution: Fixed
Fix Version/s: 3.0.0
 Assignee: Steve Loughran

after reverting HADOOP-10926 all seems well: patches are applying

 Jenkins build seems to be broken by changes in test-patch.sh
 

 Key: HADOOP-11240
 URL: https://issues.apache.org/jira/browse/HADOOP-11240
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Zhijie Shen
Assignee: Steve Loughran
Priority: Blocker
 Fix For: 3.0.0


 * https://builds.apache.org/job/PreCommit-YARN-Build/5596//console
 * https://builds.apache.org/job/PreCommit-YARN-Build/5595//console
 * https://builds.apache.org/job/PreCommit-YARN-Build/5597//console
 * https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/4981//console
 A couple of Jenkins builds failed for the same reason:
 {code}
 HEAD is now at b0e19c9 HADOOP-10926. Improve test-patch.sh to apply binary 
 diffs (cmccabe)
 Previous HEAD position was b0e19c9... HADOOP-10926. Improve test-patch.sh to 
 apply binary diffs (cmccabe)
 Switched to branch 'trunk'
 Your branch is behind 'origin/trunk' by 17 commits, and can be fast-forwarded.
   (use git pull to update your local branch)
 First, rewinding head to replay your work on top of it...
 Fast-forwarded trunk to b0e19c9d54cecef191b91431f9ca62a76a000f45.
 MAPREDUCE-5933 patch is being downloaded at Tue Oct 28 02:11:12 UTC 2014 from
 http://issues.apache.org/jira/secure/attachment/12677496/MAPREDUCE-5933.patch
 cp: cannot stat '/home/jenkins/buildSupport/lib/*': No such file or directory
 Error: Patch dryrun couldn't detect changes the patch would make. Exiting.
 PATCH APPLICATION FAILED
 {code}
 It seems to have been broken by HADOOP-10926





[jira] [Commented] (HADOOP-10926) Improve test-patch.sh to apply binary diffs

2014-10-28 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14186759#comment-14186759
 ] 

Steve Loughran commented on HADOOP-10926:
-

Having reverted this patch, Jenkins patches are applying again.

I know how near-impossible it is to test the Jenkins integration except by 
committing the patch; perhaps we ought to have some process in place for 
managing those commits.

I propose:
# do it at a weekend, when fewer people are submitting patches
# after applying the patch, submit some test patches to verify they take: 
ideally regression test patches (here, plain text patches) and feature test 
patches that exercise the new feature. If these patches were added to JIRAs 
and submitted before committing the change, we'd expect the regression patch to 
continue to apply and the feature patch to go from failing to working.

 Improve test-patch.sh to apply binary diffs
 ---

 Key: HADOOP-10926
 URL: https://issues.apache.org/jira/browse/HADOOP-10926
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 3.0.0
Reporter: Andrew Wang
Assignee: Colin Patrick McCabe
 Fix For: 3.0.0

 Attachments: HADOOP-10926.001.patch


 The Unix {{patch}} command cannot apply binary diffs as generated via {{git 
 diff --binary}}. This means we cannot get effective test-patch.sh runs when 
 the patch requires adding a binary file.
 We should consider using a different patch method.





[jira] [Commented] (HADOOP-11233) hadoop.security.kms.client.encrypted.key.cache.expiry property spelled wrong in core-default

2014-10-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14186820#comment-14186820
 ] 

Hudson commented on HADOOP-11233:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1940 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1940/])
HADOOP-11233. hadoop.security.kms.client.encrypted.key.cache.expiry property 
spelled wrong in core-default. (Stephen Chu via yliu) (yliu: rev 
e7859015bcc2cda99a64b1db18baab0e1ca5c155)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-common-project/hadoop-common/src/main/resources/core-default.xml


 hadoop.security.kms.client.encrypted.key.cache.expiry property spelled wrong 
 in core-default
 

 Key: HADOOP-11233
 URL: https://issues.apache.org/jira/browse/HADOOP-11233
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 2.6.0
Reporter: Steve Loughran
Assignee: Stephen Chu
Priority: Minor
 Fix For: 2.6.0

 Attachments: HADOOP-11233.1.patch


 There's a spurious {{}} at the start of the kms cache entry
 {code}
 <property>
   <name>hadoop.security.kms.client.encrypted.key.cache.expiry</name>
   <value>4320</value>
   <description>
     Cache expiry time for a Key, after which the cache Queue for this
     key will be dropped. Default = 12hrs
   </description>
 </property>
 {code}





[jira] [Commented] (HADOOP-11236) NFS: Fix javadoc warning in RpcProgram.java

2014-10-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14186822#comment-14186822
 ] 

Hudson commented on HADOOP-11236:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1940 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1940/])
HADOOP-11236. NFS: Fix javadoc warning in RpcProgram.java. Contributed by 
Abhiraj Butala. (harsh) (harsh: rev 2429b31656808b0360ed3e77dcacf3c77e842e31)
* 
hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/oncrpc/RpcProgram.java
* hadoop-common-project/hadoop-common/CHANGES.txt


 NFS: Fix javadoc warning in RpcProgram.java
 ---

 Key: HADOOP-11236
 URL: https://issues.apache.org/jira/browse/HADOOP-11236
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Reporter: Abhiraj Butala
Assignee: Abhiraj Butala
Priority: Trivial
 Fix For: 2.7.0

 Attachments: HDFS-6675.patch


 Fix following javadoc warning during hadoop-nfs compilation:
 {code}
 :
 :
 [WARNING] Javadoc Warnings
 [WARNING] 
 /home/abutala/work/hadoop/hadoop-trunk/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/oncrpc/RpcProgram.java:73:
  warning - @param argument DatagramSocket is not a parameter name.
 {code}





[jira] [Commented] (HADOOP-10926) Improve test-patch.sh to apply binary diffs

2014-10-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14186831#comment-14186831
 ] 

Hudson commented on HADOOP-10926:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1940 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1940/])
HADOOP-10926. Improve test-patch.sh to apply binary diffs (cmccabe) (cmccabe: 
rev b0e19c9d54cecef191b91431f9ca62a76a000f45)
* dev-support/smart-apply-patch.sh
* hadoop-common-project/hadoop-common/CHANGES.txt
Revert HADOOP-10926. Improve test-patch.sh to apply binary diffs (cmccabe) 
(stevel: rev c9bec46c92cc4df8d3247a3f235c303c8ae94655)
* hadoop-common-project/hadoop-common/CHANGES.txt
* dev-support/smart-apply-patch.sh


 Improve test-patch.sh to apply binary diffs
 ---

 Key: HADOOP-10926
 URL: https://issues.apache.org/jira/browse/HADOOP-10926
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 3.0.0
Reporter: Andrew Wang
Assignee: Colin Patrick McCabe
 Fix For: 3.0.0

 Attachments: HADOOP-10926.001.patch


 The Unix {{patch}} command cannot apply binary diffs as generated via {{git 
 diff --binary}}. This means we cannot get effective test-patch.sh runs when 
 the patch requires adding a binary file.
 We should consider using a different patch method.





[jira] [Commented] (HADOOP-11233) hadoop.security.kms.client.encrypted.key.cache.expiry property spelled wrong in core-default

2014-10-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14186882#comment-14186882
 ] 

Hudson commented on HADOOP-11233:
-

SUCCESS: Integrated in Hadoop-Hdfs-trunk #1915 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1915/])
HADOOP-11233. hadoop.security.kms.client.encrypted.key.cache.expiry property 
spelled wrong in core-default. (Stephen Chu via yliu) (yliu: rev 
e7859015bcc2cda99a64b1db18baab0e1ca5c155)
* hadoop-hdfs-project/hadoop-hdfs/CHANGES.txt
* hadoop-common-project/hadoop-common/src/main/resources/core-default.xml


 hadoop.security.kms.client.encrypted.key.cache.expiry property spelled wrong 
 in core-default
 

 Key: HADOOP-11233
 URL: https://issues.apache.org/jira/browse/HADOOP-11233
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 2.6.0
Reporter: Steve Loughran
Assignee: Stephen Chu
Priority: Minor
 Fix For: 2.6.0

 Attachments: HADOOP-11233.1.patch


 There's a spurious {{}} at the start of the kms cache entry
 {code}
 <property>
   <name>hadoop.security.kms.client.encrypted.key.cache.expiry</name>
   <value>4320</value>
   <description>
     Cache expiry time for a Key, after which the cache Queue for this
     key will be dropped. Default = 12hrs
   </description>
 </property>
 {code}





[jira] [Commented] (HADOOP-10926) Improve test-patch.sh to apply binary diffs

2014-10-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14186894#comment-14186894
 ] 

Hudson commented on HADOOP-10926:
-

SUCCESS: Integrated in Hadoop-Hdfs-trunk #1915 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1915/])
HADOOP-10926. Improve test-patch.sh to apply binary diffs (cmccabe) (cmccabe: 
rev b0e19c9d54cecef191b91431f9ca62a76a000f45)
* dev-support/smart-apply-patch.sh
* hadoop-common-project/hadoop-common/CHANGES.txt
Revert HADOOP-10926. Improve test-patch.sh to apply binary diffs (cmccabe) 
(stevel: rev c9bec46c92cc4df8d3247a3f235c303c8ae94655)
* hadoop-common-project/hadoop-common/CHANGES.txt
* dev-support/smart-apply-patch.sh


 Improve test-patch.sh to apply binary diffs
 ---

 Key: HADOOP-10926
 URL: https://issues.apache.org/jira/browse/HADOOP-10926
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 3.0.0
Reporter: Andrew Wang
Assignee: Colin Patrick McCabe
 Fix For: 3.0.0

 Attachments: HADOOP-10926.001.patch


 The Unix {{patch}} command cannot apply binary diffs as generated via {{git 
 diff --binary}}. This means we cannot get effective test-patch.sh runs when 
 the patch requires adding a binary file.
 We should consider using a different patch method.





[jira] [Commented] (HADOOP-11236) NFS: Fix javadoc warning in RpcProgram.java

2014-10-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14186884#comment-14186884
 ] 

Hudson commented on HADOOP-11236:
-

SUCCESS: Integrated in Hadoop-Hdfs-trunk #1915 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1915/])
HADOOP-11236. NFS: Fix javadoc warning in RpcProgram.java. Contributed by 
Abhiraj Butala. (harsh) (harsh: rev 2429b31656808b0360ed3e77dcacf3c77e842e31)
* 
hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/oncrpc/RpcProgram.java
* hadoop-common-project/hadoop-common/CHANGES.txt


 NFS: Fix javadoc warning in RpcProgram.java
 ---

 Key: HADOOP-11236
 URL: https://issues.apache.org/jira/browse/HADOOP-11236
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Reporter: Abhiraj Butala
Assignee: Abhiraj Butala
Priority: Trivial
 Fix For: 2.7.0

 Attachments: HDFS-6675.patch


 Fix following javadoc warning during hadoop-nfs compilation:
 {code}
 :
 :
 [WARNING] Javadoc Warnings
 [WARNING] 
 /home/abutala/work/hadoop/hadoop-trunk/hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/oncrpc/RpcProgram.java:73:
  warning - @param argument DatagramSocket is not a parameter name.
 {code}





[jira] [Commented] (HADOOP-9740) FsShell's Text command does not read avro data files stored on HDFS

2014-10-28 Thread Rushabh S Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9740?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14186977#comment-14186977
 ] 

Rushabh S Shah commented on HADOOP-9740:


Hey Doug,
I don't see this patch going into a branch-2 release; it only went into trunk.
Can someone from the community please commit this patch to branch-2?

 FsShell's Text command does not read avro data files stored on HDFS
 ---

 Key: HADOOP-9740
 URL: https://issues.apache.org/jira/browse/HADOOP-9740
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.0.5-alpha
Reporter: Allan Yan
Assignee: Allan Yan
  Labels: patch
 Attachments: HADOOP-9740.patch, HADOOP-9740.patch, 
 maven_unit_test_error.log


 HADOOP-8597 added support for reading avro data files from FsShell Text 
 command. However, it does not work with files stored on HDFS. Here is the 
 error message:
 {code}
 $hadoop fs -text hdfs://localhost:8020/test.avro
 -text: URI scheme is not file
 Usage: hadoop fs [generic options] -text [-ignoreCrc] <src> ...
 {code}
 The problem is that the File constructor cannot recognize the hdfs:// scheme 
 during AvroFileInputStream initialization.
 There is a unit test, TestTextCommand.java, under the hadoop-common project; 
 however, it only tests files on the local file system. I created a similar one 
 under the hadoop-hdfs project using MiniDFSCluster. Please see the attached 
 Maven unit test error message with the full stack trace for more details.
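The failure comes down to URI-scheme handling: a {{java.io.File}} can only represent local paths, so building one from an {{hdfs://}} URI is rejected, and anything on another scheme has to go through the FileSystem abstraction instead. A toy shell sketch of the scheme dispatch that produces the error shown above (illustrative only, not the Hadoop source):

```shell
#!/usr/bin/env bash
# Toy reproduction of the scheme check: java.io.File can only
# represent file: URIs, so any other scheme must be handled via
# the FileSystem abstraction rather than a raw File constructor.
uri_scheme() { printf '%s\n' "${1%%://*}"; }

open_for_text() {
  case $(uri_scheme "$1") in
    file) echo "ok: local File access" ;;
    *)    echo "-text: URI scheme is not file" ;;
  esac
}

open_for_text file:///tmp/test.avro              # prints: ok: local File access
open_for_text hdfs://localhost:8020/test.avro    # prints: -text: URI scheme is not file
```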
  
  





[jira] [Commented] (HADOOP-10926) Improve test-patch.sh to apply binary diffs

2014-10-28 Thread Xiaoyu Yao (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14187041#comment-14187041
 ] 

Xiaoyu Yao commented on HADOOP-10926:
-

We are still seeing the Jenkins error related to test-patch.sh. 

https://builds.apache.org/job/PreCommit-HDFS-Build/8570//console

Compiling /home/jenkins/jenkins-slave/workspace/PreCommit-HDFS-Build@2
/home/jenkins/tools/maven/latest/bin/mvn clean test -DskipTests -DHadoopPatchProcess -Ptest-patch > /home/jenkins/jenkins-slave/workspace/PreCommit-HDFS-Build@2/../patchprocess/trunkJavacWarnings.txt 2>&1
/home/jenkins/jenkins-slave/workspace/PreCommit-HDFS-Build@2/dev-support/test-patch.sh: line 270:  3108 Killed  $MVN clean test -DskipTests -D${PROJECT_NAME}PatchProcess -Ptest-patch > $PATCH_DIR/trunkJavacWarnings.txt 2>&1
Trunk compilation is broken?
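For context, test-patch.sh is redirecting Maven's combined stdout/stderr into trunkJavacWarnings.txt, and the `Killed` status means the Maven process died on a signal (e.g. the kernel OOM killer) rather than hitting a compile error. A small sketch of that capture-and-diagnose pattern (paths and names are illustrative):

```shell
#!/usr/bin/env bash
# Run a build step, capture combined stdout/stderr to a log file,
# and report whether the process exited cleanly or was killed by
# a signal (exit status 128+N, e.g. 137 for SIGKILL).
run_logged() {
  local log=$1; shift
  local status=0
  "$@" > "$log" 2>&1 || status=$?
  if [ "$status" -gt 128 ]; then
    echo "killed by signal $(( status - 128 ))"
  else
    echo "exit status $status"
  fi
}

run_logged /tmp/build.log true                   # prints: exit status 0
run_logged /tmp/build.log sh -c 'kill -9 $$'     # prints: killed by signal 9
```

Distinguishing a signal death from a nonzero compile result is what separates "trunk compilation is broken" from "the build slave ran out of memory".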



 Improve test-patch.sh to apply binary diffs
 ---

 Key: HADOOP-10926
 URL: https://issues.apache.org/jira/browse/HADOOP-10926
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 3.0.0
Reporter: Andrew Wang
Assignee: Colin Patrick McCabe
 Fix For: 3.0.0

 Attachments: HADOOP-10926.001.patch


 The Unix {{patch}} command cannot apply binary diffs as generated via {{git 
 diff --binary}}. This means we cannot get effective test-patch.sh runs when 
 the patch requires adding a binary file.
 We should consider using a different patch method.





[jira] [Commented] (HADOOP-11240) Jenkins build seems to be broken by changes in test-patch.sh

2014-10-28 Thread Zhijie Shen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14187137#comment-14187137
 ] 

Zhijie Shen commented on HADOOP-11240:
--

[~ste...@apache.org], thanks for unblocking the Jenkins build!

 Jenkins build seems to be broken by changes in test-patch.sh
 

 Key: HADOOP-11240
 URL: https://issues.apache.org/jira/browse/HADOOP-11240
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Zhijie Shen
Assignee: Steve Loughran
Priority: Blocker
 Fix For: 3.0.0


 * https://builds.apache.org/job/PreCommit-YARN-Build/5596//console
 * https://builds.apache.org/job/PreCommit-YARN-Build/5595//console
 * https://builds.apache.org/job/PreCommit-YARN-Build/5597//console
 * https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/4981//console
 A couple of Jenkins builds failed for the same reason:
 {code}
 HEAD is now at b0e19c9 HADOOP-10926. Improve test-patch.sh to apply binary 
 diffs (cmccabe)
 Previous HEAD position was b0e19c9... HADOOP-10926. Improve test-patch.sh to 
 apply binary diffs (cmccabe)
 Switched to branch 'trunk'
 Your branch is behind 'origin/trunk' by 17 commits, and can be fast-forwarded.
   (use git pull to update your local branch)
 First, rewinding head to replay your work on top of it...
 Fast-forwarded trunk to b0e19c9d54cecef191b91431f9ca62a76a000f45.
 MAPREDUCE-5933 patch is being downloaded at Tue Oct 28 02:11:12 UTC 2014 from
 http://issues.apache.org/jira/secure/attachment/12677496/MAPREDUCE-5933.patch
 cp: cannot stat '/home/jenkins/buildSupport/lib/*': No such file or directory
 Error: Patch dryrun couldn't detect changes the patch would make. Exiting.
 PATCH APPLICATION FAILED
 {code}
 It seems to have been broken by HADOOP-10926





[jira] [Commented] (HADOOP-10926) Improve test-patch.sh to apply binary diffs

2014-10-28 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14187146#comment-14187146
 ] 

Colin Patrick McCabe commented on HADOOP-10926:
---

Sorry for the inconvenience, [~stev...@iseran.com].  I tested with a bunch of 
patches, but as you said, it's hard to test every case.  It looks like the 
issue here is related to differences in what git apply produces on stdout as 
opposed to what GNU patch produces.

Thinking about this a little bit more, I think maybe the way to go here is to 
have a special case for patches that are clearly git patches generated 
*without* {{\-\-no-prefix}}.  These patches are always generated with reference 
to the project root, not to some other directory.  Even if git diff is run 
when inside a subdirectory, we know that the {{a/}} and {{b/}} actually stand 
for the project root.  This would allow us to skip a bunch of the crazy 
"let's figure out where the patch root is" logic, just for the special case of 
git patches generated with a prefix.

bq. Xiaoyu wrote: We are still seeing the Jenkins error related to 
test-patch.sh.  https://builds.apache.org/job/PreCommit-HDFS-Build/8570//console

The build error you linked to doesn't seem related to this change.  The error 
message is:

{code}
/home/jenkins/jenkins-slave/workspace/PreCommit-HDFS-Build@2/dev-support/test-patch.sh:
 line 270:  3108 Killed  $MVN clean test -DskipTests 
-D${PROJECT_NAME}PatchProcess -Ptest-patch > $PATCH_DIR/trunkJavacWarnings.txt 2>&1
Trunk compilation is broken?
{code}

The "Killed" means that test-patch.sh received a SIGKILL, possibly as a result 
of an OOM condition on the executor.

Also, the build you linked to is at commit 
58c0bb9ed9f4a2491395b63c68046562a73526c9, which is after the change has been 
reverted, so there is no code from this change actually being executed at this 
time.

 Improve test-patch.sh to apply binary diffs
 ---

 Key: HADOOP-10926
 URL: https://issues.apache.org/jira/browse/HADOOP-10926
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 3.0.0
Reporter: Andrew Wang
Assignee: Colin Patrick McCabe
 Fix For: 3.0.0

 Attachments: HADOOP-10926.001.patch


 The Unix {{patch}} command cannot apply binary diffs as generated via {{git 
 diff --binary}}. This means we cannot get effective test-patch.sh runs when 
 the patch requires adding a binary file.
 We should consider using a different patch method.





[jira] [Commented] (HADOOP-11240) Jenkins build seems to be broken by changes in test-patch.sh

2014-10-28 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14187149#comment-14187149
 ] 

Colin Patrick McCabe commented on HADOOP-11240:
---

Thanks for taking care of this, Steve, and sorry for the inconvenience.

 Jenkins build seems to be broken by changes in test-patch.sh
 

 Key: HADOOP-11240
 URL: https://issues.apache.org/jira/browse/HADOOP-11240
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Zhijie Shen
Assignee: Steve Loughran
Priority: Blocker
 Fix For: 3.0.0


 * https://builds.apache.org/job/PreCommit-YARN-Build/5596//console
 * https://builds.apache.org/job/PreCommit-YARN-Build/5595//console
 * https://builds.apache.org/job/PreCommit-YARN-Build/5597//console
 * https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/4981//console
 A couple jenkins build failure for the same reason:
 {code}
 HEAD is now at b0e19c9 HADOOP-10926. Improve test-patch.sh to apply binary 
 diffs (cmccabe)
 Previous HEAD position was b0e19c9... HADOOP-10926. Improve test-patch.sh to 
 apply binary diffs (cmccabe)
 Switched to branch 'trunk'
 Your branch is behind 'origin/trunk' by 17 commits, and can be fast-forwarded.
   (use git pull to update your local branch)
 First, rewinding head to replay your work on top of it...
 Fast-forwarded trunk to b0e19c9d54cecef191b91431f9ca62a76a000f45.
 MAPREDUCE-5933 patch is being downloaded at Tue Oct 28 02:11:12 UTC 2014 from
 http://issues.apache.org/jira/secure/attachment/12677496/MAPREDUCE-5933.patch
 cp: cannot stat '/home/jenkins/buildSupport/lib/*': No such file or directory
 Error: Patch dryrun couldn't detect changes the patch would make. Exiting.
 PATCH APPLICATION FAILED
 {code}
 It seems to have been broken by HADOOP-10926





[jira] [Commented] (HADOOP-11240) Jenkins build seems to be broken by changes in test-patch.sh

2014-10-28 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11240?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14187187#comment-14187187
 ] 

Steve Loughran commented on HADOOP-11240:
-

no worries - it's why you need committers in different time zones. We European 
developers get to find problems with broken -SNAPSHOT artifacts before the US 
does too.

 Jenkins build seems to be broken by changes in test-patch.sh
 

 Key: HADOOP-11240
 URL: https://issues.apache.org/jira/browse/HADOOP-11240
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Zhijie Shen
Assignee: Steve Loughran
Priority: Blocker
 Fix For: 3.0.0


 * https://builds.apache.org/job/PreCommit-YARN-Build/5596//console
 * https://builds.apache.org/job/PreCommit-YARN-Build/5595//console
 * https://builds.apache.org/job/PreCommit-YARN-Build/5597//console
 * https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/4981//console
 A couple jenkins build failure for the same reason:
 {code}
 HEAD is now at b0e19c9 HADOOP-10926. Improve test-patch.sh to apply binary 
 diffs (cmccabe)
 Previous HEAD position was b0e19c9... HADOOP-10926. Improve test-patch.sh to 
 apply binary diffs (cmccabe)
 Switched to branch 'trunk'
 Your branch is behind 'origin/trunk' by 17 commits, and can be fast-forwarded.
   (use git pull to update your local branch)
 First, rewinding head to replay your work on top of it...
 Fast-forwarded trunk to b0e19c9d54cecef191b91431f9ca62a76a000f45.
 MAPREDUCE-5933 patch is being downloaded at Tue Oct 28 02:11:12 UTC 2014 from
 http://issues.apache.org/jira/secure/attachment/12677496/MAPREDUCE-5933.patch
 cp: cannot stat '/home/jenkins/buildSupport/lib/*': No such file or directory
 Error: Patch dryrun couldn't detect changes the patch would make. Exiting.
 PATCH APPLICATION FAILED
 {code}
 It seems to have been broken by HADOOP-10926





[jira] [Commented] (HADOOP-11186) documentation should talk about hadoop.htrace.spanreceiver.classes, not hadoop.trace.spanreceiver.classes

2014-10-28 Thread Masatake Iwasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11186?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14187258#comment-14187258
 ] 

Masatake Iwasaki commented on HADOOP-11186:
---

+1(non-binding)

 documentation should talk about hadoop.htrace.spanreceiver.classes, not 
 hadoop.trace.spanreceiver.classes
 -

 Key: HADOOP-11186
 URL: https://issues.apache.org/jira/browse/HADOOP-11186
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Attachments: 0001-HADOOP-11186.patch


 The documentation should talk about hadoop.htrace.spanreceiver.classes, not 
 hadoop.trace.spanreceiver.classes (note the H)





[jira] [Updated] (HADOOP-10926) Improve test-patch.sh to apply binary diffs

2014-10-28 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10926?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-10926:
--
Attachment: HADOOP-10926.002.patch

This patch adds a special case for git patches generated without \-\-no-prefix 
as discussed above.  This is a lot easier to do, since we don't need to search 
for the patch root when applying such patches.  This avoids the need to parse 
the output on stdout of git apply and simplifies the application process for 
such patches a lot.  I use bash regexes in order to verify that a patch is a 
git patch.
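As an illustration of the bash-regex idea, a minimal sketch (hypothetical code, not the actual test-patch.sh implementation; the function name and file paths are made up) of recognizing a git-format patch generated with the default {{a/}} and {{b/}} prefixes:

```shell
#!/usr/bin/env bash
# Hypothetical sketch: detect a git-format patch generated WITHOUT
# --no-prefix.  Such patches contain headers like:
#   diff --git a/path/to/file b/path/to/file
# and the a/ and b/ prefixes always refer to the project root.
is_git_prefix_patch() {
  local line
  while IFS= read -r line; do
    if [[ $line =~ ^diff\ --git\ a/.+\ b/ ]]; then
      return 0
    fi
  done < "$1"
  return 1
}

printf 'diff --git a/pom.xml b/pom.xml\nindex 0000000..1111111 100644\n' \
  > /tmp/sample.patch
if is_git_prefix_patch /tmp/sample.patch; then
  # Such a patch can be applied from the project root with -p1, with no
  # need to search for the patch root first.
  echo "git-prefix patch detected"
fi
```

Because the prefix convention fixes the patch root, the detection step alone is enough to skip the root-searching logic for this class of patches.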

 Improve test-patch.sh to apply binary diffs
 ---

 Key: HADOOP-10926
 URL: https://issues.apache.org/jira/browse/HADOOP-10926
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 3.0.0
Reporter: Andrew Wang
Assignee: Colin Patrick McCabe
 Fix For: 3.0.0

 Attachments: HADOOP-10926.001.patch, HADOOP-10926.002.patch


 The Unix {{patch}} command cannot apply binary diffs as generated via {{git 
 diff --binary}}. This means we cannot get effective test-patch.sh runs when 
 the patch requires adding a binary file.
 We should consider using a different patch method.





[jira] [Commented] (HADOOP-10421) Enable Kerberos profiled UTs to run with IBM JAVA

2014-10-28 Thread Luke Browning (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14187291#comment-14187291
 ] 

Luke Browning commented on HADOOP-10421:


Any idea when this will be reviewed and committed?

Thanks,
Luke

 Enable Kerberos profiled UTs to run with IBM JAVA
 -

 Key: HADOOP-10421
 URL: https://issues.apache.org/jira/browse/HADOOP-10421
 Project: Hadoop Common
  Issue Type: Test
  Components: security, test
Affects Versions: 2.4.1
Reporter: Jinghui Wang
Assignee: Jinghui Wang
 Attachments: HADOOP-10421.patch


 KerberosTestUtils in hadoop-auth does not support IBM JAVA, which has 
 different Krb5LoginModule configuration options.





[jira] [Updated] (HADOOP-11226) ipc.Client has to use setTrafficClass() with IPTOS_LOWDELAY|IPTOS_RELIABILITY

2014-10-28 Thread Gopal V (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gopal V updated HADOOP-11226:
-
Attachment: HADOOP-11226.2.patch

 ipc.Client has to use setTrafficClass() with IPTOS_LOWDELAY|IPTOS_RELIABILITY
 -

 Key: HADOOP-11226
 URL: https://issues.apache.org/jira/browse/HADOOP-11226
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Affects Versions: 2.6.0
Reporter: Gopal V
Assignee: Gopal V
 Attachments: HADOOP-11226.1.patch, HADOOP-11226.2.patch


 During heavy shuffle, packet loss for IPC packets was observed from a machine.
 Avoid packet-loss and speed up transfer by using 0x14 QOS bits for the 
 packets.





[jira] [Resolved] (HADOOP-11239) [JDK8] azurenative tests fail builds on JDK8

2014-10-28 Thread Gopal V (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gopal V resolved HADOOP-11239.
--
Resolution: Not a Problem

 [JDK8] azurenative tests fail builds on JDK8
 

 Key: HADOOP-11239
 URL: https://issues.apache.org/jira/browse/HADOOP-11239
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 2.6.0
Reporter: Gopal V
Assignee: Gopal V
Priority: Trivial

 java.util.Base64 conflicts with 
 com.microsoft.windowsazure.storage.core.Base64 in Azure unit tests.





[jira] [Commented] (HADOOP-10847) Cleanup calling of sun.security.x509

2014-10-28 Thread Luke Browning (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14187365#comment-14187365
 ] 

Luke Browning commented on HADOOP-10847:


Any idea when this will be included?

Thanks, Luke

 Cleanup calling of sun.security.x509 
 -

 Key: HADOOP-10847
 URL: https://issues.apache.org/jira/browse/HADOOP-10847
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 3.0.0
Reporter: Kai Zheng
Priority: Minor
 Attachments: HADOOP-10847-1.patch


 As was told by Max (Oracle), JDK9 is likely to block all accesses to sun.* 
 classes.
 Below is from email of Andrew Purtell:
 {quote}
 The use of sun.* APIs to create a certificate in Hadoop and HBase test code 
 can be removed. Someone (Intel? Oracle?) can submit a JIRA that replaces the 
 programmatic construction with a stringified binary cert for use in the 
 relevant unit tests. 
 {quote}
 In Hadoop, the calls in question are below:
 {code}
 hadoop-common/src/test/java/org/apache/hadoop/security/ssl/KeyStoreTestUtil.java:24:import
  sun.security.x509.CertificateIssuerName;
 hadoop-common/src/test/java/org/apache/hadoop/security/ssl/KeyStoreTestUtil.java:25:import
  sun.security.x509.CertificateSerialNumber;
 hadoop-common/src/test/java/org/apache/hadoop/security/ssl/KeyStoreTestUtil.java:26:import
  sun.security.x509.CertificateSubjectName;
 hadoop-common/src/test/java/org/apache/hadoop/security/ssl/KeyStoreTestUtil.java:27:import
  sun.security.x509.CertificateValidity;
 hadoop-common/src/test/java/org/apache/hadoop/security/ssl/KeyStoreTestUtil.java:28:import
  sun.security.x509.CertificateVersion;
 hadoop-common/src/test/java/org/apache/hadoop/security/ssl/KeyStoreTestUtil.java:29:import
  sun.security.x509.CertificateX509Key;
 hadoop-common/src/test/java/org/apache/hadoop/security/ssl/KeyStoreTestUtil.java:30:import
  sun.security.x509.X500Name; 
 hadoop-common/src/test/java/org/apache/hadoop/security/ssl/KeyStoreTestUtil.java:31:import
  sun.security.x509.X509CertImpl; 
 hadoop-common/src/test/java/org/apache/hadoop/security/ssl/KeyStoreTestUtil.java:32:import
  sun.security.x509.X509CertInfo;
 {code}





[jira] [Commented] (HADOOP-10847) Cleanup calling of sun.security.x509

2014-10-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14187372#comment-14187372
 ] 

Hadoop QA commented on HADOOP-10847:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org
  against trunk revision e226b5b.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4969//console

This message is automatically generated.

 Cleanup calling of sun.security.x509 
 -

 Key: HADOOP-10847
 URL: https://issues.apache.org/jira/browse/HADOOP-10847
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 3.0.0
Reporter: Kai Zheng
Priority: Minor
 Attachments: HADOOP-10847-1.patch


 As was told by Max (Oracle), JDK9 is likely to block all accesses to sun.* 
 classes.
 Below is from email of Andrew Purtell:
 {quote}
 The use of sun.* APIs to create a certificate in Hadoop and HBase test code 
 can be removed. Someone (Intel? Oracle?) can submit a JIRA that replaces the 
 programmatic construction with a stringified binary cert for use in the 
 relevant unit tests. 
 {quote}
 In Hadoop, the calls in question are below:
 {code}
 hadoop-common/src/test/java/org/apache/hadoop/security/ssl/KeyStoreTestUtil.java:24:import
  sun.security.x509.CertificateIssuerName;
 hadoop-common/src/test/java/org/apache/hadoop/security/ssl/KeyStoreTestUtil.java:25:import
  sun.security.x509.CertificateSerialNumber;
 hadoop-common/src/test/java/org/apache/hadoop/security/ssl/KeyStoreTestUtil.java:26:import
  sun.security.x509.CertificateSubjectName;
 hadoop-common/src/test/java/org/apache/hadoop/security/ssl/KeyStoreTestUtil.java:27:import
  sun.security.x509.CertificateValidity;
 hadoop-common/src/test/java/org/apache/hadoop/security/ssl/KeyStoreTestUtil.java:28:import
  sun.security.x509.CertificateVersion;
 hadoop-common/src/test/java/org/apache/hadoop/security/ssl/KeyStoreTestUtil.java:29:import
  sun.security.x509.CertificateX509Key;
 hadoop-common/src/test/java/org/apache/hadoop/security/ssl/KeyStoreTestUtil.java:30:import
  sun.security.x509.X500Name; 
 hadoop-common/src/test/java/org/apache/hadoop/security/ssl/KeyStoreTestUtil.java:31:import
  sun.security.x509.X509CertImpl; 
 hadoop-common/src/test/java/org/apache/hadoop/security/ssl/KeyStoreTestUtil.java:32:import
  sun.security.x509.X509CertInfo;
 {code}





[jira] [Commented] (HADOOP-11216) Improve Openssl library finding

2014-10-28 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14187403#comment-14187403
 ] 

Colin Patrick McCabe commented on HADOOP-11216:
---

I thought about this a little bit more.  How openssl is named varies a lot from 
platform to platform.  It's not much of an exaggeration to say that it's a 
nightmare for packagers.  Adding to this problem, a lot of the OSes we support 
(like RHEL6.4) have versions of openssl installed by default that are too old 
for Hadoop.  So we're going to have to ask people to install a newer version 
manually in any case.  I think depending on a specific name suffix is going to 
create more problems than it solves, at least in the short term.  So I think we 
should just search for the no-suffix form, and error out if it's not found.
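To see how much the naming varies in practice, a quick probe like the following can be run on any box (the directory list is an assumption covering common distro layouts, not an exhaustive set):

```shell
# Illustrative probe: list which libcrypto variants a system actually ships.
# The directories below are common distro library paths; adjust as needed.
for dir in /lib64 /usr/lib64 /lib /usr/lib /usr/lib/x86_64-linux-gnu; do
  [ -d "$dir" ] || continue
  # Suppress the error when a directory has no libcrypto at all.
  ls "$dir"/libcrypto.so* 2>/dev/null
done | sort -u
```

On one distro this may print only a versioned soname such as libcrypto.so.1.0.0, on another only an unversioned libcrypto.so from a -devel package, which is why hard-coding a suffix is fragile.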

bq. 1. Existing openssl.prefix, openssl.lib, openssl.include will let people to 
specify custom openssl location, they will become -L and -I when doing compile 
using gcc. Yes, I agree people can set CMAKE_LIBRARY_PATH and 
CMAKE_INCLUDE_PATH. But snappy is in this way, should we make them consistent 
(keep openssl.**, or remove snappy.**)?

This is another thing I need to think about more.  If we're going to have 
custom installations of openssl, then maybe these variables need to exist.  
However, we also need a way of finding this stuff at runtime, as you pointed 
out.  I think we can use the same trick we used earlier where we add an RPATH 
to libhadoop.so that causes it to find the libcrypto in its own directory, if 
one exists.  Then the sysadmin can create a symlink inside the libhadoop.so 
directory to the hadoop-specific version of libcrypto.so.
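The RPATH-plus-symlink idea can be sketched as follows (all paths here are illustrative stand-ins, not the actual Hadoop build settings):

```shell
# Illustrative sketch of the RPATH trick.  At link time, libhadoop.so would
# get an RPATH of $ORIGIN so the dynamic linker searches libhadoop.so's own
# directory first at runtime, e.g. (hypothetical link line):
#   gcc -shared -o libhadoop.so ... -Wl,-rpath,'$ORIGIN' -lcrypto
#
# A sysadmin can then point Hadoop at a specific libcrypto by symlinking it
# next to libhadoop.so.  Demo directory stands in for $HADOOP_HOME/lib/native:
NATIVE_DIR=/tmp/hadoop-native-demo
mkdir -p "$NATIVE_DIR"
ln -sf /opt/openssl-1.0.1/lib/libcrypto.so.1.0.0 "$NATIVE_DIR/libcrypto.so"
readlink "$NATIVE_DIR/libcrypto.so"
```

With $ORIGIN in the RPATH, the symlinked libcrypto.so wins over whatever the system default resolver would pick, without touching ld.so.conf or LD_LIBRARY_PATH.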

 Improve Openssl library finding
 ---

 Key: HADOOP-11216
 URL: https://issues.apache.org/jira/browse/HADOOP-11216
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 2.6.0
Reporter: Yi Liu
Assignee: Colin Patrick McCabe
 Attachments: HADOOP-11216.003.patch


 When we compile Openssl 1.0.0\(x\) or 1.0.1\(x\) using default options, there 
 will be {{libcrypto.so.1.0.0}} in output lib dir, so we expect this version 
 suffix in cmake build file
 {code}
 SET(STORED_CMAKE_FIND_LIBRARY_SUFFIXES CMAKE_FIND_LIBRARY_SUFFIXES)
 set_find_shared_library_version(1.0.0)
 SET(OPENSSL_NAME crypto)
 
 {code}
 If we don't bundle the crypto shared library in Hadoop distribution, then 
 Hadoop will try to find crypto library in system path when running.
 But in a real Linux distribution, there may be no {{libcrypto.so.1.0.0}} or 
 {{libcrypto.so}} even if the system's embedded openssl is 1.0.1\(x\).  Then we 
 need to make a symbolic link.
 This JIRA is to improve the Openssl library finding.





[jira] [Created] (HADOOP-11242) Record the time of calling in tracing span of IPC server

2014-10-28 Thread Masatake Iwasaki (JIRA)
Masatake Iwasaki created HADOOP-11242:
-

 Summary: Record the time of calling in tracing span of IPC server
 Key: HADOOP-11242
 URL: https://issues.apache.org/jira/browse/HADOOP-11242
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc
Reporter: Masatake Iwasaki
Assignee: Masatake Iwasaki
Priority: Minor


Currently the tracing span starts when the Call is put into callQueue. Recording 
the time of the call is useful for debugging.





[jira] [Updated] (HADOOP-11242) Record the time of calling in tracing span of IPC server

2014-10-28 Thread Masatake Iwasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11242?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki updated HADOOP-11242:
--
Attachment: HADOOP-11242.1.patch

attaching patch.

 Record the time of calling in tracing span of IPC server
 

 Key: HADOOP-11242
 URL: https://issues.apache.org/jira/browse/HADOOP-11242
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc
Reporter: Masatake Iwasaki
Assignee: Masatake Iwasaki
Priority: Minor
 Attachments: HADOOP-11242.1.patch


 Currently the tracing span starts when the Call is put into callQueue. Recording 
 the time of the call is useful for debugging.





[jira] [Updated] (HADOOP-11242) Record the time of calling in tracing span of IPC server

2014-10-28 Thread Masatake Iwasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11242?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki updated HADOOP-11242:
--
Status: Patch Available  (was: Open)

 Record the time of calling in tracing span of IPC server
 

 Key: HADOOP-11242
 URL: https://issues.apache.org/jira/browse/HADOOP-11242
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc
Reporter: Masatake Iwasaki
Assignee: Masatake Iwasaki
Priority: Minor
 Attachments: HADOOP-11242.1.patch


 Currently the tracing span starts when the Call is put into callQueue. Recording 
 the time of the call is useful for debugging.





[jira] [Commented] (HADOOP-11242) Record the time of calling in tracing span of IPC server

2014-10-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11242?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14187494#comment-14187494
 ] 

Hadoop QA commented on HADOOP-11242:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org
  against trunk revision 8984e9b.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4970//console

This message is automatically generated.

 Record the time of calling in tracing span of IPC server
 

 Key: HADOOP-11242
 URL: https://issues.apache.org/jira/browse/HADOOP-11242
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc
Reporter: Masatake Iwasaki
Assignee: Masatake Iwasaki
Priority: Minor
 Attachments: HADOOP-11242.1.patch


 Currently the tracing span starts when the Call is put into callQueue. Recording 
 the time of the call is useful for debugging.





[jira] [Commented] (HADOOP-11157) ZKDelegationTokenSecretManager never shuts down listenerThreadPool

2014-10-28 Thread Gregory Chanan (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14187567#comment-14187567
 ] 

Gregory Chanan commented on HADOOP-11157:
-

[~kkambatl] while writing up a test as you requested, I found a number of other 
issues.  This will be kind of scatter-brained, sorry:

1) related to shutdown
- a) the ExpiredTokenThread is shut down after the ZKDelegationTokenSecretManager's 
curator, which causes an exception to be thrown and the process to exit.  This 
can be addressed by shutting down the ExpiredTokenThread before the curator.
- b) even with a), the ExpiredTokenThread is interrupted by 
AbstractDelegationTokenSecretManager.closeThreads...if the ExpiredTokenThread 
is currently rolling the master key or expiring tokens in ZK, the interruption 
will cause the process to exit.  It seems like this can be addressed by holding 
the noInterruptsLock while the ExpiredTokenThread is not sleeping (should be 
waiting), but I'm not sure if we want to go that route.  Perhaps alternatively 
we could deal with the interruption by checking if it's expected (i.e. if 
running is false).  One issue with that approach is that the 
ZKDelegationTokenSecretManager functions called from the ExpiredTokenThread 
don't throw or keep the interrupt flag, they just catch the exceptions and 
possibly throw them as a runtime exception.  I'm not sure if we can just 
swallow the InterruptedException -- presumably we need the ZK state to be in 
some reasonable state in case the process restarts?  Of course we have no tests 
of that...
2) not related to shutdown
- a) if you run TestZKDelegationTokenSecretManager#testCancelTokenSingleManager 
in a loop it will fail eventually.  It looks like the issue is how we deal with 
asynchronous ZK updates.
Consider the following code:
{code}
token = createToken
cancelToken(token)
verifyToken(token)
{code}
cancelToken will delete it from the local cache and delete the znode.  But the 
curator client will get the create child message (in the listener thread) and 
add the token back.  If that happens after cancelToken, the token will be added 
back until the listener thread gets the cancel message again.  (It also just 
occurred to me that this is happening in two different threads but some of the 
structures, like the currentToken, aren't thread safe).  The usual way to 
prevent this is to assign versions to the znodes so you can track whether you 
are getting an update for an old version.  I don't know how to deal with it in 
this case where deletes are a possibility and there doesn't appear to be a 
master that is responsible for writing (i.e. what is preventing some other 
SecretManager from recreating the token just after delete -- how would versions 
help with that?).  This may affect the keyCache as well as the tokenCache.

 ZKDelegationTokenSecretManager never shuts down listenerThreadPool
 --

 Key: HADOOP-11157
 URL: https://issues.apache.org/jira/browse/HADOOP-11157
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.6.0
Reporter: Gregory Chanan
Assignee: Gregory Chanan
 Attachments: HADOOP-11157.patch, HADOOP-11157.patch


 I'm trying to integrate Solr with the DelegationTokenAuthenticationFilter and 
 running into this issue.  The solr unit tests look for leaked threads and 
 when I started using the ZKDelegationTokenSecretManager it started reporting 
 leaks.  Shutting down the listenerThreadPool after the objects that use it 
 resolves the leaked-thread errors.





[jira] [Commented] (HADOOP-11157) ZKDelegationTokenSecretManager never shuts down listenerThreadPool

2014-10-28 Thread Gregory Chanan (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14187594#comment-14187594
 ] 

Gregory Chanan commented on HADOOP-11157:
-

Here's another issue I think could happen, but have no test for:
1) set up two SecretManagers sharing zk
2) get a delegation token from one
3) use on both
4) renew on one around token expiration time

Then, both SecretManagers will run the token expiration code and possibly 
expire the newly renewed token.

 ZKDelegationTokenSecretManager never shuts down listenerThreadPool
 --

 Key: HADOOP-11157
 URL: https://issues.apache.org/jira/browse/HADOOP-11157
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.6.0
Reporter: Gregory Chanan
Assignee: Gregory Chanan
 Attachments: HADOOP-11157.patch, HADOOP-11157.patch


 I'm trying to integrate Solr with the DelegationTokenAuthenticationFilter and 
 running into this issue.  The solr unit tests look for leaked threads and 
 when I started using the ZKDelegationTokenSecretManager it started reporting 
 leaks.  Shutting down the listenerThreadPool after the objects that use it 
 resolves the leaked-thread errors.





[jira] [Updated] (HADOOP-11216) Improve Openssl library finding

2014-10-28 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-11216:
--
Attachment: HADOOP-11216.004.patch

 Improve Openssl library finding
 ---

 Key: HADOOP-11216
 URL: https://issues.apache.org/jira/browse/HADOOP-11216
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 2.6.0
Reporter: Yi Liu
Assignee: Colin Patrick McCabe
 Attachments: HADOOP-11216.003.patch, HADOOP-11216.004.patch


 When we compile Openssl 1.0.0\(x\) or 1.0.1\(x\) using default options, there 
 will be {{libcrypto.so.1.0.0}} in output lib dir, so we expect this version 
 suffix in cmake build file
 {code}
 SET(STORED_CMAKE_FIND_LIBRARY_SUFFIXES CMAKE_FIND_LIBRARY_SUFFIXES)
 set_find_shared_library_version(1.0.0)
 SET(OPENSSL_NAME crypto)
 
 {code}
 If we don't bundle the crypto shared library in Hadoop distribution, then 
 Hadoop will try to find crypto library in system path when running.
 But in a real Linux distribution, there may be no {{libcrypto.so.1.0.0}} or 
 {{libcrypto.so}} even if the system's embedded openssl is 1.0.1\(x\).  Then we 
 need to make a symbolic link.
 This JIRA is to improve the Openssl library finding.





[jira] [Commented] (HADOOP-11216) Improve Openssl library finding

2014-10-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14187608#comment-14187608
 ] 

Hadoop QA commented on HADOOP-11216:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org
  against trunk revision 8984e9b.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4971//console

This message is automatically generated.

 Improve Openssl library finding
 ---

 Key: HADOOP-11216
 URL: https://issues.apache.org/jira/browse/HADOOP-11216
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 2.6.0
Reporter: Yi Liu
Assignee: Colin Patrick McCabe
 Attachments: HADOOP-11216.003.patch, HADOOP-11216.004.patch


 When we compile Openssl 1.0.0\(x\) or 1.0.1\(x\) using default options, there 
 will be {{libcrypto.so.1.0.0}} in output lib dir, so we expect this version 
 suffix in cmake build file
 {code}
 SET(STORED_CMAKE_FIND_LIBRARY_SUFFIXES CMAKE_FIND_LIBRARY_SUFFIXES)
 set_find_shared_library_version(1.0.0)
 SET(OPENSSL_NAME crypto)
 
 {code}
 If we don't bundle the crypto shared library in Hadoop distribution, then 
 Hadoop will try to find crypto library in system path when running.
 But in a real Linux distribution, there may be no {{libcrypto.so.1.0.0}} or 
 {{libcrypto.so}} even if the system's embedded openssl is 1.0.1\(x\).  Then we 
 need to make a symbolic link.
 This JIRA is to improve the Openssl library finding.





[jira] [Commented] (HADOOP-11217) Disable SSLv3 (POODLEbleed vulnerability) in KMS

2014-10-28 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14187629#comment-14187629
 ] 

Karthik Kambatla commented on HADOOP-11217:
---

The patch looks good to me. Discussed with Robert offline, and he clarified my 
one concern with the second output: it says {{Protocol : SSLv3}} and {{Verify 
return code: 0 (ok)}}. 

Can some security expert eyeball the changes as well? 


 Disable SSLv3 (POODLEbleed vulnerability) in KMS
 

 Key: HADOOP-11217
 URL: https://issues.apache.org/jira/browse/HADOOP-11217
 Project: Hadoop Common
  Issue Type: Bug
  Components: kms
Affects Versions: 2.6.0
Reporter: Robert Kanter
Assignee: Robert Kanter
Priority: Blocker
 Attachments: HADOOP-11217.patch, HADOOP-11217.patch


 We should disable SSLv3 in KMS to protect against the POODLEbleed 
 vulnerability.
 See 
 [CVE-2014-3566|http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2014-3566]
 We have {{sslProtocol=TLS}} set in ssl-server.xml to only allow TLS, but 
 when I checked, I could still connect with SSLv3.  The Tomcat documentation 
 is somewhat unclear about the difference between {{sslProtocol}}, 
 {{sslProtocols}}, and {{sslEnabledProtocols}}, and about what exactly each 
 value does.  From what I can gather, {{sslProtocol=TLS}} actually includes 
 SSLv3, and the only way to fix this is to explicitly list which TLS versions 
 we support.
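For reference, the explicit-list fix looks roughly like the following Tomcat connector fragment. This is a sketch, not the actual KMS server.xml; port and keystore values are placeholders, and as far as I can tell from the Tomcat docs the attribute name differs across versions ({{sslProtocols}} on Tomcat 6 versus {{sslEnabledProtocols}} on Tomcat 7 and later):

```xml
<!-- Sketch only: list the allowed TLS versions explicitly instead of
     relying on sslProtocol="TLS", which can still negotiate SSLv3 on
     older JVMs. Other attributes omitted. -->
<Connector port="16000" scheme="https" secure="true" SSLEnabled="true"
           keystoreFile="${kms.ssl.keystore.file}"
           sslEnabledProtocols="TLSv1,TLSv1.1,TLSv1.2"/>
```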





[jira] [Updated] (HADOOP-11218) Add TLSv1.1,TLSv1.2 to KMS

2014-10-28 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-11218:
--
Priority: Critical  (was: Major)

 Add TLSv1.1,TLSv1.2 to KMS
 --

 Key: HADOOP-11218
 URL: https://issues.apache.org/jira/browse/HADOOP-11218
 Project: Hadoop Common
  Issue Type: Bug
  Components: kms
Affects Versions: 2.7.0
Reporter: Robert Kanter
Priority: Critical

 HADOOP-11217 required us to specifically list the versions of TLS that KMS 
 supports. With Hadoop 2.7 dropping support for Java 6 and Java 7 supporting 
 TLSv1.1 and TLSv1.2, we should add them to the list.





[jira] [Commented] (HADOOP-11216) Improve Openssl library finding

2014-10-28 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14187679#comment-14187679
 ] 

Colin Patrick McCabe commented on HADOOP-11216:
---

* This patch sets bundling to false by default, but doesn't remove the 
openssl.prefix, openssl.include, or openssl.library properties.

* It fixes a bug where {{STORED_CMAKE_FIND_LIBRARY_SUFFIXES}} was not being 
correctly preserved.

* It adds a compile-time check that the openssl version we're compiling against 
is not too old.

* We now link against {{libcrypto.so}} (no suffix).  This avoids all the issues 
with distro (and distro-version)-specific suffixes.  The user can supply 
openssl in a few different ways:
** Installing the openssl-dev package for the distro, if the distro is new 
enough.  This will create a libcrypto.so (no suffix) symlink.  We don't have to 
play the suffix guessing game because devel packages always include a no-suffix 
version.
** Bundling openssl.  I don't anticipate that any major hadoop distribution 
will do this.  It would require us to update Hadoop each time an openssl 
vulnerability was found.  It also has some export control issues.
** Doing a custom install of openssl and creating a symlink from the Hadoop 
library path to it.  This should only be necessary on older distros that don't 
have a new enough openssl version.  This is also the case where we may need 
openssl.suffix and the rest.

Take a look...
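In CMake terms, the no-suffix approach amounts to dropping the {{set_find_shared_library_version(1.0.0)}} game and searching for the plain name. A sketch, not the actual 004 patch; {{CUSTOM_OPENSSL_LIB}} is a hypothetical variable standing in for an openssl.lib-style override:

```cmake
# Sketch: find plain libcrypto.so and let the devel package's (or the
# admin-created) no-suffix symlink resolve to the distro's versioned soname.
# CUSTOM_OPENSSL_LIB is hypothetical, standing for a user-supplied override.
if (CUSTOM_OPENSSL_LIB)
    find_library(OPENSSL_LIBRARY NAMES crypto
                 PATHS ${CUSTOM_OPENSSL_LIB} NO_DEFAULT_PATH)
else ()
    find_library(OPENSSL_LIBRARY NAMES crypto)
endif ()
```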



[jira] [Commented] (HADOOP-11217) Disable SSLv3 (POODLEbleed vulnerability) in KMS

2014-10-28 Thread Aaron T. Myers (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14187698#comment-14187698
 ] 

Aaron T. Myers commented on HADOOP-11217:
-

+1, this change looks good to me as well.

Thanks, Robert and Karthik.



[jira] [Issue Comment Deleted] (HADOOP-11241) TestNMSimulator fails sometimes due to timing issue

2014-10-28 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-11241:
--
Comment: was deleted

(was: {color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org
  against trunk revision 0398db1.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4967//console

This message is automatically generated.)

 TestNMSimulator fails sometimes due to timing issue
 ---

 Key: HADOOP-11241
 URL: https://issues.apache.org/jira/browse/HADOOP-11241
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Varun Vasudev
Assignee: Varun Vasudev
 Attachments: apache-hadoop-11241.0.patch


 TestNMSimulator fails sometimes due to timing issues. From a failure -
 {noformat}
 2014-10-16 23:21:42,343 INFO  resourcemanager.ResourceTrackerService 
 (ResourceTrackerService.java:registerNodeManager(337)) - NodeManager from 
 node node1(cmPort: 0 httpPort: 80) registered with capability: memory:10240, 
 vCores:10, assigned nodeId node1:0
 2014-10-16 23:21:42,397 ERROR delegation.AbstractDelegationTokenSecretManager 
 (AbstractDelegationTokenSecretManager.java:run(642)) - ExpiredTokenRemover 
 received java.lang.InterruptedException: sleep interrupted
 2014-10-16 23:21:42,400 INFO  rmnode.RMNodeImpl (RMNodeImpl.java:handle(423)) 
 - node1:0 Node Transitioned from NEW to RUNNING
 2014-10-16 23:21:42,404 INFO  fair.FairScheduler 
 (FairScheduler.java:addNode(825)) - Added node node1:0 cluster capacity: 
 memory:10240, vCores:10
 2014-10-16 23:21:42,407 INFO  mortbay.log (Slf4jLog.java:info(67)) - Stopped 
 HttpServer2$SelectChannelConnectorWithSafeStartup@localhost:18088
 2014-10-16 23:21:42,409 ERROR delegation.AbstractDelegationTokenSecretManager 
 (AbstractDelegationTokenSecretManager.java:run(642)) - ExpiredTokenRemover 
 received java.lang.InterruptedException: sleep interrupted
 2014-10-16 23:21:42,410 INFO  ipc.Server (Server.java:stop(2437)) - Stopping 
 server on 18032
 2014-10-16 23:21:42,412 INFO  ipc.Server (Server.java:run(706)) - Stopping 
 IPC Server listener on 18032
 2014-10-16 23:21:42,412 INFO  ipc.Server (Server.java:run(832)) - Stopping 
 IPC Server Responder
 {noformat}
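A common fix for this class of flakiness (the thread doesn't spell out the actual patch) is to poll for the expected state with a timeout instead of asserting after a fixed delay, in the spirit of Hadoop's GenericTestUtils.waitFor. A hypothetical sketch of such a helper:

```python
# Hypothetical polling helper: retry a check until it passes or a timeout
# elapses, instead of asserting on a racy fixed-delay snapshot.
import time

def wait_for(check, timeout_s=10.0, interval_s=0.1):
    """Poll check() until it returns True; raise on timeout."""
    deadline = time.monotonic() + timeout_s
    while True:
        if check():
            return
        if time.monotonic() >= deadline:
            raise TimeoutError("condition not met within %.1fs" % timeout_s)
        time.sleep(interval_s)
```

A test would then call, e.g., `wait_for(lambda: node.state == "RUNNING")` rather than sleeping a fixed interval before asserting.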





[jira] [Issue Comment Deleted] (HADOOP-11241) TestNMSimulator fails sometimes due to timing issue

2014-10-28 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-11241:
--
Comment: was deleted

(was: {color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12677561/apache-yarn-2763.0.patch
  against trunk revision 0398db1.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: https://builds.apache.org/job/PreCommit-YARN-Build/5602//console

This message is automatically generated.)



[jira] [Issue Comment Deleted] (HADOOP-11241) TestNMSimulator fails sometimes due to timing issue

2014-10-28 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-11241:
--
Comment: was deleted

(was: {color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org
  against trunk revision 0398db1.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4966//console

This message is automatically generated.)



[jira] [Commented] (HADOOP-10847) Cleanup calling of sun.security.x509

2014-10-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14187711#comment-14187711
 ] 

Hadoop QA commented on HADOOP-10847:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12659153/HADOOP-10847-1.patch
  against trunk revision 8984e9b.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4974//console

This message is automatically generated.

 Cleanup calling of sun.security.x509 
 -

 Key: HADOOP-10847
 URL: https://issues.apache.org/jira/browse/HADOOP-10847
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 3.0.0
Reporter: Kai Zheng
Priority: Minor
 Attachments: HADOOP-10847-1.patch


 As Max (Oracle) told us, JDK 9 is likely to block all access to sun.* 
 classes.
 Below is from email of Andrew Purtell:
 {quote}
 The use of sun.* APIs to create a certificate in Hadoop and HBase test code 
 can be removed. Someone (Intel? Oracle?) can submit a JIRA that replaces the 
 programmatic construction with a stringified binary cert for use in the 
 relevant unit tests. 
 {quote}
 In Hadoop, the calls in question are below:
 {code}
 hadoop-common/src/test/java/org/apache/hadoop/security/ssl/KeyStoreTestUtil.java:24:import
  sun.security.x509.CertificateIssuerName;
 hadoop-common/src/test/java/org/apache/hadoop/security/ssl/KeyStoreTestUtil.java:25:import
  sun.security.x509.CertificateSerialNumber;
 hadoop-common/src/test/java/org/apache/hadoop/security/ssl/KeyStoreTestUtil.java:26:import
  sun.security.x509.CertificateSubjectName;
 hadoop-common/src/test/java/org/apache/hadoop/security/ssl/KeyStoreTestUtil.java:27:import
  sun.security.x509.CertificateValidity;
 hadoop-common/src/test/java/org/apache/hadoop/security/ssl/KeyStoreTestUtil.java:28:import
  sun.security.x509.CertificateVersion;
 hadoop-common/src/test/java/org/apache/hadoop/security/ssl/KeyStoreTestUtil.java:29:import
  sun.security.x509.CertificateX509Key;
 hadoop-common/src/test/java/org/apache/hadoop/security/ssl/KeyStoreTestUtil.java:30:import
  sun.security.x509.X500Name; 
 hadoop-common/src/test/java/org/apache/hadoop/security/ssl/KeyStoreTestUtil.java:31:import
  sun.security.x509.X509CertImpl; 
 hadoop-common/src/test/java/org/apache/hadoop/security/ssl/KeyStoreTestUtil.java:32:import
  sun.security.x509.X509CertInfo;
 {code}





[jira] [Issue Comment Deleted] (HADOOP-11241) TestNMSimulator fails sometimes due to timing issue

2014-10-28 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11241?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-11241:
--
Comment: was deleted

(was: {color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org
  against trunk revision 0398db1.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4968//console

This message is automatically generated.)



[jira] [Updated] (HADOOP-11238) Group cache expiry causes namenode slowdown

2014-10-28 Thread Chris Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11238?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Li updated HADOOP-11238:
--
Description: 
Our namenode pauses for 12-60 seconds several times every hour or so. During 
these pauses, no new requests can come in.

Around the time of pauses, we have log messages such as:
2014-10-22 13:24:22,688 WARN org.apache.hadoop.security.Groups: Potential 
performance problem: getGroups(user=x) took 34507 milliseconds.

The current theory is:
1. Groups has a cache that is refreshed periodically. Each entry has a cache 
expiry.
2. When a cache entry expires, multiple threads can see this expiration, and 
then we have a thundering herd effect where all these threads hit the wire and 
overwhelm our LDAP servers (we are using ShellBasedUnixGroupsMapping with sssd; 
how this happens has yet to be established).
3. Group resolution queries begin to take longer; I've observed them taking 1.2 
seconds instead of the usual 0.01-0.03 seconds when measuring in the shell with 
`time groups myself`.
4. If there is mutual exclusion somewhere along this path, a 1-second pause 
could lead to a 60-second pause as all the threads compete for the resource. 
The exact cause hasn't been established.

Potential solutions include:
1. Increasing group cache time, which will make the issue less frequent
2. Rolling evictions of the cache so we prevent the large spike in LDAP queries
3. Gate the cache refresh so that only one thread is responsible for refreshing 
the cache



  was:
Our namenode pauses for 12-60 seconds several times every hour or so. During 
these pauses, no new requests can come in.

Around the time of pauses, we have log messages such as:
2014-10-22 13:24:22,688 WARN org.apache.hadoop.security.Groups: Potential 
performance problem: getGroups(user=x) took 34507 milliseconds.

The current theory is:
1. Groups has a cache that is refreshed periodically. 
2. When the cache is cleared, we have a thundering herd effect which overwhelms 
our LDAP servers (we are using ShellBasedUnixGroupsMapping with sssd, how this 
happens has yet to be established)
3. group resolution queries begin to take longer, I've observed it taking 1.2 
seconds instead of the usual 0.01-0.03 seconds when measuring in the shell 
`time groups myself`
4. If there is mutual exclusion somewhere along this path, a 1 second pause 
could lead to a 60 second pause as all the threads compete for the resource. 
The exact cause hasn't been established

Potential solutions include:
1. Increasing group cache time, which will make the issue less frequent
2. Rolling evictions of the cache so we prevent the large spike in LDAP queries




 Group cache expiry causes namenode slowdown
 ---

 Key: HADOOP-11238
 URL: https://issues.apache.org/jira/browse/HADOOP-11238
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.5.1
Reporter: Chris Li
Assignee: Chris Li
Priority: Minor

 Our namenode pauses for 12-60 seconds several times every hour or so. During 
 these pauses, no new requests can come in.
 Around the time of pauses, we have log messages such as:
 2014-10-22 13:24:22,688 WARN org.apache.hadoop.security.Groups: Potential 
 performance problem: getGroups(user=x) took 34507 milliseconds.
 The current theory is:
 1. Groups has a cache that is refreshed periodically. Each entry has a cache 
 expiry.
 2. When a cache entry expires, multiple threads can see this expiration and 
 then we have a thundering herd effect where all these threads hit the wire 
 and overwhelm our LDAP servers (we are using ShellBasedUnixGroupsMapping with 
 sssd, how this happens has yet to be established)
 3. group resolution queries begin to take longer, I've observed it taking 1.2 
 seconds instead of the usual 0.01-0.03 seconds when measuring in the shell 
 `time groups myself`
 4. If there is mutual exclusion somewhere along this path, a 1 second pause 
 could lead to a 60 second pause as all the threads compete for the resource. 
 The exact cause hasn't been established
 Potential solutions include:
 1. Increasing group cache time, which will make the issue less frequent
 2. Rolling evictions of the cache so we prevent the large spike in LDAP 
 queries
 3. Gate the cache refresh so that only one thread is responsible for 
 refreshing the cache
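Solution 3 above (gating the refresh) is essentially a single-flight pattern. Here is a minimal, hypothetical sketch in Python (not Hadoop's actual Groups implementation, which is Java): one thread refreshes an expired entry while the others keep serving the stale value instead of piling onto LDAP.

```python
# Hypothetical single-flight group cache (not Hadoop's Groups class).
# Only one thread refreshes an expired entry; concurrent callers get the
# stale value. (A cold miss still lets multiple threads load; the herd
# problem being addressed here is at entry expiry.)
import threading
import time

class GroupCache:
    def __init__(self, loader, ttl=300.0):
        self._loader = loader            # expensive lookup, e.g. LDAP via sssd
        self._ttl = ttl
        self._lock = threading.Lock()
        self._entries = {}               # user -> (groups, expiry timestamp)
        self._refreshing = set()         # users with an in-flight refresh

    def get_groups(self, user):
        with self._lock:
            entry = self._entries.get(user)
            if entry is not None:
                groups, expiry = entry
                if time.monotonic() < expiry or user in self._refreshing:
                    # Fresh hit, or stale but someone else is already
                    # refreshing: serve the cached value, don't pile on.
                    return groups
            # Expired (or cold miss) with no refresh in flight:
            # this thread becomes the single refresher.
            self._refreshing.add(user)
        try:
            groups = self._loader(user)  # slow call happens outside the lock
            with self._lock:
                self._entries[user] = (groups, time.monotonic() + self._ttl)
        finally:
            with self._lock:
                self._refreshing.discard(user)
        return groups
```

With this gating, the expiry of a hot entry costs exactly one backend lookup instead of one per waiting handler thread.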





[jira] [Updated] (HADOOP-11238) Group cache expiry causes namenode slowdown

2014-10-28 Thread Chris Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11238?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Li updated HADOOP-11238:
--
Description: 
Our namenode pauses for 12-60 seconds several times every hour. During these 
pauses, no new requests can come in.

Around the time of pauses, we have log messages such as:
2014-10-22 13:24:22,688 WARN org.apache.hadoop.security.Groups: Potential 
performance problem: getGroups(user=x) took 34507 milliseconds.

The current theory is:
1. Groups has a cache that is refreshed periodically. Each entry has a cache 
expiry.
2. When a cache entry expires, multiple threads can see this expiration, and 
then we have a thundering herd effect where all these threads hit the wire and 
overwhelm our LDAP servers (we are using ShellBasedUnixGroupsMapping with sssd; 
how this happens has yet to be established).
3. Group resolution queries begin to take longer; I've observed them taking 1.2 
seconds instead of the usual 0.01-0.03 seconds when measuring in the shell with 
`time groups myself`.
4. If there is mutual exclusion somewhere along this path, a 1-second pause 
could lead to a 60-second pause as all the threads compete for the resource. 
The exact cause hasn't been established.

Potential solutions include:
1. Increasing group cache time, which will make the issue less frequent
2. Rolling evictions of the cache so we prevent the large spike in LDAP queries
3. Gate the cache refresh so that only one thread is responsible for refreshing 
the cache



  was:
Our namenode pauses for 12-60 seconds several times every hour or so. During 
these pauses, no new requests can come in.

Around the time of pauses, we have log messages such as:
2014-10-22 13:24:22,688 WARN org.apache.hadoop.security.Groups: Potential 
performance problem: getGroups(user=x) took 34507 milliseconds.

The current theory is:
1. Groups has a cache that is refreshed periodically. Each entry has a cache 
expiry.
2. When a cache entry expires, multiple threads can see this expiration and 
then we have a thundering herd effect where all these threads hit the wire and 
overwhelm our LDAP servers (we are using ShellBasedUnixGroupsMapping with sssd, 
how this happens has yet to be established)
3. group resolution queries begin to take longer, I've observed it taking 1.2 
seconds instead of the usual 0.01-0.03 seconds when measuring in the shell 
`time groups myself`
4. If there is mutual exclusion somewhere along this path, a 1 second pause 
could lead to a 60 second pause as all the threads compete for the resource. 
The exact cause hasn't been established

Potential solutions include:
1. Increasing group cache time, which will make the issue less frequent
2. Rolling evictions of the cache so we prevent the large spike in LDAP queries
3. Gate the cache refresh so that only one thread is responsible for refreshing 
the cache




 Group cache expiry causes namenode slowdown
 ---

 Key: HADOOP-11238
 URL: https://issues.apache.org/jira/browse/HADOOP-11238
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.5.1
Reporter: Chris Li
Assignee: Chris Li
Priority: Minor

 Our namenode pauses for 12-60 seconds several times every hour. During these 
 pauses, no new requests can come in.
 Around the time of pauses, we have log messages such as:
 2014-10-22 13:24:22,688 WARN org.apache.hadoop.security.Groups: Potential 
 performance problem: getGroups(user=x) took 34507 milliseconds.
 The current theory is:
 1. Groups has a cache that is refreshed periodically. Each entry has a cache 
 expiry.
 2. When a cache entry expires, multiple threads can see this expiration and 
 then we have a thundering herd effect where all these threads hit the wire 
 and overwhelm our LDAP servers (we are using ShellBasedUnixGroupsMapping with 
 sssd, how this happens has yet to be established)
 3. group resolution queries begin to take longer, I've observed it taking 1.2 
 seconds instead of the usual 0.01-0.03 seconds when measuring in the shell 
 `time groups myself`
 4. If there is mutual exclusion somewhere along this path, a 1 second pause 
 could lead to a 60 second pause as all the threads compete for the resource. 
 The exact cause hasn't been established
 Potential solutions include:
 1. Increasing group cache time, which will make the issue less frequent
 2. Rolling evictions of the cache so we prevent the large spike in LDAP 
 queries
 3. Gate the cache refresh so that only one thread is responsible for 
 refreshing the cache





[jira] [Commented] (HADOOP-11241) TestNMSimulator fails sometimes due to timing issue

2014-10-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11241?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14187742#comment-14187742
 ] 

Hadoop QA commented on HADOOP-11241:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12677568/apache-hadoop-11241.0.patch
  against trunk revision 8984e9b.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-tools/hadoop-sls.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4973//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4973//console

This message is automatically generated.



[jira] [Commented] (HADOOP-11216) Improve Openssl library finding

2014-10-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14187764#comment-14187764
 ] 

Hadoop QA commented on HADOOP-11216:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12677724/HADOOP-11216.004.patch
  against trunk revision 8984e9b.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common:

  org.apache.hadoop.ha.TestZKFailoverControllerStress

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4972//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4972//console

This message is automatically generated.

 Improve Openssl library finding
 ---

 Key: HADOOP-11216
 URL: https://issues.apache.org/jira/browse/HADOOP-11216
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 2.6.0
Reporter: Yi Liu
Assignee: Colin Patrick McCabe
 Attachments: HADOOP-11216.003.patch, HADOOP-11216.004.patch


 When we compile Openssl 1.0.0\(x\) or 1.0.1\(x\) using default options, there 
 will be {{libcrypto.so.1.0.0}} in the output lib dir, so we expect this 
 version suffix in the cmake build file
 {code}
 SET(STORED_CMAKE_FIND_LIBRARY_SUFFIXES CMAKE_FIND_LIBRARY_SUFFIXES)
 set_find_shared_library_version(1.0.0)
 SET(OPENSSL_NAME crypto)
 
 {code}
 If we don't bundle the crypto shared library in the Hadoop distribution, then 
 Hadoop will try to find the crypto library in the system path at runtime.
 But in a real Linux distribution, there may be no {{libcrypto.so.1.0.0}} or 
 {{libcrypto.so}} even if the system's embedded openssl is 1.0.1\(x\); then we 
 need to make a symbolic link.
 This JIRA is to improve the Openssl library finding.
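As a sketch of the fallback idea (hypothetical Java, not Hadoop's actual native loader, which resolves the library via JNI/dlopen), the search can try the versioned name first and fall back to less specific names:

```java
import java.util.Arrays;
import java.util.List;
import java.util.function.Predicate;

public class CryptoLibFinder {
    // Candidate library names, most specific first (illustrative list only).
    static final List<String> CANDIDATES = Arrays.asList(
        "libcrypto.so.1.0.0", "libcrypto.so.1.0", "libcrypto.so");

    /** Return the first candidate the probe accepts, or null if none match. */
    static String findFirst(List<String> names, Predicate<String> exists) {
        for (String name : names) {
            if (exists.test(name)) {
                return name;
            }
        }
        return null;
    }

    public static void main(String[] args) {
        // Simulate a distro that only ships the unversioned symlink.
        Predicate<String> fakeSystem = n -> n.equals("libcrypto.so");
        System.out.println(findFirst(CANDIDATES, fakeSystem)); // prints libcrypto.so
    }
}
```

The same ordering works at both compile time (cmake) and runtime: prefer the exact soname, degrade gracefully to whatever the distro provides.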



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11217) Disable SSLv3 in KMS

2014-10-28 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-11217:
--
Summary: Disable SSLv3 in KMS  (was: Disable SSLv3 (POODLEbleed 
vulnerability) in KMS)

 Disable SSLv3 in KMS
 

 Key: HADOOP-11217
 URL: https://issues.apache.org/jira/browse/HADOOP-11217
 Project: Hadoop Common
  Issue Type: Bug
  Components: kms
Affects Versions: 2.6.0
Reporter: Robert Kanter
Assignee: Robert Kanter
Priority: Blocker
 Attachments: HADOOP-11217.patch, HADOOP-11217.patch


 We should disable SSLv3 in KMS to protect against the POODLE 
 vulnerability.
 See 
 [CVE-2014-3566|http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2014-3566]
 We have {{sslProtocol=TLS}} set to allow only TLS in ssl-server.xml, but 
 when I checked, I could still connect with SSLv3.  The tomcat documentation is 
 somewhat unclear about the difference between {{sslProtocol}}, 
 {{sslProtocols}}, and {{sslEnabledProtocols}} and exactly which values each 
 takes and what each does.  From what I can gather, {{sslProtocol=TLS}} 
 actually includes SSLv3, and the only way to fix this is to explicitly list 
 which TLS versions we support.
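For illustration, the Java-side equivalent of that fix is to enumerate the enabled protocols explicitly; this is a minimal JSSE sketch (not the tomcat/KMS change itself, and the single-version list here is an assumption for brevity):

```java
import java.util.Arrays;
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLEngine;

public class TlsOnly {
    /** Create an engine restricted to an explicit TLS version list. */
    public static SSLEngine tlsOnlyEngine() throws Exception {
        SSLContext ctx = SSLContext.getInstance("TLS");
        ctx.init(null, null, null); // default key/trust managers suffice here
        SSLEngine engine = ctx.createSSLEngine();
        // Anything not listed here (in particular SSLv3) is disabled.
        engine.setEnabledProtocols(new String[] {"TLSv1.2"});
        return engine;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(Arrays.toString(tlsOnlyEngine().getEnabledProtocols()));
    }
}
```

The key point is that asking for "TLS" only selects a context family; the enabled-protocol list is what actually excludes SSLv3.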



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11242) Record the time of calling in tracing span of IPC server

2014-10-28 Thread Masatake Iwasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11242?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki updated HADOOP-11242:
--
Attachment: HADOOP-11242.1.patch

 Record the time of calling in tracing span of IPC server
 

 Key: HADOOP-11242
 URL: https://issues.apache.org/jira/browse/HADOOP-11242
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc
Reporter: Masatake Iwasaki
Assignee: Masatake Iwasaki
Priority: Minor
 Attachments: HADOOP-11242.1.patch, HADOOP-11242.1.patch


 The current tracing span starts when the Call is put into the callQueue. 
 Recording the time at which the call is actually invoked is useful for debugging.
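A minimal sketch of the idea (hypothetical classes; the real HTrace Span API and the IPC Server internals differ):

```java
import java.util.ArrayList;
import java.util.List;

public class IpcCallTrace {
    /** Minimal stand-in for a tracing span. */
    static class Span {
        final String description;
        final long startMillis = System.currentTimeMillis();
        final List<String> timeline = new ArrayList<>();

        Span(String description) { this.description = description; }

        /** Point-in-time annotation, e.g. the moment a queued Call runs. */
        void addTimelineAnnotation(String msg) {
            timeline.add(System.currentTimeMillis() + ": " + msg);
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Span span = new Span("IPC call"); // opened when Call enters callQueue
        Thread.sleep(10);                 // time the call waits in the queue
        span.addTimelineAnnotation("handler invoked call"); // proposed marker
        System.out.println(span.timeline.size());
    }
}
```

With such an annotation, the gap between span start and the marker makes queueing delay visible, separate from the handler's processing time.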



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11177) Reduce tar ball size for MR over distributed cache

2014-10-28 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11177?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated HADOOP-11177:
-
Status: Open  (was: Patch Available)

bq. Remove version number after unpack
Can we please drop this change from this patch? Let's not overload this patch.

Also, please move this to MapReduce project.

 Reduce tar ball size for MR over distributed cache
 --

 Key: HADOOP-11177
 URL: https://issues.apache.org/jira/browse/HADOOP-11177
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Reporter: Junping Du
Assignee: Junping Du
Priority: Critical
 Attachments: HADOOP-11177.patch


 The current tar ball built from {{mvn package -Pdist -DskipTests -Dtar}} is 
 over 160M in size. We need smaller tar ball pieces for features like MR 
 over distributed cache to support rolling updates of the cluster.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11217) Disable SSLv3 in KMS

2014-10-28 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-11217:
--
   Resolution: Fixed
Fix Version/s: 2.6.0
   Status: Resolved  (was: Patch Available)

Thanks Robert for the patch, and ATM for the review. Just committed this to 
trunk, branch-2 and branch-2.6.

 Disable SSLv3 in KMS
 

 Key: HADOOP-11217
 URL: https://issues.apache.org/jira/browse/HADOOP-11217
 Project: Hadoop Common
  Issue Type: Bug
  Components: kms
Affects Versions: 2.6.0
Reporter: Robert Kanter
Assignee: Robert Kanter
Priority: Blocker
 Fix For: 2.6.0

 Attachments: HADOOP-11217.patch, HADOOP-11217.patch


 We should disable SSLv3 in KMS to protect against the POODLE 
 vulnerability.
 See 
 [CVE-2014-3566|http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2014-3566]
 We have {{sslProtocol=TLS}} set to allow only TLS in ssl-server.xml, but 
 when I checked, I could still connect with SSLv3.  The tomcat documentation is 
 somewhat unclear about the difference between {{sslProtocol}}, 
 {{sslProtocols}}, and {{sslEnabledProtocols}} and exactly which values each 
 takes and what each does.  From what I can gather, {{sslProtocol=TLS}} 
 actually includes SSLv3, and the only way to fix this is to explicitly list 
 which TLS versions we support.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11195) Move Id-Name mapping in NFS to the hadoop-common area for better maintenance

2014-10-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14187808#comment-14187808
 ] 

Hadoop QA commented on HADOOP-11195:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12677397/HADOOP-11195.002.patch
  against trunk revision ac9ab03.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified test files.

  {color:red}-1 javac{color}.  The applied patch generated 1266 javac 
compiler warnings (more than the trunk's current 1264 warnings).

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The test build failed in 
hadoop-hdfs-project/hadoop-hdfs 

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4975//testReport/
Javac warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4975//artifact/patchprocess/diffJavacWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4975//console

This message is automatically generated.

 Move Id-Name mapping in NFS to the hadoop-common area for better maintenance
 

 Key: HADOOP-11195
 URL: https://issues.apache.org/jira/browse/HADOOP-11195
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Yongjun Zhang
Assignee: Yongjun Zhang
 Attachments: HADOOP-11195.001.patch, HADOOP-11195.002.patch, 
 HADOOP-11195.002.patch


 Per [~aw]'s suggestion in HDFS-7146, creating this jira to move the id-name 
 mapping implementation (IdUserGroup.java) to the framework that caches user 
 and group info in the hadoop-common area 
 (hadoop-common/src/main/java/org/apache/hadoop/security) 
 Thanks [~brandonli] and [~aw] for the review and discussion in HDFS-7146.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Moved] (HADOOP-11243) Disable SSLv3 in YARN shuffle

2014-10-28 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla moved YARN-2722 to HADOOP-11243:
-

Key: HADOOP-11243  (was: YARN-2722)
Project: Hadoop Common  (was: Hadoop YARN)

 Disable SSLv3 in YARN shuffle
 -

 Key: HADOOP-11243
 URL: https://issues.apache.org/jira/browse/HADOOP-11243
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Wei Yan
Assignee: Wei Yan
 Attachments: YARN-2722-1.patch, YARN-2722-2.patch, YARN-2722-3.patch


 We should disable SSLv3 in HttpFS to protect against the POODLE 
 vulnerability.
 See [CVE-2014-3566 
 |http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2014-3566]
 We have {{context = SSLContext.getInstance("TLS");}} in SSLFactory, but when 
 I checked, I could still connect with SSLv3.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11217) Disable SSLv3 in KMS

2014-10-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14187813#comment-14187813
 ] 

Hudson commented on HADOOP-11217:
-

FAILURE: Integrated in Hadoop-trunk-Commit #6377 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/6377/])
HADOOP-11217. Disable SSLv3 in KMS. (Robert Kanter via kasha) (kasha: rev 
1a780823384a9c4289b8bb0b3c73e6b886d78fd0)
* hadoop-common-project/hadoop-kms/src/main/tomcat/ssl-server.xml
* hadoop-common-project/hadoop-common/CHANGES.txt


 Disable SSLv3 in KMS
 

 Key: HADOOP-11217
 URL: https://issues.apache.org/jira/browse/HADOOP-11217
 Project: Hadoop Common
  Issue Type: Bug
  Components: kms
Affects Versions: 2.6.0
Reporter: Robert Kanter
Assignee: Robert Kanter
Priority: Blocker
 Fix For: 2.6.0

 Attachments: HADOOP-11217.patch, HADOOP-11217.patch


 We should disable SSLv3 in KMS to protect against the POODLE 
 vulnerability.
 See 
 [CVE-2014-3566|http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2014-3566]
 We have {{sslProtocol=TLS}} set to allow only TLS in ssl-server.xml, but 
 when I checked, I could still connect with SSLv3.  The tomcat documentation is 
 somewhat unclear about the difference between {{sslProtocol}}, 
 {{sslProtocols}}, and {{sslEnabledProtocols}} and exactly which values each 
 takes and what each does.  From what I can gather, {{sslProtocol=TLS}} 
 actually includes SSLv3, and the only way to fix this is to explicitly list 
 which TLS versions we support.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11243) SSLFactory shouldn't allow SSLv3

2014-10-28 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-11243:
--
Description: 
We should disable SSLv3 in SSLFactory. This affects MR shuffle among others. 
See [CVE-2014-3566 
|http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2014-3566]

We have {{context = SSLContext.getInstance("TLS");}} in SSLFactory, but when I 
checked, I could still connect with SSLv3.

  was:
We should disable SSLv3 in HttpFS to protect against the POODLE 
vulnerability.
See [CVE-2014-3566 
|http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2014-3566]

We have {{context = SSLContext.getInstance("TLS");}} in SSLFactory, but when I 
checked, I could still connect with SSLv3.

   Priority: Blocker  (was: Major)
Summary: SSLFactory shouldn't allow SSLv3  (was: Disable SSLv3 in YARN 
shuffle)

 SSLFactory shouldn't allow SSLv3
 

 Key: HADOOP-11243
 URL: https://issues.apache.org/jira/browse/HADOOP-11243
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Wei Yan
Assignee: Wei Yan
Priority: Blocker
 Attachments: YARN-2722-1.patch, YARN-2722-2.patch, YARN-2722-3.patch


 We should disable SSLv3 in SSLFactory. This affects MR shuffle among others. 
 See [CVE-2014-3566 
 |http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2014-3566]
 We have {{context = SSLContext.getInstance("TLS");}} in SSLFactory, but when 
 I checked, I could still connect with SSLv3.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11243) SSLFactory shouldn't allow SSLv3

2014-10-28 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14187820#comment-14187820
 ] 

Karthik Kambatla commented on HADOOP-11243:
---

LGTM, like the configurability. 

+1. Checking this in. 

 SSLFactory shouldn't allow SSLv3
 

 Key: HADOOP-11243
 URL: https://issues.apache.org/jira/browse/HADOOP-11243
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Wei Yan
Assignee: Wei Yan
Priority: Blocker
 Attachments: YARN-2722-1.patch, YARN-2722-2.patch, YARN-2722-3.patch


 We should disable SSLv3 in SSLFactory. This affects MR shuffle among others. 
 See [CVE-2014-3566 
 |http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2014-3566]
 We have {{context = SSLContext.getInstance("TLS");}} in SSLFactory, but when 
 I checked, I could still connect with SSLv3.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11243) SSLFactory shouldn't allow SSLv3

2014-10-28 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14187848#comment-14187848
 ] 

Hudson commented on HADOOP-11243:
-

FAILURE: Integrated in Hadoop-trunk-Commit #6379 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/6379/])
HADOOP-11243. SSLFactory shouldn't allow SSLv3. (Wei Yan via kasha) (kasha: rev 
3c5f5af1184e85158dec962df0b0bc2be8d0d1e3)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ssl/SSLFactory.java
* 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/site/apt/EncryptedShuffle.apt.vm
* hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
* hadoop-common-project/hadoop-common/CHANGES.txt


 SSLFactory shouldn't allow SSLv3
 

 Key: HADOOP-11243
 URL: https://issues.apache.org/jira/browse/HADOOP-11243
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Wei Yan
Assignee: Wei Yan
Priority: Blocker
 Attachments: YARN-2722-1.patch, YARN-2722-2.patch, YARN-2722-3.patch


 We should disable SSLv3 in SSLFactory. This affects MR shuffle among others. 
 See [CVE-2014-3566 
 |http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2014-3566]
 We have {{context = SSLContext.getInstance("TLS");}} in SSLFactory, but when 
 I checked, I could still connect with SSLv3.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11243) SSLFactory shouldn't allow SSLv3

2014-10-28 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11243?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-11243:
--
   Resolution: Fixed
Fix Version/s: 2.6.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Thanks Wei for working on this, Stephen/Wing Yew for your reviews.

Just committed this to trunk, branch-2 and branch-2.6. 

 SSLFactory shouldn't allow SSLv3
 

 Key: HADOOP-11243
 URL: https://issues.apache.org/jira/browse/HADOOP-11243
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Wei Yan
Assignee: Wei Yan
Priority: Blocker
 Fix For: 2.6.0

 Attachments: YARN-2722-1.patch, YARN-2722-2.patch, YARN-2722-3.patch


 We should disable SSLv3 in SSLFactory. This affects MR shuffle among others. 
 See [CVE-2014-3566 
 |http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2014-3566]
 We have {{context = SSLContext.getInstance("TLS");}} in SSLFactory, but when 
 I checked, I could still connect with SSLv3.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11243) SSLFactory shouldn't allow SSLv3

2014-10-28 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14187862#comment-14187862
 ] 

Haohui Mai commented on HADOOP-11243:
-

It maybe 

 SSLFactory shouldn't allow SSLv3
 

 Key: HADOOP-11243
 URL: https://issues.apache.org/jira/browse/HADOOP-11243
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Wei Yan
Assignee: Wei Yan
Priority: Blocker
 Fix For: 2.6.0

 Attachments: YARN-2722-1.patch, YARN-2722-2.patch, YARN-2722-3.patch


 We should disable SSLv3 in SSLFactory. This affects MR shuffle among others. 
 See [CVE-2014-3566 
 |http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2014-3566]
 We have {{context = SSLContext.getInstance("TLS");}} in SSLFactory, but when 
 I checked, I could still connect with SSLv3.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HADOOP-11243) SSLFactory shouldn't allow SSLv3

2014-10-28 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14187862#comment-14187862
 ] 

Haohui Mai edited comment on HADOOP-11243 at 10/29/14 1:27 AM:
---

Given that we're moving towards Java 7 in 2.7, maybe it's worthwhile to create 
another jira to either change the default value of the configuration to a more 
secure one or to remove it.


was (Author: wheat9):
It maybe 

 SSLFactory shouldn't allow SSLv3
 

 Key: HADOOP-11243
 URL: https://issues.apache.org/jira/browse/HADOOP-11243
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Wei Yan
Assignee: Wei Yan
Priority: Blocker
 Fix For: 2.6.0

 Attachments: YARN-2722-1.patch, YARN-2722-2.patch, YARN-2722-3.patch


 We should disable SSLv3 in SSLFactory. This affects MR shuffle among others. 
 See [CVE-2014-3566 
 |http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2014-3566]
 We have {{context = SSLContext.getInstance("TLS");}} in SSLFactory, but when 
 I checked, I could still connect with SSLv3.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10926) Improve test-patch.sh to apply binary diffs

2014-10-28 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10926?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-10926:
--
Status: Patch Available  (was: Reopened)

 Improve test-patch.sh to apply binary diffs
 ---

 Key: HADOOP-10926
 URL: https://issues.apache.org/jira/browse/HADOOP-10926
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 3.0.0
Reporter: Andrew Wang
Assignee: Colin Patrick McCabe
 Fix For: 3.0.0

 Attachments: HADOOP-10926.001.patch, HADOOP-10926.002.patch


 The Unix {{patch}} command cannot apply binary diffs as generated via {{git 
 diff --binary}}. This means we cannot get effective test-patch.sh runs when 
 the patch requires adding a binary file.
 We should consider using a different patch method.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11243) SSLFactory shouldn't allow SSLv3

2014-10-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14187885#comment-14187885
 ] 

Hadoop QA commented on HADOOP-11243:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12677027/YARN-2722-3.patch
  against trunk revision 3f48493.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4977//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4977//console

This message is automatically generated.

 SSLFactory shouldn't allow SSLv3
 

 Key: HADOOP-11243
 URL: https://issues.apache.org/jira/browse/HADOOP-11243
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Wei Yan
Assignee: Wei Yan
Priority: Blocker
 Fix For: 2.6.0

 Attachments: YARN-2722-1.patch, YARN-2722-2.patch, YARN-2722-3.patch


 We should disable SSLv3 in SSLFactory. This affects MR shuffle among others. 
 See [CVE-2014-3566 
 |http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2014-3566]
 We have {{context = SSLContext.getInstance("TLS");}} in SSLFactory, but when 
 I checked, I could still connect with SSLv3.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11218) Add TLSv1.1,TLSv1.2 to KMS, HttpFS, SSLFactory

2014-10-28 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11218?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-11218:
--
Summary: Add TLSv1.1,TLSv1.2 to KMS, HttpFS, SSLFactory  (was: Add 
TLSv1.1,TLSv1.2 to KMS)

 Add TLSv1.1,TLSv1.2 to KMS, HttpFS, SSLFactory
 --

 Key: HADOOP-11218
 URL: https://issues.apache.org/jira/browse/HADOOP-11218
 Project: Hadoop Common
  Issue Type: Bug
  Components: kms
Affects Versions: 2.7.0
Reporter: Robert Kanter
Priority: Critical

 HADOOP-11217 required us to specifically list the versions of TLS that KMS 
 supports. With Hadoop 2.7 dropping support for Java 6 and Java 7 supporting 
 TLSv1.1 and TLSv1.2, we should add them to the list.
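For reference, adding the newer versions amounts to extending the explicit protocol list in the Tomcat connector. The sketch below is an assumption based on Tomcat's documented {{sslEnabledProtocols}} attribute, not the actual shipped KMS/HttpFS configuration; attribute support varies by Tomcat version, and the port and keystore values are placeholders.

```xml
<!-- Hypothetical HTTPS connector fragment; values are illustrative only. -->
<Connector port="9600" SSLEnabled="true" scheme="https" secure="true"
           sslEnabledProtocols="TLSv1,TLSv1.1,TLSv1.2"
           keystoreFile="${kms.ssl.keystore.file}"
           keystorePass="_kms_ssl_keystore_pass_" />
```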



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11243) SSLFactory shouldn't allow SSLv3

2014-10-28 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14187901#comment-14187901
 ] 

Karthik Kambatla commented on HADOOP-11243:
---

Thanks [~wheat9]. Repurposed HADOOP-11218 to take care of this for 2.7. 

 SSLFactory shouldn't allow SSLv3
 

 Key: HADOOP-11243
 URL: https://issues.apache.org/jira/browse/HADOOP-11243
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Wei Yan
Assignee: Wei Yan
Priority: Blocker
 Fix For: 2.6.0

 Attachments: YARN-2722-1.patch, YARN-2722-2.patch, YARN-2722-3.patch


 We should disable SSLv3 in SSLFactory. This affects MR shuffle among others. 
 See [CVE-2014-3566 
 |http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2014-3566]
 We have {{context = SSLContext.getInstance("TLS");}} in SSLFactory, but when 
 I checked, I could still connect with SSLv3.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10926) Improve test-patch.sh to apply binary diffs

2014-10-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10926?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14187920#comment-14187920
 ] 

Hadoop QA commented on HADOOP-10926:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12677660/HADOOP-10926.002.patch
  against trunk revision 3c5f5af.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4978//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4978//console

This message is automatically generated.

 Improve test-patch.sh to apply binary diffs
 ---

 Key: HADOOP-10926
 URL: https://issues.apache.org/jira/browse/HADOOP-10926
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 3.0.0
Reporter: Andrew Wang
Assignee: Colin Patrick McCabe
 Fix For: 3.0.0

 Attachments: HADOOP-10926.001.patch, HADOOP-10926.002.patch


 The Unix {{patch}} command cannot apply binary diffs as generated via {{git 
 diff --binary}}. This means we cannot get effective test-patch.sh runs when 
 the patch requires adding a binary file.
 We should consider using a different patch method.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11195) Move Id-Name mapping in NFS to the hadoop-common area for better maintenance

2014-10-28 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11195?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14187921#comment-14187921
 ] 

Yongjun Zhang commented on HADOOP-11195:


Hi Brandon, thanks for triggering the above build. The problem I'm having right 
now is that the link to patchprocess/diffJavacWarnings.txt doesn't exist for 
the recent builds. It looks like a jenkins infra issue.




 Move Id-Name mapping in NFS to the hadoop-common area for better maintenance
 

 Key: HADOOP-11195
 URL: https://issues.apache.org/jira/browse/HADOOP-11195
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Yongjun Zhang
Assignee: Yongjun Zhang
 Attachments: HADOOP-11195.001.patch, HADOOP-11195.002.patch, 
 HADOOP-11195.002.patch


 Per [~aw]'s suggestion in HDFS-7146, creating this jira to move the id-name 
 mapping implementation (IdUserGroup.java) to the framework that caches user 
 and group info in the hadoop-common area 
 (hadoop-common/src/main/java/org/apache/hadoop/security) 
 Thanks [~brandonli] and [~aw] for the review and discussion in HDFS-7146.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11242) Record the time of calling in tracing span of IPC server

2014-10-28 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11242?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14187964#comment-14187964
 ] 

Hadoop QA commented on HADOOP-11242:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/1265/HADOOP-11242.1.patch
  against trunk revision 675bca2.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.hdfs.web.TestWebHDFSXAttr

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4976//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4976//console

This message is automatically generated.

 Record the time of calling in tracing span of IPC server
 

 Key: HADOOP-11242
 URL: https://issues.apache.org/jira/browse/HADOOP-11242
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc
Reporter: Masatake Iwasaki
Assignee: Masatake Iwasaki
Priority: Minor
 Attachments: HADOOP-11242.1.patch, HADOOP-11242.1.patch


 The current tracing span starts when the Call is put into the callQueue. 
 Recording the time at which the call is actually invoked is useful for debugging.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10911) hadoop.auth cookie after HADOOP-10710 still not proper according to RFC2109

2014-10-28 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10911?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-10911:
---
Attachment: oozie-webconsole.stream

We've discovered that this patch has broken the Oozie web console in secure 
clusters.  The attached trace shows that the cookie is getting truncated after 
the first occurrence of '='.  Reintroducing the quotes fixes the issue.
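For illustration, a tiny sketch of the two Set-Cookie value forms under discussion (the token string here is made up; the real hadoop.auth token format differs, but it likewise contains multiple '=' characters, which is what makes unquoted values fragile for some consumers):

```java
public class CookieQuotingSketch {
    // Form emitted with quotes around the value (the behavior this
    // revert would restore): the whole token survives as one value.
    static String quoted(String token) {
        return "hadoop.auth=\"" + token + "\"";
    }

    // Unquoted form (the behavior introduced by the patch): a consumer
    // that cuts the value at the first '=' loses everything after it.
    static String unquoted(String token) {
        return "hadoop.auth=" + token;
    }

    public static void main(String[] args) {
        // Hypothetical token value with embedded '=' characters.
        String token = "u=alice&t=kerberos&e=1414000000000";
        System.out.println(quoted(token));
        System.out.println(unquoted(token));
    }
}
```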

I'd like to revert this patch for 2.6.0.  We can reopen this issue if there is 
still a need to make a subsequent attempt at another patch for something 
related to HttpClient.  What do others think?

Thanks to [~venkatnrangan] for reporting the bug and providing the root cause 
analysis that identified this patch.

 hadoop.auth cookie after HADOOP-10710 still not proper according to RFC2109
 ---

 Key: HADOOP-10911
 URL: https://issues.apache.org/jira/browse/HADOOP-10911
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.5.0
Reporter: Gregory Chanan
 Fix For: 2.6.0

 Attachments: HADOOP-10911-tests.patch, HADOOP-10911.patch, 
 HADOOP-10911v2.patch, HADOOP-10911v3.patch, oozie-webconsole.stream


 I'm seeing the same problem reported in HADOOP-10710 (that is, httpclient is 
 unable to authenticate with servers running the authentication filter), even 
 with HADOOP-10710 applied.
 From my reading of the spec, the problem is as follows:
 Expires is not a valid directive according to the RFC, though it is mentioned 
 for backwards compatibility with the Netscape draft spec.  When httpclient sees 
 Expires, it parses according to the Netscape draft spec, but note from 
 RFC2109:
 {code}
 Note that the Expires date format contains embedded spaces, and that old 
 cookies did not have quotes around values. 
 {code}
 and note that AuthenticationFilter puts quotes around the value:
 https://github.com/apache/hadoop-common/blob/6b11bff94ebf7d99b3a9e513edd813cb82538400/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationFilter.java#L437-L439
 So httpclient's parsing appears to be kosher.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)