[jira] [Commented] (HADOOP-10530) Make hadoop trunk build on Java7+ only

2014-12-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14237744#comment-14237744
 ] 

Hadoop QA commented on HADOOP-10530:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12685564/HADOOP-10530-005.patch
  against trunk revision 120e1de.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

  {color:red}-1 javac{color}.  The applied patch generated 1217 javac 
compiler warnings (more than the trunk's current 1211 warnings).

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 3 new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-assemblies hadoop-common-project/hadoop-annotations.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5189//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5189//artifact/patchprocess/newPatchFindbugsWarningshadoop-annotations.html
Javac warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5189//artifact/patchprocess/diffJavacWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5189//console

This message is automatically generated.

 Make hadoop trunk build on Java7+ only
 --

 Key: HADOOP-10530
 URL: https://issues.apache.org/jira/browse/HADOOP-10530
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: 3.0.0, 2.6.0
 Environment: Java 1.7+
Reporter: Steve Loughran
Assignee: Steve Loughran
Priority: Blocker
 Attachments: HADOOP-10530-001.patch, HADOOP-10530-002.patch, 
 HADOOP-10530-003.patch, HADOOP-10530-004.patch, HADOOP-10530-005.patch, 
 HADOOP-10530-debug.000.patch, Screen Shot 2014-09-20 at 18.09.05.png


 As discussed on hadoop-common, hadoop 3 is envisaged to be Java7+ *only* 
 - this JIRA covers switching the build for this:
 # maven enforcer plugin to set Java version = {{[1.7)}}
 # compiler to set language to java 1.7
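
A minimal sketch of what those two build changes could look like in the POMs, assuming the stock maven-enforcer-plugin {{requireJavaVersion}} rule and the maven-compiler-plugin source/target settings (illustrative only, not necessarily the exact form committed):

{code:xml}
<!-- hadoop-project/pom.xml (sketch): require a Java 7+ JDK at build time -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-enforcer-plugin</artifactId>
  <executions>
    <execution>
      <id>enforce-jdk</id>
      <goals>
        <goal>enforce</goal>
      </goals>
      <configuration>
        <rules>
          <requireJavaVersion>
            <version>[1.7,)</version>
          </requireJavaVersion>
        </rules>
      </configuration>
    </execution>
  </executions>
</plugin>

<!-- compile with the Java 7 language level -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-compiler-plugin</artifactId>
  <configuration>
    <source>1.7</source>
    <target>1.7</target>
  </configuration>
</plugin>
{code}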



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11360) GraphiteSink reports data with wrong timestamp

2014-12-08 Thread Kamil Gorlo (JIRA)
Kamil Gorlo created HADOOP-11360:


 Summary: GraphiteSink reports data with wrong timestamp
 Key: HADOOP-11360
 URL: https://issues.apache.org/jira/browse/HADOOP-11360
 Project: Hadoop Common
  Issue Type: Bug
  Components: metrics
Reporter: Kamil Gorlo


I've tried to use GraphiteSink with the metrics2 system, but it looks like the 
timestamp sent to Graphite is refreshed only rarely (approximately every 2 minutes), 
no matter how small the period is set.

Here is my configuration:

*.sink.graphite.server_host=graphite-relay.host
*.sink.graphite.server_port=2013
*.sink.graphite.metrics_prefix=graphite.warehouse-data-1
*.period=10
nodemanager.sink.graphite.class=org.apache.hadoop.metrics2.sink.GraphiteSink

And here is the dumped network traffic to graphite-relay.host (only selected lines; 
each line appears every 10 seconds, as the period suggests):

graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
 0 1418041472
graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
 0 1418041472
graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
 0 1418041472
graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
 0 1418041600
graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
 3 1418041600
graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
 4 1418041600
graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
 2 1418041600
graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
 3 1418041600
graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
 2 1418041600
graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
 2 1418041600
graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
 1 1418041600
graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
 1 1418041600
graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
 0 1418041600
graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
 0 1418041600
graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
 0 1418041600
graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
 0 1418041600
graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
 0 1418041600
graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
 0 1418041728
graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
 0 1418041728


As you can see, the AllocatedContainers value is refreshed every 10 seconds, but 
the timestamp is not.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11360) GraphiteSink reports data with wrong timestamp

2014-12-08 Thread Kamil Gorlo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kamil Gorlo updated HADOOP-11360:
-
Description: 
I've tried to use GraphiteSink with the metrics2 system, but it looks like the 
timestamp sent to Graphite is refreshed only rarely (approximately every 2 minutes), 
no matter how small the period is set.

Here is my configuration:

*.sink.graphite.server_host=graphite-relay.host
*.sink.graphite.server_port=2013
*.sink.graphite.metrics_prefix=graphite.warehouse-data-1
*.period=10
nodemanager.sink.graphite.class=org.apache.hadoop.metrics2.sink.GraphiteSink

And here is the dumped network traffic to graphite-relay.host (only selected lines; 
each line appears every 10 seconds, as the period suggests):

graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
 0 1418041472
graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
 0 1418041472
graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
 0 1418041472
graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
 0 1418041600
graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
 3 1418041600
graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
 4 1418041600
graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
 2 1418041600
graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
 3 1418041600
graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
 2 1418041600
graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
 2 1418041600
graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
 1 1418041600
graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
 1 1418041600
graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
 0 1418041600
graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
 0 1418041600
graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
 0 1418041600
graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
 0 1418041600
graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
 0 1418041600
graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
 0 1418041728
graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
 0 1418041728


As you can see, the AllocatedContainers value is refreshed every 10 seconds, but 
the timestamp is not.

  was:
I've tried to use GraphiteSink with the metrics2 system, but it looks like the 
timestamp sent to Graphite is refreshed only rarely (approximately every 2 minutes), 
no matter how small the period is set.

Here is my configuration:

*.sink.graphite.server_host=graphite-relay.host
*.sink.graphite.server_port=2013
*.sink.graphite.metrics_prefix=graphite.warehouse-data-1
*.period=10
nodemanager.sink.graphite.class=org.apache.hadoop.metrics2.sink.GraphiteSink

And here is the dumped network traffic to graphite-relay.host (only selected lines; 
each line appears every 10 seconds, as the period suggests):

graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
 0 1418041472
graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
 0 1418041472
graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
 0 1418041472
graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
 0 1418041600
graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
 3 1418041600
graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
 4 1418041600
graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
 2 1418041600
graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
 3 1418041600
graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
 2 1418041600

[jira] [Updated] (HADOOP-11360) GraphiteSink reports data with wrong timestamp

2014-12-08 Thread Kamil Gorlo (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kamil Gorlo updated HADOOP-11360:
-
Description: 
I've tried to use GraphiteSink with the metrics2 system, but it looks like the 
timestamp sent to Graphite is refreshed only rarely (approximately every 2 minutes), 
no matter how small the period is set.

Here is my configuration:

*.sink.graphite.server_host=graphite-relay.host
*.sink.graphite.server_port=2013
*.sink.graphite.metrics_prefix=graphite.warehouse-data-1
*.period=10
nodemanager.sink.graphite.class=org.apache.hadoop.metrics2.sink.GraphiteSink

And here is the dumped network traffic to graphite-relay.host (only selected lines; 
each line appears every 10 seconds, as the period suggests):

graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
 0 1418041472
graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
 0 1418041472
graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
 0 1418041472
graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
 0 1418041600
graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
 3 1418041600
graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
 4 1418041600
graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
 2 1418041600
graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
 3 1418041600
graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
 2 1418041600
graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
 2 1418041600
graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
 1 1418041600
graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
 1 1418041600
graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
 0 1418041600
graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
 0 1418041600
graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
 0 1418041600
graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
 0 1418041600
graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
 0 1418041600
graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
 0 1418041728
graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
 0 1418041728


As you can see, the AllocatedContainers value is refreshed every 10 seconds, but 
the timestamp is not.

It looks like the problem is a level above (in the classes providing the MetricsRecord, 
because the timestamp value is taken from the MetricsRecord object passed as an argument 
to the putMetrics method of the Sink implementation), which implies that every sink will 
have the same problem. Maybe I misconfigured something?

  was:
I've tried to use GraphiteSink with the metrics2 system, but it looks like the 
timestamp sent to Graphite is refreshed only rarely (approximately every 2 minutes), 
no matter how small the period is set.

Here is my configuration:

*.sink.graphite.server_host=graphite-relay.host
*.sink.graphite.server_port=2013
*.sink.graphite.metrics_prefix=graphite.warehouse-data-1
*.period=10
nodemanager.sink.graphite.class=org.apache.hadoop.metrics2.sink.GraphiteSink

And here is the dumped network traffic to graphite-relay.host (only selected lines; 
each line appears every 10 seconds, as the period suggests):

graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
 0 1418041472
graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
 0 1418041472
graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
 0 1418041472
graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
 0 1418041600
graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
 3 1418041600
graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
 4 1418041600
graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
 2 1418041600

[jira] [Updated] (HADOOP-11183) Memory-based S3AOutputstream

2014-12-08 Thread Thomas Demoor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Demoor updated HADOOP-11183:
---
Attachment: info-S3AFastOutputStream-sync.md
HADOOP-11183.001.patch

Patch 001: synchronous implementation (blocks on every partUpload). A little 
extra info is provided in info-S3AFastOutputStream-sync.md.

Additional remarks:
# This patch is simply to set the stage and kick off the discussion. I am 
working on an async version (multiple concurrent partuploads), which I will 
post asap.
# I would really like to bump up the aws-sdk version, but in some other JIRA 
this was said to be a gargantuan task (probably HTTP versions conflicting with 
other libraries? Azure?). Is that correct?
# I also renamed partSizeThreshold to the correct term, multiPartThreshold 
(creating a separate issue seemed like overkill).

 Memory-based S3AOutputstream
 

 Key: HADOOP-11183
 URL: https://issues.apache.org/jira/browse/HADOOP-11183
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs/s3
Affects Versions: 2.6.0
Reporter: Thomas Demoor
 Attachments: HADOOP-11183.001.patch, info-S3AFastOutputStream-sync.md


 Currently s3a buffers files on disk(s) before uploading. This JIRA 
 investigates adding a memory-based upload implementation.
 The motivation is evidently performance: this would be beneficial for users 
 with high network bandwidth to S3 (EC2?) or users who run Hadoop directly on 
 an S3-compatible object store (FYI: my contributions are made in the name of 
 Amplidata). 
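
For illustration only, a minimal sketch of the buffering idea; the {{uploadPart}} helper below is a hypothetical placeholder for the actual AWS SDK multipart-upload calls and is not part of any existing patch:

{code:java}
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;

/** Sketch: buffer writes in memory and hand off full "parts" instead of spilling to disk. */
public class MemoryBufferedUploadStream extends OutputStream {
  private final int partSize;   // e.g. the multipart threshold / part size
  private ByteArrayOutputStream buffer = new ByteArrayOutputStream();

  public MemoryBufferedUploadStream(int partSize) {
    this.partSize = partSize;
  }

  @Override
  public void write(int b) throws IOException {
    buffer.write(b);
    if (buffer.size() >= partSize) {
      // Full part collected in memory: upload it and start a new buffer.
      uploadPart(buffer.toByteArray(), false);
      buffer = new ByteArrayOutputStream();
    }
  }

  @Override
  public void close() throws IOException {
    // Upload whatever is left and complete the multipart upload.
    uploadPart(buffer.toByteArray(), true);
  }

  /** Hypothetical placeholder for the initiate/uploadPart/complete calls against S3. */
  private void uploadPart(byte[] data, boolean lastPart) throws IOException {
    // ... AWS SDK multipart upload would go here ...
  }
}
{code}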



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11183) Memory-based S3AOutputstream

2014-12-08 Thread Thomas Demoor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11183?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Demoor updated HADOOP-11183:
---
Target Version/s: 2.8.0  (was: 2.6.0)

 Memory-based S3AOutputstream
 

 Key: HADOOP-11183
 URL: https://issues.apache.org/jira/browse/HADOOP-11183
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs/s3
Affects Versions: 2.6.0
Reporter: Thomas Demoor
 Attachments: HADOOP-11183.001.patch, info-S3AFastOutputStream-sync.md


 Currently s3a buffers files on disk(s) before uploading. This JIRA 
 investigates adding a memory-based upload implementation.
 The motivation is evidently performance: this would be beneficial for users 
 with high network bandwidth to S3 (EC2?) or users who run Hadoop directly on 
 an S3-compatible object store (FYI: my contributions are made in the name of 
 Amplidata). 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Moved] (HADOOP-11361) NPE in MetricsSourceAdapter

2014-12-08 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula moved HDFS-7487 to HADOOP-11361:
-

Affects Version/s: (was: 2.5.1)
   (was: 2.6.0)
   (was: 2.4.1)
   2.4.1
   2.6.0
   2.5.1
  Key: HADOOP-11361  (was: HDFS-7487)
  Project: Hadoop Common  (was: Hadoop HDFS)

 NPE in MetricsSourceAdapter
 ---

 Key: HADOOP-11361
 URL: https://issues.apache.org/jira/browse/HADOOP-11361
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.5.1, 2.6.0, 2.4.1
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula
 Attachments: HDFS-7487.patch


 {noformat}
 Caused by: java.lang.NullPointerException
   at 
 org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.updateAttrCache(MetricsSourceAdapter.java:247)
   at 
 org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.updateJmxCache(MetricsSourceAdapter.java:177)
   at 
 org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getAttribute(MetricsSourceAdapter.java:102)
   at 
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(DefaultMBeanServerInterceptor.java:647)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11362) Test org.apache.hadoop.crypto.random.TestOsSecureRandom.testOsSecureRandomSetConf timing out on java 8

2014-12-08 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-11362:
---

 Summary: Test 
org.apache.hadoop.crypto.random.TestOsSecureRandom.testOsSecureRandomSetConf 
timing out on java 8
 Key: HADOOP-11362
 URL: https://issues.apache.org/jira/browse/HADOOP-11362
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 3.0.0
 Environment: ASF Jenkins, Java 8
Reporter: Steve Loughran


The test 
{{org.apache.hadoop.crypto.random.TestOsSecureRandom.testOsSecureRandomSetConf}}
 is timing out on jenkins + Java 8.

This is probably the exec() operation. It may be transient, or it may be a Java 8 
+ shell problem. 

Do we actually need this test in its present form? If a test for file handle 
leakage is really needed, attempting to create 64K instances of the OsSecureRandom 
object should do it, without having to resort to printing and manual 
debugging of logs.
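
A rough sketch of what such a leak test might look like, assuming OsSecureRandom's existing Configurable/Closeable surface; this is an illustration, not an actual proposed patch:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.crypto.random.OsSecureRandom;
import org.junit.Test;

public class TestOsSecureRandomLeak {
  // Open and close many instances; a file-handle leak would surface as
  // "Too many open files" long before the loop finishes.
  @Test(timeout = 120000)
  public void testNoFileHandleLeak() throws Exception {
    Configuration conf = new Configuration();
    byte[] bytes = new byte[16];
    for (int i = 0; i < 64 * 1024; i++) {
      OsSecureRandom random = new OsSecureRandom();
      random.setConf(conf);
      random.nextBytes(bytes);
      random.close();
    }
  }
}
{code}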



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11361) NPE in MetricsSourceAdapter

2014-12-08 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HADOOP-11361:
--
Attachment: HADOOP-11361.patch

 NPE in MetricsSourceAdapter
 ---

 Key: HADOOP-11361
 URL: https://issues.apache.org/jira/browse/HADOOP-11361
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.4.1, 2.6.0, 2.5.1
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula
 Attachments: HADOOP-11361.patch, HDFS-7487.patch


 {noformat}
 Caused by: java.lang.NullPointerException
   at 
 org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.updateAttrCache(MetricsSourceAdapter.java:247)
   at 
 org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.updateJmxCache(MetricsSourceAdapter.java:177)
   at 
 org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getAttribute(MetricsSourceAdapter.java:102)
   at 
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(DefaultMBeanServerInterceptor.java:647)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11361) NPE in MetricsSourceAdapter

2014-12-08 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14237935#comment-14237935
 ] 

Brahma Reddy Battula commented on HADOOP-11361:
---

I moved it to Hadoop Common. :)

 NPE in MetricsSourceAdapter
 ---

 Key: HADOOP-11361
 URL: https://issues.apache.org/jira/browse/HADOOP-11361
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.4.1, 2.6.0, 2.5.1
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula
 Attachments: HADOOP-11361.patch, HDFS-7487.patch


 {noformat}
 Caused by: java.lang.NullPointerException
   at 
 org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.updateAttrCache(MetricsSourceAdapter.java:247)
   at 
 org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.updateJmxCache(MetricsSourceAdapter.java:177)
   at 
 org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getAttribute(MetricsSourceAdapter.java:102)
   at 
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(DefaultMBeanServerInterceptor.java:647)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HADOOP-10530) Make hadoop trunk build on Java7+ only

2014-12-08 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-10530.
-
   Resolution: Fixed
Fix Version/s: 2.7.0
 Hadoop Flags: Incompatible change

 Make hadoop trunk build on Java7+ only
 --

 Key: HADOOP-10530
 URL: https://issues.apache.org/jira/browse/HADOOP-10530
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: 3.0.0, 2.6.0
 Environment: Java 1.7+
Reporter: Steve Loughran
Assignee: Steve Loughran
Priority: Blocker
 Fix For: 2.7.0

 Attachments: HADOOP-10530-001.patch, HADOOP-10530-002.patch, 
 HADOOP-10530-003.patch, HADOOP-10530-004.patch, HADOOP-10530-005.patch, 
 HADOOP-10530-debug.000.patch, Screen Shot 2014-09-20 at 18.09.05.png


 As discussed on hadoop-common, hadoop 3 is envisaged to be Java7+ *only* 
 - this JIRA covers switching the build for this:
 # maven enforcer plugin to set Java version = {{[1.7)}}
 # compiler to set language to java 1.7



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10530) Make hadoop trunk build on Java7+ only

2014-12-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14237984#comment-14237984
 ] 

Hudson commented on HADOOP-10530:
-

FAILURE: Integrated in Hadoop-trunk-Commit #6668 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/6668/])
HADOOP-10530 Make hadoop build on Java7+ only (stevel) (stevel: rev 
144da2e4656703751c48875b4ed34975d106edaa)
* pom.xml
* hadoop-common-project/hadoop-annotations/pom.xml
* hadoop-project/pom.xml
* hadoop-common-project/hadoop-common/CHANGES.txt
* BUILDING.txt
* hadoop-assemblies/pom.xml


 Make hadoop trunk build on Java7+ only
 --

 Key: HADOOP-10530
 URL: https://issues.apache.org/jira/browse/HADOOP-10530
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: 3.0.0, 2.6.0
 Environment: Java 1.7+
Reporter: Steve Loughran
Assignee: Steve Loughran
Priority: Blocker
 Fix For: 2.7.0

 Attachments: HADOOP-10530-001.patch, HADOOP-10530-002.patch, 
 HADOOP-10530-003.patch, HADOOP-10530-004.patch, HADOOP-10530-005.patch, 
 HADOOP-10530-debug.000.patch, Screen Shot 2014-09-20 at 18.09.05.png


 As discussed on hadoop-common, hadoop 3 is envisaged to be Java7+ *only* 
 - this JIRA covers switching the build for this:
 # maven enforcer plugin to set Java version = {{[1.7)}}
 # compiler to set language to java 1.7



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11361) NPE in MetricsSourceAdapter

2014-12-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14237996#comment-14237996
 ] 

Hadoop QA commented on HADOOP-11361:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12685758/HADOOP-11361.patch
  against trunk revision 8963515.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 65 new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5190//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5190//artifact/patchprocess/newPatchFindbugsWarningshadoop-common.html
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5190//console

This message is automatically generated.

 NPE in MetricsSourceAdapter
 ---

 Key: HADOOP-11361
 URL: https://issues.apache.org/jira/browse/HADOOP-11361
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.4.1, 2.6.0, 2.5.1
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula
 Attachments: HADOOP-11361.patch, HDFS-7487.patch


 {noformat}
 Caused by: java.lang.NullPointerException
   at 
 org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.updateAttrCache(MetricsSourceAdapter.java:247)
   at 
 org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.updateJmxCache(MetricsSourceAdapter.java:177)
   at 
 org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getAttribute(MetricsSourceAdapter.java:102)
   at 
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(DefaultMBeanServerInterceptor.java:647)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11363) Hadoop maven surefire-plugin uses must set heap size

2014-12-08 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-11363:
---

 Summary: Hadoop maven surefire-plugin uses must set heap size
 Key: HADOOP-11363
 URL: https://issues.apache.org/jira/browse/HADOOP-11363
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.7.0
 Environment: java 8
Reporter: Steve Loughran


Some of the hadoop tests (especially HBase) are running out of memory on Java 
8, due to there not being enough heap for them.

The heap size of surefire test runs is *not* set in {{MAVEN_OPTS}}; it needs to 
be explicitly set as an argument to the test run.

I propose:

# {{hadoop-project/pom.xml}} defines the maximum heap size and test timeouts 
for surefire builds as properties
# modules which run tests use these values for their memory & timeout settings
# these modules should also set the surefire version they want to use
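
A sketch of how the proposal above could look in the POMs; the property names are illustrative, not necessarily the ones eventually committed:

{code:xml}
<!-- hadoop-project/pom.xml (sketch): shared surefire settings -->
<properties>
  <maven-surefire-plugin.version>2.17</maven-surefire-plugin.version>
  <maven-surefire-plugin.argLine>-Xmx2048m</maven-surefire-plugin.argLine>
  <surefire.fork.timeout>900</surefire.fork.timeout>
</properties>

<!-- in a module that runs tests -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <version>${maven-surefire-plugin.version}</version>
  <configuration>
    <argLine>${maven-surefire-plugin.argLine}</argLine>
    <forkedProcessTimeoutInSeconds>${surefire.fork.timeout}</forkedProcessTimeoutInSeconds>
  </configuration>
</plugin>
{code}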



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11361) NPE in MetricsSourceAdapter

2014-12-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14238066#comment-14238066
 ] 

Hadoop QA commented on HADOOP-11361:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12685758/HADOOP-11361.patch
  against trunk revision 144da2e.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 65 new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5191//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5191//artifact/patchprocess/newPatchFindbugsWarningshadoop-common.html
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5191//console

This message is automatically generated.

 NPE in MetricsSourceAdapter
 ---

 Key: HADOOP-11361
 URL: https://issues.apache.org/jira/browse/HADOOP-11361
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.4.1, 2.6.0, 2.5.1
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula
 Attachments: HADOOP-11361.patch, HDFS-7487.patch


 {noformat}
 Caused by: java.lang.NullPointerException
   at 
 org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.updateAttrCache(MetricsSourceAdapter.java:247)
   at 
 org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.updateJmxCache(MetricsSourceAdapter.java:177)
   at 
 org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getAttribute(MetricsSourceAdapter.java:102)
   at 
 com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(DefaultMBeanServerInterceptor.java:647)
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HADOOP-11360) GraphiteSink reports data with wrong timestamp

2014-12-08 Thread Ravi Prakash (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ravi Prakash resolved HADOOP-11360.
---
Resolution: Duplicate

Thanks for reporting this JIRA, Kamil!
This looks like a duplicate of 
https://issues.apache.org/jira/browse/HADOOP-11182, which was fixed in Hadoop 
2.6.0. If not, please re-open this JIRA.

 GraphiteSink reports data with wrong timestamp
 --

 Key: HADOOP-11360
 URL: https://issues.apache.org/jira/browse/HADOOP-11360
 Project: Hadoop Common
  Issue Type: Bug
  Components: metrics
Reporter: Kamil Gorlo

 I've tried to use GraphiteSink with the metrics2 system, but it looks like the 
 timestamp sent to Graphite is refreshed only rarely (approximately every 2 
 minutes), no matter how small the period is set.
 Here is my configuration:
 *.sink.graphite.server_host=graphite-relay.host
 *.sink.graphite.server_port=2013
 *.sink.graphite.metrics_prefix=graphite.warehouse-data-1
 *.period=10
 nodemanager.sink.graphite.class=org.apache.hadoop.metrics2.sink.GraphiteSink
 And here is the dumped network traffic to graphite-relay.host (only selected 
 lines; each line appears every 10 seconds, as the period suggests):
 graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
  0 1418041472
 graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
  0 1418041472
 graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
  0 1418041472
 graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
  0 1418041600
 graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
  3 1418041600
 graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
  4 1418041600
 graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
  2 1418041600
 graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
  3 1418041600
 graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
  2 1418041600
 graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
  2 1418041600
 graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
  1 1418041600
 graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
  1 1418041600
 graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
  0 1418041600
 graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
  0 1418041600
 graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
  0 1418041600
 graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
  0 1418041600
 graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
  0 1418041600
 graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
  0 1418041728
 graphite.warehouse-data-1.yarn.NodeManagerMetrics.Context=yarn.Hostname=warehouse-data-1.AllocatedContainers
  0 1418041728
 As you can see, the AllocatedContainers value is refreshed every 10 seconds, but 
 the timestamp is not.
 It looks like the problem is a level above (in the classes providing the 
 MetricsRecord, because the timestamp value is taken from the MetricsRecord object 
 passed as an argument to the putMetrics method of the Sink implementation), which 
 implies that every sink will have the same problem. Maybe I misconfigured 
 something?
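
For reference, a stripped-down sketch of a metrics2 sink along the lines of GraphiteSink (simplified, not the actual class); it shows that the timestamp written to Graphite comes straight from the MetricsRecord handed to putMetrics, so a stale timestamp originates upstream of the sink:

{code:java}
import java.io.IOException;
import java.io.OutputStreamWriter;
import java.io.Writer;
import java.net.Socket;

import org.apache.commons.configuration.SubsetConfiguration;
import org.apache.hadoop.metrics2.AbstractMetric;
import org.apache.hadoop.metrics2.MetricsRecord;
import org.apache.hadoop.metrics2.MetricsSink;

public class MinimalGraphiteSink implements MetricsSink {
  private Writer writer;
  private String prefix;

  @Override
  public void init(SubsetConfiguration conf) {
    prefix = conf.getString("metrics_prefix");
    try {
      Socket socket = new Socket(conf.getString("server_host"),
          Integer.parseInt(conf.getString("server_port")));
      writer = new OutputStreamWriter(socket.getOutputStream());
    } catch (IOException e) {
      throw new RuntimeException("Cannot connect to Graphite", e);
    }
  }

  @Override
  public void putMetrics(MetricsRecord record) {
    // The sink only forwards what it is given: the Graphite timestamp is
    // record.timestamp() (milliseconds) converted to seconds.
    long timestampSec = record.timestamp() / 1000L;
    try {
      for (AbstractMetric metric : record.metrics()) {
        writer.write(prefix + "." + record.name() + "." + metric.name()
            + " " + metric.value() + " " + timestampSec + "\n");
      }
    } catch (IOException e) {
      throw new RuntimeException("Error writing metrics to Graphite", e);
    }
  }

  @Override
  public void flush() {
    try {
      writer.flush();
    } catch (IOException e) {
      throw new RuntimeException("Error flushing Graphite writer", e);
    }
  }
}
{code}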



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HADOOP-11347) Inconsistent enforcement of umask between FileSystem and FileContext interacting with local file system.

2014-12-08 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11347?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena reassigned HADOOP-11347:
-

Assignee: Varun Saxena

 Inconsistent enforcement of umask between FileSystem and FileContext 
 interacting with local file system.
 

 Key: HADOOP-11347
 URL: https://issues.apache.org/jira/browse/HADOOP-11347
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Reporter: Chris Nauroth
Assignee: Varun Saxena

 The {{FileSystem}} and {{FileContext}} APIs are inconsistent in enforcement 
 of umask for newly created directories.  {{FileContext}} utilizes 
 configuration property {{fs.permissions.umask-mode}} and runs a separate 
 {{chmod}} call to guarantee bypassing the process umask.  This is the 
 expected behavior for Hadoop as discussed in the documentation of 
 {{fs.permissions.umask-mode}}.  For the equivalent {{FileSystem}} APIs, it 
 does not use {{fs.permissions.umask-mode}}.  Instead, the permissions end up 
 getting controlled by the process umask.
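
A small sketch of the inconsistency, assuming local paths under /tmp (the paths and scaffolding below are illustrative):

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileContext;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;

public class UmaskDemo {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    conf.set("fs.permissions.umask-mode", "022");

    // FileContext honours fs.permissions.umask-mode and follows the create
    // with an explicit chmod, bypassing the process umask.
    FileContext fc = FileContext.getFileContext(conf);
    Path fcDir = new Path("/tmp/umask-demo-fc");
    fc.mkdir(fcDir, FsPermission.getDirDefault(), true);

    // FileSystem on the local file system does not; the resulting mode ends
    // up governed by the process umask instead.
    FileSystem fs = FileSystem.getLocal(conf);
    Path fsDir = new Path("/tmp/umask-demo-fs");
    fs.mkdirs(fsDir, FsPermission.getDirDefault());

    System.out.println("FileContext dir: " + fc.getFileStatus(fcDir).getPermission());
    System.out.println("FileSystem  dir: " + fs.getFileStatus(fsDir).getPermission());
  }
}
{code}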



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HADOOP-11349) RawLocalFileSystem leaks file descriptor while creating a file if creat succeeds but chmod fails.

2014-12-08 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11349?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena reassigned HADOOP-11349:
-

Assignee: Varun Saxena

 RawLocalFileSystem leaks file descriptor while creating a file if creat 
 succeeds but chmod fails.
 -

 Key: HADOOP-11349
 URL: https://issues.apache.org/jira/browse/HADOOP-11349
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.6.0
Reporter: Chris Nauroth
Assignee: Varun Saxena
Priority: Minor

 {{RawLocalFileSystem}} currently implements some file creation operations as 
 a sequence of 2 syscalls: create the file, followed by setting its 
 permissions.  If creation succeeds, but then setting permission causes an 
 exception to be thrown, then there is no attempt to close the previously 
 opened file, resulting in a file descriptor leak.
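
The usual shape of the fix would be to close the just-opened stream before propagating the failure; a sketch (the setPermission method here is a hypothetical stand-in for whatever second syscall is made, and LOG/IOUtils are the usual Hadoop helpers):

{code:java}
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.hadoop.io.IOUtils;

public class CreateWithPermission {
  private static final Log LOG = LogFactory.getLog(CreateWithPermission.class);

  static FileOutputStream createWithPermission(File file, short permission)
      throws IOException {
    FileOutputStream out = new FileOutputStream(file);  // syscall 1: create/open
    try {
      setPermission(file, permission);                  // syscall 2: chmod, may throw
    } catch (IOException e) {
      // Close the stream we just opened so the descriptor is not leaked.
      IOUtils.cleanup(LOG, out);
      throw e;
    }
    return out;
  }

  /** Hypothetical stand-in for the chmod call. */
  private static void setPermission(File file, short permission) throws IOException {
    // ...
  }
}
{code}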



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11354) ThrottledInputStream doesn't perform effective throttling

2014-12-08 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14238266#comment-14238266
 ] 

Jing Zhao commented on HADOOP-11354:


The patch looks good to me. +1. I will commit it shortly.

 ThrottledInputStream doesn't perform effective throttling
 -

 Key: HADOOP-11354
 URL: https://issues.apache.org/jira/browse/HADOOP-11354
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Ted Yu
 Attachments: mapreduce-6180-001.patch


 This was first reported in HBASE-12632 by [~Tobi] :
 I just transferred a ton of data using ExportSnapshot with bandwidth 
 throttling from one Hadoop cluster to another Hadoop cluster, and discovered 
 that ThrottledInputStream does not limit bandwidth.
 The problem is that ThrottledInputStream sleeps once, for a fixed time (50 
 ms), at the start of each read call, disregarding the actual amount of data 
 read.
 ExportSnapshot defaults to a buffer size as big as the block size of the 
 outputFs:
 {code:java}
   // Use the default block size of the outputFs if bigger
   int defaultBlockSize = Math.max((int) outputFs.getDefaultBlockSize(),
       BUFFER_SIZE);
   bufferSize = conf.getInt(CONF_BUFFER_SIZE, defaultBlockSize);
   LOG.info("Using bufferSize=" + StringUtils.humanReadableInt(bufferSize));
 {code}
 In my case, this was 256MB.
 Hence, the ExportSnapshot mapper will attempt to read up to 256 MB at a time, 
 each time sleeping only 50ms. Thus, in the worst case where each call to read 
 fills the 256 MB buffer in negligible time, the ThrottledInputStream cannot 
 reduce the bandwidth to under (256 MB) / (50 ms) = 5 GB/s.
 Even in a more realistic case where read returns about 1 MB per call, it 
 still cannot throttle the bandwidth to under 20 MB/s.
 The issue is exacerbated by the fact that you need to set a low limit because 
 the total bandwidth per host depends on the number of mapper slots as well.
 A simple solution would change the {{if}} in {{throttle()}} to a {{while}}, so that it 
 keeps sleeping for 50 ms until the rate is finally low enough:
 {code:java}
   private void throttle() throws IOException {
     while (getBytesPerSec() > maxBytesPerSec) {
       try {
         Thread.sleep(SLEEP_DURATION_MS);
         totalSleepTime += SLEEP_DURATION_MS;
       } catch (InterruptedException e) {
         throw new IOException("Thread aborted", e);
       }
     }
   }
 {code}
 This issue affects the ThrottledInputStream in hadoop as well.
 Another way to see this is that for big enough buffer sizes, 
 ThrottledInputStream will be throttling only the number of read calls to 20 
 per second, disregarding the number of bytes read.
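
For context, a self-contained sketch of such a stream with the proposed {{while}}-based throttle folded in; field and method names mirror the description above, but this is not the actual hadoop-distcp class:

{code:java}
import java.io.IOException;
import java.io.InputStream;

public class SimpleThrottledInputStream extends InputStream {
  private static final long SLEEP_DURATION_MS = 50;

  private final InputStream rawStream;
  private final long maxBytesPerSec;
  private final long startTime = System.currentTimeMillis();
  private long bytesRead = 0;

  public SimpleThrottledInputStream(InputStream rawStream, long maxBytesPerSec) {
    this.rawStream = rawStream;
    this.maxBytesPerSec = maxBytesPerSec;
  }

  @Override
  public int read() throws IOException {
    throttle();
    int b = rawStream.read();
    if (b >= 0) {
      bytesRead++;
    }
    return b;
  }

  @Override
  public int read(byte[] buf, int off, int len) throws IOException {
    throttle();
    int n = rawStream.read(buf, off, len);
    if (n > 0) {
      bytesRead += n;
    }
    return n;
  }

  private long getBytesPerSec() {
    long elapsedSec = Math.max(1, (System.currentTimeMillis() - startTime) / 1000);
    return bytesRead / elapsedSec;
  }

  // 'while' rather than 'if': keep sleeping until the observed rate drops
  // below the limit, regardless of how much the last read() returned.
  private void throttle() throws IOException {
    while (getBytesPerSec() > maxBytesPerSec) {
      try {
        Thread.sleep(SLEEP_DURATION_MS);
      } catch (InterruptedException e) {
        throw new IOException("Thread aborted", e);
      }
    }
  }
}
{code}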



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11354) ThrottledInputStream doesn't perform effective throttling

2014-12-08 Thread Jing Zhao (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jing Zhao updated HADOOP-11354:
---
   Resolution: Fixed
Fix Version/s: 2.7.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I've committed this to trunk and branch-2. Thanks for the fix [~tedyu]!

 ThrottledInputStream doesn't perform effective throttling
 -

 Key: HADOOP-11354
 URL: https://issues.apache.org/jira/browse/HADOOP-11354
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Ted Yu
 Fix For: 2.7.0

 Attachments: mapreduce-6180-001.patch


 This was first reported in HBASE-12632 by [~Tobi] :
 I just transferred a ton of data using ExportSnapshot with bandwidth 
 throttling from one Hadoop cluster to another Hadoop cluster, and discovered 
 that ThrottledInputStream does not limit bandwidth.
 The problem is that ThrottledInputStream sleeps once, for a fixed time (50 
 ms), at the start of each read call, disregarding the actual amount of data 
 read.
 ExportSnapshot defaults to a buffer size as big as the block size of the 
 outputFs:
 {code:java}
   // Use the default block size of the outputFs if bigger
   int defaultBlockSize = Math.max((int) outputFs.getDefaultBlockSize(),
       BUFFER_SIZE);
   bufferSize = conf.getInt(CONF_BUFFER_SIZE, defaultBlockSize);
   LOG.info("Using bufferSize=" + StringUtils.humanReadableInt(bufferSize));
 {code}
 In my case, this was 256MB.
 Hence, the ExportSnapshot mapper will attempt to read up to 256 MB at a time, 
 each time sleeping only 50ms. Thus, in the worst case where each call to read 
 fills the 256 MB buffer in negligible time, the ThrottledInputStream cannot 
 reduce the bandwidth to under (256 MB) / (50 ms) = 5 GB/s.
 Even in a more realistic case where read returns about 1 MB per call, it 
 still cannot throttle the bandwidth to under 20 MB/s.
 The issue is exacerbated by the fact that you need to set a low limit because 
 the total bandwidth per host depends on the number of mapper slots as well.
 A simple solution would change the {{if}} in {{throttle()}} to a {{while}}, so that it 
 keeps sleeping for 50 ms until the rate is finally low enough:
 {code:java}
   private void throttle() throws IOException {
     while (getBytesPerSec() > maxBytesPerSec) {
       try {
         Thread.sleep(SLEEP_DURATION_MS);
         totalSleepTime += SLEEP_DURATION_MS;
       } catch (InterruptedException e) {
         throw new IOException("Thread aborted", e);
       }
     }
   }
 {code}
 This issue affects the ThrottledInputStream in hadoop as well.
 Another way to see this is that for big enough buffer sizes, 
 ThrottledInputStream will be throttling only the number of read calls to 20 
 per second, disregarding the number of bytes read.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11354) ThrottledInputStream doesn't perform effective throttling

2014-12-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14238290#comment-14238290
 ] 

Hudson commented on HADOOP-11354:
-

FAILURE: Integrated in Hadoop-trunk-Commit #6670 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/6670/])
HADOOP-11354. ThrottledInputStream doesn't perform effective throttling. 
Contributed by Ted Yu. (jing9: rev 57cb43be50c81daad8da34d33a45f396d9c1c35b)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/util/ThrottledInputStream.java


 ThrottledInputStream doesn't perform effective throttling
 -

 Key: HADOOP-11354
 URL: https://issues.apache.org/jira/browse/HADOOP-11354
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Ted Yu
 Fix For: 2.7.0

 Attachments: mapreduce-6180-001.patch


 This was first reported in HBASE-12632 by [~Tobi] :
 I just transferred a ton of data using ExportSnapshot with bandwidth 
 throttling from one Hadoop cluster to another Hadoop cluster, and discovered 
 that ThrottledInputStream does not limit bandwidth.
 The problem is that ThrottledInputStream sleeps once, for a fixed time (50 
 ms), at the start of each read call, disregarding the actual amount of data 
 read.
 ExportSnapshot defaults to a buffer size as big as the block size of the 
 outputFs:
 {code:java}
   // Use the default block size of the outputFs if bigger
   int defaultBlockSize = Math.max((int) outputFs.getDefaultBlockSize(),
       BUFFER_SIZE);
   bufferSize = conf.getInt(CONF_BUFFER_SIZE, defaultBlockSize);
   LOG.info("Using bufferSize=" + StringUtils.humanReadableInt(bufferSize));
 {code}
 In my case, this was 256MB.
 Hence, the ExportSnapshot mapper will attempt to read up to 256 MB at a time, 
 each time sleeping only 50ms. Thus, in the worst case where each call to read 
 fills the 256 MB buffer in negligible time, the ThrottledInputStream cannot 
 reduce the bandwidth to under (256 MB) / (50 ms) = 5 GB/s.
 Even in a more realistic case where read returns about 1 MB per call, it 
 still cannot throttle the bandwidth to under 20 MB/s.
 The issue is exacerbated by the fact that you need to set a low limit because 
 the total bandwidth per host depends on the number of mapper slots as well.
 A simple solution would change the {{if}} in {{throttle()}} to a {{while}}, so that it 
 keeps sleeping for 50 ms until the rate is finally low enough:
 {code:java}
   private void throttle() throws IOException {
     while (getBytesPerSec() > maxBytesPerSec) {
       try {
         Thread.sleep(SLEEP_DURATION_MS);
         totalSleepTime += SLEEP_DURATION_MS;
       } catch (InterruptedException e) {
         throw new IOException("Thread aborted", e);
       }
     }
   }
 {code}
 This issue affects the ThrottledInputStream in hadoop as well.
 Another way to see this is that for big enough buffer sizes, 
 ThrottledInputStream will be throttling only the number of read calls to 20 
 per second, disregarding the number of bytes read.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-9726) org.apache.hadoop.security.SecurityUtil has its own static Configuration which cannot be overridden

2014-12-08 Thread Hitesh Shah (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hitesh Shah updated HADOOP-9726:

Resolution: Not a Problem
Status: Resolved  (was: Patch Available)

 org.apache.hadoop.security.SecurityUtil has its own static Configuration 
 which cannot be overridden
 ---

 Key: HADOOP-9726
 URL: https://issues.apache.org/jira/browse/HADOOP-9726
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Hitesh Shah
Assignee: Hitesh Shah
 Attachments: HADOOP-9726.1.patch, HADOOP-9726.2.patch, 
 HADOOP-9726.3.patch


 There is a static block which loads a new Configuration object and uses it to 
 initialize the SSLFactory and HostResolver.
 Should this class have a similar static setConfiguration() function similar 
 to UGI? 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-8109) Using the default rpms of hadoop, /usr/etc/hadoop symlink gets removed on upgrade

2014-12-08 Thread Hitesh Shah (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8109?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hitesh Shah updated HADOOP-8109:

Resolution: Not a Problem
Status: Resolved  (was: Patch Available)

 Using the default rpms of hadoop,  /usr/etc/hadoop symlink gets removed on 
 upgrade
 --

 Key: HADOOP-8109
 URL: https://issues.apache.org/jira/browse/HADOOP-8109
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1.0.0, 0.23.0
Reporter: Hitesh Shah
Assignee: Hitesh Shah
Priority: Minor
 Attachments: HADOOP-8109.branch-1.patch, HADOOP-8109.trunk.patch


 Given that for rpms, the pre-uninstall scripts for the older version run 
 after the post-install scripts of the package being installed, the symlink 
 created from /usr/etc/hadoop to /etc/hadoop gets deleted. This breaks running 
 /usr/bin/hadoop without providing --config. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11345) On Suse/PPC64, building Hadoop Pipes (with -Pnative) requires to add -lcrypto at link stage

2014-12-08 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11345?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14238323#comment-14238323
 ] 

Colin Patrick McCabe commented on HADOOP-11345:
---

The idea here is that {{find_package(OpenSSL REQUIRED)}} should be setting 
{{OPENSSL_LIBRARIES}} to the correct value for your platform, and using 
{{target_link_libraries}} should be adding the correct {{\-l}} flag to the 
linker line.  As-is, it looks like it's adding {{\-lssl}}, but that is not 
working for you.

I am using openSUSE 12.3 with no problems; what version are you using?  I am on 
x86, though.

It's hard to think of fixes here that don't involve rewriting that 
{{find_package}} script, which I'd really like to avoid.  Perhaps there are some 
options we can pass to the {{find_package}} script?

It's also a little concerning that we are linking openSSL statically into 
{{libhadooppipes.a}}.  It would be nice to get rid of the static library build 
altogether... can someone who is still using libhadooppipes speak up?

 On Suse/PPC64, building Hadoop Pipes (with -Pnative) requires to add -lcrypto 
 at link stage
 ---

 Key: HADOOP-11345
 URL: https://issues.apache.org/jira/browse/HADOOP-11345
 Project: Hadoop Common
  Issue Type: Bug
  Components: tools
Affects Versions: 2.4.1
 Environment: Suse on PPC64
 # uname -a
 Linux hybridf 3.0.101-0.40-ppc64 #1 SMP Thu Sep 18 13:09:38 UTC 2014 
 (44b8c95) ppc64 ppc64 ppc64 GNU/Linux
Reporter: Tony Reix
Priority: Minor

 Compiling Hadoop Pipes fails on Suse/PPC64.
 [INFO] Apache Hadoop Extras ... SUCCESS [  5.855 
 s]
 [INFO] Apache Hadoop Pipes  FAILURE [  8.134 
 s]
 Traces of building Hadoop Pipes:
 main:
 [mkdir] Created dir: 
 /home/hadoop/hadoop-common/hadoop-tools/hadoop-pipes/target/native
  [exec] Current OS is Linux
  [exec] Executing 'cmake' with arguments:
  [exec] '/home/hadoop/hadoop-common/hadoop-tools/hadoop-pipes/src/'
  [exec] '-DJVM_ARCH_DATA_MODEL=64'
  [exec]
  [exec] The ' characters around the executable and arguments are
  [exec] not part of the command.
 Execute:Java13CommandLauncher: Executing 'cmake' with arguments:
 '/home/hadoop/hadoop-common/hadoop-tools/hadoop-pipes/src/'
 '-DJVM_ARCH_DATA_MODEL=64'
 The ' characters around the executable and arguments are
 not part of the command.
  [exec] -- The C compiler identification is GNU
  [exec] -- The CXX compiler identification is GNU
  [exec] -- Check for working C compiler: /opt/at7.0/bin/gcc
  [exec] -- Check for working C compiler: /opt/at7.0/bin/gcc -- works
  [exec] -- Detecting C compiler ABI info
  [exec] -- Detecting C compiler ABI info - done
  [exec] -- Check for working CXX compiler: /opt/at7.0/bin/c++
  [exec] -- Check for working CXX compiler: /opt/at7.0/bin/c++ -- works
  [exec] -- Detecting CXX compiler ABI info
  [exec] JAVA_HOME=, 
 JAVA_JVM_LIBRARY=/opt/ibm/java-ppc64-71/jre/lib/ppc64/compressedrefs/libjvm.so
  [exec] JAVA_INCLUDE_PATH=/opt/ibm/java-ppc64-71/include, 
 JAVA_INCLUDE_PATH2=/opt/ibm/java-ppc64-71/include/linux
  [exec] Located all JNI components successfully.
  [exec] -- Detecting CXX compiler ABI info - done
  [exec] -- Found OpenSSL: /usr/lib64/libssl.so
  [exec] -- Configuring done
  [exec] -- Generating done
  [exec] -- Build files have been written to: 
 /home/hadoop/hadoop-common/hadoop-tools/hadoop-pipes/target/native
 
 [exec] /usr/bin/cmake -E cmake_progress_report 
 /home/hadoop/hadoop-common/hadoop-tools/hadoop-pipes/target/native/CMakeFiles 
 4
  [exec] [ 57%] Building CXX object 
 CMakeFiles/pipes-sort.dir/main/native/examples/impl/sort.cc.o
  [exec] /opt/at7.0/bin/c++-g -Wall -O2 -D_REENTRANT -D_GNU_SOURCE 
 -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 
 -I/home/hadoop/hadoop-common/hadoop-tools/hadoop-pipes/src/main/native/utils/api
  
 -I/home/hadoop/hadoop-common/hadoop-tools/hadoop-pipes/src/main/native/pipes/api
  -I/home/hadoop/hadoop-common/hadoop-tools/hadoop-pipes/src   -o 
 CMakeFiles/pipes-sort.dir/main/native/examples/impl/sort.cc.o -c 
 /home/hadoop/hadoop-common/hadoop-tools/hadoop-pipes/src/main/native/examples/impl/sort.cc
  [exec] Linking CXX executable examples/pipes-sort
  [exec] /usr/bin/cmake -E cmake_link_script 
 CMakeFiles/pipes-sort.dir/link.txt --verbose=1
  [exec] /opt/at7.0/bin/c++-g -Wall -O2 -D_REENTRANT -D_GNU_SOURCE 
 -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64  -fPIC 
 CMakeFiles/pipes-sort.dir/main/native/examples/impl/sort.cc.o  -o 
 examples/pipes-sort -rdynamic libhadooppipes.a libhadooputils.a -lssl 
 -lpthread
  [exec] 
 

[jira] [Updated] (HADOOP-11287) Simplify UGI#reloginFromKeytab for Java 7+

2014-12-08 Thread Li Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Lu updated HADOOP-11287:
---
Attachment: HADOOP-11287-120814.patch

Since Hadoop has moved to Java 7, I'm uploading this patch to simplify 
UGI#reloginFromKeytab. 

 Simplify UGI#reloginFromKeytab for Java 7+
 --

 Key: HADOOP-11287
 URL: https://issues.apache.org/jira/browse/HADOOP-11287
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Li Lu
 Attachments: HADOOP-11287-120814.patch


 HADOOP-10786 uses reflection to make {{UGI#reloginFromKeytab}} work with Java 
 6/7/8. In 2.7 Java 6 will no longer be supported, thus the code can be 
 simplified.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11357) Print information of the build enviornment in test-patch.sh

2014-12-08 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14238345#comment-14238345
 ] 

Colin Patrick McCabe commented on HADOOP-11357:
---

To be fair, we do print the executor that was used, and you can usually ask the 
infra people what is installed on that executor.  However, +1 for the idea of 
just printing {{java \-version}} or something to make it absolutely clear.  
That would be nice.  It might also clarify things such as whether we are 
running the unit tests in 32-bit or 64-bit mode.
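
As a rough illustration (nothing here comes from a patch on this JIRA; the exact 
commands and formatting are still to be decided), the extra section that 
test-patch.sh could print might look like:

{noformat}
============ Build environment ============
java -version : java version "1.7.0_55" (64-bit server VM)
uname -a      : Linux ... x86_64 GNU/Linux
mvn -version  : Apache Maven 3.x
===========================================
{noformat}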

 Print information of the build environment in test-patch.sh
 ---

 Key: HADOOP-11357
 URL: https://issues.apache.org/jira/browse/HADOOP-11357
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Haohui Mai
Assignee: Li Lu
Priority: Minor

 Currently test-patch.sh lacks information such as the Java version used during 
 the build, so debugging problems like HADOOP-10530 becomes difficult.
 This JIRA proposes printing more information in test-patch.sh to simplify 
 debugging.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10788) Rewrite kms to use new shell framework

2014-12-08 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10788?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14238353#comment-14238353
 ] 

Allen Wittenauer commented on HADOOP-10788:
---

After looking at the httpfs boot code, it's pretty obvious that these two code 
bases should get merged, especially since the httpfs code still suffers from 
the security hole recently fixed in the kms code.

 Rewrite kms to use new shell framework
 --

 Key: HADOOP-10788
 URL: https://issues.apache.org/jira/browse/HADOOP-10788
 Project: Hadoop Common
  Issue Type: Improvement
  Components: scripts
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
  Labels: scripts
 Attachments: HADOOP-10788-01.patch, HADOOP-10788-02.patch, 
 HADOOP-10788.patch


 kms was not rewritten to use the new shell framework.  It should be reworked 
 to take advantage of it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11363) Hadoop maven surefire-plugin uses must set heap size

2014-12-08 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14238395#comment-14238395
 ] 

Steve Loughran commented on HADOOP-11363:
-

This surfaces on Java 7 too, hence the HDFS precommit builds failing.

 Hadoop maven surefire-plugin uses must set heap size
 

 Key: HADOOP-11363
 URL: https://issues.apache.org/jira/browse/HADOOP-11363
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.7.0
 Environment: java 8
Reporter: Steve Loughran

 Some of the hadoop tests (especially HBase) are running out of memory on Java 
 8, due to there not being enough heap for them
 The heap size of surefire test runs is *not* set in {{MAVEN_OPTS}}, it needs 
 to be explicitly set as an argument to the test run.
 I propose
 # {{hadoop-project/pom.xml}} defines the maximum heap size and test timeouts 
 for surefire builds as properties
 # modules which run tests use these values for their memory & timeout 
 settings.
 # these modules should also set the surefire version they want to use



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11363) Hadoop maven surefire-plugin uses must set heap size

2014-12-08 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-11363:

Assignee: Steve Loughran
  Status: Patch Available  (was: Open)

 Hadoop maven surefire-plugin uses must set heap size
 

 Key: HADOOP-11363
 URL: https://issues.apache.org/jira/browse/HADOOP-11363
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.7.0
 Environment: java 8
Reporter: Steve Loughran
Assignee: Steve Loughran
 Attachments: HADOOP-11363-001.patch


 Some of the hadoop tests (especially HBase) are running out of memory on Java 
 8, due to there not being enough heap for them
 The heap size of surefire test runs is *not* set in {{MAVEN_OPTS}}, it needs 
 to be explicitly set as an argument to the test run.
 I propose
 # {{hadoop-project/pom.xml}} defines the maximum heap size and test timeouts 
 for surefire builds as properties
 # modules which run tests use these values for their memory & timeout 
 settings.
 # these modules should also set the surefire version they want to use



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11363) Hadoop maven surefire-plugin uses must set heap size

2014-12-08 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-11363:

Attachment: HADOOP-11363-001.patch

 Hadoop maven surefire-plugin uses must set heap size
 

 Key: HADOOP-11363
 URL: https://issues.apache.org/jira/browse/HADOOP-11363
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.7.0
 Environment: java 8
Reporter: Steve Loughran
 Attachments: HADOOP-11363-001.patch


 Some of the hadoop tests (especially HBase) are running out of memory on Java 
 8, due to there not being enough heap for them
 The heap size of surefire test runs is *not* set in {{MAVEN_OPTS}}, it needs 
 to be explicitly set as an argument to the test run.
 I propose
 # {{hadoop-project/pom.xml}} defines the maximum heap size and test timeouts 
 for surefire builds as properties
 # modules which run tests use these values for their memory & timeout 
 settings.
 # these modules should also set the surefire version they want to use



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11363) Hadoop maven surefire-plugin uses must set heap size

2014-12-08 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14238432#comment-14238432
 ] 

stack commented on HADOOP-11363:


lgtm

Want to just go for 4G rather than 2G, or are you thinking that if we OOME on 
2G, we'll take a look at the dumped heaps to see what is going on?

Thanks [~ste...@apache.org]

 Hadoop maven surefire-plugin uses must set heap size
 

 Key: HADOOP-11363
 URL: https://issues.apache.org/jira/browse/HADOOP-11363
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.7.0
 Environment: java 8
Reporter: Steve Loughran
Assignee: Steve Loughran
 Attachments: HADOOP-11363-001.patch


 Some of the hadoop tests (especially HBase) are running out of memory on Java 
 8, due to there not being enough heap for them
 The heap size of surefire test runs is *not* set in {{MAVEN_OPTS}}, it needs 
 to be explicitly set as an argument to the test run.
 I propose
 # {{hadoop-project/pom.xml}} defines the maximum heap size and test timeouts 
 for surefire builds as properties
 # modules which run tests use these values for their memory & timeout 
 settings.
 # these modules should also set the surefire version they want to use



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11363) Hadoop maven surefire-plugin uses must set heap size

2014-12-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14238443#comment-14238443
 ] 

Hadoop QA commented on HADOOP-11363:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12685844/HADOOP-11363-001.patch
  against trunk revision 6c5bbd7.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5192//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5192//console

This message is automatically generated.

 Hadoop maven surefire-plugin uses must set heap size
 

 Key: HADOOP-11363
 URL: https://issues.apache.org/jira/browse/HADOOP-11363
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.7.0
 Environment: java 8
Reporter: Steve Loughran
Assignee: Steve Loughran
 Attachments: HADOOP-11363-001.patch


 Some of the hadoop tests (especially HBase) are running out of memory on Java 
 8, due to there not being enough heap for them
 The heap size of surefire test runs is *not* set in {{MAVEN_OPTS}}, it needs 
 to be explicitly set as an argument to the test run.
 I propose
 # {{hadoop-project/pom.xml}} defines the maximum heap size and test timeouts 
 for surefire builds as properties
 # modules which run tests use these values for their memory & timeout 
 settings.
 # these modules should also set the surefire version they want to use



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11287) Simplify UGI#reloginFromKeytab for Java 7+

2014-12-08 Thread Li Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Lu updated HADOOP-11287:
---
Status: Patch Available  (was: Open)

 Simplify UGI#reloginFromKeytab for Java 7+
 --

 Key: HADOOP-11287
 URL: https://issues.apache.org/jira/browse/HADOOP-11287
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Li Lu
 Attachments: HADOOP-11287-120814.patch


 HADOOP-10786 uses reflection to make {{UGI#reloginFromKeytab}} work with Java 
 6/7/8. In 2.7 Java 6 will no longer be supported, thus the code can be 
 simplified.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11329) Add JAVA_LIBRARY_PATH to KMS startup options

2014-12-08 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11329?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-11329:
-
Summary: Add JAVA_LIBRARY_PATH to KMS startup options  (was: should add 
HADOOP_HOME as part of kms's startup options)

 Add JAVA_LIBRARY_PATH to KMS startup options
 

 Key: HADOOP-11329
 URL: https://issues.apache.org/jira/browse/HADOOP-11329
 Project: Hadoop Common
  Issue Type: Bug
  Components: kms, security
Reporter: Dian Fu
Assignee: Arun Suresh
 Attachments: HADOOP-11329.1.patch, HADOOP-11329.2.patch, 
 HADOOP-11329.3.patch, HADOOP-11329.4.patch, HADOOP-11329.5.patch, 
 HADOOP-11329.6.patch, HADOOP-11329.7.patch, HADOOP-11329.8.patch, 
 HADOOP-11329.9.patch


 Currently, HADOOP_HOME isn't part of the start-up options of KMS. If I add 
 the following configuration to the core-site.xml of KMS,
 {code}
 <property>
   <name>hadoop.security.crypto.codec.classes.aes.ctr.nopadding</name>
   <value>org.apache.hadoop.crypto.OpensslAesCtrCryptoCodec</value>
 </property>
 {code}
 the KMS server will throw the following exception when it receives a 
 generateEncryptedKey request:
 {code}
 2014-11-24 10:23:18,189 DEBUG org.apache.hadoop.crypto.OpensslCipher: Failed 
 to load OpenSSL Cipher.
 java.lang.UnsatisfiedLinkError: 
 org.apache.hadoop.util.NativeCodeLoader.buildSupportsOpenssl()Z
 at 
 org.apache.hadoop.util.NativeCodeLoader.buildSupportsOpenssl(Native Method)
 at 
 org.apache.hadoop.crypto.OpensslCipher.<clinit>(OpensslCipher.java:85)
 at 
 org.apache.hadoop.crypto.OpensslAesCtrCryptoCodec.<init>(OpensslAesCtrCryptoCodec.java:50)
 at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
 Method)
 at 
 sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
 at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
 at 
 org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:129)
 at 
 org.apache.hadoop.crypto.CryptoCodec.getInstance(CryptoCodec.java:67)
 at 
 org.apache.hadoop.crypto.CryptoCodec.getInstance(CryptoCodec.java:100)
 at 
 org.apache.hadoop.crypto.key.KeyProviderCryptoExtension$DefaultCryptoExtension.generateEncryptedKey(KeyProviderCryptoExtension.java:256)
 at 
 org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.generateEncryptedKey(KeyProviderCryptoExtension.java:371)
 at 
 org.apache.hadoop.crypto.key.kms.server.EagerKeyGeneratorKeyProviderCryptoExtension$CryptoExtension$EncryptedQueueRefiller.fillQueueForKey(EagerKeyGeneratorKeyProviderCryptoExtension.java:77)
 at 
 org.apache.hadoop.crypto.key.kms.ValueQueue$1.load(ValueQueue.java:181)
 at 
 org.apache.hadoop.crypto.key.kms.ValueQueue$1.load(ValueQueue.java:175)
 at 
 com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3568)
 at 
 com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2350)
 at 
 com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2313)
 at 
 com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2228)
 at com.google.common.cache.LocalCache.get(LocalCache.java:3965)
 at com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3969)
 at 
 com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4829)
 at 
 org.apache.hadoop.crypto.key.kms.ValueQueue.getAtMost(ValueQueue.java:256)
 at 
 org.apache.hadoop.crypto.key.kms.ValueQueue.getNext(ValueQueue.java:226)
 at 
 org.apache.hadoop.crypto.key.kms.server.EagerKeyGeneratorKeyProviderCryptoExtension$CryptoExtension.generateEncryptedKey(EagerKeyGeneratorKeyProviderCryptoExtension.java:126)
 at 
 org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.generateEncryptedKey(KeyProviderCryptoExtension.java:371)
 at 
 org.apache.hadoop.crypto.key.kms.server.KeyAuthorizationKeyProvider.generateEncryptedKey(KeyAuthorizationKeyProvider.java:192)
 at org.apache.hadoop.crypto.key.kms.server.KMS$9.run(KMS.java:379)
 at org.apache.hadoop.crypto.key.kms.server.KMS$9.run(KMS.java:375
 {code}
 The reason is that it cannot find libhadoop.so. This will prevent KMS from 
 responding to generateEncryptedKey requests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11329) Add JAVA_LIBRARY_PATH to KMS startup options

2014-12-08 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11329?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-11329:
-
   Resolution: Fixed
Fix Version/s: 2.7.0
   Status: Resolved  (was: Patch Available)

Pushed to trunk and branch-2. Thanks for the contribution, Arun, and thanks to 
Allen for the reviews.
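
For readers following along, a rough sketch of the kind of change involved 
(illustrative only, not the committed patch; it just shows why exposing 
JAVA_LIBRARY_PATH lets the KMS JVM find libhadoop.so):

{noformat}
# kms-env.sh (sketch): point the KMS JVM at the native libraries
export JAVA_LIBRARY_PATH=${HADOOP_HOME}/lib/native

# kms.sh (sketch): forward it to the Tomcat JVM so libhadoop.so can be loaded
catalina_opts="${catalina_opts} -Djava.library.path=${JAVA_LIBRARY_PATH}"
{noformat}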

 Add JAVA_LIBRARY_PATH to KMS startup options
 

 Key: HADOOP-11329
 URL: https://issues.apache.org/jira/browse/HADOOP-11329
 Project: Hadoop Common
  Issue Type: Bug
  Components: kms, security
Reporter: Dian Fu
Assignee: Arun Suresh
 Fix For: 2.7.0

 Attachments: HADOOP-11329.1.patch, HADOOP-11329.2.patch, 
 HADOOP-11329.3.patch, HADOOP-11329.4.patch, HADOOP-11329.5.patch, 
 HADOOP-11329.6.patch, HADOOP-11329.7.patch, HADOOP-11329.8.patch, 
 HADOOP-11329.9.patch


 Currently, HADOOP_HOME isn't part of the start-up options of KMS. If I add 
 the following configuration to the core-site.xml of KMS,
 {code}
 <property>
   <name>hadoop.security.crypto.codec.classes.aes.ctr.nopadding</name>
   <value>org.apache.hadoop.crypto.OpensslAesCtrCryptoCodec</value>
 </property>
 {code}
 the KMS server will throw the following exception when it receives a 
 generateEncryptedKey request:
 {code}
 2014-11-24 10:23:18,189 DEBUG org.apache.hadoop.crypto.OpensslCipher: Failed 
 to load OpenSSL Cipher.
 java.lang.UnsatisfiedLinkError: 
 org.apache.hadoop.util.NativeCodeLoader.buildSupportsOpenssl()Z
 at 
 org.apache.hadoop.util.NativeCodeLoader.buildSupportsOpenssl(Native Method)
 at 
 org.apache.hadoop.crypto.OpensslCipher.<clinit>(OpensslCipher.java:85)
 at 
 org.apache.hadoop.crypto.OpensslAesCtrCryptoCodec.<init>(OpensslAesCtrCryptoCodec.java:50)
 at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
 Method)
 at 
 sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
 at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
 at 
 org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:129)
 at 
 org.apache.hadoop.crypto.CryptoCodec.getInstance(CryptoCodec.java:67)
 at 
 org.apache.hadoop.crypto.CryptoCodec.getInstance(CryptoCodec.java:100)
 at 
 org.apache.hadoop.crypto.key.KeyProviderCryptoExtension$DefaultCryptoExtension.generateEncryptedKey(KeyProviderCryptoExtension.java:256)
 at 
 org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.generateEncryptedKey(KeyProviderCryptoExtension.java:371)
 at 
 org.apache.hadoop.crypto.key.kms.server.EagerKeyGeneratorKeyProviderCryptoExtension$CryptoExtension$EncryptedQueueRefiller.fillQueueForKey(EagerKeyGeneratorKeyProviderCryptoExtension.java:77)
 at 
 org.apache.hadoop.crypto.key.kms.ValueQueue$1.load(ValueQueue.java:181)
 at 
 org.apache.hadoop.crypto.key.kms.ValueQueue$1.load(ValueQueue.java:175)
 at 
 com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3568)
 at 
 com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2350)
 at 
 com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2313)
 at 
 com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2228)
 at com.google.common.cache.LocalCache.get(LocalCache.java:3965)
 at com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3969)
 at 
 com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4829)
 at 
 org.apache.hadoop.crypto.key.kms.ValueQueue.getAtMost(ValueQueue.java:256)
 at 
 org.apache.hadoop.crypto.key.kms.ValueQueue.getNext(ValueQueue.java:226)
 at 
 org.apache.hadoop.crypto.key.kms.server.EagerKeyGeneratorKeyProviderCryptoExtension$CryptoExtension.generateEncryptedKey(EagerKeyGeneratorKeyProviderCryptoExtension.java:126)
 at 
 org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.generateEncryptedKey(KeyProviderCryptoExtension.java:371)
 at 
 org.apache.hadoop.crypto.key.kms.server.KeyAuthorizationKeyProvider.generateEncryptedKey(KeyAuthorizationKeyProvider.java:192)
 at org.apache.hadoop.crypto.key.kms.server.KMS$9.run(KMS.java:379)
 at org.apache.hadoop.crypto.key.kms.server.KMS$9.run(KMS.java:375
 {code}
 The reason is that it cannot find libhadoop.so. This will prevent KMS from 
 responding to generateEncryptedKey requests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11329) Add JAVA_LIBRARY_PATH to KMS startup options

2014-12-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14238516#comment-14238516
 ] 

Hudson commented on HADOOP-11329:
-

FAILURE: Integrated in Hadoop-trunk-Commit #6672 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/6672/])
HADOOP-11329. Add JAVA_LIBRARY_PATH to KMS startup options. Contributed by Arun 
Suresh. (wang: rev ddffcd8fac8af0ff78e63cca583af5c77a062891)
* hadoop-common-project/hadoop-kms/src/site/apt/index.apt.vm
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-common-project/hadoop-kms/src/main/sbin/kms.sh
* hadoop-common-project/hadoop-kms/src/main/conf/kms-env.sh


 Add JAVA_LIBRARY_PATH to KMS startup options
 

 Key: HADOOP-11329
 URL: https://issues.apache.org/jira/browse/HADOOP-11329
 Project: Hadoop Common
  Issue Type: Bug
  Components: kms, security
Reporter: Dian Fu
Assignee: Arun Suresh
 Fix For: 2.7.0

 Attachments: HADOOP-11329.1.patch, HADOOP-11329.2.patch, 
 HADOOP-11329.3.patch, HADOOP-11329.4.patch, HADOOP-11329.5.patch, 
 HADOOP-11329.6.patch, HADOOP-11329.7.patch, HADOOP-11329.8.patch, 
 HADOOP-11329.9.patch


 Currently, HADOOP_HOME isn't part of the start-up options of KMS. If I add 
 the following configuration to the core-site.xml of KMS,
 {code}
 <property>
   <name>hadoop.security.crypto.codec.classes.aes.ctr.nopadding</name>
   <value>org.apache.hadoop.crypto.OpensslAesCtrCryptoCodec</value>
 </property>
 {code}
 the KMS server will throw the following exception when it receives a 
 generateEncryptedKey request:
 {code}
 2014-11-24 10:23:18,189 DEBUG org.apache.hadoop.crypto.OpensslCipher: Failed 
 to load OpenSSL Cipher.
 java.lang.UnsatisfiedLinkError: 
 org.apache.hadoop.util.NativeCodeLoader.buildSupportsOpenssl()Z
 at 
 org.apache.hadoop.util.NativeCodeLoader.buildSupportsOpenssl(Native Method)
 at 
 org.apache.hadoop.crypto.OpensslCipher.<clinit>(OpensslCipher.java:85)
 at 
 org.apache.hadoop.crypto.OpensslAesCtrCryptoCodec.<init>(OpensslAesCtrCryptoCodec.java:50)
 at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native 
 Method)
 at 
 sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
 at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
 at 
 org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:129)
 at 
 org.apache.hadoop.crypto.CryptoCodec.getInstance(CryptoCodec.java:67)
 at 
 org.apache.hadoop.crypto.CryptoCodec.getInstance(CryptoCodec.java:100)
 at 
 org.apache.hadoop.crypto.key.KeyProviderCryptoExtension$DefaultCryptoExtension.generateEncryptedKey(KeyProviderCryptoExtension.java:256)
 at 
 org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.generateEncryptedKey(KeyProviderCryptoExtension.java:371)
 at 
 org.apache.hadoop.crypto.key.kms.server.EagerKeyGeneratorKeyProviderCryptoExtension$CryptoExtension$EncryptedQueueRefiller.fillQueueForKey(EagerKeyGeneratorKeyProviderCryptoExtension.java:77)
 at 
 org.apache.hadoop.crypto.key.kms.ValueQueue$1.load(ValueQueue.java:181)
 at 
 org.apache.hadoop.crypto.key.kms.ValueQueue$1.load(ValueQueue.java:175)
 at 
 com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3568)
 at 
 com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2350)
 at 
 com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2313)
 at 
 com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2228)
 at com.google.common.cache.LocalCache.get(LocalCache.java:3965)
 at com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3969)
 at 
 com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4829)
 at 
 org.apache.hadoop.crypto.key.kms.ValueQueue.getAtMost(ValueQueue.java:256)
 at 
 org.apache.hadoop.crypto.key.kms.ValueQueue.getNext(ValueQueue.java:226)
 at 
 org.apache.hadoop.crypto.key.kms.server.EagerKeyGeneratorKeyProviderCryptoExtension$CryptoExtension.generateEncryptedKey(EagerKeyGeneratorKeyProviderCryptoExtension.java:126)
 at 
 org.apache.hadoop.crypto.key.KeyProviderCryptoExtension.generateEncryptedKey(KeyProviderCryptoExtension.java:371)
 at 
 org.apache.hadoop.crypto.key.kms.server.KeyAuthorizationKeyProvider.generateEncryptedKey(KeyAuthorizationKeyProvider.java:192)
 at org.apache.hadoop.crypto.key.kms.server.KMS$9.run(KMS.java:379)
 at org.apache.hadoop.crypto.key.kms.server.KMS$9.run(KMS.java:375
 {code}
 The reason is that it cannot find libhadoop.so. This will 

[jira] [Commented] (HADOOP-11238) Group Cache should not cause namenode pause

2014-12-08 Thread Benoy Antony (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14238539#comment-14238539
 ] 

Benoy Antony commented on HADOOP-11238:
---

Reviewed the latest patch. Looks good. 

I have one issue with refresh(). 
It invalidates the cache, but it doesn't clear the negative cache. I believe 
we should clear the negative cache in refresh().
Though the issue is not introduced by this patch, it is better to fix it.
Could you please fix it and add a test case for it?
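
To make the ask concrete, here is a minimal, self-contained sketch of the 
intended behaviour (illustrative only, not the actual Hadoop Groups code; all 
class and field names below are made up):

{code}
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;

import java.util.Arrays;
import java.util.Collections;
import java.util.List;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.TimeUnit;

// Illustrative only: a positive cache that serves the stale value while a
// single thread reloads, masked by a negative cache of users that failed
// resolution.
class GroupCacheSketch {
  private final LoadingCache<String, List<String>> cache = CacheBuilder.newBuilder()
      .refreshAfterWrite(5, TimeUnit.MINUTES)   // old value served while one thread refreshes
      .build(new CacheLoader<String, List<String>>() {
        @Override
        public List<String> load(String user) throws Exception {
          return lookupGroups(user);            // the expensive LDAP/shell call
        }
      });

  // Separate negative cache, since Guava does not support per-entry expiry.
  private final Set<String> negativeCache =
      Collections.newSetFromMap(new ConcurrentHashMap<String, Boolean>());

  List<String> lookupGroups(String user) {
    return Arrays.asList("users");              // placeholder for the real group mapping
  }

  List<String> getGroups(String user) {
    if (negativeCache.contains(user)) {
      throw new RuntimeException("No groups for " + user);  // "raise an exception as usual"
    }
    return cache.getUnchecked(user);
  }

  void refresh() {
    cache.invalidateAll();                      // what the patch already does
    negativeCache.clear();                      // the extra step requested above
  }
}
{code}

The point is simply that refresh() touches both structures; the real fix would 
also need a test asserting that a negatively-cached user is retried after 
refresh().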


 Group Cache should not cause namenode pause
 ---

 Key: HADOOP-11238
 URL: https://issues.apache.org/jira/browse/HADOOP-11238
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.5.1
Reporter: Chris Li
Assignee: Chris Li
Priority: Minor
 Attachments: HADOOP-11238.patch, HADOOP-11238.patch


 This patch addresses an issue where the namenode pauses during group 
 resolution by only allowing a single group resolution query on expiry. There 
 are two scenarios:
 1. When there is not yet a value in the cache, all threads which make a 
 request will block while a single thread fetches the value.
 2. When there is already a value in the cache and it is expired, the new 
 value will be fetched in the background while the old value is used by other 
 threads
 This is handled by guava's cache.
 Negative caching is a feature built into the groups cache, and since guava's 
 caches don't support different expiration times, we have a separate negative 
 cache which masks the guava cache: if an element exists in the negative cache 
 and isn't expired, we return it.
 In total the logic for fetching a group is:
 1. If username exists in static cache, return the value (this was already 
 present)
 2. If username exists in negative cache and negative cache is not expired, 
 raise an exception as usual
 3. Otherwise Defer to guava cache (see two scenarios above)
 Original Issue Below:
 
 Our namenode pauses for 12-60 seconds several times every hour. During these 
 pauses, no new requests can come in.
 Around the time of pauses, we have log messages such as:
 2014-10-22 13:24:22,688 WARN org.apache.hadoop.security.Groups: Potential 
 performance problem: getGroups(user=x) took 34507 milliseconds.
 The current theory is:
 1. Groups has a cache that is refreshed periodically. Each entry has a cache 
 expiry.
 2. When a cache entry expires, multiple threads can see this expiration and 
 then we have a thundering herd effect where all these threads hit the wire 
 and overwhelm our LDAP servers (we are using ShellBasedUnixGroupsMapping with 
 sssd, how this happens has yet to be established)
 3. group resolution queries begin to take longer, I've observed it taking 1.2 
 seconds instead of the usual 0.01-0.03 seconds when measuring in the shell 
 `time groups myself`
 4. If there is mutual exclusion somewhere along this path, a 1 second pause 
 could lead to a 60 second pause as all the threads compete for the resource. 
 The exact cause hasn't been established
 Potential solutions include:
 1. Increasing group cache time, which will make the issue less frequent
 2. Rolling evictions of the cache so we prevent the large spike in LDAP 
 queries
 3. Gate the cache refresh so that only one thread is responsible for 
 refreshing the cache



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11287) Simplify UGI#reloginFromKeytab for Java 7+

2014-12-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14238555#comment-14238555
 ] 

Hadoop QA commented on HADOOP-11287:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12685831/HADOOP-11287-120814.patch
  against trunk revision 6c5bbd7.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 65 new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common:

  org.apache.hadoop.metrics2.impl.TestMetricsSystemImpl

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5193//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5193//artifact/patchprocess/newPatchFindbugsWarningshadoop-common.html
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5193//console

This message is automatically generated.

 Simplify UGI#reloginFromKeytab for Java 7+
 --

 Key: HADOOP-11287
 URL: https://issues.apache.org/jira/browse/HADOOP-11287
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Li Lu
 Attachments: HADOOP-11287-120814.patch


 HADOOP-10786 uses reflection to make {{UGI#reloginFromKeytab}} work with Java 
 6/7/8. In 2.7 Java 6 will no longer be supported, thus the code can be 
 simplified.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10788) Rewrite kms to use new shell framework

2014-12-08 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-10788:
--
Status: Open  (was: Patch Available)

 Rewrite kms to use new shell framework
 --

 Key: HADOOP-10788
 URL: https://issues.apache.org/jira/browse/HADOOP-10788
 Project: Hadoop Common
  Issue Type: Improvement
  Components: scripts
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
  Labels: scripts
 Attachments: HADOOP-10788-01.patch, HADOOP-10788-02.patch, 
 HADOOP-10788.patch


 kms was not rewritten to use the new shell framework.  It should be reworked 
 to take advantage of it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10788) Rewrite kms to use new shell framework

2014-12-08 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10788?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-10788:
--
Attachment: HADOOP-10788-03.patch

-03:
* document truststore password
* move basic catalina_opts handling to hadoop-functions.sh
* changed vars to reflect merged catalina_opts handling


 Rewrite kms to use new shell framework
 --

 Key: HADOOP-10788
 URL: https://issues.apache.org/jira/browse/HADOOP-10788
 Project: Hadoop Common
  Issue Type: Improvement
  Components: scripts
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
  Labels: scripts
 Attachments: HADOOP-10788-01.patch, HADOOP-10788-02.patch, 
 HADOOP-10788-03.patch, HADOOP-10788.patch


 kms was not rewritten to use the new shell framework.  It should be reworked 
 to take advantage of it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-7154) Should set MALLOC_ARENA_MAX in hadoop-config.sh

2014-12-08 Thread Ben Roling (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7154?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14238591#comment-14238591
 ] 

Ben Roling commented on HADOOP-7154:


Ok, so after further consideration I think my last comment/question was 
probably somewhat silly.  I think the problems the high vmem sizes present to 
Hadoop are probably obvious to many as Todd originally suggested.  I feel sort 
of dumb for not realizing more quickly.

MapReduce (and YARN) monitor virtual memory sizes of task processes and kill 
them when they get too big.  For example, mapreduce.map.memory.mb controls the 
max virtual memory size of a map task.  Without MALLOC_ARENA_MAX this would be 
broken since tasks would have super inflated vmem sizes.

[~tlipcon] - do I have that about right?  Are there other types of problems you 
were noticing?

Basically it seems any piece of software that tries to make decisions based on 
process vmem size is going to be messed up by the glibc change and likely has 
to implement MALLOC_ARENA_MAX.  For some reason the fact that Hadoop was making 
such decisions was escaping me when I made my last comment.
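
To make the mitigation concrete for anyone reading along: it boils down to a 
single environment setting (shown here as a sketch only; the attached patch 
decides the exact placement in hadoop-config.sh/hadoop-env.sh and whether a 
user override is honoured):

{noformat}
export MALLOC_ARENA_MAX=${MALLOC_ARENA_MAX:-4}
{noformat}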

 Should set MALLOC_ARENA_MAX in hadoop-config.sh
 ---

 Key: HADOOP-7154
 URL: https://issues.apache.org/jira/browse/HADOOP-7154
 Project: Hadoop Common
  Issue Type: Improvement
  Components: scripts
Affects Versions: 0.22.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon
Priority: Minor
 Fix For: 1.0.4, 0.22.0

 Attachments: hadoop-7154.txt


 New versions of glibc present in RHEL6 include a new arena allocator design. 
 In several clusters we've seen this new allocator cause huge amounts of 
 virtual memory to be used, since when multiple threads perform allocations, 
 they each get their own memory arena. On a 64-bit system, these arenas are 
 64M mappings, and the maximum number of arenas is 8 times the number of 
 cores. We've observed a DN process using 14GB of vmem for only 300M of 
 resident set. This causes all kinds of nasty issues for obvious reasons.
 Setting MALLOC_ARENA_MAX to a low number will restrict the number of memory 
 arenas and bound the virtual memory, with no noticeable downside in 
 performance - we've been recommending MALLOC_ARENA_MAX=4. We should set this 
 in hadoop-env.sh to avoid this issue as RHEL6 becomes more and more common.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11353) Add support for .hadooprc

2014-12-08 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11353:
--
Attachment: HADOOP-11353-01.patch

-01:
* Add some docs to CommandManual.apt.vm

I'll be opening another JIRA to update this for other changes.
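
As an illustration of the feature (the file contents below are hypothetical; 
any environment variables the Hadoop shell scripts already honour could be set 
here):

{noformat}
# ~/.hadooprc
export HADOOP_CLIENT_OPTS="-Xmx2g"
export HADOOP_CONF_DIR="${HOME}/conf/dev-cluster"
{noformat}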

 Add support for .hadooprc
 -

 Key: HADOOP-11353
 URL: https://issues.apache.org/jira/browse/HADOOP-11353
 Project: Hadoop Common
  Issue Type: New Feature
  Components: scripts
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
  Labels: scripts
 Attachments: HADOOP-11353-01.patch, HADOOP-11353.patch


 The system should be able to read in user-defined env vars from ~/.hadooprc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11353) Add support for .hadooprc

2014-12-08 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11353:
--
Status: Open  (was: Patch Available)

 Add support for .hadooprc
 -

 Key: HADOOP-11353
 URL: https://issues.apache.org/jira/browse/HADOOP-11353
 Project: Hadoop Common
  Issue Type: New Feature
  Components: scripts
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
  Labels: scripts
 Attachments: HADOOP-11353-01.patch, HADOOP-11353.patch


 The system should be able to read in user-defined env vars from ~/.hadooprc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11353) Add support for .hadooprc

2014-12-08 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11353:
--
Status: Patch Available  (was: Open)

 Add support for .hadooprc
 -

 Key: HADOOP-11353
 URL: https://issues.apache.org/jira/browse/HADOOP-11353
 Project: Hadoop Common
  Issue Type: New Feature
  Components: scripts
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
  Labels: scripts
 Attachments: HADOOP-11353-01.patch, HADOOP-11353.patch


 The system should be able to read in user-defined env vars from ~/.hadooprc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11363) Hadoop maven surefire-plugin uses must set heap size

2014-12-08 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14238609#comment-14238609
 ] 

Steve Loughran commented on HADOOP-11363:
-

We could go to 4G easily enough; will apply a new patch.

 Hadoop maven surefire-plugin uses must set heap size
 

 Key: HADOOP-11363
 URL: https://issues.apache.org/jira/browse/HADOOP-11363
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.7.0
 Environment: java 8
Reporter: Steve Loughran
Assignee: Steve Loughran
 Attachments: HADOOP-11363-001.patch


 Some of the hadoop tests (especially HBase) are running out of memory on Java 
 8, due to there not being enough heap for them
 The heap size of surefire test runs is *not* set in {{MAVEN_OPTS}}, it needs 
 to be explicitly set as an argument to the test run.
 I propose
 # {{hadoop-project/pom.xml}} defines the maximum heap size and test timeouts 
 for surefire builds as properties
 # modules which run tests use these values for their memory & timeout 
 settings.
 # these modules should also set the surefire version they want to use



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11363) Hadoop maven surefire-plugin uses must set heap size

2014-12-08 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-11363:

Status: Patch Available  (was: Open)

 Hadoop maven surefire-plugin uses must set heap size
 

 Key: HADOOP-11363
 URL: https://issues.apache.org/jira/browse/HADOOP-11363
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.7.0
 Environment: java 8
Reporter: Steve Loughran
Assignee: Steve Loughran
 Attachments: HADOOP-11363-001.patch, HADOOP-11363-002.patch


 Some of the hadoop tests (especially HBase) are running out of memory on Java 
 8, due to there not being enough heap for them
 The heap size of surefire test runs is *not* set in {{MAVEN_OPTS}}, it needs 
 to be explicitly set as an argument to the test run.
 I propose
 # {{hadoop-project/pom.xml}} defines the maximum heap size and test timeouts 
 for surefire builds as properties
 # modules which run tests use these values for their memory & timeout 
 settings.
 # these modules should also set the surefire version they want to use



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11363) Hadoop maven surefire-plugin uses must set heap size

2014-12-08 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-11363:

Attachment: HADOOP-11363-002.patch

Patch -002 goes to a 4G heap.

One thing to note is that this will also be the heap requirement on local dev 
systems, though if people do want to tune it down they can set the 
{{maven-surefire-plugin.argLine}} property to a different value.
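
A rough sketch of what that looks like in a consuming module's pom.xml (only 
the {{maven-surefire-plugin.argLine}} name and the 4G value are taken from this 
thread; everything else here is illustrative):

{code}
<properties>
  <!-- override locally, e.g. -Dmaven-surefire-plugin.argLine=-Xmx2048m -->
  <maven-surefire-plugin.argLine>-Xmx4096m</maven-surefire-plugin.argLine>
</properties>
...
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <argLine>${maven-surefire-plugin.argLine}</argLine>
  </configuration>
</plugin>
{code}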

 Hadoop maven surefire-plugin uses must set heap size
 

 Key: HADOOP-11363
 URL: https://issues.apache.org/jira/browse/HADOOP-11363
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.7.0
 Environment: java 8
Reporter: Steve Loughran
Assignee: Steve Loughran
 Attachments: HADOOP-11363-001.patch, HADOOP-11363-002.patch


 Some of the hadoop tests (especially HBase) are running out of memory on Java 
 8, due to there not being enough heap for them
 The heap size of surefire test runs is *not* set in {{MAVEN_OPTS}}, it needs 
 to be explicitly set as an argument to the test run.
 I propose
 # {{hadoop-project/pom.xml}} defines the maximum heap size and test timeouts 
 for surefire builds as properties
 # modules which run tests use these values for their memory & timeout 
 settings.
 # these modules should also set the surefire version they want to use



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11363) Hadoop maven surefire-plugin uses must set heap size

2014-12-08 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-11363:

Status: Open  (was: Patch Available)

 Hadoop maven surefire-plugin uses must set heap size
 

 Key: HADOOP-11363
 URL: https://issues.apache.org/jira/browse/HADOOP-11363
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.7.0
 Environment: java 8
Reporter: Steve Loughran
Assignee: Steve Loughran
 Attachments: HADOOP-11363-001.patch, HADOOP-11363-002.patch


 Some of the hadoop tests (especially HBase) are running out of memory on Java 
 8, due to there not being enough heap for them
 The heap size of surefire test runs is *not* set in {{MAVEN_OPTS}}, it needs 
 to be explicitly set as an argument to the test run.
 I propose
 # {{hadoop-project/pom.xml}} defines the maximum heap size and test timeouts 
 for surefire builds as properties
 # modules which run tests use these values for their memory & timeout 
 settings.
 # these modules should also set the surefire version they want to use



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10530) Make hadoop trunk build on Java7+ only

2014-12-08 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14238642#comment-14238642
 ] 

Haohui Mai commented on HADOOP-10530:
-

For some reason jenkins is still picking up Java 6 
(https://builds.apache.org/job/PreCommit-HDFS-Build/8957//console):

{noformat}
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (clean) @ hadoop-main ---
[WARNING] Rule 1: org.apache.maven.plugins.enforcer.RequireJavaVersion failed 
with message:
Detected JDK Version: 1.6.0-45 is not in the allowed range [1.7,).
[INFO] 
{noformat}
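
For context, the message above comes from the standard maven-enforcer 
requireJavaVersion rule that the patch configures, roughly:

{noformat}
<requireJavaVersion>
  <version>[1.7,)</version>
</requireJavaVersion>
{noformat}

so the failure is about the JDK installed on the executor, not about the patch 
itself.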



 Make hadoop trunk build on Java7+ only
 --

 Key: HADOOP-10530
 URL: https://issues.apache.org/jira/browse/HADOOP-10530
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: 3.0.0, 2.6.0
 Environment: Java 1.7+
Reporter: Steve Loughran
Assignee: Steve Loughran
Priority: Blocker
 Fix For: 2.7.0

 Attachments: HADOOP-10530-001.patch, HADOOP-10530-002.patch, 
 HADOOP-10530-003.patch, HADOOP-10530-004.patch, HADOOP-10530-005.patch, 
 HADOOP-10530-debug.000.patch, Screen Shot 2014-09-20 at 18.09.05.png


 As discussed on hadoop-common, hadoop 3 is envisaged to be Java7+ *only* 
 -this JIRA covers switching the build for this
 # maven enforcer plugin to set Java version >= {{[1.7)}}
 # compiler to set language to java 1.7



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11363) Hadoop maven surefire-plugin uses must set heap size

2014-12-08 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14238652#comment-14238652
 ] 

stack commented on HADOOP-11363:


+1

 Hadoop maven surefire-plugin uses must set heap size
 

 Key: HADOOP-11363
 URL: https://issues.apache.org/jira/browse/HADOOP-11363
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.7.0
 Environment: java 8
Reporter: Steve Loughran
Assignee: Steve Loughran
 Attachments: HADOOP-11363-001.patch, HADOOP-11363-002.patch


 Some of the hadoop tests (especially HBase) are running out of memory on Java 
 8, due to there not being enough heap for them
 The heap size of surefire test runs is *not* set in {{MAVEN_OPTS}}, it needs 
 to be explicitly set as an argument to the test run.
 I propose
 # {{hadoop-project/pom.xml}} defines the maximum heap size and test timeouts 
 for surefire builds as properties
 # modules which run tests use these values for their memory & timeout 
 settings.
 # these modules should also set the surefire version they want to use



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-6616) Improve documentation for rack awareness

2014-12-08 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14238661#comment-14238661
 ] 

Allen Wittenauer commented on HADOOP-6616:
--

Somewhere along the way, this change got dropped.  At least, I can't find a 
record of it in branch-2 or trunk.

 Improve documentation for rack awareness
 

 Key: HADOOP-6616
 URL: https://issues.apache.org/jira/browse/HADOOP-6616
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation
Reporter: Jeff Hammerbacher
Assignee: Adam Faris
  Labels: newbie
 Fix For: 3.0.0

 Attachments: hadoop-6616.patch, hadoop-6616.patch.2, 
 hadoop-6616.patch.3, hadoop-6616.patch.4


 The current documentation for rack awareness 
 (http://hadoop.apache.org/common/docs/r0.20.0/cluster_setup.html#Hadoop+Rack+Awareness)
  should be augmented to include a sample script.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11364) [Java 8] Over usage of virtual memory

2014-12-08 Thread Mohammad Kamrul Islam (JIRA)
Mohammad Kamrul Islam created HADOOP-11364:
--

 Summary: [Java 8] Over usage of virtual memory
 Key: HADOOP-11364
 URL: https://issues.apache.org/jira/browse/HADOOP-11364
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Mohammad Kamrul Islam
Assignee: Mohammad Kamrul Islam


In our Hadoop 2 + Java 8 effort, we found a few jobs being killed by Hadoop 
due to excessive virtual memory allocation, although the physical memory usage 
is low.

The most common error message is Container [pid=??,containerID=container_??] 
is running beyond virtual memory limits. Current usage: 365.1 MB of 1 GB 
physical memory used; 3.2 GB of 2.1 GB virtual memory used. Killing container.

We see this problem for MR jobs as well as in the Spark driver/executor.





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11090) [Umbrella] Support Java 8 in Hadoop

2014-12-08 Thread Mohammad Kamrul Islam (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14238669#comment-14238669
 ] 

Mohammad Kamrul Islam commented on HADOOP-11090:


I took another short-cut to build with Java 8 by disabling the javadoc 
generation: I passed -Dmaven.javadoc.skip=true on the mvn command line.

However, we must resolve this properly, either by disabling doclint or by 
fixing the docs manually.
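
For reference, the two options amount to something like the following (sketch 
only; the exact plugin configuration is to be decided when this is actually 
fixed):

{noformat}
# current workaround on the command line
mvn install -Dmaven.javadoc.skip=true

<!-- possible pom.xml route: relax doclint for the javadoc plugin -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-javadoc-plugin</artifactId>
  <configuration>
    <additionalparam>-Xdoclint:none</additionalparam>
  </configuration>
</plugin>
{noformat}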



 [Umbrella] Support Java 8 in Hadoop
 ---

 Key: HADOOP-11090
 URL: https://issues.apache.org/jira/browse/HADOOP-11090
 Project: Hadoop Common
  Issue Type: New Feature
Reporter: Mohammad Kamrul Islam
Assignee: Mohammad Kamrul Islam

 Java 8 is coming quickly to various clusters. Making sure Hadoop seamlessly 
 works with Java 8 is important for the Apache community.
   
 This JIRA is to track the issues/experiences encountered during Java 8 
 migration. If you find a potential bug, please create a separate JIRA, either 
 as a sub-task or linked to this JIRA.
 If you find a Hadoop or JVM configuration tuning, you can create a JIRA as 
 well, or add a comment here.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11363) Hadoop maven surefire-plugin uses must set heap size

2014-12-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11363?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14238671#comment-14238671
 ] 

Hadoop QA commented on HADOOP-11363:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12685881/HADOOP-11363-002.patch
  against trunk revision ddffcd8.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5194//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5194//console

This message is automatically generated.

 Hadoop maven surefire-plugin uses must set heap size
 

 Key: HADOOP-11363
 URL: https://issues.apache.org/jira/browse/HADOOP-11363
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.7.0
 Environment: java 8
Reporter: Steve Loughran
Assignee: Steve Loughran
 Attachments: HADOOP-11363-001.patch, HADOOP-11363-002.patch


 Some of the hadoop tests (especially HBase) are running out of memory on Java 
 8, due to there not being enough heap for them
 The heap size of surefire test runs is *not* set in {{MAVEN_OPTS}}, it needs 
 to be explicitly set as an argument to the test run.
 I propose
 # {{hadoop-project/pom.xml}} defines the maximum heap size and test timeouts 
 for surefire builds as properties
 # modules which run tests use these values for their memory & timeout 
 settings.
 # these modules should also set the surefire version they want to use



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11287) Simplify UGI#reloginFromKeytab for Java 7+

2014-12-08 Thread Li Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Lu updated HADOOP-11287:
---
Status: Open  (was: Patch Available)

 Simplify UGI#reloginFromKeytab for Java 7+
 --

 Key: HADOOP-11287
 URL: https://issues.apache.org/jira/browse/HADOOP-11287
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Li Lu
 Attachments: HADOOP-11287-120814.patch


 HADOOP-10786 uses reflection to make {{UGI#reloginFromKeytab}} work with Java 
 6/7/8. In 2.7 Java 6 will no longer be supported, thus the code can be 
 simplified.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11287) Simplify UGI#reloginFromKeytab for Java 7+

2014-12-08 Thread Li Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Lu updated HADOOP-11287:
---
Attachment: HADOOP-11287-120814.patch

Kick Jenkins again to see if the findbugs warnings are reproducible. Those 
warnings appears to be unrelated to this change. 

 Simplify UGI#reloginFromKeytab for Java 7+
 --

 Key: HADOOP-11287
 URL: https://issues.apache.org/jira/browse/HADOOP-11287
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Li Lu
 Attachments: HADOOP-11287-120814.patch, HADOOP-11287-120814.patch


 HADOOP-10786 uses reflection to make {{UGI#reloginFromKeytab}} work with Java 
 6/7/8. In 2.7 Java 6 will no longer be supported, thus the code can be 
 simplified.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11287) Simplify UGI#reloginFromKeytab for Java 7+

2014-12-08 Thread Li Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Lu updated HADOOP-11287:
---
Status: Patch Available  (was: Open)

 Simplify UGI#reloginFromKeytab for Java 7+
 --

 Key: HADOOP-11287
 URL: https://issues.apache.org/jira/browse/HADOOP-11287
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Li Lu
 Attachments: HADOOP-11287-120814.patch, HADOOP-11287-120814.patch


 HADOOP-10786 uses reflection to make {{UGI#reloginFromKeytab}} work with Java 
 6/7/8. In 2.7 Java 6 will no longer be supported, thus the code can be 
 simplified.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11353) Add support for .hadooprc

2014-12-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11353?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14238712#comment-14238712
 ] 

Hadoop QA commented on HADOOP-11353:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12685876/HADOOP-11353-01.patch
  against trunk revision ddffcd8.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 65 new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5195//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5195//artifact/patchprocess/newPatchFindbugsWarningshadoop-common.html
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5195//console

This message is automatically generated.

 Add support for .hadooprc
 -

 Key: HADOOP-11353
 URL: https://issues.apache.org/jira/browse/HADOOP-11353
 Project: Hadoop Common
  Issue Type: New Feature
  Components: scripts
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
  Labels: scripts
 Attachments: HADOOP-11353-01.patch, HADOOP-11353.patch


 The system should be able to read in user-defined env vars from ~/.hadooprc.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11364) [Java 8] Over usage of virtual memory

2014-12-08 Thread Mohammad Kamrul Islam (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14238716#comment-14238716
 ] 

Mohammad Kamrul Islam commented on HADOOP-11364:


My findings and quick resolutions:
By default, Java 8 allocates more virtual memory than Java 7. However, we can 
control the non-heap memory usage by limiting the maximum allowed values for 
some JVM parameters such as -XX:ReservedCodeCacheSize=100M 
-XX:MaxMetaspaceSize=256m -XX:CompressedClassSpaceSize=256m

For M/R-based jobs (such as Pig, Hive etc.), users can pass the following JVM -XX 
parameters as part of mapreduce.reduce.java.opts or mapreduce.map.java.opts:
{noformat}
mapreduce.reduce.java.opts  '-XX:ReservedCodeCacheSize=100M 
-XX:MaxMetaspaceSize=256m -XX:CompressedClassSpaceSize=256m -Xmx1536m -Xms512m 
-Djava.net.preferIPv4Stack=true'
{noformat}

Similarly for Spark jobs, we need to pass the same parameters to the Spark 
AM/master and executor. The Spark community is working on ways to pass these 
types of parameters more easily. In Spark 1.1.0, users can pass them for 
spark-cluster based job submission as follows. For general job submission, users 
have to wait until https://issues.apache.org/jira/browse/SPARK-4461 is released.
{noformat}
spark.driver.extraJavaOptions = -XX:ReservedCodeCacheSize=100M 
-XX:MaxMetaspaceSize=256m -XX:CompressedClassSpaceSize=256m
{noformat}

For the Spark executors, pass the following.
{noformat} 
spark.executor.extraJavaOptions = -XX:ReservedCodeCacheSize=100M 
-XX:MaxMetaspaceSize=256m -XX:CompressedClassSpaceSize=256m
{noformat}

 These parameters can be set in conf/spark-defaults.conf as well.

 [Java 8] Over usage of virtual memory
 -

 Key: HADOOP-11364
 URL: https://issues.apache.org/jira/browse/HADOOP-11364
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Mohammad Kamrul Islam
Assignee: Mohammad Kamrul Islam

 In our Hadoop 2 + Java 8 effort, we found a few jobs being killed by Hadoop 
 due to excessive virtual memory allocation, although the physical memory 
 usage is low.
 The most common error message is Container [pid=??,containerID=container_??] 
 is running beyond virtual memory limits. Current usage: 365.1 MB of 1 GB 
 physical memory used; 3.2 GB of 2.1 GB virtual memory used. Killing 
 container.
 We see this problem for MR jobs as well as in Spark drivers/executors.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10530) Make hadoop trunk build on Java7+ only

2014-12-08 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14238732#comment-14238732
 ] 

Colin Patrick McCabe commented on HADOOP-10530:
---

[~stev...@iseran.com], it looks like this broke the HDFS precommit job.  I am 
changing the configuration such that

{code}
export JAVA_HOME=${TOOLS_HOME}/java/latest
{code}

is now:

{code}
export JAVA_HOME=${TOOLS_HOME}/java/jdk1.7.0_55
{code}

In the future, we should make sure that all the precommit jobs work before 
committing stuff like this, not just the hadoop-common one...

 Make hadoop trunk build on Java7+ only
 --

 Key: HADOOP-10530
 URL: https://issues.apache.org/jira/browse/HADOOP-10530
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: 3.0.0, 2.6.0
 Environment: Java 1.7+
Reporter: Steve Loughran
Assignee: Steve Loughran
Priority: Blocker
 Fix For: 2.7.0

 Attachments: HADOOP-10530-001.patch, HADOOP-10530-002.patch, 
 HADOOP-10530-003.patch, HADOOP-10530-004.patch, HADOOP-10530-005.patch, 
 HADOOP-10530-debug.000.patch, Screen Shot 2014-09-20 at 18.09.05.png


 As discussed on hadoop-common, hadoop 3 is envisaged to be Java7+ *only* 
 -this JIRA covers switching the build for this
 # maven enforcer plugin to set Java version = {{[1.7)}}
 # compiler to set language to java 1.7



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11349) RawLocalFileSystem leaks file descriptor while creating a file if creat succeeds but chmod fails.

2014-12-08 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11349?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated HADOOP-11349:
--
Fix Version/s: 2.7.0
   Status: Patch Available  (was: Open)

 RawLocalFileSystem leaks file descriptor while creating a file if creat 
 succeeds but chmod fails.
 -

 Key: HADOOP-11349
 URL: https://issues.apache.org/jira/browse/HADOOP-11349
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.6.0
Reporter: Chris Nauroth
Assignee: Varun Saxena
Priority: Minor
 Fix For: 2.7.0

 Attachments: HADOOP-11349.patch


 {{RawLocalFileSystem}} currently implements some file creation operations as 
 a sequence of 2 syscalls: create the file, followed by setting its 
 permissions.  If creation succeeds, but then setting permission causes an 
 exception to be thrown, then there is no attempt to close the previously 
 opened file, resulting in a file descriptor leak.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11287) Simplify UGI#reloginFromKeytab for Java 7+

2014-12-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14238757#comment-14238757
 ] 

Hadoop QA commented on HADOOP-11287:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12685895/HADOOP-11287-120814.patch
  against trunk revision ddffcd8.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 65 new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5196//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5196//artifact/patchprocess/newPatchFindbugsWarningshadoop-common.html
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5196//console

This message is automatically generated.

 Simplify UGI#reloginFromKeytab for Java 7+
 --

 Key: HADOOP-11287
 URL: https://issues.apache.org/jira/browse/HADOOP-11287
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Li Lu
 Attachments: HADOOP-11287-120814.patch, HADOOP-11287-120814.patch


 HADOOP-10786 uses reflection to make {{UGI#reloginFromKeytab}} work with Java 
 6/7/8. In 2.7 Java 6 will no longer be supported, thus the code can be 
 simplified.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11287) Simplify UGI#reloginFromKeytab for Java 7+

2014-12-08 Thread Stephen Chu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14238777#comment-14238777
 ] 

Stephen Chu commented on HADOOP-11287:
--

Hi, [~gtCarrera9]. Thanks for working on this.

I believe this will break when run with JDK8 because, as described in 
HADOOP-10786, Krb5LoginModule changed subtly in Java 8: in particular, if 
useKeyTab and storeKey are specified, then only a KeyTab object is added to the 
Subject's private credentials, whereas in Java <= 7 both a KeyTab and some 
number of KerberosKey objects were added.

If users run with the current patch on Java 8, then isKeytab will incorrectly 
be set to false because KerberosKey objects will not be added.

 Simplify UGI#reloginFromKeytab for Java 7+
 --

 Key: HADOOP-11287
 URL: https://issues.apache.org/jira/browse/HADOOP-11287
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Li Lu
 Attachments: HADOOP-11287-120814.patch, HADOOP-11287-120814.patch


 HADOOP-10786 uses reflection to make {{UGI#reloginFromKeytab}} work with Java 
 6/7/8. In 2.7 Java 6 will no longer be supported, thus the code can be 
 simplified.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11287) Simplify UGI#reloginFromKeytab for Java 7+

2014-12-08 Thread Stephen Chu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14238780#comment-14238780
 ] 

Stephen Chu commented on HADOOP-11287:
--

To fix the above, we just check for the presence of 
javax.security.auth.kerberos.KeyTab; we don't need the reflection anymore 
because the class is available in JDK7 and up.
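
A minimal sketch of that check, assuming a plain helper around {{Subject#getPrivateCredentials}} (the class and method names below are illustrative, not the actual UGI code):
{code}
import javax.security.auth.Subject;
import javax.security.auth.kerberos.KeyTab;

public final class KeytabCheckSketch {
  // On JDK 7 and up a KeyTab credential is present whenever the login used a
  // keytab, regardless of whether KerberosKey objects were also added, so this
  // check behaves the same on Java 7 and Java 8.
  public static boolean isFromKeytab(Subject subject) {
    return !subject.getPrivateCredentials(KeyTab.class).isEmpty();
  }
}
{code}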

 Simplify UGI#reloginFromKeytab for Java 7+
 --

 Key: HADOOP-11287
 URL: https://issues.apache.org/jira/browse/HADOOP-11287
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Li Lu
 Attachments: HADOOP-11287-120814.patch, HADOOP-11287-120814.patch


 HADOOP-10786 uses reflection to make {{UGI#reloginFromKeytab}} work with Java 
 6/7/8. In 2.7 Java 6 will no longer be supported, thus the code can be 
 simplified.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10134) [JDK8] Fix Javadoc errors caused by incorrect or illegal tags in doc comments

2014-12-08 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HADOOP-10134:

Attachment: (was: HADOOP-10134.000.patch)

 [JDK8] Fix Javadoc errors caused by incorrect or illegal tags in doc comments 
 --

 Key: HADOOP-10134
 URL: https://issues.apache.org/jira/browse/HADOOP-10134
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 2.3.0
Reporter: Andrew Purtell
Assignee: Andrew Purtell
Priority: Minor
 Attachments: 10134-branch-2.patch, 10134-branch-2.patch, 
 10134-trunk.patch, 10134-trunk.patch, 10134-trunk.patch, 
 HADOOP-10134.000.patch


 Javadoc is more strict by default in JDK8 and will error out on malformed or 
 illegal tags found in doc comments. Although tagged as JDK8 all of the 
 required changes are generic Javadoc cleanups.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10134) [JDK8] Fix Javadoc errors caused by incorrect or illegal tags in doc comments

2014-12-08 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10134?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HADOOP-10134:

Attachment: HADOOP-10134.000.patch

 [JDK8] Fix Javadoc errors caused by incorrect or illegal tags in doc comments 
 --

 Key: HADOOP-10134
 URL: https://issues.apache.org/jira/browse/HADOOP-10134
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 2.3.0
Reporter: Andrew Purtell
Assignee: Andrew Purtell
Priority: Minor
 Attachments: 10134-branch-2.patch, 10134-branch-2.patch, 
 10134-trunk.patch, 10134-trunk.patch, 10134-trunk.patch, 
 HADOOP-10134.000.patch


 Javadoc is more strict by default in JDK8 and will error out on malformed or 
 illegal tags found in doc comments. Although tagged as JDK8 all of the 
 required changes are generic Javadoc cleanups.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10134) [JDK8] Fix Javadoc errors caused by incorrect or illegal tags in doc comments

2014-12-08 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14238802#comment-14238802
 ] 

Haohui Mai commented on HADOOP-10134:
-

[~apurtell], I rebased your patch onto the latest trunk. Does it look good to 
you?

 [JDK8] Fix Javadoc errors caused by incorrect or illegal tags in doc comments 
 --

 Key: HADOOP-10134
 URL: https://issues.apache.org/jira/browse/HADOOP-10134
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 2.3.0
Reporter: Andrew Purtell
Assignee: Andrew Purtell
Priority: Minor
 Attachments: 10134-branch-2.patch, 10134-branch-2.patch, 
 10134-trunk.patch, 10134-trunk.patch, 10134-trunk.patch, 
 HADOOP-10134.000.patch


 Javadoc is more strict by default in JDK8 and will error out on malformed or 
 illegal tags found in doc comments. Although tagged as JDK8 all of the 
 required changes are generic Javadoc cleanups.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11349) RawLocalFileSystem leaks file descriptor while creating a file if creat succeeds but chmod fails.

2014-12-08 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14238816#comment-14238816
 ] 

Colin Patrick McCabe commented on HADOOP-11349:
---

+1.

 RawLocalFileSystem leaks file descriptor while creating a file if creat 
 succeeds but chmod fails.
 -

 Key: HADOOP-11349
 URL: https://issues.apache.org/jira/browse/HADOOP-11349
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.6.0
Reporter: Chris Nauroth
Assignee: Varun Saxena
Priority: Minor
 Fix For: 2.7.0

 Attachments: HADOOP-11349.patch


 {{RawLocalFileSystem}} currently implements some file creation operations as 
 a sequence of 2 syscalls: create the file, followed by setting its 
 permissions.  If creation succeeds, but then setting permission causes an 
 exception to be thrown, then there is no attempt to close the previously 
 opened file, resulting in a file descriptor leak.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11321) copyToLocal cannot save a file to an SMB share unless the user has Full Control permissions.

2014-12-08 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14238819#comment-14238819
 ] 

Colin Patrick McCabe commented on HADOOP-11321:
---

Awesome.  Let me know if you want help on the fchmod thing... we could split 
this JIRA into one for Windows and one for Linux.  Or maybe it makes sense to 
keep it all here, and just do it for both platforms in this patch?

I reviewed HADOOP-11349, which is a small fix for just the FD leak (good find).
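
For the Linux side, a rough sketch of a related single-operation shape using plain java.nio (not Hadoop's NativeIO or winutils, and not the fchmod approach mentioned above; the permission string is just an example): create the file with its permissions up front so no separate {{setPermission}} call is needed.
{code}
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.attribute.FileAttribute;
import java.nio.file.attribute.PosixFilePermission;
import java.nio.file.attribute.PosixFilePermissions;
import java.util.Set;

public class CreateWithPermsSketch {
  public static OutputStream create(String file) throws Exception {
    Set<PosixFilePermission> perms = PosixFilePermissions.fromString("rw-r--r--");
    FileAttribute<Set<PosixFilePermission>> attr = PosixFilePermissions.asFileAttribute(perms);
    Path path = Paths.get(file);
    // Create with the desired permissions in one operation instead of
    // create followed by a separate setPermission/chmod.
    Files.createFile(path, attr);
    return Files.newOutputStream(path);
  }
}
{code}
This does not help on SMB/Windows, where POSIX permissions are not supported and the work would have to go through winutils, but it illustrates the Linux half of splitting the problem.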

 copyToLocal cannot save a file to an SMB share unless the user has Full 
 Control permissions.
 

 Key: HADOOP-11321
 URL: https://issues.apache.org/jira/browse/HADOOP-11321
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.6.0
Reporter: Chris Nauroth
Assignee: Chris Nauroth
 Attachments: HADOOP-11321.1.patch, HADOOP-11321.2.patch, 
 winutils.tmp.patch


 In Hadoop 2, it is impossible to use {{copyToLocal}} to copy a file from HDFS 
 to a destination on an SMB share.  This is because in Hadoop 2, the 
 {{copyToLocal}} maps to 2 underlying {{RawLocalFileSystem}} operations: 
 {{create}} and {{setPermission}}.  On an SMB share, the user may be 
 authorized for the {{create}} but denied for the {{setPermission}}.  Windows 
 denies the {{WRITE_DAC}} right required by {{setPermission}} unless the user 
 has Full Control permissions.  Granting Full Control isn't feasible for most 
 deployments, because it's insecure.  This is a regression from Hadoop 1, 
 where {{copyToLocal}} only did a {{create}} and didn't do a separate 
 {{setPermission}}.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11238) Update the NameNode's Group Cache in the background when possible

2014-12-08 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11238?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-11238:
--
Summary: Update the NameNode's Group Cache in the background when possible  
(was: Group Cache should not cause namenode pause)

 Update the NameNode's Group Cache in the background when possible
 -

 Key: HADOOP-11238
 URL: https://issues.apache.org/jira/browse/HADOOP-11238
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.5.1
Reporter: Chris Li
Assignee: Chris Li
Priority: Minor
 Attachments: HADOOP-11238.patch, HADOOP-11238.patch


 This patch addresses an issue where the namenode pauses during group 
 resolution by only allowing a single group resolution query on expiry. There 
 are two scenarios:
 1. When there is not yet a value in the cache, all threads which make a 
 request will block while a single thread fetches the value.
 2. When there is already a value in the cache and it is expired, the new 
 value will be fetched in the background while the old value is used by other 
 threads
 This is handled by guava's cache.
 Negative caching is a feature built into the groups cache, and since guava's 
 caches don't support different expiration times, we have a separate negative 
 cache which masks the guava cache: if an element exists in the negative cache 
 and isn't expired, we return it.
 In total the logic for fetching a group is:
 1. If username exists in static cache, return the value (this was already 
 present)
 2. If username exists in negative cache and negative cache is not expired, 
 raise an exception as usual
 3. Otherwise, defer to the guava cache (see the two scenarios above)
 Original Issue Below:
 
 Our namenode pauses for 12-60 seconds several times every hour. During these 
 pauses, no new requests can come in.
 Around the time of pauses, we have log messages such as:
 2014-10-22 13:24:22,688 WARN org.apache.hadoop.security.Groups: Potential 
 performance problem: getGroups(user=x) took 34507 milliseconds.
 The current theory is:
 1. Groups has a cache that is refreshed periodically. Each entry has a cache 
 expiry.
 2. When a cache entry expires, multiple threads can see this expiration and 
 then we have a thundering herd effect where all these threads hit the wire 
 and overwhelm our LDAP servers (we are using ShellBasedUnixGroupsMapping with 
 sssd; how this happens has yet to be established)
 3. Group resolution queries begin to take longer; I've observed them taking 1.2 
 seconds instead of the usual 0.01-0.03 seconds when measuring in the shell with 
 `time groups myself`
 4. If there is mutual exclusion somewhere along this path, a 1-second pause 
 could lead to a 60-second pause as all the threads compete for the resource. 
 The exact cause hasn't been established.
 Potential solutions include:
 1. Increasing group cache time, which will make the issue less frequent
 2. Rolling evictions of the cache so we prevent the large spike in LDAP 
 queries
 3. Gate the cache refresh so that only one thread is responsible for 
 refreshing the cache



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10134) [JDK8] Fix Javadoc errors caused by incorrect or illegal tags in doc comments

2014-12-08 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14238829#comment-14238829
 ] 

Andrew Purtell commented on HADOOP-10134:
-

Have you tried compiling the result? My guess is there have been more '</p>' 
and other illegal tags added via commits over the months. 
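
For context, a minimal illustration of a doc comment that JDK 8's stricter doclint accepts (the class and method are made up, not from the Hadoop source): mismatched HTML tags such as a stray </p> and unescaped characters like & or < fail the javadoc build on JDK 8, so the cleanup is mostly mechanical tag balancing and entity escaping.
{code}
// Illustrative only (made-up class): a doc comment written so that JDK 8's
// doclint accepts it, with HTML tags balanced and special characters escaped.
public class DoclintCleanExample {
  /**
   * Computes a &lt;best-effort&gt; sum of x &amp; y.
   *
   * @param x the first operand
   * @param y the second operand
   * @return the sum of x and y
   */
  public static int add(int x, int y) {
    return x + y;
  }
}
{code}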

 [JDK8] Fix Javadoc errors caused by incorrect or illegal tags in doc comments 
 --

 Key: HADOOP-10134
 URL: https://issues.apache.org/jira/browse/HADOOP-10134
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 2.3.0
Reporter: Andrew Purtell
Assignee: Andrew Purtell
Priority: Minor
 Attachments: 10134-branch-2.patch, 10134-branch-2.patch, 
 10134-trunk.patch, 10134-trunk.patch, 10134-trunk.patch, 
 HADOOP-10134.000.patch


 Javadoc is more strict by default in JDK8 and will error out on malformed or 
 illegal tags found in doc comments. Although tagged as JDK8 all of the 
 required changes are generic Javadoc cleanups.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11238) Update the NameNode's Group Cache in the background when possible

2014-12-08 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14238830#comment-14238830
 ] 

Colin Patrick McCabe commented on HADOOP-11238:
---

bq. It's currently like this because the test case uses dependency injection to 
test timing. We could use a fake guava timer but I wanted to avoid tight 
coupling between hadoop and the guava library. Let me know what you think.

Sure, seems reasonable.

bq. Good idea. I added {{\[expireAfterWrite\]}} using 10*cacheTimeout, and 
added some comments

Great.

{code}
201  * This method will block if a cache entry doesn't exist, and
202  * any subsequent requests for the user will wait on the first
203  * request to return. If a user already exists in the cache,
204  * this will be run in the background.
{code}
Maybe say "any subsequent requests for the user will wait on *this* request to 
return" to make it slightly clearer.

Small nit: can you give different names to your different patches?  There was a 
discussion about this on hadoop-common-dev recently; the consensus is that 
putting numbers on each one makes it clearer which is the latest (yes, we know 
JIRA attaches a date as well :)

I renamed the JIRA to "Update the NameNode's Group Cache in the background when 
possible" to better reflect the current patch (i.e. it talks about the solution 
as well as the problem).

Thanks again.
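
For readers following along, a minimal illustration of the caching pattern being discussed (a standalone sketch using Guava directly, not the actual patch; {{fetchGroups}} stands in for the real group mapping lookup):
{code}
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheLoader;
import com.google.common.cache.LoadingCache;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.TimeUnit;

public class GroupCacheSketch {
  private static final long CACHE_TIMEOUT_SEC = 300;

  private final LoadingCache<String, List<String>> cache = CacheBuilder.newBuilder()
      // After the timeout, the first caller triggers a reload while other
      // threads keep getting the old value, avoiding the thundering herd.
      .refreshAfterWrite(CACHE_TIMEOUT_SEC, TimeUnit.SECONDS)
      // Entries that are never touched again are dropped after a much longer
      // window (the 10*cacheTimeout discussed above).
      .expireAfterWrite(10 * CACHE_TIMEOUT_SEC, TimeUnit.SECONDS)
      .build(new CacheLoader<String, List<String>>() {
        @Override
        public List<String> load(String user) throws Exception {
          // The very first lookup for a user blocks its callers until the
          // groups have been fetched once.
          return fetchGroups(user);
        }
      });

  public List<String> getGroups(String user) throws Exception {
    return cache.get(user);
  }

  private List<String> fetchGroups(String user) {
    // Placeholder for the real (potentially slow) shell/LDAP group lookup.
    return Collections.emptyList();
  }
}
{code}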

 Update the NameNode's Group Cache in the background when possible
 -

 Key: HADOOP-11238
 URL: https://issues.apache.org/jira/browse/HADOOP-11238
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.5.1
Reporter: Chris Li
Assignee: Chris Li
Priority: Minor
 Attachments: HADOOP-11238.patch, HADOOP-11238.patch


 This patch addresses an issue where the namenode pauses during group 
 resolution by only allowing a single group resolution query on expiry. There 
 are two scenarios:
 1. When there is not yet a value in the cache, all threads which make a 
 request will block while a single thread fetches the value.
 2. When there is already a value in the cache and it is expired, the new 
 value will be fetched in the background while the old value is used by other 
 threads
 This is handled by guava's cache.
 Negative caching is a feature built into the groups cache, and since guava's 
 caches don't support different expiration times, we have a separate negative 
 cache which masks the guava cache: if an element exists in the negative cache 
 and isn't expired, we return it.
 In total the logic for fetching a group is:
 1. If username exists in static cache, return the value (this was already 
 present)
 2. If username exists in negative cache and negative cache is not expired, 
 raise an exception as usual
 3. Otherwise, defer to the guava cache (see the two scenarios above)
 Original Issue Below:
 
 Our namenode pauses for 12-60 seconds several times every hour. During these 
 pauses, no new requests can come in.
 Around the time of pauses, we have log messages such as:
 2014-10-22 13:24:22,688 WARN org.apache.hadoop.security.Groups: Potential 
 performance problem: getGroups(user=x) took 34507 milliseconds.
 The current theory is:
 1. Groups has a cache that is refreshed periodically. Each entry has a cache 
 expiry.
 2. When a cache entry expires, multiple threads can see this expiration and 
 then we have a thundering herd effect where all these threads hit the wire 
 and overwhelm our LDAP servers (we are using ShellBasedUnixGroupsMapping with 
 sssd; how this happens has yet to be established)
 3. Group resolution queries begin to take longer; I've observed them taking 1.2 
 seconds instead of the usual 0.01-0.03 seconds when measuring in the shell with 
 `time groups myself`
 4. If there is mutual exclusion somewhere along this path, a 1-second pause 
 could lead to a 60-second pause as all the threads compete for the resource. 
 The exact cause hasn't been established.
 Potential solutions include:
 1. Increasing group cache time, which will make the issue less frequent
 2. Rolling evictions of the cache so we prevent the large spike in LDAP 
 queries
 3. Gate the cache refresh so that only one thread is responsible for 
 refreshing the cache



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10134) [JDK8] Fix Javadoc errors caused by incorrect or illegal tags in doc comments

2014-12-08 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14238834#comment-14238834
 ] 

Haohui Mai commented on HADOOP-10134:
-

I compiled on Java 1.8.0_25 and fixed all the errors.

 [JDK8] Fix Javadoc errors caused by incorrect or illegal tags in doc comments 
 --

 Key: HADOOP-10134
 URL: https://issues.apache.org/jira/browse/HADOOP-10134
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 2.3.0
Reporter: Andrew Purtell
Assignee: Andrew Purtell
Priority: Minor
 Attachments: 10134-branch-2.patch, 10134-branch-2.patch, 
 10134-trunk.patch, 10134-trunk.patch, 10134-trunk.patch, 
 HADOOP-10134.000.patch


 Javadoc is more strict by default in JDK8 and will error out on malformed or 
 illegal tags found in doc comments. Although tagged as JDK8 all of the 
 required changes are generic Javadoc cleanups.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11365) Use Java 7's HttpCookie class to handle Secure and HttpOnly flag

2014-12-08 Thread Haohui Mai (JIRA)
Haohui Mai created HADOOP-11365:
---

 Summary: Use Java 7's HttpCookie class to handle Secure and 
HttpOnly flag
 Key: HADOOP-11365
 URL: https://issues.apache.org/jira/browse/HADOOP-11365
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Li Lu


HADOOP-10379 and HADOOP-10710 introduced support for the Secure and HttpOnly 
flags for the hadoop auth cookie. The current implementation includes custom code 
so that it can be compatible with Java 6. Since Hadoop has moved to Java 7, 
this code can be replaced by Java's HttpCookie class.
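
A minimal sketch of the replacement (the cookie name and helper below are illustrative, not the actual hadoop-auth code; rendering the cookie into the Set-Cookie response header is left to the caller):
{code}
import java.net.HttpCookie;

public class AuthCookieSketch {
  public static HttpCookie createAuthCookie(String token, boolean secure, boolean httpOnly) {
    HttpCookie cookie = new HttpCookie("hadoop.auth", token);
    cookie.setPath("/");
    cookie.setSecure(secure);      // Secure flag
    cookie.setHttpOnly(httpOnly);  // HttpOnly flag, available since Java 7
    return cookie;
  }
}
{code}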



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11287) Simplify UGI#reloginFromKeytab for Java 7+

2014-12-08 Thread Li Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Lu updated HADOOP-11287:
---
Attachment: HADOOP-11287-120814-1.patch

Hi [~schu], thank you very much for pointing this out! In my updated patch, I 
followed your suggestion to check for the presence of KeyTab, and kept the 
reflection part removed. 

 Simplify UGI#reloginFromKeytab for Java 7+
 --

 Key: HADOOP-11287
 URL: https://issues.apache.org/jira/browse/HADOOP-11287
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Li Lu
 Attachments: HADOOP-11287-120814-1.patch, HADOOP-11287-120814.patch, 
 HADOOP-11287-120814.patch


 HADOOP-10786 uses reflection to make {{UGI#reloginFromKeytab}} work with Java 
 6/7/8. In 2.7 Java 6 will no longer be supported, thus the code can be 
 simplified.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11366) Fix findbug warnings after move to Java 7

2014-12-08 Thread Li Lu (JIRA)
Li Lu created HADOOP-11366:
--

 Summary: Fix findbug warnings after move to Java 7
 Key: HADOOP-11366
 URL: https://issues.apache.org/jira/browse/HADOOP-11366
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Li Lu
Assignee: Li Lu


After the move to Java 7, there are 65 findbugs warnings in the Hadoop Common 
codebase. We may want to fix these. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10134) [JDK8] Fix Javadoc errors caused by incorrect or illegal tags in doc comments

2014-12-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14238853#comment-14238853
 ] 

Hadoop QA commented on HADOOP-10134:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12685911/HADOOP-10134.000.patch
  against trunk revision ddffcd8.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 4 new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-auth hadoop-common-project/hadoop-minikdc 
hadoop-maven-plugins.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5198//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5198//artifact/patchprocess/newPatchFindbugsWarningshadoop-auth.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5198//artifact/patchprocess/newPatchFindbugsWarningshadoop-minikdc.html
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5198//console

This message is automatically generated.

 [JDK8] Fix Javadoc errors caused by incorrect or illegal tags in doc comments 
 --

 Key: HADOOP-10134
 URL: https://issues.apache.org/jira/browse/HADOOP-10134
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 2.3.0
Reporter: Andrew Purtell
Assignee: Andrew Purtell
Priority: Minor
 Attachments: 10134-branch-2.patch, 10134-branch-2.patch, 
 10134-trunk.patch, 10134-trunk.patch, 10134-trunk.patch, 
 HADOOP-10134.000.patch


 Javadoc is more strict by default in JDK8 and will error out on malformed or 
 illegal tags found in doc comments. Although tagged as JDK8 all of the 
 required changes are generic Javadoc cleanups.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11349) RawLocalFileSystem leaks file descriptor while creating a file if creat succeeds but chmod fails.

2014-12-08 Thread Gera Shegalov (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14238900#comment-14238900
 ] 

Gera Shegalov commented on HADOOP-11349:


# We should consider catching {{Throwable}} to make it more robust. 
# {{out.close}} may throw an exception that will hide the original problem; we 
should probably just catch and log it without rethrowing, to make sure that the 
original exception is propagated.

Please add a space after {{catch}}. 
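
A rough sketch of the resulting pattern (a standalone illustration, not the actual {{RawLocalFileSystem}} code; {{chmod}} stands in for {{setPermission}} and the logging is simplified):
{code}
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;

public class FdLeakFixSketch {
  public static OutputStream create(File file) throws IOException {
    FileOutputStream out = new FileOutputStream(file);
    try {
      chmod(file);                  // may throw; stands in for setPermission()
    } catch (Throwable t) {
      try {
        out.close();                // don't leak the file descriptor
      } catch (IOException e) {
        // Log and swallow so the close failure doesn't hide the original problem.
        System.err.println("close failed after chmod failure: " + e);
      }
      throw t;                      // Java 7 precise rethrow keeps the original exception
    }
    return out;
  }

  private static void chmod(File file) throws IOException {
    // Placeholder for the permission-setting call that can fail.
  }
}
{code}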

 RawLocalFileSystem leaks file descriptor while creating a file if creat 
 succeeds but chmod fails.
 -

 Key: HADOOP-11349
 URL: https://issues.apache.org/jira/browse/HADOOP-11349
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.6.0
Reporter: Chris Nauroth
Assignee: Varun Saxena
Priority: Minor
 Fix For: 2.7.0

 Attachments: HADOOP-11349.patch


 {{RawLocalFileSystem}} currently implements some file creation operations as 
 a sequence of 2 syscalls: create the file, followed by setting its 
 permissions.  If creation succeeds, but then setting permission causes an 
 exception to be thrown, then there is no attempt to close the previously 
 opened file, resulting in a file descriptor leak.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11287) Simplify UGI#reloginFromKeytab for Java 7+

2014-12-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14238904#comment-14238904
 ] 

Hadoop QA commented on HADOOP-11287:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12685919/HADOOP-11287-120814-1.patch
  against trunk revision ddffcd8.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 65 new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5199//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5199//artifact/patchprocess/newPatchFindbugsWarningshadoop-common.html
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5199//console

This message is automatically generated.

 Simplify UGI#reloginFromKeytab for Java 7+
 --

 Key: HADOOP-11287
 URL: https://issues.apache.org/jira/browse/HADOOP-11287
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Li Lu
 Attachments: HADOOP-11287-120814-1.patch, HADOOP-11287-120814.patch, 
 HADOOP-11287-120814.patch


 HADOOP-10786 uses reflection to make {{UGI#reloginFromKeytab}} work with Java 
 6/7/8. In 2.7 Java 6 will no longer be supported, thus the code can be 
 simplified.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11287) Simplify UGI#reloginFromKeytab for Java 7+

2014-12-08 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14238990#comment-14238990
 ] 

Haohui Mai commented on HADOOP-11287:
-

+1.

The findbugs warnings are unrelated and being tracked by HADOOP-11366. I'll 
commit it shortly.

 Simplify UGI#reloginFromKeytab for Java 7+
 --

 Key: HADOOP-11287
 URL: https://issues.apache.org/jira/browse/HADOOP-11287
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Li Lu
 Attachments: HADOOP-11287-120814-1.patch, HADOOP-11287-120814.patch, 
 HADOOP-11287-120814.patch


 HADOOP-10786 uses reflection to make {{UGI#reloginFromKeytab}} work with Java 
 6/7/8. In 2.7 Java 6 will no longer be supported, thus the code can be 
 simplified.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11287) Simplify UGI#reloginFromKeytab for Java 7+

2014-12-08 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HADOOP-11287:

   Resolution: Fixed
Fix Version/s: 2.7.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I've committed the patch to trunk and branch-2. Thanks [~gtCarrera9] for the 
contribution.

 Simplify UGI#reloginFromKeytab for Java 7+
 --

 Key: HADOOP-11287
 URL: https://issues.apache.org/jira/browse/HADOOP-11287
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Li Lu
 Fix For: 2.7.0

 Attachments: HADOOP-11287-120814-1.patch, HADOOP-11287-120814.patch, 
 HADOOP-11287-120814.patch


 HADOOP-10786 uses reflection to make {{UGI#reloginFromKeytab}} work with Java 
 6/7/8. In 2.7 Java 6 will no longer be supported, thus the code can be 
 simplified.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10476) Bumping the findbugs version to 2.0.2

2014-12-08 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HADOOP-10476:

Attachment: HADOOP-10476.001.patch

 Bumping the findbugs version to 2.0.2
 -

 Key: HADOOP-10476
 URL: https://issues.apache.org/jira/browse/HADOOP-10476
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: build
Reporter: Haohui Mai
Assignee: Haohui Mai
 Attachments: HADOOP-10476.000.patch, HADOOP-10476.001.patch


 The findbugs version used by Hadoop is pretty old (1.3.9). The old version of 
 Findbugs itself has some bugs (like 
 http://sourceforge.net/p/findbugs/bugs/918/, hit by HADOOP-10474). 
 Furthermore, the newer version is able to catch more bugs.
 It's a good time to bump the findbugs version to the latest stable version, 
 2.0.2.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11287) Simplify UGI#reloginFromKeytab for Java 7+

2014-12-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14239000#comment-14239000
 ] 

Hudson commented on HADOOP-11287:
-

FAILURE: Integrated in Hadoop-trunk-Commit #6673 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/6673/])
HADOOP-11287. Simplify UGI#reloginFromKeytab for Java 7+. Contributed by Li Lu. 
(wheat9: rev 0ee41612bb237331fc7130a6fb8b5e3366fcc221)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
* hadoop-common-project/hadoop-common/CHANGES.txt


 Simplify UGI#reloginFromKeytab for Java 7+
 --

 Key: HADOOP-11287
 URL: https://issues.apache.org/jira/browse/HADOOP-11287
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Li Lu
 Fix For: 2.7.0

 Attachments: HADOOP-11287-120814-1.patch, HADOOP-11287-120814.patch, 
 HADOOP-11287-120814.patch


 HADOOP-10786 uses reflection to make {{UGI#reloginFromKeytab}} work with Java 
 6/7/8. In 2.7 Java 6 will no longer be supported, thus the code can be 
 simplified.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10476) Bumping the findbugs version to 3.0.0

2014-12-08 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14239001#comment-14239001
 ] 

Haohui Mai commented on HADOOP-10476:
-

The v1 patch bumps the findbugs version to 3.

 Bumping the findbugs version to 3.0.0
 -

 Key: HADOOP-10476
 URL: https://issues.apache.org/jira/browse/HADOOP-10476
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: build
Reporter: Haohui Mai
Assignee: Haohui Mai
 Attachments: HADOOP-10476.000.patch, HADOOP-10476.001.patch


 The findbugs version used by Hadoop is pretty old (1.3.9). The old version of 
 Findbugs itself has some bugs (like 
 http://sourceforge.net/p/findbugs/bugs/918/, hit by HADOOP-10474).
 Furthermore, Java 8 is only supported by findbugs 3.0.0 or newer.
 It's a good time to bump the findbugs version to 3.0.0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10476) Bumping the findbugs version to 2.0.2

2014-12-08 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HADOOP-10476:

Description: 
The findbugs version used by Hadoop is pretty old (1.3.9). The old version of 
Findbugs itself has some bugs (like 
http://sourceforge.net/p/findbugs/bugs/918/, hit by HADOOP-10474).

Furthermore, Java 8 is only supported by findbugs 3.0.0 or newer.

It's a good time to bump the findbugs version to 3.0.0.

  was:
The findbugs version used by Hadoop is pretty old (1.3.9). The old version of 
Findbugs itself has some bugs (like 
http://sourceforge.net/p/findbugs/bugs/918/, hit by HADOOP-10474). Furthermore, 
the newer version is able to catch more bugs.

It's a good time to bump the findbugs version to the latest stable version, 
2.0.2.


 Bumping the findbugs version to 2.0.2
 -

 Key: HADOOP-10476
 URL: https://issues.apache.org/jira/browse/HADOOP-10476
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: build
Reporter: Haohui Mai
Assignee: Haohui Mai
 Attachments: HADOOP-10476.000.patch, HADOOP-10476.001.patch


 The findbugs version used by Hadoop is pretty old (1.3.9). The old version of 
 Findbugs itself has some bugs (like 
 http://sourceforge.net/p/findbugs/bugs/918/, hit by HADOOP-10474).
 Furthermore, Java 8 is only supported by findbugs 3.0.0 or newer.
 It's a good time to bump the findbugs version to 3.0.0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10476) Bumping the findbugs version to 3.0.0

2014-12-08 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10476?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HADOOP-10476:

Summary: Bumping the findbugs version to 3.0.0  (was: Bumping the findbugs 
version to 2.0.2)

 Bumping the findbugs version to 3.0.0
 -

 Key: HADOOP-10476
 URL: https://issues.apache.org/jira/browse/HADOOP-10476
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: build
Reporter: Haohui Mai
Assignee: Haohui Mai
 Attachments: HADOOP-10476.000.patch, HADOOP-10476.001.patch


 The findbugs version used by Hadoop is pretty old (1.3.9). The old version of 
 Findbugs itself has some bugs (like 
 http://sourceforge.net/p/findbugs/bugs/918/, hit by HADOOP-10474).
 Furthermore, Java 8 is only supported by findbugs 3.0.0 or newer.
 It's a good time to bump the findbugs version to 3.0.0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10476) Bumping the findbugs version to 3.0.0

2014-12-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14239027#comment-14239027
 ] 

Hadoop QA commented on HADOOP-10476:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12685941/HADOOP-10476.001.patch
  against trunk revision 0ee4161.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5200//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5200//console

This message is automatically generated.

 Bumping the findbugs version to 3.0.0
 -

 Key: HADOOP-10476
 URL: https://issues.apache.org/jira/browse/HADOOP-10476
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: build
Reporter: Haohui Mai
Assignee: Haohui Mai
 Attachments: HADOOP-10476.000.patch, HADOOP-10476.001.patch


 The findbugs version used by Hadoop is pretty old (1.3.9). The old version of 
 Findbugs itself has some bugs (like 
 http://sourceforge.net/p/findbugs/bugs/918/, hit by HADOOP-10474).
 Furthermore, Java 8 is only supported by findbugs 3.0.0 or newer.
 It's a good time to bump the findbugs version to 3.0.0.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11349) RawLocalFileSystem leaks file descriptor while creating a file if creat succeeds but chmod fails.

2014-12-08 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14239049#comment-14239049
 ] 

Varun Saxena commented on HADOOP-11349:
---

The findbugs warnings are about System.out and System.err being null, and have 
nothing to do with the code change.
Has the Jenkins build environment changed recently? A similar issue was raised 
in MapReduce as well: MAPREDUCE-6184.

 RawLocalFileSystem leaks file descriptor while creating a file if creat 
 succeeds but chmod fails.
 -

 Key: HADOOP-11349
 URL: https://issues.apache.org/jira/browse/HADOOP-11349
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.6.0
Reporter: Chris Nauroth
Assignee: Varun Saxena
Priority: Minor
 Fix For: 2.7.0

 Attachments: HADOOP-11349.patch


 {{RawLocalFileSystem}} currently implements some file creation operations as 
 a sequence of 2 syscalls: create the file, followed by setting its 
 permissions.  If creation succeeds, but then setting permission causes an 
 exception to be thrown, then there is no attempt to close the previously 
 opened file, resulting in a file descriptor leak.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-6616) Improve documentation for rack awareness

2014-12-08 Thread Adam Faris (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6616?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14239067#comment-14239067
 ] 

Adam Faris commented on HADOOP-6616:


It looks like all the topology information regarding rack awareness was removed 
as 'cruft' in HADOOP-8427, during the effort to convert the Forrest docs to APT. 
See the patch numbered 5: the diff shows that everything related to rack 
awareness has been removed. The removal is unfortunate, as the documentation is 
still relevant for current versions of Hadoop.

 Improve documentation for rack awareness
 

 Key: HADOOP-6616
 URL: https://issues.apache.org/jira/browse/HADOOP-6616
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation
Reporter: Jeff Hammerbacher
Assignee: Adam Faris
  Labels: newbie
 Fix For: 3.0.0

 Attachments: hadoop-6616.patch, hadoop-6616.patch.2, 
 hadoop-6616.patch.3, hadoop-6616.patch.4


 The current documentation for rack awareness 
 (http://hadoop.apache.org/common/docs/r0.20.0/cluster_setup.html#Hadoop+Rack+Awareness)
  should be augmented to include a sample script.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11349) RawLocalFileSystem leaks file descriptor while creating a file if creat succeeds but chmod fails.

2014-12-08 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14239069#comment-14239069
 ] 

Varun Saxena commented on HADOOP-11349:
---

The findbugs issue seems to be caused by http://sourceforge.net/p/findbugs/bugs/918/, 
as mentioned in the comments section of MAPREDUCE-6184.

 RawLocalFileSystem leaks file descriptor while creating a file if creat 
 succeeds but chmod fails.
 -

 Key: HADOOP-11349
 URL: https://issues.apache.org/jira/browse/HADOOP-11349
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.6.0
Reporter: Chris Nauroth
Assignee: Varun Saxena
Priority: Minor
 Fix For: 2.7.0

 Attachments: HADOOP-11349.patch


 {{RawLocalFileSystem}} currently implements some file creation operations as 
 a sequence of 2 syscalls: create the file, followed by setting its 
 permissions.  If creation succeeds, but then setting permission causes an 
 exception to be thrown, then there is no attempt to close the previously 
 opened file, resulting in a file descriptor leak.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11349) RawLocalFileSystem leaks file descriptor while creating a file if creat succeeds but chmod fails.

2014-12-08 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11349?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated HADOOP-11349:
--
Attachment: HADOOP-11349.002.patch

 RawLocalFileSystem leaks file descriptor while creating a file if creat 
 succeeds but chmod fails.
 -

 Key: HADOOP-11349
 URL: https://issues.apache.org/jira/browse/HADOOP-11349
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.6.0
Reporter: Chris Nauroth
Assignee: Varun Saxena
Priority: Minor
 Fix For: 2.7.0

 Attachments: HADOOP-11349.002.patch, HADOOP-11349.patch


 {{RawLocalFileSystem}} currently implements some file creation operations as 
 a sequence of 2 syscalls: create the file, followed by setting its 
 permissions.  If creation succeeds, but then setting permission causes an 
 exception to be thrown, then there is no attempt to close the previously 
 opened file, resulting in a file descriptor leak.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11349) RawLocalFileSystem leaks file descriptor while creating a file if creat succeeds but chmod fails.

2014-12-08 Thread Varun Saxena (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11349?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14239071#comment-14239071
 ] 

Varun Saxena commented on HADOOP-11349:
---

Thanks [~jira.shegalov] for the review. 
Attached a new patch addressing these comments.

Kindly review [~jira.shegalov] and [~cmccabe]

 RawLocalFileSystem leaks file descriptor while creating a file if creat 
 succeeds but chmod fails.
 -

 Key: HADOOP-11349
 URL: https://issues.apache.org/jira/browse/HADOOP-11349
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.6.0
Reporter: Chris Nauroth
Assignee: Varun Saxena
Priority: Minor
 Fix For: 2.7.0

 Attachments: HADOOP-11349.002.patch, HADOOP-11349.patch


 {{RawLocalFileSystem}} currently implements some file creation operations as 
 a sequence of 2 syscalls: create the file, followed by setting its 
 permissions.  If creation succeeds, but then setting permission causes an 
 exception to be thrown, then there is no attempt to close the previously 
 opened file, resulting in a file descriptor leak.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11349) RawLocalFileSystem leaks file descriptor while creating a file if creat succeeds but chmod fails.

2014-12-08 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11349?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated HADOOP-11349:
--
Status: Open  (was: Patch Available)

 RawLocalFileSystem leaks file descriptor while creating a file if creat 
 succeeds but chmod fails.
 -

 Key: HADOOP-11349
 URL: https://issues.apache.org/jira/browse/HADOOP-11349
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.6.0
Reporter: Chris Nauroth
Assignee: Varun Saxena
Priority: Minor
 Fix For: 2.7.0

 Attachments: HADOOP-11349.002.patch, HADOOP-11349.patch


 {{RawLocalFileSystem}} currently implements some file creation operations as 
 a sequence of 2 syscalls: create the file, followed by setting its 
 permissions.  If creation succeeds, but then setting permission causes an 
 exception to be thrown, then there is no attempt to close the previously 
 opened file, resulting in a file descriptor leak.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11349) RawLocalFileSystem leaks file descriptor while creating a file if creat succeeds but chmod fails.

2014-12-08 Thread Varun Saxena (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11349?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Saxena updated HADOOP-11349:
--
Status: Patch Available  (was: Open)

 RawLocalFileSystem leaks file descriptor while creating a file if creat 
 succeeds but chmod fails.
 -

 Key: HADOOP-11349
 URL: https://issues.apache.org/jira/browse/HADOOP-11349
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 2.6.0
Reporter: Chris Nauroth
Assignee: Varun Saxena
Priority: Minor
 Fix For: 2.7.0

 Attachments: HADOOP-11349.002.patch, HADOOP-11349.patch


 {{RawLocalFileSystem}} currently implements some file creation operations as 
 a sequence of 2 syscalls: create the file, followed by setting its 
 permissions.  If creation succeeds, but then setting permission causes an 
 exception to be thrown, then there is no attempt to close the previously 
 opened file, resulting in a file descriptor leak.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11367) Fix warnings from findbugs 3.0 in hadoop-streaming

2014-12-08 Thread Li Lu (JIRA)
Li Lu created HADOOP-11367:
--

 Summary: Fix warnings from findbugs 3.0 in hadoop-streaming
 Key: HADOOP-11367
 URL: https://issues.apache.org/jira/browse/HADOOP-11367
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Li Lu
Assignee: Li Lu


When findbugs 3.0 is run locally, new warnings are generated. This JIRA aims 
to address the new warnings in hadoop-streaming. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11367) Fix warnings from findbugs 3.0 in hadoop-streaming

2014-12-08 Thread Li Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Lu updated HADOOP-11367:
---
Attachment: HADOOP-11367-120814.patch

This patch addresses the warnings generated by findbugs 3.0 against 
hadoop-streaming, which complain about reliance on the default encoding. 
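
For illustration, the typical shape of such a fix, shown on made-up call sites rather than the actual hadoop-streaming code:
{code}
import java.io.BufferedReader;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

public class ExplicitCharsetSketch {
  static byte[] keyBytes(String key) {
    // Instead of key.getBytes(), which relies on the platform default encoding.
    return key.getBytes(StandardCharsets.UTF_8);
  }

  static BufferedReader reader(InputStream in) {
    // Instead of new InputStreamReader(in), which also uses the default encoding.
    return new BufferedReader(new InputStreamReader(in, StandardCharsets.UTF_8));
  }
}
{code}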

 Fix warnings from findbugs 3.0 in hadoop-streaming
 --

 Key: HADOOP-11367
 URL: https://issues.apache.org/jira/browse/HADOOP-11367
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Li Lu
Assignee: Li Lu
 Attachments: HADOOP-11367-120814.patch


 When findbugs 3.0 is run locally, new warnings are generated. This JIRA 
 aims to address the new warnings in hadoop-streaming. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11367) Fix warnings from findbugs 3.0 in hadoop-streaming

2014-12-08 Thread Li Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Lu updated HADOOP-11367:
---
Status: Patch Available  (was: Open)

 Fix warnings from findbugs 3.0 in hadoop-streaming
 --

 Key: HADOOP-11367
 URL: https://issues.apache.org/jira/browse/HADOOP-11367
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Li Lu
Assignee: Li Lu
 Attachments: HADOOP-11367-120814.patch


 When findbugs 3.0 is run locally, new warnings are generated. This JIRA 
 aims to address the new warnings in hadoop-streaming. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11367) Fix warnings from findbugs 3.0 in hadoop-streaming

2014-12-08 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11367?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HADOOP-11367:

Issue Type: Sub-task  (was: Bug)
Parent: HADOOP-10477

 Fix warnings from findbugs 3.0 in hadoop-streaming
 --

 Key: HADOOP-11367
 URL: https://issues.apache.org/jira/browse/HADOOP-11367
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Li Lu
Assignee: Li Lu
 Attachments: HADOOP-11367-120814.patch


 When findbugs 3.0 is run locally, new warnings are generated. This JIRA 
 aims to address the new warnings in hadoop-streaming. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10481) Fix new findbugs warnings in hadoop-auth

2014-12-08 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14239095#comment-14239095
 ] 

Haohui Mai commented on HADOOP-10481:
-

[~swarnim], are you still working on it? Do you mind if I take it over? As the 
findbugs issue manifests itself in the current pre-commit run, I would love to 
help out and accelerate the process.

 Fix new findbugs warnings in hadoop-auth
 

 Key: HADOOP-10481
 URL: https://issues.apache.org/jira/browse/HADOOP-10481
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Haohui Mai
Assignee: Swarnim Kulkarni
  Labels: newbie
 Attachments: HADOOP-10481.1.patch.txt, HADOOP-10481.2.patch.txt


 The following findbugs warnings need to be fixed:
 {noformat}
 [INFO] --- findbugs-maven-plugin:2.5.3:check (default-cli) @ hadoop-auth ---
 [INFO] BugInstance size is 2
 [INFO] Error size is 0
 [INFO] Total bugs: 2
 [INFO] Found reliance on default encoding in 
 org.apache.hadoop.security.authentication.server.AuthenticationFilter.init(FilterConfig):
  String.getBytes() 
 [org.apache.hadoop.security.authentication.server.AuthenticationFilter] At 
 AuthenticationFilter.java:[lines 76-455]
 [INFO] Found reliance on default encoding in 
 org.apache.hadoop.security.authentication.util.Signer.computeSignature(String):
  String.getBytes() [org.apache.hadoop.security.authentication.util.Signer] 
 At Signer.java:[lines 34-96]
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10478) Fix new findbugs warnings in hadoop-maven-plugins

2014-12-08 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14239094#comment-14239094
 ] 

Haohui Mai commented on HADOOP-10478:
-

[~swarnim], are you still working on it? Do you mind if I take it over? As the 
findbugs issue manifests itself in the current pre-commit run, I would love to 
help out and accelerate the process.

 Fix new findbugs warnings in hadoop-maven-plugins
 -

 Key: HADOOP-10478
 URL: https://issues.apache.org/jira/browse/HADOOP-10478
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Haohui Mai
Assignee: Swarnim Kulkarni
  Labels: newbie

 The following findbugs warning needs to be fixed:
 {noformat}
 [INFO] --- findbugs-maven-plugin:2.5.3:check (default-cli) @ 
 hadoop-maven-plugins ---
 [INFO] BugInstance size is 1
 [INFO] Error size is 0
 [INFO] Total bugs: 1
 [INFO] Found reliance on default encoding in new 
 org.apache.hadoop.maven.plugin.util.Exec$OutputBufferThread(InputStream): new 
 java.io.InputStreamReader(InputStream) 
 [org.apache.hadoop.maven.plugin.util.Exec$OutputBufferThread] At 
 Exec.java:[lines 89-114]
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

