[ https://issues.apache.org/jira/browse/HADOOP-8052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13205031#comment-13205031 ]

Hadoop QA commented on HADOOP-8052:
-----------------------------------

+1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12514037/HADOOP-8052.patch
  against trunk revision .

    +1 @author.  The patch does not contain any @author tags.

    +1 tests included.  The patch appears to include 3 new or modified tests.

    +1 javadoc.  The javadoc tool did not generate any warning messages.

    +1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

    +1 eclipse:eclipse.  The patch built with eclipse:eclipse.

    +1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

    +1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

    +1 core tests.  The patch passed unit tests in .

    +1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/584//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/584//console

This message is automatically generated.
                
> Hadoop Metrics2 should emit Float.MAX_VALUE (instead of Double.MAX_VALUE) to 
> avoid making Ganglia's gmetad core
> ---------------------------------------------------------------------------------------------------------------
>
>                 Key: HADOOP-8052
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8052
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: metrics
>    Affects Versions: 0.23.0, 1.0.0
>            Reporter: Varun Kapoor
>            Assignee: Varun Kapoor
>              Labels: patch
>         Attachments: HADOOP-8052-branch-1.patch, HADOOP-8052-branch-1.patch, 
> HADOOP-8052.patch, HADOOP-8052.patch
>
>
> Ganglia's gmetad converts the doubles emitted by Hadoop's Metrics2 system to 
> strings, and the buffer it uses is 256 bytes wide.
> When the SampleStat.MinMax class (in org.apache.hadoop.metrics2.util) emits 
> its default min value (currently initialized to Double.MAX_VALUE), it causes 
> a buffer overflow in gmetad, making it dump core and effectively rendering 
> Ganglia useless (for some, the crash recurs continuously; for others who are 
> more fortunate, it happens only once, at Hadoop startup).
> The fix needed in Ganglia is simple - widen the buffer to 512 bytes and all 
> will be well - but rather than requiring a minimum version of Ganglia to work 
> with Hadoop's Metrics2 system, it might be more prudent to just emit 
> Float.MAX_VALUE.
> An additional problem, caused in librrd (which Ganglia uses 
> beneath-the-covers) by the use of Double.MIN_VALUE (which serves as the 
> default max value), is an underflow when librrd runs the received strings 
> through libc's strtod(). The librrd code is careful enough to check for this 
> and only emits a warning; moving to Float.MIN_VALUE fixes that as well.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira
