[ https://issues.apache.org/jira/browse/HADOOP-8052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13205025#comment-13205025 ]

Varun Kapoor commented on HADOOP-8052:
--------------------------------------

I chose not to change the data type housed in the MinMax class because that 
would, as your helpful hint about clipping alludes to, be a much bigger 
change with a wider-reaching impact.

Instead, my justification for just changing the default values of 'max' and 
'min' to Float.MIN_VALUE and Float.MAX_VALUE, respectively, is:

a) It is the smallest required change that fixes both problems with gmetad I 
mentioned in my original description.
b) It will likely be rare for us to emit metrics outside the roughly 
[E-38, E+38] range a float can represent, so the regions of the number line 
beyond that range (which Double.MAX_VALUE and Double.MIN_VALUE provide access 
to) are not really needed.

Also, I've left the types of the two new constants as 'double' in case we ever 
want to move back to using Double's extremes in the future - the intent of my 
current fix (or at least I hope it comes across this way) is that the *only* 
change we're making is to the default values of the 'min' and 'max' fields.
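
To make the shape of the change concrete, here's a rough sketch of MinMax 
with the new defaults. The constant names and surrounding methods below are 
illustrative only - see the attached patch for the real change:

    // Rough sketch of org.apache.hadoop.metrics2.util.SampleStat.MinMax with
    // the new defaults; names here are illustrative, not the exact patch.
    class MinMax {
      // Types stay 'double' so we can move back to Double's extremes later;
      // only the default values change from Double.* to Float.* extremes.
      static final double DEFAULT_MIN_VALUE = Float.MAX_VALUE; // was Double.MAX_VALUE
      static final double DEFAULT_MAX_VALUE = Float.MIN_VALUE; // was Double.MIN_VALUE

      private double min = DEFAULT_MIN_VALUE;
      private double max = DEFAULT_MAX_VALUE;

      void add(double value) {
        if (value > max) max = value;
        if (value < min) min = value;
      }

      double min() { return min; }
      double max() { return max; }

      void reset() {
        min = DEFAULT_MIN_VALUE;
        max = DEFAULT_MAX_VALUE;
      }
    }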
                
> Hadoop Metrics2 should emit Float.MAX_VALUE (instead of Double.MAX_VALUE) to 
> avoid making Ganglia's gmetad core
> ---------------------------------------------------------------------------------------------------------------
>
>                 Key: HADOOP-8052
>                 URL: https://issues.apache.org/jira/browse/HADOOP-8052
>             Project: Hadoop Common
>          Issue Type: Bug
>          Components: metrics
>    Affects Versions: 0.23.0, 1.0.0
>            Reporter: Varun Kapoor
>            Assignee: Varun Kapoor
>              Labels: patch
>         Attachments: HADOOP-8052-branch-1.patch, HADOOP-8052-branch-1.patch, 
> HADOOP-8052.patch, HADOOP-8052.patch
>
>
> Ganglia's gmetad converts the doubles emitted by Hadoop's Metrics2 system to 
> strings, and the buffer it uses is 256 bytes wide.
> When the SampleStat.MinMax class (in org.apache.hadoop.metrics2.util) emits 
> its default min value (currently initialized to Double.MAX_VALUE), it causes 
> a buffer overflow in gmetad, making it dump core and effectively rendering 
> Ganglia useless (for some, the core dump is continuous; for the more 
> fortunate, it happens only once, at Hadoop startup time).
> The fix needed in Ganglia is simple - bump the buffer up to 512 bytes and 
> all will be well - but instead of requiring a minimum Ganglia version to 
> work with Hadoop's Metrics2 system, it might be more prudent to just use 
> Float.MAX_VALUE.
> An additional problem, caused in librrd (which Ganglia uses under the 
> covers) by Double.MIN_VALUE (which serves as the default max value), is an 
> underflow when librrd runs the received strings through libc's strtod(); 
> the librrd code is careful enough to check for this and only emits a 
> warning, but moving to Float.MIN_VALUE fixes that as well.
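
To put a number on the overflow described above: the fixed-point ("%f"-style) 
rendering of Double.MAX_VALUE runs to over 300 characters, which cannot fit 
in a 256-byte buffer, while Float.MAX_VALUE renders in well under 100 
characters. A quick Java approximation follows (the class name is just for 
illustration, and gmetad does the actual formatting in C, so the exact 
lengths there may differ slightly):

    // Hypothetical demo class - approximates the string lengths gmetad must buffer.
    public class MaxValueStringLength {
      public static void main(String[] args) {
        // Fixed-point rendering, roughly what a C sprintf("%f", ...) would produce.
        String asDouble = String.format("%f", Double.MAX_VALUE);
        String asFloat = String.format("%f", (double) Float.MAX_VALUE);
        // Expect roughly 316 characters for Double.MAX_VALUE and roughly 46
        // for Float.MAX_VALUE.
        System.out.println("Double.MAX_VALUE length: " + asDouble.length());
        System.out.println("Float.MAX_VALUE length:  " + asFloat.length());
      }
    }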


        
