Actually, this oft-stated Java puzzler is not as puzzleresque as it
first appears.
Quick question: Find a value for 'x' so that the following snippet
prints "Wow!":
if (Double.MAX_VALUE < x) System.out.println("Wow!");
scroll down for the answer!
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
answer: double x = Double.POSITIVE_INFINITY; of course.
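To verify, a quick sketch (the class name is mine), which also shows why NaN is not an alternative answer: every ordered comparison involving NaN is false.

```java
public class WowDemo {
    public static void main(String[] args) {
        double x = Double.POSITIVE_INFINITY;
        // POSITIVE_INFINITY is the only double strictly greater than MAX_VALUE.
        if (Double.MAX_VALUE < x) System.out.println("Wow!"); // prints Wow!
        // NaN would not work: ordered comparisons with NaN are always false.
        System.out.println(Double.MAX_VALUE < Double.NaN);    // prints false
    }
}
```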
So MAX_VALUE also isn't the actual max value in the terms most people
who consider this a puzzler are familiar with (integral/'human'
math). Therefore, if you deem MIN_VALUE a puzzler, you really ought to
consider MAX_VALUE a puzzler as well. Even more correct would be to
say that neither is a puzzler on its own; instead, the entire concept
of IEEE math is itself the puzzler: here there be monsters!
Some serious monsters in this ocean:
1. There is the concept of underflow, which is what happens when a
calculation's true result is positive but lies between 0 and the
smallest positive representable number (which is Double.MIN_VALUE).
This is problematic, because what are you going to do? You can throw
an error; you can produce NaN (which is semantically not really
correct; NaN is for stuff like infinity*0 or 0/0, which are actual
mathematical indeterminates, but then again, the result here is a well
defined number that you nevertheless cannot represent at all, not even
as a close-enough approximation, so NaN is fair game); you can round
to 0; or you can round to that minimum value. Java rounds to nearest,
which means results below half of MIN_VALUE silently become 0, and
this is really very bad. The effects of this are a puzzler.
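A minimal sketch of that silent rounding (class name is mine):

```java
public class UnderflowDemo {
    public static void main(String[] args) {
        // Double.MIN_VALUE is the smallest positive representable double, 2^-1074.
        double tiny = Double.MIN_VALUE;
        // Halving it gives a mathematically positive value (2^-1075) that has
        // no representation; Java silently rounds it to 0.0 -- no error, no NaN.
        double underflowed = tiny / 2.0;
        System.out.println(underflowed);        // prints 0.0
        System.out.println(underflowed == 0.0); // prints true
    }
}
```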
2. Mirror concept #1 on the high end: 2^1024 and all numbers above it
are not representable as doubles (they are above Double.MAX_VALUE).
Here you can choose to throw an exception, round up to Infinity, round
down to Double.MAX_VALUE, or produce NaN. Java chooses Infinity, and
perhaps it's easier to see here why this is bad. The difference
between a number slightly larger than 2^1024 and infinity is, well,
infinite! It's quite literally the worst rounding error you could
possibly make: a rounding error of infinity! This causes all sorts of
conceptual errors; for example, x*x/x can be Infinity even when x
isn't, which clearly makes no sense; the result should be x. Once you
accept that rounding up to infinity is really pretty dumb, you can
extrapolate that rounding numbers between 0 and 2^-1074 (which is
Double.MIN_VALUE; these are extremely small numbers, so the rounding
error itself is very, very small) down to 0 is still bad. Let's say
you get:
double factor = (operation that causes underflow);
double result = 0.000000002/factor;
Then, due to the rounding-to-0 on the underflow, result will be
Infinity. Mathematically speaking, the answer is more in the ballpark
of 2^1000, so the computed Infinity is again infinitely far from the
actual result. Thus, underflow and overflow are really two sides of
the same coin, and it would be absolutely ridiculous to round one off
to infinity/0 while throwing an error on the other.
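Both sides of that coin in one sketch (1e200 is just an arbitrary large-but-finite value I picked):

```java
public class OverflowDemo {
    public static void main(String[] args) {
        double x = 1e200;                       // finite, well below MAX_VALUE
        // x*x overflows to Infinity, so x*x/x is Infinity instead of x.
        System.out.println(x * x / x);          // prints Infinity
        // The underflow mirror: a denominator rounded off to 0 turns a
        // perfectly finite division into Infinity.
        double factor = Double.MIN_VALUE / 2.0; // underflows to 0.0
        System.out.println(0.000000002 / factor); // prints Infinity
    }
}
```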
(NB: Sometimes, when you control most of the math, you want to
explicitly avoid the negative consequences of rounding over-/underflow
off to Infinity/0, but you also want to refrain from raising an error
condition (NaN or an exception), so you round off to MAX_VALUE and
MIN_VALUE instead. That's why those constants exist, though I would
agree they could have used different names.)
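A sketch of what that clamping could look like; saturatingMultiply is a hypothetical helper of mine, not anything from the JDK:

```java
public class Saturate {
    // Hypothetical helper: multiply, but saturate at MAX_VALUE/MIN_VALUE
    // instead of letting the result round off to Infinity or 0.
    static double saturatingMultiply(double a, double b) {
        double r = a * b;
        double sign = Math.signum(a) * Math.signum(b);
        if (Double.isInfinite(r)) {
            return sign * Double.MAX_VALUE;  // overflow: clamp, keep the sign
        }
        if (r == 0.0 && a != 0.0 && b != 0.0) {
            return sign * Double.MIN_VALUE;  // underflow: clamp, keep the sign
        }
        return r;
    }

    public static void main(String[] args) {
        System.out.println(saturatingMultiply(1e200, 1e200));   // MAX_VALUE, not Infinity
        System.out.println(saturatingMultiply(1e-200, 1e-200)); // MIN_VALUE, not 0.0
    }
}
```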
3. All ints are representable, perfectly, as doubles, because a
double's significand holds 53 bits: every integer of magnitude up to
2^53 is exact. In fact, all interesting longs are too, with
'interesting' defined as: timestamps. 2^52 milliseconds is roughly
140,000 years, so even if you take quite a few liberties in picking
exotic dates, any timestamp of practical use falls well within +-2^52
milliseconds of Jan 1st, 1970, which means it loses no precision at
all when stored as a double. This is incidentally why JavaScript
(where all numbers are doubles) can work with timestamps without
horribly breaking. Yet common wisdom says you 'shouldn't store any
number as a double if you care about precision'. So, this: long x =
(long)(double)y; will always leave x equal to y, provided y is a
timestamp that isn't devoid of all meaning. Most programmers I know
aren't aware of this.
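A quick round-trip check (the timestamp value is an arbitrary pick of mine, roughly Nov 2009):

```java
public class TimestampRoundtrip {
    public static void main(String[] args) {
        // Any long of magnitude under 2^53 survives a round trip through
        // double; millisecond timestamps are far below that bound.
        long y = 1258243200000L;
        long x = (long) (double) y;
        System.out.println(x == y);                      // prints true
        // By contrast, a long just past 2^53 does lose precision:
        long big = (1L << 53) + 1;
        System.out.println((long) (double) big == big);  // prints false
    }
}
```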
4. Numbers that seem perfectly simple, such as '1/3', lose precision
when stored as a double. This isn't a coincidence; precision loss
happens almost immediately in real computations, and thus there is no
obvious canonical way to 'print' a double: doubles really have no
sensible string representation. Sure, Java lets you print doubles, but
strictly you should only print them via a printf-style call, so that
you control the rounding and formatting yourself.
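For instance:

```java
import java.util.Locale;

public class PrintingDoubles {
    public static void main(String[] args) {
        double third = 1.0 / 3.0;
        // The default rendering exposes the truncated binary approximation:
        System.out.println(third);                        // prints 0.3333333333333333
        // printf lets you state the rounding you actually want:
        System.out.printf(Locale.ROOT, "%.4f%n", third);  // prints 0.3333
        // Even 'simple' decimal literals aren't exact in binary:
        System.out.println(0.1 + 0.2 == 0.3);             // prints false
    }
}
```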
Java programmers (and non-Java programmers, too) routinely misjudge,
abuse, and otherwise fail to understand the entire principle of IEEE
math. Therefore, the only correct conclusion here is that you need to
know how doubles work, intricately, before you should ever be let near
them. And once you grok IEEE math, you instinctively know that
MIN_VALUE refers to the smallest positive number, and MAX_VALUE to the
largest finite positive number. Having any sort of constant for
negative numbers would be pure silliness; every negative number exists
as the sign-bit-flipped equivalent of its positive counterpart, so to
turn any positive concept into its negative equivalent, you just apply
a minus sign. In IEEE-think, it's just not done to treat negative
numbers as anything other than a sign-bit flip, so the naming is
natural.
There is, fortunately, an easy way out if you can't be arsed to learn
about IEEE math: Use BigDecimal instead. Fewer surprises.
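A small sketch of the difference (class name is mine):

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class BigDecimalDemo {
    public static void main(String[] args) {
        // Decimal arithmetic with no binary rounding surprises:
        BigDecimal a = new BigDecimal("0.1");
        BigDecimal b = new BigDecimal("0.2");
        System.out.println(a.add(b));             // prints 0.3
        // Non-terminating results force you to state a rounding explicitly:
        BigDecimal third = BigDecimal.ONE.divide(new BigDecimal("3"), 10, RoundingMode.HALF_UP);
        System.out.println(third);                // prints 0.3333333333
    }
}
```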
On Nov 15, 12:20 am, Peter Becker <[email protected]> wrote:
> Also that is just another case of bad naming. According to the order on
> Doubles, the minimum is not what the JDK claims. That static should
> clearly be called MIN_POS_VALUE or maybe even MINIMUM_POSITIVE_VALUE or
> SMALLEST_REPRESENTABLE_POSITIVE_VALUE :-)
>
> BTW: What is the actual use for that field? I honestly don't know what I
> would use it for.
>
> Peter
>
>
>
> Casper Bang wrote:
> > True. Semantics can be a tricky one. For instance, the output of the
> > following is surprising to many:
>
> > System.out.println( max(0.0, -1.0, -2.0) );
>
> > public static double max(double... candidates)
> > {
> > assert(candidates.length > 0);
> > double knownMaxValue = Double.MIN_VALUE;
> > for(double candidate : candidates)
> > if(candidate > knownMaxValue)
> > knownMaxValue = candidate;
> > return knownMaxValue;
> > }
>
> > /Casper
>
> > On Nov 14, 10:57 pm, Peter Becker <[email protected]> wrote:
>
> >> Brian Leathem wrote:
>
> >>> On Nov 13, 8:38 am, Kevin Wright <[email protected]>
> >>> wrote:
>
> >>>> On Fri, Nov 13, 2009 at 4:34 PM, Alexey <[email protected]> wrote:
>
> >>>>> Otherwise, doesn't seem too difficult to write your own such method,
> >>>>> no?
>
> >>> It is indeed trivial to write my own method. I could then package
> >>> that method along with other such methods in a "toolbox" jar that I
> >>> include with all my projects. It just strikes me that in the general
> >>> case it's not good if everyone does this, as one will have to learn a
> >>> new toolbox API every time one joins a new project. I don't have the
> >>> experience of working in many organizations, or on a diverse set of
> >>> projects, but are such toolboxes common? Is there much overlap
> >>> amongst these toolboxes that people have seen?
>
> >>> Granted I'm blowing things somewhat out of proportion with just this
> >>> one method. I was just curious if anyone else had a "standardized"
> >>> way of dealing with this one.
>
> >> There is a little problem with the idea of a generic maximum function on
> >> Comparables: the maximum is not necessarily defined. Let's say we sort
> >> people by age: chances are that you will encounter two people with the
> >> same age. In mathematical terms that means that they are in the same
> >> equivalence class, which means neither of them is considered larger than
> >> the other. Declaring one of them the maximum would not be correct and in
> >> fact the proposed implementation would not be symmetrical: depending on
> >> the order of the parameters you get different results.
>
> >> One way out would be to actually not use the original order, but a more
> >> fine-grained one falling back to something like the system hashcode as
> >> secondary order, i.e. if the comparator/comparable returns "0", then we
> >> compare the references. But that's not really good either since the
> >> semantics are then dependent on the references -- we get into problems
> >> with multiple references to the same object as well as multiple runs of
> >> the program being inconsistent.
>
> >> The only correct solution IMO would be to extend Comparator/Comparable
> >> by something that is explicitly antisymmetric, i.e. a total order. That
> >> then could have a maximum function that would be well-defined.
>
> >> Peter
You received this message because you are subscribed to the Google Groups "The
Java Posse" group.