The motivation behind this PR is not driven by pattern matching: John Rose (the reporter of the JBS issue) is concerned about having safer casts that, in addition, can be JITted to efficient code.


Yes, of course.  But also, as the platform evolves, it often happens that the same issue arises in separate places.  It would be terrible if `asXExact` had a different opinion from the language's `instanceof int` about what can be converted exactly to an int.  And similarly, it would be unfortunate if the semantics were driven by nothing more than "who got there first."  So we frequently discover that things we thought were independent increments of functionality turn out to want to be co-designed with other things, even ones not known at the time we started.  This is just how this game works.
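
For concreteness, here is the kind of agreement at stake.  A minimal sketch: `Math.toIntExact` is real JDK API, but `fitsInInt` is a hypothetical stand-in for whatever non-throwing test (or `instanceof int`) the platform settles on.

    class ExactnessSketch {
        // Hypothetical predicate; `instanceof int` would play this role.
        static boolean fitsInInt(long v) {
            return v == (int) v;              // exact iff narrowing round-trips
        }

        public static void main(String[] args) {
            long big = 1L << 40;
            System.out.println(fitsInInt(big));   // false: 2^40 overflows int
            try {
                Math.toIntExact(big);             // real JDK method; throws here
            } catch (ArithmeticException e) {
                System.out.println("toIntExact rejects it too");
            }
        }
    }

If the two ever disagree on some input, we have exactly the "two opinions" problem described above.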

But I get your point that having non-throwing tests better serves pattern matching. However, I’m not sure that pattern matching alone would remove the more general need for the proposed methods.


Yes, not sure either.  The game here is "find the primitive"; our first move in this game is not always the right one.  So let's figure it out.

As for the controversial question about -0.0, as you note, any proposal will kind of suck. With “safer” cast methods we can have two (or even more) variants.


Also, small changes in terminology can subtly bias our answer. Words like "exact", "information-preserving", "loss of precision", "safe", etc., all seem to circle the same concept, but the exact choice of terminology -- often made at random at the beginning of an investigation -- can bias us one way or the other.  (Concrete illustration: converting -0.0 to 0 seems questionable when your rule is "no loss of information", but seems more ambiguous when your rule is "safe".)
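
To make that concrete, a small sketch using only plain JDK API: numerically -0.0 equals 0, but the sign bit does not survive the conversion, so the two rules render different verdicts.

    class NegativeZeroSketch {
        public static void main(String[] args) {
            double nz = -0.0;
            int i = (int) nz;                 // yields 0; the sign bit is gone
            double back = i;                  // widens to +0.0, not -0.0

            System.out.println(nz == back);   // true  -> a "safe" rule shrugs
            System.out.println(
                Double.doubleToRawLongBits(nz)
                    == Double.doubleToRawLongBits(back));  // false -> "no loss of
                                                           // information" objects
        }
    }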

In the context of primitive pattern matching (including the relevant material in the JLS), however, it would probably be much simpler to allow for matches between integral types on one side and for matches between floating-point types on the other side, but not a mix. The nuisance with -0.0 and other special values would disappear altogether.


That was my first thought as well, but then I had second thoughts, as this seemed like wishful thinking.  If we are willing to say:

    long x = 7;
    if (x instanceof int) { /* yes, it is */ }

then it may seem quite odd to not also say

    double x = 7.0;
    if (x instanceof int) { /* this too? */ }

Because the natural way to interpret the first is "is the value on the left representable in the type on the right?"  Clearly 7 is representable as an int.  But so is 7.0; we lose nothing going from double 7.0 -> int 7 -> double 7.0.  And whoosh, now we're sucked into the belly of the machine, staring down -0.0, NaN, and other abominations, questioning our life choices.

That's not to say that your initial thought is wrong, just that it will be surprising if we go that way.  Maybe surprises are inevitable; maybe this is the least bad of possible surprises.  We should eliminate the "maybes" from this analysis first, though, before deciding.
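
One way to see how quickly it gets murky: the obvious round-trip test for "representable as int" (a sketch, not a proposal) silently takes a position on both NaN and -0.0.

    class RoundTripSketch {
        // Naive reading of "representable": narrowing then widening loses nothing.
        static boolean representableAsInt(double d) {
            return (double) (int) d == d;
        }

        public static void main(String[] args) {
            System.out.println(representableAsInt(7.0));        // true
            System.out.println(representableAsInt(7.5));        // false
            System.out.println(representableAsInt(Double.NaN)); // false: NaN == x
                                                                // is always false
            System.out.println(representableAsInt(-0.0));       // true, although
                                                                // the sign bit is lost
        }
    }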

Thus, while examples like

    if (anIntExpression instanceof byte b)

and

    if (aDoubleExpression instanceof float f)

make perfect sense, would an example like

    if (aDoubleExpression instanceof short s)

be pragmatically reasonable?
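
If such a cross-kind pattern were admitted, presumably the test would be the same round-trip idiom, now additionally entangled with short's range.  A sketch under that assumption:

    class DoubleToShortSketch {
        // Hypothetical meaning of `aDoubleExpression instanceof short`:
        // the value must be integral AND within short's range.
        static boolean representableAsShort(double d) {
            return (double) (short) d == d;
        }

        public static void main(String[] args) {
            System.out.println(representableAsShort(7.0));      // true
            System.out.println(representableAsShort(40000.0));  // false: integral,
                                                                // but out of range
        }
    }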

IIRC, the discussions about “Primitive type patterns” and “Evolving past reference type patterns” in the amber-spec-experts mailing list of April 2022 don’t even mention the mixed integral/floating-point case.


Correct, we hadn't gotten there yet; we were still trying to wrap our heads around how it should work in the easier cases.
