I like Un{recognized,known}EnumConstantE{rror,xception}.  When we get to sealed types, it will be the same but with something like s/EnumConstant/SealedTypeMember/.

I am still having trouble squaring the Error vs Exception, but you've pulled me from "seems like an Exception to me" into "crap, now I don't know" territory :)

I think what makes me uncomfortable is that there are some enums that are _intended_ to be extended, such as java.lang.annotation.ElementType.  (In fact, we might be adding a new member soon: RECORD_COMPONENT.)  And I would want clients of ElementType to be aware that they never know all the element types, and code accordingly.

Which suggests that enums need a mechanism to mark them either as sealed (which turns on the enhanced exhaustiveness behavior) or as non-sealed (which would turn it off).
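
Concretely, for an extensible enum like ElementType, here is roughly the shape of client code I'd want people writing today, absent any such marking (a sketch; the describe() method is invented for illustration):

    import java.lang.annotation.ElementType;

    class ElementTypeDescriber {
        static String describe(ElementType type) {
            switch (type) {
                case TYPE:
                case ANNOTATION_TYPE:
                    return "a type declaration";
                case FIELD:
                case METHOD:
                case CONSTRUCTOR:
                case PARAMETER:
                case LOCAL_VARIABLE:
                    return "a member or variable";
                case PACKAGE:
                    return "a package";
                default:
                    // ElementType has grown before (TYPE_PARAMETER, TYPE_USE,
                    // MODULE) and may grow again (RECORD_COMPONENT), so an
                    // unrecognized constant is expected here, not a bug.
                    return "an element type added after this code was written";
            }
        }
    }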

On 4/19/2018 5:43 PM, Kevin Bourrillion wrote:
Necromancing, since I noticed that the spec still contains a hole where this name would go.

*Name:*

  * I think something specific like
    UnexpectedEnumConstantE{rror,xception} would seem the right way to
    go. (Perhaps "Unrecognized"?)

*Hierarchy: *

  * It will want a common supertype it can share with the future
    "unexpected subtype of sealed type" error/exception.
  * As for where that supertype goes, I still maintain that this is
    /exactly/ an IncompatibleClassChangeError (argument below), and
    thus should be a subtype of that. I also see nothing harmed by it
    being an Error instead of Exception.

My claim is that releasing an enum with a certain set of constants is qualitatively equivalent to releasing an interface with a certain set of abstract methods. We know that people key behavior off of enums (that's what enum switch is all about). That means that when we add a constant, we are adding new /contract/, which we (the enum owners) don't know how to fulfill. The call sites need to fulfill it.
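
(To make the claim concrete, a sketch with a made-up enum and the switch exhaustiveness being specified here:)

    // Version 1 of a library ships this enum:
    enum Format { JSON, XML }

    class Printer {
        // A client covers every constant it can see, so no default is needed
        // for the switch expression to be exhaustive.
        static String mediaType(Format f) {
            return switch (f) {
                case JSON -> "application/json";
                case XML  -> "application/xml";
            };
        }
        // Version 2 adds Format.YAML. That constant is new contract, and only
        // this call site, not the enum's owner, can say what mediaType()
        // should return for it. The already-compiled method above can't; the
        // proposed error/exception is what it would signal instead.
    }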

Thought experiment: I can already implement an interface in two different ways: the normal way, or via a dynamic proxy that throws an exception if it gets an unexpected method. Let's imagine that the latter way was made exactly as easy to express as the former. I think everyone would probably agree that most implementations would /still/ choose the current behavior. (Yes?) They don't /want/ anything to fail at runtime that could instead fail at compile-time.
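
(The second way, spelled out as a sketch; the Greeter interface is invented for illustration:)

    import java.lang.reflect.Proxy;

    interface Greeter {
        String greet(String name);
    }

    class Greeters {
        // The "strict" implementation: handle the methods we know about,
        // fail loudly at runtime on anything else.
        static Greeter strictGreeter() {
            return (Greeter) Proxy.newProxyInstance(
                    Greeter.class.getClassLoader(),
                    new Class<?>[] {Greeter.class},
                    (proxy, method, args) -> {
                        if (method.getName().equals("greet")) {
                            return "Hello, " + args[0];
                        }
                        // A method added to Greeter tomorrow lands here (so do
                        // Object's toString/hashCode/equals, which a real
                        // handler would special-case).
                        throw new UnsupportedOperationException(
                                "unexpected method: " + method);
                    });
        }
    }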

Anyway, all of this is just to support the notion that this should be an IncompatibleClassChangeError. Of course, the argument's been made in this thread that it /is/ different from an incompatible class change. My response was that these reasons seem way too subtle to me. Or, have I been persistently missing something?



On Fri, Mar 30, 2018 at 11:31 AM, Kevin Bourrillion <kev...@google.com> wrote:

    On Fri, Mar 30, 2018 at 10:48 AM, Brian Goetz <brian.go...@oracle.com> wrote:

        Backing way up, Alex had suggested that the right exception is
        (a subtype of) IncompatibleClassChangeEXCEPTION, rather than
        Error.  I was concerned that ICC* would seem too low-level to
        users, though.  But you're saying ICCE and subtypes are
        helpful to users, because they guide users to "blame your
        classpath".  So in that case, is the ICC part a good enough
        trigger?


    (Just to be clear, Remi and I have been advocating for a subtype
    of ICC*Error* all along, in case anyone missed that.)

    All right, I've been focusing too much on the hierarchy, but the
    leaf-level name is more important than that (and the message text
    more important still; since I assume we'll do a fine job of that, I
    can probably relax a little). To answer your question, sure, the
    "ICC" is a pretty decent signal. Have we discussed Cyrill's point
    on -observers that we should create more specific exception types,
    such as UnrecognizedEnumConstantE{rror,xception}?


        For an enum in the same class/package/module as the switch,
        the chance of getting the error at runtime is either zero
        (same class) or effectively zero (same package or module),
        because all sane developers build packages and modules in an
        atomic operation.

        For an enum in a different module from the switch, the chance of
        getting the error at runtime is nonzero, because we're linking
        against a JAR at runtime.

        So an alternative here is to tweak the language so that the
        "conclude exhaustiveness if all enum constants are present"
        behavior is reserved for the cases where the switch and
        the enum are in the same module?

        (Just a thought.)


    Okay, that is a sane approach, but I think it leaves too much of
    the value on the floor. I often benefit from having my
    exhaustiveness validated and being able to find out at compile
    time if things change in the future.
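
    (A sketch of that value, with an invented enum; when the enum and
    the switch compile together, growth is a build failure rather than
    a production surprise:)

        enum Status { ACTIVE, DELETED }

        class Labels {
            static String label(Status s) {
                return switch (s) {   // deliberately no default
                    case ACTIVE  -> "active";
                    case DELETED -> "deleted";
                };
            }
            // Add Status.SUSPENDED and recompile: javac rejects label() as
            // non-exhaustive and points at the switch that needs a new case.
            // That is the compile-time feedback I don't want to give up.
        }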


    --
    Kevin Bourrillion | Java Librarian | Google, Inc. | kev...@google.com




--
Kevin Bourrillion | Java Librarian | Google, Inc. | kev...@google.com
