1) Is it legal to have a permitted subclass/subinterface that does not actually 
extend the permitting class or interface?

In past discussions about compilation, I feel like we've leaned towards "no", 
but I don't see a corresponding rule. We're definitely not interested in scenarios 
involving separate compilation; so if the child and parent disagree, we're compiling an 
inconsistent set of classes. Seems reasonable to ask the programmer to fix it. If we 
don't, we end up with downstream design issues like whether to trust the 'permits' clause 
when defining exhaustiveness.

I think we have slowly roped ourselves into this position.  We started out with 
"no action at a distance", realized this was suboptimal for various reasons, and 
ended up having a compile-time global view of the hierarchy, so it is reasonable 
to enforce consistency constraints on that.
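
To make the exhaustiveness point concrete, here's an illustrative sketch (not 
from this thread; names are made up, and it assumes Java 17+ for sealed types 
and Java 16+ for pattern `instanceof`). Because the compiler has a global view 
of the hierarchy, it can trust the 'permits' clause: no class outside it may 
implement the interface.

```java
public class SealedDemo {
    // The compiler rejects any other class that tries to implement Shape.
    sealed interface Shape permits Circle, Square {}
    record Circle(double radius) implements Shape {}
    record Square(double side) implements Shape {}

    static double area(Shape sh) {
        if (sh instanceof Circle c) return Math.PI * c.radius() * c.radius();
        if (sh instanceof Square s) return s.side() * s.side();
        // Unreachable if the 'permits' clause is consistent -- this is
        // exactly the "trust" the language would lose if a permitted
        // subtype were allowed to not actually extend the sealed type.
        throw new AssertionError("unreachable");
    }

    public static void main(String[] args) {
        System.out.println(area(new Square(3.0)));  // prints 9.0
    }
}
```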

At run time, it's convenient for the JVM if the answer is "yes". It's expensive 
to try to validate a PermittedSubtypes attribute all at once (could require O(nm) class 
loads, n=sealed hierarchy height, m=branching factor; although both are typically small). 
Instead, the best time to validate is when another class/interface attempts to extend the 
sealed class/interface.

Yes.  At the VM level, it's perfectly OK if the subtype drops a superclass.  
This isn't binary compatible, so it might result in ICCE, but that's why we 
have ICCE.

2) What are the constraints on module membership?

The intention is that sealed types are for classes that share a _maintenance domain_ and therefore are routinely co-compiled.

I am worried about being too permissive here with the unnamed module, because 
we would like for code in the unnamed module to be able to graduate to named 
modules, and allowing too much flexibility here will be yet another source of 
circular references that cannot be modularized.

3) What are the constraints on package membership?

The "same package if unnamed module" rule was in part an attempt to capture the co-maintained constraint.

At compile time, if the child and parent must successfully refer to each other, 
then we've already guaranteed that they're compiled at the same time. Is there 
something more to be gained from forcing them into the same package?
But have we?  Imagine two packages, A and B, in separate maintenance domains.  First we have:

    package a;
    interface I { }

    package b;
    class C implements a.I { }

We can compile A, and then compile B against A.jar.

Now modify A, and recompile with B.jar on the classpath:

    package a;
    sealed interface I permits b.C { }

This is fine.  Thereafter, these JARs can be recompiled independently.  And I 
could easily imagine lore growing up around this trick as a "workaround" for 
the "stupid rule."

Then again, it's not totally unreasonable to tell these code bases they need to 
declare a module if they want cross-package sealed types.
Yep, that's the answer.
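
As a sketch of what "graduating" buys you (module and package names here are 
hypothetical, and this spans several source files rather than one runnable 
block): once the hierarchy lives in a named module, the permitted subtypes may 
span packages, because the module boundary still guarantees co-compilation.

```java
// module-info.java -- hypothetical module name
module com.example.shapes {
    exports com.example.api;
}

// com/example/api/Shape.java -- permits a subtype in a *different* package,
// which is allowed because both packages are in the same named module
package com.example.api;
public sealed interface Shape permits com.example.impl.Circle {}

// com/example/impl/Circle.java
package com.example.impl;
public record Circle(double radius) implements com.example.api.Shape {}
```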

At run time, it's fairly straightforward to check for the same package name 
when we validate a subclass. But it also doesn't benefit the JVM in any way, 
so maybe this is more of a language-specific restriction that should be ignored 
by the JVM. (E.g., maybe Kotlin doesn't mind compiling sealed hierarchies 
across different packages in the unnamed module, even if Java won't do it.)

Probably right.
