Thanks Eddie.  Indeed, I think these could be clarified.

We should start by laying out the motivation for why the "totality checker" is satisfied with less than full coverage -- this is because we want to be nice, and not pedantically ask users to write case clauses for "silly" cases that probably won't ever come up.  These typically include nulls (when deconstruction patterns are involved) and values from the future (when sealing or enums are involved), as well as the closure of these under combinators (so Box(null) is part of the remainder for Box<Bag<String>>).  This should motivate _where_ the totality rules come from, and put claims of "silly values" on a more concrete foundation.
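To make the remainder idea concrete, here is a minimal sketch in Java 21 record-pattern syntax (the type names Bag, RedBag, BlueBag, and Box are hypothetical stand-ins, not from the proposal): the two cases satisfy the checker, yet Box(null) remains uncovered and surfaces as a runtime exception rather than a compile error.

```java
// Hypothetical sealed hierarchy and container (Java 21 syntax).
sealed interface Bag permits RedBag, BlueBag {}
record RedBag() implements Bag {}
record BlueBag() implements Bag {}
record Box(Bag bag) {}

static String describe(Box box) {
    return switch (box) {
        case Box(RedBag r)  -> "red";
        case Box(BlueBag b) -> "blue";
        // No case for Box(null).  The checker deems these cases "total
        // enough" on Box; the remainder -- Box(null), or a Box holding a
        // novel Bag subtype introduced by a future recompilation --
        // throws MatchException at runtime instead.
    };
}
```

Note that the checker does not force a `case Box(null)` clause here; the silly value is handled by an implicit throwing catch-all.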

Secondarily, perhaps "totality" isn't quite the right term; maybe we need a word for "good enough to satisfy the checker", where the checker is deliberately generous in letting us be sloppy regarding silly values.

To your last point, indeed our ability to gauge totality is a best-effort thing.  If there is a class A whose only implementation is C, we can't deduce that without examining every classfile in the world -- unless A is sealed.  Similarly, we can't be expected to deem

    case Foo f when bar() > 0:
    case Foo f when bar() <= 0:

as total on Foo; while it might be possible to solve the most trivial cases, this eventually heads towards a wrestling match with computational physics.  So the best way to win that game is not to play.  Here, the user has the option to refactor to:

    case Foo f:
        if (bar() > 0) { ... }
        else { ... }

which is more scrutable to the totality checker.  Indeed, even if we could tell that the two guards covered all values of `bar()`, this is an analysis better suited to a pure language. (Can we even get away with calling `bar()` only once?  I doubt it.  Writing side-effect-ful/impure guards might be questionable, but we probably have to assume people will do it anyway.)
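To illustrate the sealed escape hatch mentioned above, here is a minimal sketch (the names A and C mirror the earlier example, but I've modeled A as a sealed interface with a record implementation for brevity; this is an assumption, not code from the proposal):

```java
// Because A is sealed and permits only C, the checker can conclude that
// "case C c" is total on A without examining any other classfiles.
sealed interface A permits C {}
record C() implements A {}

static String f(A a) {
    return switch (a) {
        case C c -> "the only implementation";
        // No default needed: sealing makes the coverage deducible locally.
    };
}
```

Without the `sealed` modifier, the same switch would be rejected as non-exhaustive, since separate compilation means another implementation of A could appear at any time.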

On 9/18/2020 6:19 PM, Eddie Aftandilian wrote:
Regarding Brian's recent update to the type patterns in switch proposal (https://github.com/openjdk/amber-docs/blob/master/site/design-notes/type-patterns-in-switch.md), I had some questions about totality.

I really like the new presentation of this in terms of totality.  It helps make clear why certain design decisions fall out of the basic principles behind type patterns in switches.  However, I'm also a bit concerned about what "total" really means in this context.

The term "total" calls to mind a total function, which is a function defined for all input values.  But in this case, there's a carve out -- "total with remainder."  The idea of "total with remainder" makes me uneasy, since if there is a remainder, by definition the function (or switch) is not total.

It's also not clear to me how to characterize which values are "silly" and allowed to be in that remainder.  "Silly" values are intended to be those that the programmer shouldn't have to care about in the common case, but that's not very satisfying as a principle.

Finally, what are the limitations of how we determine totality?  For example, consider a non-sealed abstract class A with a single concrete implementation C.  The pattern "C c" should in principle be total on A, but practically with separate compilation it is impossible to determine that, so we add the constraint that A must be sealed for the totality analysis to work.  Another example might be guards.  Consider two guarded cases where the booleans are simply inverted.  It is clear to a human (or possibly a static analyzer) that the switch is total, but it's probably not practical to determine that in general.

A thought question: if you think of totality analysis as a type of static analysis, is the proposed analysis sound and/or complete?

-Eddie
