On Mar 22, 2022, at 7:44 PM, Kevin Bourrillion <kev...@google.com> wrote:

On Tue, Mar 22, 2022 at 4:56 PM Dan Smith <daniel.sm...@oracle.com> wrote:

In response to some encouragement from Remi, John, and others, I've decided to 
take a closer look at how we might approach the categorization of value and 
identity classes without relying on the IdentityObject and ValueObject 
interfaces.

(For background, see the thread "The interfaces IdentityObject and ValueObject 
must die" in January.)

Could anyone summarize the strongest version of the argument against them? The 
thread is not too easy to follow.

I'm sure there's more, but here's my sense of the notable problems with the 
status quo approach:

- We're adding a marker interface to every concrete class in the Java universe. 
Generally, an extra marker interface shouldn't affect anything, but the Java 
universe is big, and we're bound to break some things (specifically by changing 
reflection behavior and by producing more compile-time intersection types; the 
sketch after this list makes both points concrete). We can ask people to fix 
their code and make fewer assumptions, but it adds upgrade friction, and the 
budget for breaking stuff is not unlimited.

- Injecting superinterfaces is something entirely new that I think JVMs would 
really rather not be involved with. But it's necessary for compatibly evolving 
class files. We've spent a surprising amount of time working out exactly when 
the interfaces should be injected; separate compilation leads to tricky corner 
cases.

- There's a tension between our use of modifiers and our use of interfaces. 
We've made some ad hoc choices about which are used in which places (e.g., you 
can't declare a concrete value class by saying 'class Foo implements 
ValueObject'). In the JVM, we need modifiers for format checking and interfaces 
for types. This is all okay, but the arbitrariness and redundancy of it are 
unsatisfying and suggest there might be some accidental complexity.

- Subclass restriction: 'implements IdentityObject' has been replaced with the 
'identity' modifier. The complexity cost of these special modifiers seems on 
par with the complexity of the special rules for inferring and checking the 
superinterfaces.
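
To make the reflection and intersection-type points from the first bullet 
concrete, here's a minimal sketch (the `Plain` class is illustrative, and the 
injected-marker behavior follows the status quo draft design, not any shipping 
JDK):

    import java.util.Arrays;

    public class MarkerCheck {
        static class Plain {}  // declares no superinterfaces

        public static void main(String[] args) {
            // On today's JDKs this prints "[]". Under the status quo
            // design, the injected IdentityObject marker would appear
            // here too, so existing code that assumes a "plain" class
            // reports no interfaces quietly changes behavior.
            System.out.println(Arrays.toString(Plain.class.getInterfaces()));

            // Compile-time analogue: a conditional like
            //     flag ? new Plain() : "some string"
            // today infers only Object as the common supertype; with the
            // markers, the inferred intersection would also mention
            // IdentityObject, which can perturb inference and overloads.
        }
    }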

The rules for the modifiers are okay. But here's my observation. The simplest 
way to explain those rules would be if the `value` keyword were literally 
shorthand for `extends/implements ValueObject`. I think the rules fall out from 
that, plus:

*   IO and VO are disjoint. (Interfaces can already be disjoint, like 
    `interface Foo { int x(); }` and `interface Bar { boolean x(); }`; if it 
    really came down to it, you could literally put an incompatible method 
    into each type and blame their noncohabitation on that :-) See the 
    snippet after this list.)
*   A class that breaks the value class rules has committed to being an 
    identity class.
*   We wouldn't know how to make an instance that is "neither", so 
    instantiating a "neither" class has to have a default behavior, and that 
    default has to be what instantiation has always given you: an identity 
    object.
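
Spelled out, that incompatible-method trick already works with today's 
compiler (the class name `Both` is just for illustration):

    // Two interfaces no class can implement together: x() would need
    // two different return types, so javac rejects the combination.
    interface Foo { int x(); }
    interface Bar { boolean x(); }

    // class Both implements Foo, Bar {}
    //   error (paraphrasing javac): types Foo and Bar are incompatible;
    //   both define x(), but with unrelated return types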

In each case I've explained why the rule seems very easy for me to understand. 
So from my POV, this still pulls me back to the types anyway. I would say that 
your rules for the modifiers are largely simulating those types.

Yes, it is nice how we get inheritance for free from interfaces. But when you 
compare that with the "plus" list (which I'd summarize as: disjointness, 
declaration restrictions, and inference), it's not like getting inheritance 
"for free" is such a huge win. It's maybe 20% less complexity or something to 
explain the feature.

Of course the big win is that interfaces are types, so we already know how to 
use them in the static type system. As your later comments suggest, I think our 
expectations for static typing are probably the most important factor in 
deciding which strategy best meets our needs.
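
As a sketch of that kind of use (draft Valhalla surface; none of this compiles 
on a shipping JDK, and the method names are made up):

    // With the markers as real types, ordinary generics and instanceof
    // carry the constraints; no new type-system machinery is needed.
    static <T extends ValueObject> void internByValue(T t) {
        // safe to deduplicate: t has no identity to lose
    }

    static void lockIfPossible(Object o) {
        if (o instanceof IdentityObject) {
            synchronized (o) {
                // only identity objects support synchronization
            }
        }
    }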
