Niclas Hedhman wrote:
Peter,
I don't want to sound patronizing or a "besser-wisser" (know-it-all), but I
disagree with your ambitions.
Major releases are the opportunity where "clean up" should occur, but they
should happen as seldom as possible. Constraining that opportunity causes
long-term maintenance headaches for the community, which might drain it of
what little energy exists; IMHO that is pretty much the current state of River.
Now, if you formalize your list of what is possible, I think we can devise a
strategy for making such changes without breaking source compatibility as well.
Chapter 13 of the Java Language Specification, third edition, is a good place
to start. Attempting the above suggestion places additional constraints
on the changes possible, which would in fact make life more difficult for
a programmer implementing a new feature while preserving backward
compatibility. However, if and when we determine that we must break
backward compatibility, we can at that point check whether binary
compatibility is at least salvageable. I would like to reserve the
opportunity to determine whether binary compatibility can be salvaged
after it has been decided that source code (compile-time) compatibility
must be broken in order to achieve an outcome. This constraint wouldn't
apply until breaking compile-time compatibility has been deemed necessary.
Process:
1. It is deemed that compile-time backward compatibility must be broken
to implement feature X.
2. Committers vote on adding the feature.
3. If the vote is in favour, check whether binary compatibility can be
preserved in the implementation.
4. If not, then binary compatibility is also broken.
<Diverging tangentially (should be another thread) for a moment:
I've been thinking about applications written for a future hypothetical
version of River, with codebase services that utilise Package-based
bytecode dependency analysis, Public Package API evolution mapping,
ClassLoader Package Isolation and Compatible Package Substitution.
I've been thinking about a simple way to allow interface evolution.
I'm thinking of a djinn that runs indefinitely; individual services and
clients can be shut down on a regular basis, but the djinn itself isn't.
When a client or service needs to refresh or upgrade class files, it can
do so by persisting and restarting.
Runtime linking must be honoured: if older bytecode exists in a djinn, then
provided the newer, separately compiled bytecode it depends upon honours the
required methods etc., the two can safely coexist, even though they were not
compiled at the same time and the combined sources would not compile. API
compatibility would be determined beforehand, by static bytecode analysis in
the codebase service, prior to making the bytecode available.
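To illustrate the kind of check I mean, here is a minimal sketch (a
simplification: it uses reflection on already-loaded classes, where the real
service would analyse bytecode without loading it, and all names are
invented). It verifies that every public method of the old API version still
exists, with an identical erased signature, in the new version:

import java.lang.reflect.Method;

// Sketch only: the old and new versions are assumed to be loaded in separate
// ClassLoaders, so types are compared by name rather than by Class identity.
public class ApiCompatibilityCheck {

    public static boolean isBinaryCompatible(Class<?> oldApi, Class<?> newApi) {
        for (Method old : oldApi.getMethods()) {
            if (!hasMatchingMethod(newApi, old)) {
                return false; // a method old clients may have linked against is gone
            }
        }
        return true;
    }

    private static boolean hasMatchingMethod(Class<?> c, Method old) {
        for (Method m : c.getMethods()) {
            if (!m.getName().equals(old.getName())) continue;
            if (!m.getReturnType().getName().equals(old.getReturnType().getName())) continue;
            Class<?>[] a = old.getParameterTypes();
            Class<?>[] b = m.getParameterTypes();
            if (a.length != b.length) continue;
            boolean same = true;
            for (int i = 0; i < a.length; i++) {
                if (!a[i].getName().equals(b[i].getName())) { same = false; break; }
            }
            if (same) return true;
        }
        return false;
    }
}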
Currently a public interface cannot be changed once implemented, due to
compile-time compatibility constraints: old methods must continue to be
implemented, and new methods can only be added by creating new interfaces
that extend the old interface. Binary (runtime) compatibility, however,
permits additional methods in interfaces.
What if we declared an interface or annotation that allowed a method to
be discarded, and allowed new methods to be implemented, without requiring
the compile-time constraints?
This interface might be called Evolve, or perhaps an annotation called
@EvolvingInterface.
All interfaces that extend Evolve, or carry the @EvolvingInterface
annotation, must also declare two exceptions for every interface method:
throws ExtinctMethodException (new)
throws NotImplementedException
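Roughly like the following sketch (everything here is invented: neither
exception exists in the JDK or River, and PrinterService is just an example):

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Hypothetical marker for interfaces permitted to evolve under binary
// (rather than source) compatibility rules.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.TYPE)
@interface EvolvingInterface {}

// Hypothetical checked exceptions proposed above.
class ExtinctMethodException extends Exception {}
class NotImplementedException extends Exception {}

// Every method declares both exceptions, so implementations can retire old
// methods or defer new ones without breaking already-linked callers.
@EvolvingInterface
interface PrinterService {
    void print(String document)
            throws ExtinctMethodException, NotImplementedException;
}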
To retain runtime backward compatibility a method is never removed from a
class implementing an interface; an interface method could, however, be
marked @Deprecated. Programmers implementing the interface could then
choose, at their convenience, to change their implementation of any method
so marked to throw an ExtinctMethodException. They should only do this
after implementing the new functionality, if it exists; unless the method
has been abandoned altogether, the functionality might have been refactored
and moved elsewhere.
When a new method is added to an interface, a programmer can preserve
compile-time compatibility by adding the method to all classes implementing
the interface, initially throwing NotImplementedException until the
resources and time are available to implement the new methods. This would
reduce the burden on programmers of preserving compile-time compatibility.
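Continuing the invented PrinterService sketch, an implementation caught
mid-evolution might look like this (still only a sketch):

// The hypothetical interface after one round of evolution: printColor is
// newly added, print is on its way out.
@EvolvingInterface
interface PrinterService {
    @Deprecated
    void print(String document)
            throws ExtinctMethodException, NotImplementedException;

    void printColor(String document, String colorProfile)
            throws ExtinctMethodException, NotImplementedException;
}

class LegacyPrinter implements PrinterService {

    // Retired at the implementor's convenience, once any replacement
    // functionality exists elsewhere: callers are told to fetch newer code.
    public void print(String document) throws ExtinctMethodException {
        throw new ExtinctMethodException();
    }

    // Stubbed until time permits a real implementation, keeping the class
    // source-compatible with the evolved interface in the meantime.
    public void printColor(String document, String colorProfile)
            throws NotImplementedException {
        throw new NotImplementedException();
    }
}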
Thus, when a runtime object receives an ExtinctMethodException, it can
choose how to handle it: perhaps by calling a static singleton class to
request that the JVM persist its current state and restart, thereby loading
later, compatible code from the codebase service that implements the
functionality in a compatible way on restart.
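For example (RestartManager is invented, standing in for whatever
persist-and-restart mechanism we might settle on):

// Hypothetical static singleton; persistAndRestart() is a placeholder for
// "serialize my state, exit, and let the activation system restart me".
class RestartManager {
    private static final RestartManager INSTANCE = new RestartManager();
    static RestartManager getInstance() { return INSTANCE; }
    void persistAndRestart() { /* persist via serialization, then restart */ }
}

class PrinterClient {
    void printReport(PrinterService printer, String report) {
        try {
            printer.print(report);
        } catch (ExtinctMethodException e) {
            // Our bytecode is stale: persist, restart, and pick up newer
            // compatible classes from the codebase service.
            RestartManager.getInstance().persistAndRestart();
        } catch (NotImplementedException e) {
            // The method exists but isn't implemented yet; degrade gracefully.
        }
    }
}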
Serialization can be used for persistence; the replacement bytecode would
already have been checked by the codebase service for serialVersionUID
compatibility.
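That check could be as simple as comparing stream versions with the standard
ObjectStreamClass API, e.g.:

import java.io.ObjectStreamClass;

// Sketch of the serialVersionUID check the codebase service might perform
// before publishing replacement bytecode for persisted objects.
class StreamVersionCheck {
    static boolean sameStreamVersion(Class<?> oldVersion, Class<?> newVersion) {
        ObjectStreamClass oldDesc = ObjectStreamClass.lookup(oldVersion);
        ObjectStreamClass newDesc = ObjectStreamClass.lookup(newVersion);
        // lookup() returns null for classes that aren't serializable.
        return oldDesc != null && newDesc != null
                && oldDesc.getSerialVersionUID() == newDesc.getSerialVersionUID();
    }
}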
The persistence framework might be dynamically updated with handling code
from the codebase service to create a totally different object graph from
the persisted data. I haven't given this further thought at this stage; it
would be a later project, if the static bytecode analysis and codebase
service are successful.
Any long-lived objects with Xuid object identities that other objects refer
to must, however, remain runtime compatible.
Incompatible evolutions / branches of packages could coexist in the same
JVM, in separate ClassLoaders. It would not be permissible, however, to
implement incompatible evolutions of services.
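Sibling ClassLoaders already make the first part possible today, e.g. (jar
paths and the class name are invented):

import java.net.URL;
import java.net.URLClassLoader;

// Sketch: two incompatible evolutions of the same package coexisting in one
// JVM by living in sibling ClassLoaders.
public class IsolationDemo {
    public static void main(String[] args) throws Exception {
        ClassLoader parent = IsolationDemo.class.getClassLoader();
        URLClassLoader v1 = new URLClassLoader(
                new URL[] { new URL("file:lib/service-impl-1.jar") }, parent);
        URLClassLoader v2 = new URLClassLoader(
                new URL[] { new URL("file:lib/service-impl-2.jar") }, parent);

        Class<?> a = v1.loadClass("org.example.ServiceImpl");
        Class<?> b = v2.loadClass("org.example.ServiceImpl");

        // Same binary name, distinct runtime types: each loader defines its
        // own version, so the two evolutions never collide.
        System.out.println(a == b); // false
    }
}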
This is something I was notoriously a PITA about among co-workers some
20-25 years ago. I called it designing for "forward compatibility": the art
of removing future obstacles to remaining backward compatible.
One such product (an RTOS) released 2.0 in 1986 and is still in its 2.x series.
Such a thing is not "automatic" and cannot be bolted on afterwards in a
compatible manner, but if River allows itself to break compatibility for a
3.0 release with the forward-thinking mindset you have, then I think it can
be possible not to break compatibility for decades.
This sounds interesting. What you're saying, pretty much, is: cut your
losses, break backward compatibility now and you won't have to later on,
while reducing the effort of adding new functionality. Did you have
something in mind?
Cheers,
Peter.
If I don't make sense, just ignore me, and I'll keep my advice to myself.
-- Niclas
On Oct 1, 2009 7:24 PM, "Peter Firmstone" <[email protected]> wrote:
Niclas Hedhman wrote:
> On Thu, Oct 1, 2009 at 9:20 AM, Peter Firmstone <[email protected]> wrote: ...
Less than desirable. A fork perhaps? Just kidding...
But it's not out of the question; it really depends upon the trade-off, i.e.
what we gain. It isn't something that is prohibited, just something to be
carefully considered.
A major release doesn't need to break source code compatibility to be a
major release; however, if such an occurrence eventuates, it's not
unreasonable to expect application developers to make some changes before
compiling and distributing. If it is possible not to break binary
compatibility while gaining a great new feature, then shouldn't we keep that
option available? This enables users of the platform to upgrade while
allowing existing application implementations to prevail. The significance
comes from River being a platform, much like Java is a platform.
For instance, the following changes break source code compatibility but not
binary compatibility (a sketch follows the list):
Adding new methods to existing interfaces. You could simply extend the old
interface by creating a new interface, but being able to provide the same
service interface to both old and new bytecode is advantageous.
(Especially when old objects might gain new life from new bytecode... bear
with me...)
In a class designed for inheritance, for instance, one might need to
increase the visibility of one of its methods, i.e. from protected to
public. An application subclass that overrides this method with protected
access, and whose other classes utilise the overridden behaviour, would no
longer compile due to this change; binary compatibility, however, hasn't
been broken.
Changes to the throws clause of a method or constructor don't affect binary
compatibility.
Additional overloading of existing methods or constructors in classes or
interfaces doesn't break binary compatibility.
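To make these concrete, a before/after sketch (all type names invented; the
"version 2" types are renamed only so both versions fit in one listing):

// Version 1 of a hypothetical service interface and a class designed for
// inheritance:
interface Lookup {
    Object find(String name);
}

class Registrar {
    protected void refresh() { /* ... */ }
}

// Version 2 (per JLS 3rd ed., Chapter 13): each change below breaks source
// compatibility for some existing code, yet previously compiled clients and
// subclasses continue to link and run.
interface Lookup2 {                         // really still named Lookup
    Object find(String name);
    Object find(String name, long timeout); // new overload: binary-safe
}

class Registrar2 {                          // really still named Registrar
    public void refresh() { /* ... */ }     // protected -> public: binary-safe,
                                            // but a subclass overriding it as
                                            // protected no longer recompiles
}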
The use case would be that we realise that, in order to have excellent new
feature X, we must break compile-time compatibility. But since binary
compatibility is more flexible than compile-time compatibility, we might be
able to salvage binary compatibility and avoid inconveniencing users who
aren't able to rewrite and recompile their application code. The lock-in
characteristic of old binary applications is the biggest obstacle to
platform migration. Tools that mitigate it somewhat, when possible, are a
blessing.
I must confess that I deliberately threw the comment about binary
compatibility into the mix, since it is relevant to the codebase analysis
service that I'm working on. The flexibility of binary compatibility allows
a more natural evolution of class files, based on identification of forward
binary-compatible API; this is particularly relevant to long-lived objects.
Clients can receive new, compatible class files for their old objects when
they restart their JVM. River in its current form is not suitable for the
semantic web, due to current codebase limitations and ClassLoader issues.
I want to make it so. Overly ambitious perhaps, but I've always relished a
good challenge; without this aspect of my personality, I'd probably have
little motivation to be active on this list. Such motivations have drawbacks
too, though.
Cheers,
Peter.
Personally, I think this requirement is too ambitious. If source
compatibility is broken, you ...