I've broken this out as it deserves its own thread, now that I've
mentioned it:
I've been thinking about applications written for a hypothetical
future version of River, with codebase services that utilise
Package-based bytecode dependency analysis, Public Package API
evolution mapping, ClassLoader Package Isolation and Compatible
Package Substitution. In particular, I've been thinking about a
simple way to allow interface evolution.
I'm thinking of a djinn that runs indefinitely: individual services
and clients can be shut down on a regular basis, but the djinn itself
is not. When a client or service needs to refresh or upgrade class
files, it can do so by persisting and restarting.
Runtime linking must be honoured: if older bytecode exists in a
djinn, then provided the new, separately compiled bytecode it depends
upon honours the required methods etc., it is safe for the two to
coexist, even though they were not compiled at the same time and the
sources for both would not compile together. API compatibility would
be determined by static bytecode analysis performed by the codebase
service before the bytecode is made available.
Currently a public interface cannot be changed once implemented, due
to compile-time compatibility constraints: old methods must continue
to be implemented, and new methods can only be added by creating new
interfaces that extend the old interface. Binary (runtime)
compatibility, however, permits additional methods in interfaces.
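For illustration, the only source-compatible way to add a method
today is to freeze the old interface and extend it (PrinterService is
just a made-up example name):

    // PrinterService.java -- the original, frozen interface
    public interface PrinterService {
        void print(String document);
    }

    // PrinterService2.java -- new methods only arrive via a new interface
    public interface PrinterService2 extends PrinterService {
        void printDuplex(String document); // added in a later release
    }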
What if we declared an interface or annotation that allowed a method
to be discarded and new methods to be added without requiring the
compile-time constraints?
This interface might be called Evolve, or perhaps it could be an
annotation called @EvolvingInterface.
All interfaces that extend Evolve or have the @EvolvingInterface
annotation must also declare two exceptions for every interface
method, as sketched below:
throws ExtinctMethodException (new)
throws NotImplementedException
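A minimal sketch of what the marker types and an evolving interface
might look like (every name below is hypothetical, each type would
live in its own source file, and it reworks the PrinterService
example above):

    import java.lang.annotation.ElementType;
    import java.lang.annotation.Retention;
    import java.lang.annotation.RetentionPolicy;
    import java.lang.annotation.Target;

    // Hypothetical marker annotation for interfaces allowed to evolve.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.TYPE)
    public @interface EvolvingInterface {
    }

    // Hypothetical exception thrown by methods that have been retired.
    public class ExtinctMethodException extends Exception {
        private static final long serialVersionUID = 1L;
    }

    // Hypothetical exception thrown by methods not yet implemented.
    public class NotImplementedException extends Exception {
        private static final long serialVersionUID = 1L;
    }

    // An evolving interface: every method declares both exceptions so
    // implementations are free to retire or defer methods at runtime.
    @EvolvingInterface
    public interface PrinterService {
        void print(String document)
            throws ExtinctMethodException, NotImplementedException;

        void printDuplex(String document)
            throws ExtinctMethodException, NotImplementedException;
    }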
To retain runtime backward compatibility a method is never removed
from a class implementing an interface; an interface method could,
however, be marked @Deprecated. Programmers implementing this
interface could then choose, at their convenience, to change their
implementation of any interface methods marked as such to throw an
ExtinctMethodException. They should only do this after implementing
the new functionality if it exists (the functionality might have been
refactored and moved elsewhere), unless the method has been abandoned
altogether.
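Continuing that hypothetical sketch, an implementation that has
retired the deprecated print method after its functionality moved
into printDuplex might look like this:

    public class LaserPrinter implements PrinterService {

        // print was marked @Deprecated on the interface and its
        // functionality was refactored into printDuplex, so this
        // implementation retires it.
        @Override
        public void print(String document) throws ExtinctMethodException {
            throw new ExtinctMethodException();
        }

        @Override
        public void printDuplex(String document) {
            // real implementation lives here
        }
    }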
When a new method is added to an interface, a programmer can add the
method to all classes implementing the interface, initially throwing
NotImplementedException, until they have the resources / time
available to implement the new methods. This would reduce the burden
on programmers of preserving compile-time compatibility.
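The addition case is the mirror image of the sketch above: when
printDuplex first appears on the interface, an existing
implementation can satisfy the compiler with a stub until there is
time to implement it:

    public class InkjetPrinter implements PrinterService {

        @Override
        public void print(String document) {
            // existing, working implementation
        }

        // Newly added interface method: stubbed for now, which keeps
        // the class compiling against the evolved interface.
        @Override
        public void printDuplex(String document) throws NotImplementedException {
            throw new NotImplementedException();
        }
    }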
Thus when an object receives an ExtinctMethodException at runtime it
can choose how to handle it, perhaps by calling a static singleton to
request that the JVM persist its current state and restart, so that
on restarting it loads later, compatible code from the codebase
service that implements the functionality in a compatible way.
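On the calling side this might look something like the sketch below;
JvmLifecycle is a made-up singleton standing in for whatever
persist-and-restart facility the container would actually provide:

    public final class JvmLifecycle {

        private static final JvmLifecycle INSTANCE = new JvmLifecycle();

        private JvmLifecycle() {}

        public static JvmLifecycle getInstance() {
            return INSTANCE;
        }

        // Placeholder: a real implementation would serialize application
        // state and arrange for the JVM to restart against the latest
        // codebase, as described above.
        public void persistAndRestart() {
            System.out.println("persisting state and scheduling restart...");
        }
    }

    // At a call site, using the hypothetical types sketched earlier:
    public class PrintClient {
        void printReport(PrinterService printer, String report) {
            try {
                printer.print(report);
            } catch (ExtinctMethodException e) {
                // The method has been retired; newer compatible code exists
                // at the codebase service, so persist and restart into it.
                JvmLifecycle.getInstance().persistAndRestart();
            } catch (NotImplementedException e) {
                // Not implemented by this service instance yet; degrade
                // gracefully or try another service.
            }
        }
    }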
Serialization can be used for persistence; the replacement bytecode
would already have been checked by the codebase service for
serialVersionUID compatibility.
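The kind of check the codebase service might perform can be sketched
with the standard ObjectStreamClass API (the surrounding machinery
that loads both class versions is assumed):

    import java.io.ObjectStreamClass;

    public class SerialCompatibilityCheck {

        // Returns true if the replacement class declares (or derives)
        // the same serialVersionUID as the class it substitutes for.
        public static boolean sameStreamVersion(Class<?> current,
                                                Class<?> replacement) {
            ObjectStreamClass a = ObjectStreamClass.lookup(current);
            ObjectStreamClass b = ObjectStreamClass.lookup(replacement);
            if (a == null || b == null) {
                return false; // one of the classes is not serializable
            }
            return a.getSerialVersionUID() == b.getSerialVersionUID();
        }
    }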
The persistence framework might be dynamically updated with handling
code from the codebase service to create a totally different object
graph from the persisted data. I haven't given this further thought
at this stage; it would be a later project if the static bytecode
analysis and codebase service are successful.
Any long-lived objects with Xuid object identities that other
objects refer to must, however, remain runtime compatible.
Incompatible evolutions / branches of packages could coexist in the
same JVM, in separate ClassLoaders. It would not be permissible,
however, to implement incompatible evolutions of services.
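As a rough illustration of that isolation (the jar URLs below are
made up), two branches of the same package loaded by separate
ClassLoaders are distinct runtime types:

    import java.net.URL;
    import java.net.URLClassLoader;

    public class IsolationDemo {
        public static void main(String[] args) throws Exception {
            // Hypothetical codebase locations for two incompatible branches.
            URL[] branchA = { new URL("http://codebase/example-1.x.jar") };
            URL[] branchB = { new URL("http://codebase/example-2.x.jar") };

            // Neither loader delegates to the other, so com.example.Widget
            // from branch A and branch B can coexist without clashing.
            ClassLoader loaderA = new URLClassLoader(branchA);
            ClassLoader loaderB = new URLClassLoader(branchB);

            Class<?> widgetA = Class.forName("com.example.Widget", false, loaderA);
            Class<?> widgetB = Class.forName("com.example.Widget", false, loaderB);

            System.out.println(widgetA == widgetB); // false: isolated branches
        }
    }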
Hypothetically:
A ClassLoader hierarchy or implementation might also include an
option to throw an exception, such as an
ExtinctClassDefinitionException, when an earlier class version
containing only a subset of the required class version's API is
found, requiring a JVM restart for maximum runtime compatibility. In
this case the class file version required is a later evolution of the
loaded class file.
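One way such a loader might behave is sketched below;
ExtinctClassDefinitionException is the hypothetical exception named
above, and the version tables are assumed to come from the codebase
service's static analysis:

    import java.util.Map;

    // Hypothetical unchecked exception signalling that the locally
    // loaded class version has been superseded and a restart is needed.
    class ExtinctClassDefinitionException extends RuntimeException {
        ExtinctClassDefinitionException(String message) {
            super(message);
        }
    }

    class VersionCheckingClassLoader extends ClassLoader {

        // className -> package API version, as reported by the
        // (hypothetical) codebase service's static bytecode analysis.
        private final Map<String, Integer> requiredVersions;
        private final Map<String, Integer> loadedVersions;

        VersionCheckingClassLoader(ClassLoader parent,
                                   Map<String, Integer> requiredVersions,
                                   Map<String, Integer> loadedVersions) {
            super(parent);
            this.requiredVersions = requiredVersions;
            this.loadedVersions = loadedVersions;
        }

        @Override
        protected Class<?> loadClass(String name, boolean resolve)
                throws ClassNotFoundException {
            Integer required = requiredVersions.get(name);
            Integer loaded = loadedVersions.get(name);
            if (required != null && loaded != null && loaded < required) {
                // The loaded class is an earlier evolution with only a
                // subset of the required API: refuse to link and signal
                // that a restart is needed.
                throw new ExtinctClassDefinitionException(name + " version "
                    + loaded + " is extinct; version " + required + " required");
            }
            return super.loadClass(name, resolve);
        }
    }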
Think of this scenario:
A service is exported; the class files for one package currently
loaded in the service's JVM have become extinct.
A client looks up the service and downloads a later, compatible class
file version for the service proxy from a codebase server. When the
client receives the service proxy, it receives the latest version
from the codebase service.
The client passes a local object to one of the service's methods.
Upon receiving the marshalled object, the service's ClassLoader
system identifies that the class file for that object belongs to a
package which is a later version with an extended API. If the API for
the class in question hasn't changed, the service continues to
function. If, however, the marshalled class has an extended API, then
the earlier version cannot be substituted for it: the ClassLoader in
the service throws an ExtinctClassDefinitionException, notifying the
service implementation to commence a restart. The service should now
also throw a RemoteException.
If, however, the downloaded proxy is of a later package version than
the service's interface and contains an additional method, then that
proxy needs to be able to identify any methods that don't exist in
the service and throw a NotImplementedException for any non-existent
methods. A class file version check needs to be derived from the
static bytecode API analysis.
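That proxy-side behaviour could be sketched with a standard dynamic
proxy. The set of method names the deployed service actually supports
is assumed to come from the codebase service's analysis, and
NotImplementedException is the hypothetical exception from the
earlier sketch; direct reflection on a local backend stands in for
the remote call a real proxy would marshal:

    import java.lang.reflect.InvocationHandler;
    import java.lang.reflect.InvocationTargetException;
    import java.lang.reflect.Method;
    import java.lang.reflect.Proxy;
    import java.util.Set;

    public class VersionAwareProxy implements InvocationHandler {

        private final Object backend;        // the real (older) service endpoint
        private final Set<String> supported; // method names present in its version

        private VersionAwareProxy(Object backend, Set<String> supported) {
            this.backend = backend;
            this.supported = supported;
        }

        @SuppressWarnings("unchecked")
        public static <T> T wrap(Object backend, Class<T> latestInterface,
                                 Set<String> supported) {
            return (T) Proxy.newProxyInstance(
                    latestInterface.getClassLoader(),
                    new Class<?>[] { latestInterface },
                    new VersionAwareProxy(backend, supported));
        }

        @Override
        public Object invoke(Object proxy, Method method, Object[] args)
                throws Throwable {
            // A method added to the interface after the service was deployed
            // cannot be handled by the backend; fail fast.
            if (!supported.contains(method.getName())) {
                throw new NotImplementedException();
            }
            try {
                return method.invoke(backend, args);
            } catch (InvocationTargetException e) {
                throw e.getCause();
            }
        }
    }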
The reverse scenario could also occur:
A service is restarted and contains the latest class files.
A client looks up the service, but the client already contains
earlier versions of the required class files; upon attempting to load
any of these classes, the ClassLoader system will throw an
ExtinctClassDefinitionException.
While potentially incompatible interfaces could exist in a djinn,
the proxy hides the incompatibilities. Locally within a JVM,
incompatible package implementations are identified and hidden from
each other using ClassLoader isolation, while packages that have
evolved in a binary-compatible manner should cause the JVM to request
a restart once later class versions are required. Dependencies are
declared based on the API identified by static bytecode analysis at
the codebase service.
What about embedded devices with high availability requirements /
constraints?
I'm sure the Jini Surrogate Architecture can help here.
So as you can see, I've changed my mind about the VersionedClasses
interfaces I was working on earlier; delegates are too much of a
compromise. Besides, package upgrades will probably not occur often
enough to require high uptime, and availability can instead be
achieved with redundant services rather than uptime.
Cheers,
Peter.