Tom brought up an interesting point about dependencies recently, so I
thought I might touch on dependencies, as there are a few types (Tom
mentioned #1):
1. Platform Dependencies at distribution time - only the Java
Platform; we distribute a single binary archive for River
installation that contains all dependencies. Or do we make a
small installer that downloads & installs River on demand for you,
also allowing you to select other River projects, like Rio,
Bantam, or other River applications?
2. Application Dependencies.
3. Service Dependencies.
Well, I'm in agreement with Tom on #1 (the good), at least until we
come up with something better, but the way in which we currently
handle #2 (the bad) and #3 (the ugly) perhaps leaves something to be
desired:
#3 - Currently, service dependencies are managed by annotating
marshalled objects with a codebase URL (via MarshalledObject). This
works in an intranet environment, but it won't on the Internet. Why
not? For isolation, each service proxy is tied to a codebase, and
each codebase has its own URL, which is coupled to a unique
ClassLoader in the client.
On an intranet, all services can use the same http codebase; on the
Internet, with the current scheme, we would have a potentially very
large number of codebases (roughly one per service), with much
duplication. The bad news is that ClassLoaders are memory hungry, so
we'd soon run out of both network bandwidth and memory when
downloading multiple duplicate codebases that all provide similar
services. The obvious answer is that services with a common API can
share the same bytecode, which might work for a short period of time,
at least until an incompatible change is made to one service
instance. It gets worse for smart proxies: how can a service
interface be visible as the same runtime type across two proxy
implementations of that interface when they live in different
ClassLoaders? Then what happens when the codebase moves? It just
doesn't work.
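To make the ClassLoader isolation problem concrete, here's a small
standalone sketch (plain JDK, not River code; the URLs and class name
are made up) showing that the same interface served from two codebase
URLs ends up as two distinct runtime types:

    import java.net.URL;
    import java.net.URLClassLoader;

    // Sketch only: the same class file served from two different codebase
    // URLs is defined by two different ClassLoaders, so the resulting
    // runtime types are incompatible, even though the bytecode is identical.
    public class CodebaseIsolationDemo {
        public static void main(String[] args) throws Exception {
            // Hypothetical codebase URLs serving identical download jars.
            URL codebaseA = new URL("http://hostA/service-dl.jar");
            URL codebaseB = new URL("http://hostB/service-dl.jar");

            ClassLoader loaderA = new URLClassLoader(new URL[] { codebaseA });
            ClassLoader loaderB = new URLClassLoader(new URL[] { codebaseB });

            Class<?> a = loaderA.loadClass("com.example.SomeService");
            Class<?> b = loaderB.loadClass("com.example.SomeService");

            // Prints false: a smart proxy implementing loaderA's interface is
            // not an instance of loaderB's, so the two can't be used together.
            System.out.println(a == b);
        }
    }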
How can #3 be made to work? Instead of ignoring dependencies, we can
manage them. A few things are required (a rough sketch of the first
follows this list):
* A codebase service - by using a service the codebase location can
be discovered dynamically.
* The Service Interfaces must have a version identity. (Even if we
prohibited change, we couldn't enforce it). If we have a version,
we can finally change an interface without breaking existing
implementations (they just wouldn't be able to see our interface).
* The service download must also have an identity; otherwise we
cannot locate it.
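Something like the following could satisfy the first point. It's only
a hypothetical sketch; the interface name and method are mine, not an
existing River API:

    import java.net.URL;
    import java.rmi.Remote;
    import java.rmi.RemoteException;

    // Hypothetical codebase service: a client asks where to download the
    // code for a given identity and version, rather than relying on a URL
    // annotation baked into the proxy at serialization time.
    public interface CodebaseService extends Remote {
        /** Return one or more locations currently serving the requested code. */
        URL[] locate(String codeIdentity, String version) throws RemoteException;
    }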
Michael Warres solved the http URL problem and almost solved the
bytecode identity problem by using a message digest with codebase
services. Tim Blackman discovered that, when using message digests of
bytecode, different compilers produce different bytecode optimisations
from the same source code, and thus different message digests. If you
use the message digest as the identity and you want to fix a bug or
use a different compiler, you can't replace your original bytecode,
because its identity has changed. The identity needs to be
determinable from a package API signature (thanks Tim for the
suggestion) and/or a version number for that package.
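For what it's worth, the digest approach is simple to compute, which
is what makes its fragility such a shame. A minimal sketch, plain JDK
only (SHA-256 is my choice of digest here, not necessarily what the
original work used):

    import java.nio.file.Files;
    import java.nio.file.Paths;
    import java.security.MessageDigest;

    // Sketch: derive an identity from the raw bytecode. Recompiling the same
    // source with a different compiler (or after a bug fix) can change the
    // bytes, so the identity changes even though the package API did not.
    public class BytecodeIdentity {
        public static String digest(String classFilePath) throws Exception {
            byte[] bytecode = Files.readAllBytes(Paths.get(classFilePath));
            byte[] hash = MessageDigest.getInstance("SHA-256").digest(bytecode);
            StringBuilder hex = new StringBuilder();
            for (byte b : hash) {
                hex.append(String.format("%02x", b));
            }
            return hex.toString();
        }
    }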
#2 - We don't currently address #2 at all; however, we could with
codebase services. If our application implementation depends on other
software and we don't want to distribute all of its dependent
libraries with our application, those dependencies could instead be
discovered at runtime using codebase services (a client might already
have them installed, and they can be shared, reducing memory and
network consumption). We could then update our application and all of
its dependencies dynamically over time by discovering them
dynamically. What do we need to achieve this? A Codebase Service,
Code Identity, Code Version, a neat way of packaging this
information, and a way to organise our ClassLoaders.
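A rough illustration of the wiring, building on the hypothetical
CodebaseService sketched above (again, the names and structure are
mine, not a proposal for an actual API):

    import java.net.URL;
    import java.net.URLClassLoader;
    import java.util.HashMap;
    import java.util.Map;

    // Hypothetical glue: resolve a dependency by identity and version through
    // a codebase service, and keep one ClassLoader per (identity, version)
    // so the bytecode is downloaded and held in memory only once.
    public class DependencyResolver {
        private final CodebaseService codebase;   // discovered via lookup
        private final Map<String, ClassLoader> shared =
            new HashMap<String, ClassLoader>();

        public DependencyResolver(CodebaseService codebase) {
            this.codebase = codebase;
        }

        public synchronized ClassLoader loaderFor(String identity, String version)
                throws Exception {
            String key = identity + ":" + version;
            ClassLoader loader = shared.get(key);
            if (loader == null) {
                URL[] urls = codebase.locate(identity, version);
                loader = new URLClassLoader(urls, getClass().getClassLoader());
                shared.put(key, loader);
            }
            return loader;
        }
    }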
How might a user perceive this? Well, we could have a distributed
application store (no one owns it and everyone participates) where
available network services are discovered using something like
Bonjour. One way would be to have an Apache River runtime included
with OpenSolaris or Linux distributions (Debian or Red Hat are good
places to include it, as these feed other distributions).
Then, if a user permits it, each installation could advertise Apache
River as a download using Bonjour services (multicast DNS), with a
neat description, so other users could discover and download it. It
could be made available on a popular website or through other
distribution means like magazines (on Blu-ray discs, including
installation files for PCs etc.; poke that disc into any Blu-ray
player and River's menus pop up).
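As a sketch of the advertising side, here's roughly what that could
look like using jmDNS, one common Java mDNS/DNS-SD library (the
library choice, service type, name and port are all my assumptions;
the post only says "Bonjour"):

    import java.net.InetAddress;
    import javax.jmdns.JmDNS;
    import javax.jmdns.ServiceInfo;

    // Sketch: advertise a local River download over multicast DNS so other
    // machines on the network can discover it.
    public class RiverAdvertiser {
        public static void main(String[] args) throws Exception {
            JmDNS jmdns = JmDNS.create(InetAddress.getLocalHost());
            ServiceInfo info = ServiceInfo.create(
                "_river-dl._tcp.local.",    // hypothetical service type
                "Apache River download",    // human readable name
                8080,                       // port of the local download server
                "Apache River runtime and installer");
            jmdns.registerService(info);
            // ... keep running; call unregisterAllServices() and close() on shutdown.
        }
    }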
When you download and install Apache River, we could provide an
Application Discovery Window that discovers and displays network
applications (services with ServiceUIs). In this window we might want
to hide services that are designed to be consumed by machines rather
than services that have an application with a UI. We could do
something with JFrames where a number of applications could exist in
one window, say chat, music streaming, a news feed, games etc., and
the user gets to mix and match what's available. The user might tear
off frames, so they can exist outside the Net Application window.
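A bare-bones Swing sketch of the "several applications in one window"
idea (the application names are placeholders, and tear-off behaviour
would need extra work beyond this):

    import javax.swing.JDesktopPane;
    import javax.swing.JFrame;
    import javax.swing.JInternalFrame;
    import javax.swing.SwingUtilities;

    // Sketch: one top-level window hosting several discovered applications,
    // each in its own resizable, closable internal frame.
    public class NetApplicationWindow {
        public static void main(String[] args) {
            SwingUtilities.invokeLater(new Runnable() {
                public void run() {
                    JFrame window = new JFrame("Network Applications");
                    JDesktopPane desktop = new JDesktopPane();
                    String[] apps = { "Chat", "Music Streaming", "News Feed", "Games" };
                    for (int i = 0; i < apps.length; i++) {
                        JInternalFrame frame =
                            new JInternalFrame(apps[i], true, true, true, true);
                        frame.setBounds(30 * i, 30 * i, 260, 180);
                        frame.setVisible(true);
                        desktop.add(frame);
                    }
                    window.setContentPane(desktop);
                    window.setSize(800, 600);
                    window.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                    window.setVisible(true);
                }
            });
        }
    }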
There could be locally discoverable bus timetables, or university
lecture notes discovered when you turn up for your lecture. A doctor
might have a PDA with wireless that shows a patient's details and
vital signs as they walk past hospital beds. The possibilities are
endless. We could have a list of services that users can download and
publish themselves. We could end up with a Network Application
Participation Model. These applications could use JavaSpaces to share
messages or information, or to offload processing.
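For the JavaSpaces part, a minimal sketch of what a shared message
could look like (my own example entry; the space proxy itself would
be obtained through normal Jini lookup):

    import net.jini.core.entry.Entry;

    // Sketch: an entry shared through a JavaSpace. Entries use public fields
    // and a public no-argument constructor.
    public class ChatMessage implements Entry {
        public String channel;
        public String text;

        public ChatMessage() {
        }

        public ChatMessage(String channel, String text) {
            this.channel = channel;
            this.text = text;
        }
    }

    // Once a JavaSpace proxy has been discovered:
    //   space.write(new ChatMessage("lobby", "hello"), null, Lease.FOREVER);
    //   ChatMessage template = new ChatMessage("lobby", null);
    //   ChatMessage next = (ChatMessage) space.take(template, null, Long.MAX_VALUE);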
Now what version of the Web would you call that? It would make Web
2.0 look outdated.
The good part is that I think this puzzle is now solvable. Why?
* We can utilise DNS-SD to spread Jini services globally.
* We can utilise OSGi to make Jini services modular, give our
bytecode identity and versions, control the ClassLoader structure,
and resolve dependencies at runtime, without user intervention (a
manifest sketch follows this list).
* Codebase services provide a means to discover bytecode for
objects, long after their original service is no longer available.
* Codebase services provide URL location independence, removing the
current coupling between http codebases and ClassLoaders.
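As a concrete example of the identity and version part, an OSGi
bundle manifest already expresses exactly this kind of metadata; the
bundle and package names and version ranges below are only
illustrative:

    Bundle-SymbolicName: org.example.printer.api
    Bundle-Version: 1.2.0
    Export-Package: org.example.printer;version="1.2.0"
    Import-Package: net.jini.core.lookup;version="[2.1,3.0)"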
It isn't science fiction, it's possible.
-Peter.