Hmm, Gregg, I'm guessing you've got something in mind? If you do, please donate it ;)

My general ramblings follow. WARNING: may veer wildly off course; thoughts subject to change based on good suggestions too:

Yes, I'm thinking about a URL structure to annotate marshalled data with the package name and version number. This should assist people utilising OSGi to control ClassLoader visibility using jar Manifests and your new CodebaseAccessClassLoader, without River requiring it (what a mouthful). OSGi doesn't specify how to deal with deserializing objects; I suspect that's why R-OSGi (a separate entity from OSGi, which is a specification) has its own binary serialization mechanism. That mechanism is proprietary, but it doesn't preclude the use of another protocol.
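As a very rough sketch of what I mean (the "pkgver:" scheme name and the package are made up for illustration, not a proposed River scheme), the annotation could be built from the standard java.lang.Package metadata of the class being marshalled:

    // Hypothetical sketch only: build a package-version codebase annotation
    // from the java.lang.Package metadata of the class being marshalled.
    Package pkg = someProxyClass.getPackage();   // someProxyClass: the class being annotated
    String annotation = "pkgver://" + pkg.getName()
            + "?version=" + pkg.getImplementationVersion();
    // e.g. pkgver://org.example.broker.proxy?version=2.1.0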

R-OSGi is clearly heavily influenced by Jini; however, OSGi and its lookup semantics are best suited to their original design focus, JVM-local modularity. OSGi lookup semantics, when applied to distributed computing, cause problems during deserialization. R-OSGi, despite having its own binary serialization, exposes issues with ClassLoaders and class visibility because the registrar doesn't declare all associated classes (superclasses, parameters, method returns), only one interface by name. Jini lookup semantics, by contrast, don't prevent determination of these classes, allowing for better control of class visibility (although this hasn't been done yet, other than Preferred Classes). For that reason OSGi service lookup semantics don't map well to distributed systems. OSGi does a superb job of providing local JVM services, but we're concerned with distributed services, and they're very different beasties.

Perhaps it is fair to say that both Jini and OSGi registrars target their intended scope appropriately. Therefore, in an OSGi framework where an application can utilise both Jini and OSGi services, the application should not attempt to map a local OSGi service to a Jini distributed service or vice versa, but instead use each for its intended purpose.

Which brings me back to Service Interfaces, and a past river-dev discussion about Maven dependencies. I haven't used Maven, so can't comment too much, but you rightly pointed out that the dependencies are not on the *-dl.jar but instead on the Service Interfaces defined in the Jini spec, and hence on jsk-platform.jar. This, I think, underpins most of the misunderstanding surrounding Jini technology: it is the Service Interface on which everything depends. I've thought about this and believe that for non-platform services, the Service Interface and any return or parameter interfaces / classes should be packaged (in a jar or jars) separately from the service implementations, just as Jini's own service implementations are. The service implementations (service and service-dl jars) then depend on the ServiceInterface.jar (SI.jar).

This is where Package Versioning comes in: when you vary your service implementation, you want a specific version linking your service.jar to your service-dl.jar. You could just rename both jars, I suppose, but that doesn't fit well with some frameworks. The service implementation (service.jar and service-dl.jar) is entirely a private concern; no classes that form any part of any public API belong within it. You can do whatever you like with any interfaces contained within an implementation without harming any external software, so long as the service.jar (server) and service-dl.jar (client) versions match.

Everything within a Service Interface jar (SI.jar) should be public API interfaces and classes (whether its dependencies come from another jar, or from Java or Jini platform API classes). It must also be stateless and not require any form of persistence whatsoever.
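To make that concrete, a minimal sketch of the kind of thing that might live inside an SI.jar (package and names hypothetical; each type in its own source file, of course):

    package org.example.salesbroker;   // hypothetical SI.jar package

    import java.io.Serializable;
    import java.rmi.Remote;
    import java.rmi.RemoteException;

    // The public contract every implementation and client depends on.
    public interface SalesBroker extends Remote {
        Quote requestQuote(String itemId) throws RemoteException;
    }

    // Parameter / return classes are packaged alongside the interface in SI.jar,
    // never inside a service implementation jar.
    public class Quote implements Serializable {
        public final String itemId;
        public final long priceInCents;
        public Quote(String itemId, long priceInCents) {
            this.itemId = itemId;
            this.priceInCents = priceInCents;
        }
    }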

Then ClassLoader visibility should be:

The SI.jar ClassLoader should be made visible to everything utilising it, as it forms the contract of compatibility between different Service implementations and clients, just like the platform API classes. When we want to extend a Service Interface, perhaps by adding a new interface and method, we can increment its version; the version scheme publishes the expected level of backward compatibility. That way existing Services still work with the new version, implementations utilising new interfaces within the latest SI.jar don't break by loading an earlier version, and services can be looked up by both new and old clients.
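For the version itself, the standard Java Package Versioning spec already gives us per-package manifest attributes; something like this (hypothetical names and numbers) in the SI.jar MANIFEST.MF would be enough to publish the version we increment:

    Name: org/example/salesbroker/
    Specification-Title: Sales Broker Service API
    Specification-Version: 2.0
    Implementation-Title: org.example.salesbroker
    Implementation-Version: 2.0.1
    Implementation-Vendor: example.org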

As an example, if we had a Sales Broker Service we might have:

SalesBrokerService.jar - The service interface API.

BobTheBroker.jar - Bob's service implementation.
BobTheBroker-dl.jar - Bob's proxy.

This doesn't prevent Bill also providing the same service:
BillsBrokerage.jar - Bill's service implementation
BillsBrokerage-dl.jar - Bill's proxy.

All implementers use the same SalesBrokerService.jar, and the clients do too. The proxies' ClassLoaders are not directly visible to the Client Application ClassLoader; instead the client holds a reference to the proxy via the class types defined by the SalesBrokerService ClassLoader, which are visible to both the Proxy ClassLoader and the Client Application ClassLoader.
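A rough sketch of the visibility arrangement I have in mind (hypothetical URLs and names, not existing River code; exception handling omitted):

    import java.net.URL;
    import java.net.URLClassLoader;
    import org.example.salesbroker.SalesBroker;   // hypothetical SI.jar type

    // The loader that defined the SI.jar types is the parent of each proxy
    // codebase loader, so clients only ever refer to proxies through SI types.
    ClassLoader siLoader = SalesBroker.class.getClassLoader();
    URL[] bobCodebase = { new URL("http://bob.example.com/BobTheBroker-dl.jar") };
    ClassLoader bobProxyLoader = URLClassLoader.newInstance(bobCodebase, siLoader);

    // The client never names BobTheBroker classes directly; the unmarshalled
    // proxy is held via the SalesBroker interface from SI.jar.
    SalesBroker broker = (SalesBroker) unmarshalledProxy;   // unmarshalledProxy: hypothetical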

This brings me to codebase downloads and proxy sharing. Bill and Bob don't share proxy implementations; however, Bill might provide a number of failover services and want all his clients to use the same codebase.

Common codebase schemes could be broken up in a couple of different ways:

One way to share a codebase is to use the same codebase in different ClassLoaders. Bill might want to do this if his proxies use static class variables specific to each service server node (in this case Bill is the Principal):
CodebaseA->ClassLoader1->proxy1
CodebaseA->ClassLoader2->proxy2
CodebaseA->ClassLoader3->proxy3

However Bob (the Principal) might be happy to have all of his proxy object instances share the same ClassLoader and the same permissions (a sketch of both arrangements follows the diagram below):
CodebaseA->ClassLoader1->proxy1
CodebaseA->ClassLoader1->proxy2
CodebaseA->ClassLoader1->proxy3
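As a sketch of the difference (hypothetical URL; the SI loader is the parent in both cases; exception handling omitted):

    URL[] codebaseA = { new URL("http://example.com/Brokerage-dl.jar") };
    ClassLoader siLoader = SalesBroker.class.getClassLoader();

    // Bill: one ClassLoader per service node over the same codebase, so each
    // proxy gets its own static class state and its own ProtectionDomain.
    ClassLoader node1Loader = URLClassLoader.newInstance(codebaseA, siLoader);
    ClassLoader node2Loader = URLClassLoader.newInstance(codebaseA, siLoader);

    // Bob: every proxy instance defined by one shared ClassLoader, sharing
    // static state and permissions.
    ClassLoader sharedLoader = URLClassLoader.newInstance(codebaseA, siLoader);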

The above applies only to smart proxies.

For dumb proxies, all proxies must be loaded in the ServiceInterface ClassLoader, as they are just Java reflective proxies and don't require additional classes.

Dumb proxies can be loaded like this:

CodebaseSI->ClassLoaderSI->proxy1 - Bob's Service proxy
CodebaseSI->ClassLoaderSI->proxy2 - Bill's Service proxy, etc.
CodebaseSI->ClassLoaderSI->proxy3
CodebaseSI->ClassLoaderSI->proxy4
CodebaseSI->ClassLoaderSI->proxy5

And can belong to anyone.
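A minimal sketch of what I mean by a dumb proxy (the invocation handler below is just a placeholder for whatever actually makes the remote call, e.g. a JERI invocation handler):

    import java.lang.reflect.InvocationHandler;
    import java.lang.reflect.Method;
    import java.lang.reflect.Proxy;
    import org.example.salesbroker.SalesBroker;   // hypothetical SI.jar type

    ClassLoader siLoader = SalesBroker.class.getClassLoader();
    InvocationHandler handler = new InvocationHandler() {
        public Object invoke(Object proxy, Method m, Object[] args) {
            // placeholder: a real handler would marshal the call to the server
            throw new UnsupportedOperationException("sketch only");
        }
    };
    // The reflective proxy is defined by the SI ClassLoader itself, so no
    // additional codebase download is needed for any provider's dumb proxy.
    SalesBroker dumbProxy = (SalesBroker) Proxy.newProxyInstance(
            siLoader, new Class[] { SalesBroker.class }, handler);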

I have exactly no idea at this stage how to communicate which of these models applies, so that the respective semantics can determine the class loading scheme during unmarshalling.

Anyone with ideas, don't be afraid to post.

Now, something handy that OSGi does is that each bundle contains a list of the permissions it requires. If we adopt this format for Service-dl.jar implementations, and perhaps SI.jar too, it enables us to specifically restrict permission grants. It's like a contract of trust: the proxy tells you, prior to loading it, how much trust you must bestow upon it for full functionality. You might decide on a set of grants tighter than those requested, but that's up to you, the client.
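Something like the following is what I imagine on the client side, once the requested permissions have been read from the -dl.jar and a decision made about which subset to trust (DynamicPolicy is the existing net.jini.security.policy interface; the host and permission here are hypothetical):

    import java.net.SocketPermission;
    import java.security.Permission;
    import java.security.Policy;
    import net.jini.security.policy.DynamicPolicy;

    // Grant only the trusted subset dynamically to the proxy's ClassLoader,
    // rather than everything the proxy asked for.
    Policy policy = Policy.getPolicy();
    if (policy instanceof DynamicPolicy) {
        Permission[] granted = {
            new SocketPermission("bob.example.com:1024-", "connect")
        };
        ((DynamicPolicy) policy).grant(proxy.getClass(), null, granted);   // proxy: the downloaded service proxy (hypothetical)
    }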

But one thing is clear: we can't afford to download a particular jar more than once.

Any new implementations must also play well within an existing Jini cluster, so a Service might register two identical proxies with different ServiceRegistrars: one with the old httpmd: URL scheme, and one with a new Package Version URL scheme that requires a codebase to be looked up. The actual Service-dl.jar will be the same, just downloaded in different ways and loaded in different ClassLoader trees by different client nodes.

The interesting part of a Jini lookup ServiceTemplate is that it's basically looking for instanceof SomeServiceInterface. The marshalled proxy needs to communicate all packages and versions required for unmarshalling at the client; this could include any number of jar files to be downloaded.
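For example, with the standard net.jini.core.lookup API (hypothetical interface name; exception handling omitted):

    import net.jini.core.lookup.ServiceRegistrar;
    import net.jini.core.lookup.ServiceTemplate;
    import org.example.salesbroker.SalesBroker;   // hypothetical SI.jar type

    // Match anything that is an instanceof SalesBroker, regardless of whose
    // -dl.jar implements the proxy; no ServiceID or Entry matching here.
    ServiceTemplate tmpl = new ServiceTemplate(
            null,                               // any ServiceID
            new Class[] { SalesBroker.class },  // service types to match
            null);                              // no attribute templates
    SalesBroker broker = (SalesBroker) registrar.lookup(tmpl);   // registrar: a ServiceRegistrar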

So it's really all about how we package our services.

Then we can create an upload site with public ServiceInterface source and jar files, which many people and companies can sign, forming webs of trust. We also need a pool of common Entry classes that people can utilise. That way, if we're using delayed proxy unmarshalling, entries can be unmarshalled for filtering operations without downloading any proxy codebases.
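A sketch of the kind of common Entry I mean (hypothetical class; plain public fields only, no proxy types, so it can be unmarshalled for filtering before any codebase download):

    import net.jini.entry.AbstractEntry;

    public class ServiceCategory extends AbstractEntry {
        public String category;   // e.g. "sales-brokerage"
        public String region;     // e.g. "APAC"

        public ServiceCategory() {}   // public no-arg constructor required for entries

        public ServiceCategory(String category, String region) {
            this.category = category;
            this.region = region;
        }
    }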

Now we can have an OSGi-compatible versioning scheme and a simplified class loader framework without requiring OSGi (no OSGi Services, no OSGi bundle stop / start / persistence), perhaps even utilising some Felix code in River for people that want versioning but not OSGi. But we should also provide the pieces for applications to fully utilise OSGi frameworks if they wish to, without requiring other nodes to do so.

Cheers,

Peter.


Gregg Wonderly wrote:
One of the things that I played around with was a Protocol handler which would use a URL structure that specified such versioning information.  It would look up services implementing CodeBaseAccess and ask them if they could provide such a jar file.

This kind of thing makes it easier to deal with some issues about total number of codebase sources, but I am still not sure that it solves the problem you are thinking about.

Gregg Wonderly

Sent from my iPad

On May 5, 2010, at 9:00 PM, Peter Firmstone <[email protected]> wrote:

The other thing I'm working on is a PackageVersion annotation, using the implementation version and package name from the Java Package Version spec, so developers can version their proxies, allowing sharing of compatible bytecode for reduced codebase downloads.

I'm hoping that these things combined will assist to enable lookup over the internet.

Peter Firmstone wrote:
Gregg Wonderly wrote:
Many of my service APIs have streaming sockets needed for I/O based activities.  For example, remote event monitoring happens through an ObjectInputStream that is proxied through the smart proxy on the client to a socket end point that the proxy construction provided the details of on the server.
This too is interesting Gregg, I've done something similar with the StreamServiceRegistrar; I've created a new interface called ResultStream, to mimic an ObjectInputStream, which is returned from lookup.  The idea is to provide a simple interface and minimise network requests by allowing a smart proxy implementation to request and cache larger chunks.  The main advantage of the Stream-like behaviour is to enable incremental filtering stages and delay unmarshalling of proxies until after initial Entry filtering, then to control the progress of unmarshalling, so you're only dealing with one proxy at a time.  Further filtering can be performed after each unmarshalling, such as checking method constraints.  Any unsuitable proxies can be thrown away before the next is unmarshalled, allowing garbage collection to clean as you go and preventing memory exhaustion.

The StreamServiceRegistrar lookup method also takes parameters for Entry classes that are to be unmarshalled for initial filtering, allowing delayed unmarshalling of uninteresting entries.

Unmarshalling will still be performed by the Registrar implementation; the client just gets to choose when it happens.

Cheers,

Peter.


