I finally had the chance to look through the org.apache.river name change work Dennis Reedy has done. It all looks very impressive; he's even taken the time to tidy up the qa suite. I haven't had time to run any tests or look at the jtreg test suite yet, but I promise I'll make some time in the near future. Before we release this code there's an opportunity to tidy up the org.apache.river namespace even further. In the Jini days, com.sun.jini.* was implementation code, not part of the Jini public API; should we now use org.apache.river.* for that purpose? There is some new public api in org.apache.river.api.*, and at the time new implementation code was being placed into org.apache.river.impl.*, while the com.sun.jini.* namespace has now been moved to org.apache.river.*. Should we consider placing the new api in the net.jini.* namespace instead? It's worth looking at the javadoc, as most of the new classes are package private. There are also discovery constraints in the implementation namespace that should, in my opinion, be moved into the public api. Thoughts?

IPv6 is big for River: it brings automatic network configuration, powerful multicast abilities, IPSec and no need for NAT. IPv6 is going to allow our existing discovery protocols to work over the internet.

The examples project looks promising. I like how Greg Trasuk has structured the examples into api, server and clients; Greg has done a lot of work to tidy this up. Our existing example code is relatively old, and as a consequence I noticed some practices that are poor by current standards. If we want to reduce our support burden, we should encourage new users to follow best practice.

The issue that stuck out the most was letting 'this' escape during construction. All River service implementations now implement the Starter interface to avoid letting 'this' escape during construction. However, since there are a number of downstream “Container” projects and there was controversy surrounding the start method, if someone wants to propose something less controversial for user examples, please do so; hopefully it won't upset anybody still clinging to unsafe publication.
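For user examples, the essential pattern is easy to show without committing to any particular interface. Here's a minimal sketch (the class name, the start() signature and the configuration entry are my own assumptions, not the actual Starter contract): the constructor only assigns fields, and nothing exports or otherwise publishes 'this' until start() runs after construction has completed.

    import java.rmi.Remote;
    import net.jini.config.Configuration;
    import net.jini.export.Exporter;

    // Sketch only: safe construction followed by a separate start step.
    public class ExampleServiceImpl implements Remote {
        private final Configuration config;
        private Exporter exporter;   // assigned in start(), not in the constructor
        private Remote proxy;

        public ExampleServiceImpl(Configuration config) {
            this.config = config;    // no threads started, no listeners registered,
                                     // nothing here publishes 'this'
        }

        // Called by the container only after the constructor has returned.
        public Object start() throws Exception {
            exporter = (Exporter) config.getEntry(
                    "ExampleService", "exporter", Exporter.class);
            proxy = exporter.export(this);   // 'this' escapes only now, fully constructed
            return proxy;
        }
    }

Whatever we call the second step, the point is simply that export, registration and thread creation all happen after the constructor returns.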

On the topic of letting 'this' escape: because readObject methods behave like constructors, the JVM performs a final field freeze after the readObject method completes. However, there are a number of places where River lets 'this' escape during deserialization. I have some solution options for this, including a better way to deserialize...
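For illustration, one well known mitigation (not necessarily the better way to deserialize I'm alluding to above) is the serialization proxy pattern: readResolve() funnels the stream data back through a real constructor, so nothing ever observes a partially deserialized object. The names below are made up for the example.

    import java.io.ObjectStreamException;
    import java.io.Serializable;

    // Sketch of the serialization proxy pattern.
    final class AccountState implements Serializable {
        private static final long serialVersionUID = 1L;
        private final String owner;

        AccountState(String owner) { this.owner = owner; }

        // Serialize a small proxy instead of this object.
        private Object writeReplace() { return new SerialProxy(owner); }

        private static final class SerialProxy implements Serializable {
            private static final long serialVersionUID = 1L;
            private final String owner;
            SerialProxy(String owner) { this.owner = owner; }

            // Deserialization ends by invoking the real constructor,
            // so final fields are frozen and 'this' never escapes half built.
            private Object readResolve() throws ObjectStreamException {
                return new AccountState(owner);
            }
        }
    }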

I've tried not to discuss River security since another developer raised concerns that it was scaring off new developers, but I'm going to take the liberty of discussing security and performance briefly here.

Some points:

   * IPv6 will enable River to traverse the internet, easily.

   * IPv6 is plug and play: autoconfiguration of network devices.

   * River is well positioned for the internet of things, but it needs IPv6.

   * IPv4 NAT is pretty much what killed the Jini iot tech 20 years ago;
     Jini was distributed, not centralised, and web services have grown up
     around that centralised model.

   * River security isn't ready: our crypto protocols need updating and
     proxy trust is currently flawed.


The issue with proxy trust:

   * We can discover a lookup service securely.

   * We can't connect to services securely; service proxies are downloaded
     and deserialized before trust is established.


Don't despair, the security issues are easily fixed.

Back in the early days, Jini used RMI, and RMI used skeletons and stubs. The stubs had to be downloaded, so you always had codebase downloads. Now codebase downloads aren't always required, but the lookup service implementation is still designed around them.

When security was enhanced in Jini 2.0, we were given the concept of a bootstrap proxy, which is just a reflective proxy that doesn't require a codebase download. So what does River do? It downloads a codebase, deserializes a service proxy and then requests a bootstrap proxy from it. At that point the River client authenticates the service, then River asks the bootstrap proxy for a TrustVerifier instance to check the service proxy, permissions are dynamically granted (but how do we know what permissions are required?) and method constraints are applied. This is called proxy preparation, and it's a configuration concern, as are the exporters. Yes, this is a complex process.
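For anyone who hasn't waded through it, the client side of that process boils down to something like the following, using the existing net.jini.security API. In real deployments the ProxyPreparer normally comes out of the Configuration rather than being constructed inline, and the permission list here is a placeholder.

    import java.rmi.RemoteException;
    import java.security.Permission;
    import net.jini.security.BasicProxyPreparer;
    import net.jini.security.ProxyPreparer;

    public class PrepareExample {
        public static Object prepare(Object downloadedServiceProxy)
                throws RemoteException {
            // verify = true runs trust verification on the already deserialized
            // proxy (this is where the bootstrap proxy / TrustVerifier machinery
            // kicks in); the permissions are then granted dynamically to it.
            ProxyPreparer preparer = new BasicProxyPreparer(
                    true, new Permission[] { /* which permissions? good question */ });
            return preparer.prepareProxy(downloadedServiceProxy);
        }
    }

Note the ordering problem: by the time prepareProxy runs, the service proxy has already been downloaded and deserialized.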

How could this flaw be fixed without impacting the client?

Easy: the lookup service shouldn't contain the service proxy, only the bootstrap proxy. Guess what, that's also a big performance increase, just like delayed codebase downloads. Thank you Gregg Wonderly for identifying and trailblazing that path at least a decade ago.

During proxy preparation (a process determined by configuration), instead of asking the service proxy for a bootstrap proxy, the lookup service should only contain the bootstrap proxy and clients should obtain the service proxy from it: after authentication, constraints are applied, permissions are granted to the proxy and the process is complete. This is much simpler than our current proxy trust establishment: less serialization overhead, less network traffic, more performance. A configuration flag could restore the old behaviour, of course.

The client is none the wiser; it still receives a fully prepared and constrained proxy. All River services already implement ServiceProxyAccessor, an interface (part of the start package) that provides access to the service proxy.
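To make the proposal concrete, the client side could look roughly like this. The method, the preparers and the nested copy of ServiceProxyAccessor are for illustration only; they're assumptions about how it might hang together, not existing code.

    import java.rmi.RemoteException;
    import net.jini.security.ProxyPreparer;

    public class ProposedClientFlow {

        // Mirrors the existing ServiceProxyAccessor interface from the start package.
        interface ServiceProxyAccessor {
            Object getServiceProxy() throws RemoteException;
        }

        public static Object obtainService(Object bootstrapProxy,
                                           ProxyPreparer bootstrapPreparer,
                                           ProxyPreparer servicePreparer)
                throws RemoteException {
            // 1. Prepare the small bootstrap proxy first: authenticate the server
            //    and apply constraints. No codebase download is needed because it
            //    only implements locally available interfaces.
            ServiceProxyAccessor accessor =
                    (ServiceProxyAccessor) bootstrapPreparer.prepareProxy(bootstrapProxy);

            // 2. Only after trust is established, download and deserialize the
            //    real service proxy over the already authenticated connection.
            Object serviceProxy = accessor.getServiceProxy();

            // 3. Prepare the service proxy as usual: constraints and grants.
            return servicePreparer.prepareProxy(serviceProxy);
        }
    }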

We only need a new Exporter (easy) that creates the bootstrap proxy and ensures it doesn't implement any interfaces that would require a codebase download.
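Very roughly, and purely as a guess at the wiring, such an Exporter could wrap an existing one. BootstrapExporter and the interface check described in the comment don't exist today; this is just a sketch of the shape of it.

    import java.rmi.Remote;
    import java.rmi.server.ExportException;
    import net.jini.export.Exporter;

    // Hypothetical: wraps a normal Exporter and hands back a bootstrap proxy.
    public class BootstrapExporter implements Exporter {
        private final Exporter underlying;

        public BootstrapExporter(Exporter underlying) {
            this.underlying = underlying;
        }

        public Remote export(Remote impl) throws ExportException {
            Remote proxy = underlying.export(impl);
            // A real implementation would ensure here that the returned proxy
            // only implements platform interfaces (for example a
            // ServiceProxyAccessor style bootstrap interface) and refuse
            // anything that would force the client to download a codebase.
            return proxy;
        }

        public boolean unexport(boolean force) {
            return underlying.unexport(force);
        }
    }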

Then, when a service is registered with a lookup service, Reggie obtains the bootstrap proxy from the exported service proxy in the client JVM.

It also means the entire process is simpler: developers no longer need to learn the complex TrustVerifier process, as it becomes an Endpoint and system concern.

Thoughts?

Regards,

Peter.

