Sim IJskes - QCG wrote:
Tom Hobbs wrote:

Certainly in my experience, detecting errors and recovering is always the job of the client. To use a daft example: why would a web page detect that a browser has unexpectedly disappeared and try to find a new browser to display itself on? But in the event of a web server going down, it's always the browser/etc. that needs to go and find another copy of the page somewhere.

This is not always the case. For instance, in the transport layer a server can detect that an ack/nack is overdue and start a retransmission.

But that's not what I tried to express. In that specific email I meant a client of the service. I haven't seen any self-healing behaviour in the JERI transports, or in the layers between the actual java.lang.reflect.Proxy of the service and its transport, so any hiccup there will lead to a RemoteException. So I guess, with the current state of affairs, the only place for self-healing (while keeping the RemoteReference the same) is a SmartProxy.

What you have done is create a ServiceWrapper which does the wrapping/proxying on the client's initiative, and retrieves a new RemoteReference for every transport error. This is also a perfectly valid approach.
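
Just to check that we mean the same thing, here is a minimal sketch of that pattern as I understand it (all names, including the Lookup interface, are invented by me for illustration; this is not your code and not a River API):

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.rmi.RemoteException;

/**
 * Client-initiated wrapper: a RemoteException causes the current proxy
 * to be discarded, a fresh remote reference to be looked up, and the
 * call to be retried once.  Sketch only: no back-off, no idempotency checks.
 */
public final class ServiceWrapper implements InvocationHandler {

    /** How to re-discover the service, e.g. via a ServiceDiscoveryManager. */
    public interface Lookup {
        Object find() throws RemoteException;
    }

    private final Lookup lookup;
    private volatile Object delegate;   // current, possibly stale, service proxy

    private ServiceWrapper(Object initial, Lookup lookup) {
        this.delegate = initial;
        this.lookup = lookup;
    }

    public static Object wrap(Class<?> iface, Object initial, Lookup lookup) {
        return Proxy.newProxyInstance(iface.getClassLoader(),
                new Class<?>[] { iface }, new ServiceWrapper(initial, lookup));
    }

    public Object invoke(Object proxy, Method method, Object[] args)
            throws Throwable {
        try {
            return method.invoke(delegate, args);
        } catch (InvocationTargetException e) {
            if (!(e.getCause() instanceof RemoteException)) {
                throw e.getCause();
            }
            delegate = lookup.find();               // fetch a new remote reference
            try {
                return method.invoke(delegate, args);   // retry once
            } catch (InvocationTargetException retry) {
                throw retry.getCause();
            }
        }
    }
}

Whether the blind retry is safe depends of course on the service methods being idempotent, and the Lookup would typically sit on top of a ServiceDiscoveryManager or registrar query.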

The only problem I see (in both scenarios) is that when an anonymous (not registered, but exported) remote reference gets serialized, for instance as a return value from a call to the service, and this reference is passed through the system, it will still experience transport errors. So this remote reference needs to be wrapped as well, either on the server or the client side.

While writing this, I'm thinking this might also be fixed in the invocation layer, although that still only guards against transport errors, and not against the loss of a member of a server cluster.
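
For the client-side variant, one possible addition to the ServiceWrapper sketch above would be to wrap remote references that come back as return values, so that they survive transport errors too. The instanceof Remote test and the "repeat the original call" refresh strategy below are assumptions on my part, and the latter only suits idempotent accessors:

    private Object wrapResult(final Method method, final Object[] args,
                              Object result) {
        // only wrap results that look like exported remote objects and whose
        // declared return type is an interface we can proxy
        if (!(result instanceof java.rmi.Remote)
                || !method.getReturnType().isInterface()) {
            return result;
        }
        return wrap(method.getReturnType(), result, new Lookup() {
            public Object find() throws RemoteException {
                try {
                    // re-obtain the nested reference from the (possibly
                    // refreshed) top-level service by repeating the call
                    return method.invoke(delegate, args);
                } catch (IllegalAccessException e) {
                    throw new RemoteException("re-lookup failed", e);
                } catch (InvocationTargetException e) {
                    throw new RemoteException("re-lookup failed", e.getCause());
                }
            }
        });
    }

invoke() would then return wrapResult(method, args, method.invoke(delegate, args)) instead of the bare result.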

This style of service wrapping worked very well in a complex trading platform that I was previously involved in. It enabled us, with the provision of some additional business rules (especially regarding state), to take down services at random and have the system automatically recover without interrupting the client. It truly was a self-healing system.

Indeed, I can see this. And it is very practical for dynamic cluster scaling issues, for instance during deployment of a new version (I'm thinking of reducing the number of cluster members during the change, upgrading the freed members, and then doing a hot switchover between the two groups).

Gr. Sim

Indeed, this does look very useful; thanks for the contribution and example. It's a different problem domain from firewall traversal: I'm looking into how to handle possible security issues with dynamic address and port changes on firewalls at a much lower level. In my case I want to ensure I have the same service and, if not, throw a RemoteException or a SecurityException; otherwise the opportunity might exist for an attacker to substitute a service after it has been authenticated. Once the connection is lost, that's where your solution is useful.

Cheers,

Peter.
