>
> I'm not sure if you are saying that
>   ComponentA calls ComponentC
> or
>   ComponentB calls ComponentC
>
> And which component should the callback be delivered to?
ComponentB calls ComponentC, and the callback should be delivered to ComponentB.

> I think you are saying that if you set a callback object on a
> ServiceReference then the details are not serialized and don't flow across
> the wire if the reference is passed to some other component. Is that
> correct?
Yes. That's correct.
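
For anyone following along, the scenario looks roughly like this
(MyService, MyCallback and the field wiring are made-up illustrations;
setCallback() and getService() are the standard osoa ServiceReference
API):

    import org.osoa.sca.ServiceReference;

    // Made-up service and callback contracts.
    interface MyService { void doSomething(); }
    interface MyCallback { void onResult(String result); }

    // Hypothetical client code inside ComponentB: it sets itself as the
    // callback object and then invokes ComponentC through the reference.
    public class ComponentBImpl implements MyCallback {

        // Injected with @Reference in real code; shown as a plain field here.
        protected ServiceReference<MyService> serviceC;

        public void doWork() {
            // The callback object lives only on this local proxy; if
            // serviceC is passed to another component, this setting is
            // not serialized and does not flow across the wire.
            serviceC.setCallback(this);
            serviceC.getService().doSomething();
        }

        // The callback lands here, on ComponentB.
        public void onResult(String result) {
        }
    }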

>
> >
> >
> > There are performance problems in using the current domain
> > implementation. If you look at the code, when a node looks up a service
> > that is local it hands out the endpoint string directly, but if the
> > service is remote (not in that node) the node queries the domain node for
> > each request to the service, and this doesn't scale up. I've
>
> That's not good. Possibly another way to look at this is if we change the
> protocol and make the domain responsible for telling nodes when things
> change.
A push protocol: operating that way requires multicast
notification, and if it's not done correctly
it doesn't scale. Consider a domain with 20 nodes (or JVMs, it's the
same thing), each hosting 20 components. What would happen in your
picture? And then what about the domain as a single point of failure?
What do you think about it? Do you think that "scattering" information
across the nodes could be useful?


> This process of asking the domain for information about registered
> services is causing other problems too. For example, now that we are trying
> to wire bindings other than the sca binding we have to be careful about the
> order in which composites are started, as none of the other bindings know
> anything about the domain.

> The service information is injected into the bindings
> by the node on startup, so this doesn't cover the case where a node is
> started but all of its referenced services are not yet available in the
> domain. Switching to a notification-based approach would make this much
> easier to manage. Does this make sense?
>
> The cache will likely be valuable for local references, so can you say a
> little more about how it fits in?

My EndpointCache is an object inside SCANodeImpl. It caches your
endpoint strings with an LRU-like policy. Inside that object there's
an expiry thread, which checks whether an endpoint has expired. Like
every cache, you can have a miss; in that case it calls your domain
service method findServiceEndpoint and stores the result in a
ConcurrentHashMap. If you have a hit, it gives you back the endpoint
string. Every time an entry is found, its hit timestamp in the
ConcurrentHashMap is updated with System.currentTimeMillis(), so only
the old, unused items expire. It's the simplest way I know to do
caching.
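
Roughly, the idea looks like this minimal sketch (the Domain interface,
Entry class and method names here are simplified assumptions, not the
real Tuscany code; only findServiceEndpoint comes from the actual
domain service):

    import java.util.Iterator;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Minimal sketch of the EndpointCache idea described above.
    public class EndpointCache {

        // Hypothetical stand-in for the domain lookup service.
        public interface Domain {
            String findServiceEndpoint(String serviceName);
        }

        // Endpoint string plus the last time it was used.
        private static class Entry {
            final String endpoint;
            volatile long lastHit;
            Entry(String endpoint) {
                this.endpoint = endpoint;
                this.lastHit = System.currentTimeMillis();
            }
        }

        private final ConcurrentHashMap<String, Entry> entries =
                new ConcurrentHashMap<String, Entry>();
        private final long expiryMillis;

        public EndpointCache(long expiryMillis) {
            this.expiryMillis = expiryMillis;
            // Expiry thread: evicts entries that haven't been hit recently.
            Thread expirer = new Thread(new Runnable() {
                public void run() {
                    while (true) {
                        try {
                            Thread.sleep(EndpointCache.this.expiryMillis);
                        } catch (InterruptedException e) {
                            return;
                        }
                        long now = System.currentTimeMillis();
                        Iterator<Map.Entry<String, Entry>> it =
                                entries.entrySet().iterator();
                        while (it.hasNext()) {
                            if (now - it.next().getValue().lastHit
                                    > EndpointCache.this.expiryMillis) {
                                it.remove();
                            }
                        }
                    }
                }
            });
            expirer.setDaemon(true);
            expirer.start();
        }

        // Miss: call the domain and cache the result.
        // Hit: return the cached string and refresh its timestamp.
        public String getEndpoint(String serviceName, Domain domain) {
            Entry e = entries.get(serviceName);
            if (e == null) {
                e = new Entry(domain.findServiceEndpoint(serviceName));
                entries.put(serviceName, e);
            } else {
                e.lastHit = System.currentTimeMillis();
            }
            return e.endpoint;
        }
    }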

> Yes - membership has been ignored until now mainly because others already do
> this kind of thing. But you are right that we should start thinking about
> it. We currently know when nodes are added to the domain, as they register.
> But, as is always the case, the problematic thing is working out when they
> go away, except in the unlikely event that they shut down cleanly.

You'll have dangling references in this case.

> We could
> potentially do something with service references and have them report any
> invocation errors they have to the domain, and then have the domain only ping
> nodes that have had invocation errors reported.
Cool! So when an invocation fails, you pass the error to the
domain, and it checks
if the node is up.
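
Something like this, I imagine (every name here is hypothetical, just
to show the flow you describe):

    // Hypothetical sketch: a reference reports a failed invocation to
    // the domain, and the domain pings only that node instead of
    // heartbeating everyone.
    public class DomainErrorMonitor {

        public interface NodePinger {
            boolean ping(String nodeUrl); // e.g. a cheap HTTP request
        }

        private final NodePinger pinger;

        public DomainErrorMonitor(NodePinger pinger) {
            this.pinger = pinger;
        }

        // Called by a service reference when an invocation fails.
        public void reportInvocationError(String nodeUrl, Exception cause) {
            if (!pinger.ping(nodeUrl)) {
                // The node really is gone: drop its registrations so other
                // nodes stop getting dangling endpoint strings.
                removeNode(nodeUrl);
            }
            // Otherwise the error was transient; keep the registration.
        }

        private void removeNode(String nodeUrl) {
            // Remove the node and its service endpoints from the registry.
        }
    }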

> We could probably tie this
> into the management function we should be building, as we may at some stage
> need to be able to offer information about the messages that are flowing
> through the system and any errors that are occurring, so using this info to
> test for node problems sounds attractive to me.
>
> >
> >  I looked at Apache Tribes; it needs some work. Using a heartbeat is a
> > scalability issue if there are more than 100 nodes. And when I was
>
>  I haven't tried Tribes. How much control do you have over how frequently the
> heartbeat happens?
You have fine-grained control, you can set it yourself. And you can
set the expiry time
for your members.
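
For comparison, this is all the two knobs amount to in a hand-rolled
membership table (a generic sketch, not the Tribes API; Tribes exposes
the equivalent settings on its membership service):

    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    // Heartbeat-driven membership: the two tunables are how often
    // members send heartbeats and how long silence is tolerated.
    public class MembershipTable {

        private final Map<String, Long> lastSeen =
                new ConcurrentHashMap<String, Long>();
        private final long dropTimeMillis;

        public MembershipTable(long dropTimeMillis) {
            this.dropTimeMillis = dropTimeMillis;
        }

        // Called whenever a heartbeat arrives from a member.
        public void heartbeat(String memberUrl) {
            lastSeen.put(memberUrl, Long.valueOf(System.currentTimeMillis()));
        }

        // A member is considered dead once it has been silent for
        // longer than the drop time.
        public boolean isAlive(String memberUrl) {
            Long t = lastSeen.get(memberUrl);
            return t != null
                    && System.currentTimeMillis() - t.longValue() <= dropTimeMillis;
        }
    }

Raising the frequency gives faster failure detection, but with 100+
nodes the all-to-all heartbeat traffic is exactly what hurts.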


> Another approach I started investigating (and then stopped to do other
> things) is having many physical nodes running as part of a single logical
> node.

The so-called Virtual Node: one node spread across k nodes. It's widely used
for load balancing in different places.

> You will note I added some interfaces to the node and node factory, but
> that is as far as it got. This reflects a typical cluster/grid arrangement
> where many identical processes are started to run the same task. As load on
> the system builds, work is shared out amongst the identical nodes.
>
> For Tuscany I imagined that we could set up a Tomcat cluster behind an Apache
> HTTP server running the load balancer. A node would be run on each Tomcat
> instance in the cluster, all registering with the domain under the same
> logical node URL (but different physical node URLs). The logical URL is the
> URL of the Apache HTTP server. The same composite is started on all of these
> nodes. A reference that is wired to a component in the composite running on
> the cluster adopts the logical URL, and hence messages arrive at the Apache
> HTTP server and are subsequently load balanced across the cluster.

A lot of work :). You need to be able to rewire things at the composite level.
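
In registry terms I imagine it looking something like this (all names
hypothetical): several physical nodes register under one logical URL,
and references are only ever wired to the logical URL, so the Apache
HTTP load balancer spreads the calls:

    import java.util.List;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.CopyOnWriteArrayList;

    // Hypothetical sketch of the logical/physical node registration.
    public class LogicalNodeRegistry {

        private final ConcurrentHashMap<String, List<String>> physicalByLogical =
                new ConcurrentHashMap<String, List<String>>();

        // Each Tomcat instance registers its physical URL under the
        // shared logical URL (the URL of the Apache HTTP server).
        public void register(String logicalUrl, String physicalUrl) {
            List<String> fresh = new CopyOnWriteArrayList<String>();
            List<String> existing = physicalByLogical.putIfAbsent(logicalUrl, fresh);
            (existing != null ? existing : fresh).add(physicalUrl);
        }

        // Wiring only ever hands out the logical URL; the load balancer,
        // not the domain, picks the physical node per message.
        public String endpointFor(String logicalUrl) {
            return logicalUrl;
        }
    }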

Ciao,
Giorgio.
