On Dec 11, 2007 9:07 PM, Giorgio Zoppi <[EMAIL PROTECTED]> wrote:
> > I'm not sure whether you are saying that
> > ComponentA calls ComponentC
> > or
> > ComponentB calls ComponentC
> >
> > And which component should the callback be delivered to?
>
> ComponentB calls ComponentC, and the callback should be delivered to
> ComponentB.
> > I think you are saying that if you set a callback object on a
> > ServiceReference then the details are not serialized and don't flow
> > across the wire if the reference is passed to some other component. Is
> > that correct?
>
> Yes, that's correct.

Sounds like we need a JIRA for this. Logically it seems that callback
information should be passed along with service references, but I would
have to study the spec to see whether this is actually what is required.

> > > There are performance problems in the current domain implementation.
> > > If you look at the code, when a node looks up a service it hands out
> > > the endpoint string directly if the service is local; but if the
> > > service is remote (not in that node), the node queries the domain for
> > > each request to the service, and this doesn't scale up.
> >
> > That's not good. Possibly another way to look at this is to change the
> > protocol and make the domain responsible for telling nodes when things
> > change.
>
> A pull protocol operating this way requires multicast notification, and
> if it's not done correctly it doesn't scale. Consider a domain with 20
> nodes (or JVMs, it's the same thing), where each node has 20 components.
> What would happen in your picture? And then what about the domain as a
> single point of failure? What do you think about it? Do you think that
> "scattering" information across nodes could be useful?

Two things help us make pushing information out to nodes fairly efficient:

1. The domain knows which nodes are running which composites, so it only
   needs to send composite information to nodes that care about it.
2. The domain only changes when someone starts or stops a composite.
   Relative to the amount of time the nodes spend processing messages, I
   don't expect this to happen very often.

I'm trying to cut down on the amount of time a node spends asking the
domain for endpoint information, and to provide this to the node only when
it is known to have changed.
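To make the push model above concrete, here is a minimal sketch: a domain that remembers which nodes are interested in which composites and notifies only those nodes when a composite starts. All of the names here (Domain, EndpointListener, compositeStarted) are illustrative assumptions, not the actual Tuscany SPI.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical listener a node would implement to receive endpoint pushes.
interface EndpointListener {
    void endpointChanged(String serviceName, String endpointUrl);
}

// Sketch of a domain that pushes composite information only to nodes
// that registered interest in that composite.
class Domain {
    // composite name -> nodes that have registered interest in it
    private final Map<String, List<EndpointListener>> interested = new HashMap<>();

    void registerNode(String composite, EndpointListener node) {
        interested.computeIfAbsent(composite, k -> new ArrayList<>()).add(node);
    }

    // Fires only when someone starts a composite, so relative to normal
    // message traffic this should be a rare event.
    void compositeStarted(String composite, String service, String endpoint) {
        for (EndpointListener node :
                interested.getOrDefault(composite, new ArrayList<>())) {
            node.endpointChanged(service, endpoint);
        }
    }
}
```

With this shape, a node that never registered for CompositeB simply receives no traffic when CompositeB starts, which is the scaling argument being made above.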
You are right that the domain is a single point of failure. I'm choosing
to ignore that for the time being. We could use a failover strategy to
compensate for this in the future or, if we are really ambitious,
distribute all domain information out across all nodes. This would be
possible as we introduced the DomainProxy to hide the actual domain
implementation from the node. Lots of work though.

> > This process of asking the domain for information about registered
> > services is causing other problems also. For example, now that we are
> > trying to wire bindings other than the SCA binding we have to be
> > careful of the order in which composites are started, as none of the
> > other bindings know anything about the domain. The service information
> > is injected into bindings by the node on startup, so this doesn't
> > cover the case where a node is started but all of its referenced
> > services are not yet available in the domain. Switching to a
> > notification-based approach would make this much easier to manage.
> > Does this make sense?
> >
> > The cache will likely be valuable for local references, so can you say
> > a little more about how it fits in?
>
> My EndpointCache is an object inside SCANodeImpl. It caches your
> endpoint strings with an LRU-like policy. Inside that object there's an
> expiry thread, which checks whether an endpoint has expired. Like every
> cache, you can have a miss; in that case it calls your domain service
> method findServiceEndpoint and stores the result in a
> ConcurrentHashMap. If you have a hit, it gives you back the endpoint
> string. Every time you find an item, it updates that item's hit time in
> the ConcurrentHashMap with System.currentTimeMillis(), so only old,
> unused items will expire. It's the simplest way I know to do caching.

Having the cache is required if we stick with the pull model where the
node asks the domain for information.
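From that description, a minimal sketch of such a cache might look like the following. The Resolver interface stands in for the domain's findServiceEndpoint call, and the class shape is an assumption for illustration; the real EndpointCache inside SCANodeImpl may differ. An expiry thread would call expire() periodically.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of an expiring endpoint cache as described in the mail:
// misses ask the domain, hits refresh a timestamp, and an expiry
// sweep drops entries that have not been used recently.
class EndpointCache {

    // Hypothetical stand-in for the domain's findServiceEndpoint method.
    interface Resolver {
        String findServiceEndpoint(String serviceName);
    }

    // One entry per service: the endpoint string plus its last-hit time.
    private static class Entry {
        final String endpoint;
        volatile long lastHit;
        Entry(String endpoint) {
            this.endpoint = endpoint;
            this.lastHit = System.currentTimeMillis();
        }
    }

    private final Map<String, Entry> cache = new ConcurrentHashMap<>();
    private final Resolver resolver;
    private final long ttlMillis;

    EndpointCache(Resolver resolver, long ttlMillis) {
        this.resolver = resolver;
        this.ttlMillis = ttlMillis;
    }

    // A hit refreshes the timestamp; a miss asks the domain and caches it.
    String getEndpoint(String serviceName) {
        Entry e = cache.get(serviceName);
        if (e != null) {
            e.lastHit = System.currentTimeMillis();
            return e.endpoint;
        }
        String endpoint = resolver.findServiceEndpoint(serviceName);
        cache.put(serviceName, new Entry(endpoint));
        return endpoint;
    }

    // Called periodically by the expiry thread: drop stale entries.
    void expire() {
        long cutoff = System.currentTimeMillis() - ttlMillis;
        cache.entrySet().removeIf(en -> en.getValue().lastHit < cutoff);
    }
}
```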
If we can get rid of that, then having no cache at all is even simpler
than having a simple cache implementation ;-) I think we can get away
with very few notifications from domain to node. The event that causes
one is a composite being started.

> > Yes - membership has been ignored until now mainly because others
> > already do this kind of thing. But you are right that we should start
> > thinking about it. We currently know when nodes are added to the
> > domain as they register. But, as is always the case, the problematic
> > thing is working out when they go away, except in the unlikely event
> > that they shut down cleanly. You'll have dangling references in this
> > case.
> >
> > We could potentially do something with service references and have
> > them report any invocation errors they have to the domain, and then
> > have the domain only ping nodes that have had invocation errors
> > reported.
>
> Cool! So when an invocation fails, you pass the error to the domain,
> and it checks whether the node is up.

Even if we didn't use it for that purpose, it would be useful to have the
domain know where errors are occurring so some future management function
can see the state of the domain. But it seems to be a potential further
use of the information.

> > We could probably tie this into the management function we should be
> > building, as we may at some stage need to be able to offer information
> > about the messages that are flowing through the system and any errors
> > that are occurring, so using this info to test for node problems
> > sounds attractive to me.
>
> > > I looked at Apache Tribes; it needs some work. Using a heartbeat is
> > > a scalability issue if there are more than 100 nodes. And when I was
> >
> > I haven't tried Tribes. How much control do you have over how
> > frequently the heartbeat happens?
>
> You have fine-grained control; you can set it yourself. And you can set
> the expiry time for your members.
Sounds like it would be worth having a play with.

> > Another approach I started investigating (and then stopped to do
> > other things) is having many physical nodes running as part of a
> > single logical node.
>
> The so-called Virtual Node: a node spread across k nodes. It's widely
> used for load balancing in different places.
>
> > You will note I added some interfaces to the node and node factory,
> > but that is as far as it got. This reflects a typical cluster/grid
> > arrangement where many identical processes are started to run the
> > same task. As load on the system builds, work is shared out amongst
> > the identical nodes.
> >
> > For Tuscany I imagined that we could set up a Tomcat cluster behind
> > an Apache HTTP server running the load balancer. A node would be run
> > on each Tomcat instance in the cluster, all registering with the
> > domain with the same logical node URL (but different physical node
> > URLs). The logical URL is the URL of the Apache HTTP server. The same
> > composite is started on all of these nodes. A reference that is wired
> > to a component in the composite running on the cluster adopts the
> > logical URL, and hence messages arrive at the Apache HTTP server and
> > are subsequently load balanced across the cluster.
>
> A lot of work :). You need to be able to rewire things at the composite
> level.

I think we could make this work pretty much within the framework we
already have, without rewiring. If we wanted CompositeA to be load
balanced we would contribute CompositeA to the domain and associate it
with a virtual node (aside - we don't have a method which allows you to
select particular nodes for composites, but I think we need one). The
domain builds one copy of CompositeA and sets all service endpoints to
the address of the HTTPD workload manager.
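The registration side of that virtual-node arrangement could be sketched roughly as below: several physical nodes register under one logical URL, and endpoint resolution always hands out the logical URL so callers go through the HTTPD front end. The class and method names are illustrative assumptions only.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch: many physical nodes behind one logical node URL. Other
// components only ever see the logical URL; HTTPD load balances the
// actual traffic across the physical members.
class VirtualNodeRegistry {
    // logical node URL -> physical node URLs registered behind it
    private final Map<String, List<String>> members = new HashMap<>();

    void register(String logicalUrl, String physicalUrl) {
        members.computeIfAbsent(logicalUrl, k -> new ArrayList<>()).add(physicalUrl);
    }

    // Endpoint resolution returns an address on the logical URL, never a
    // physical node, so references wire to the single visible copy.
    String resolveEndpoint(String logicalUrl, String serviceName) {
        if (!members.containsKey(logicalUrl)) {
            throw new IllegalArgumentException("unknown virtual node: " + logicalUrl);
        }
        return logicalUrl + "/" + serviceName;
    }
}
```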
In reality CompositeA will run on many nodes behind HTTPD's workload
manager, but as far as other components in the domain are concerned they
just wire to components in the single copy of CompositeA that they think
is running on the virtual node. Does that make sense?

> Ciao,
> Giorgio.
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: [EMAIL PROTECTED]
> For additional commands, e-mail: [EMAIL PROTECTED]
