On Dec 6, 2007 6:03 PM, Simon Laws <[EMAIL PROTECTED]> wrote:

> Hi Giorgio
>
> Thanks for posting your thoughts. Some comments below...
>
> Regards
>
> Simon
>
> On Dec 6, 2007 4:22 PM, Giorgio Zoppi <[EMAIL PROTECTED] > wrote:
>
> > I still have problems with JIRA 1907, so for now I have to refrain from
> > creating dynamic wiring, because ComponentUpdater doesn't work as I
> > intended. I misunderstood something that happens in wiring, so I need to
> > do more debugging on it, but now it's time for me to build a performance
> > test on our cluster. We want to see whether your infrastructure is
> > efficient.
> > I'm going to use a callable reference instead of a callback.
> > My previous callback problem was due to the fact that in
> > CallableReferenceImpl there's not enough state to recreate a proper
> > wire for the callback. The callback Object is transient: if you set it,
> > then serialize that reference and send it away, you'll lose it. What is
> > missing is a way to recreate the callback properly in this case. I might
> > investigate how to solve this use case, but I lack the time. Any
> > suggestions?
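As an aside, here is a minimal plain-Java sketch of the serialization behaviour
being described (the class and field names are made up for illustration, not
the actual CallableReferenceImpl code): any field marked transient is simply
dropped when the reference is serialized and sent to another component.

import java.io.*;

// Hypothetical stand-in for a callable reference whose callback field is
// transient and therefore does not survive serialization.
class RefSketch implements Serializable {
    String targetUri;          // survives serialization
    transient Object callback; // dropped when the reference is sent away

    RefSketch(String targetUri, Object callback) {
        this.targetUri = targetUri;
        this.callback = callback;
    }
}

public class TransientCallbackDemo {
    public static void main(String[] args) throws Exception {
        RefSketch ref = new RefSketch("ComponentC", new Object());

        // Serialize and deserialize, as if the reference were passed on to
        // another component in a different JVM.
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        ObjectOutputStream out = new ObjectOutputStream(bytes);
        out.writeObject(ref);
        out.flush();
        RefSketch received = (RefSketch) new ObjectInputStream(
                new ByteArrayInputStream(bytes.toByteArray())).readObject();

        System.out.println(received.targetUri); // "ComponentC"
        System.out.println(received.callback);  // null - the callback is gone
    }
}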
> >
> > Let me explain better:
> > Suppose the following scenario:
> > - Scenario A
> > The actors are: ComponentA, ComponentB, ComponentC
> >
> > * ComponentA holds a CallableReference to ComponentC (it contains the
> > ComponentC SCDL), sends it to ComponentB, sets itself as the callback to
> > ComponentC, calls a ComponentC method,
> > and the callback from ComponentC will fail.
>
>
> I'm not sure whether you are saying that
>   ComponentA calls ComponentC
> or
>   ComponentB calls ComponentC
>
> And which component should the callback be delivered to?
>
> I think you are saying that if you set a callback object on a
> ServiceReference then the details are not serialized and don't flow across
> the wire if the reference is passed to some other component. Is that
> correct?
>
> >
> >
> > There are performance problems in the current domain
> > implementation. If you look at the code, when a node looks up a
> > service that is local it hands out the endpoint string directly, but if
> > the service is remote (not in that node) the node queries the domain
> > node for each request to the service, and this doesn't scale up. I've
>
> That's not good. Another way to look at this would be to change the
> protocol and make the domain responsible for telling nodes when things
> change. This process of asking the domain for information about registered
> services is causing other problems too. For example, now that we are trying
> to wire bindings other than the sca binding, we have to be careful of the
> order in which composites are started, as none of the other bindings know
> anything about the domain. The service information is injected into bindings
> by the node on startup, so this doesn't cover the case where a node is
> started but all of its referenced services are not yet available in the
> domain. Switching to a notification-based approach would make this much
> easier to manage. Does this make sense?
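One possible shape for that notification-based protocol (just a sketch with
invented names, not an agreed design or anything that exists in the codebase):
nodes subscribe with the domain once at startup and the domain pushes
registration changes out to them.

// Hypothetical notification-style contract between the domain and its nodes.
// All names here are illustrative only.
public interface ServiceRegistrationListener {
    void serviceRegistered(String serviceName, String endpointUri);
    void serviceUnregistered(String serviceName);
}

interface DomainRegistry {
    // A node subscribes once at startup instead of querying per request.
    void addListener(ServiceRegistrationListener listener);

    // Called as services start or stop; the domain then notifies every
    // registered listener of the change.
    void registerService(String serviceName, String endpointUri);
    void unregisterService(String serviceName);
}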
>
> The cache will likely be valuable for local references, so can you say a
> little more about how it fits in?
>
>
> > created an endpoint string cache with an expiring TTL to fix this
> > problem. I'm just testing it now.
> > That's because in my application I call the same method a lot of times.
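As a rough illustration of that kind of cache (a hypothetical sketch, not the
actual code being tested), endpoint strings can be stored together with the
time they were cached and treated as missing once the time-to-live has elapsed,
so the caller falls back to the domain lookup.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical endpoint cache with a fixed time-to-live; expired entries are
// treated as missing so the caller asks the domain again.
public class EndpointCache {
    private static class Entry {
        final String endpoint;
        final long storedAt;
        Entry(String endpoint, long storedAt) {
            this.endpoint = endpoint;
            this.storedAt = storedAt;
        }
    }

    private final Map<String, Entry> cache = new ConcurrentHashMap<>();
    private final long ttlMillis;

    public EndpointCache(long ttlMillis) {
        this.ttlMillis = ttlMillis;
    }

    public String get(String serviceName) {
        Entry entry = cache.get(serviceName);
        if (entry == null || System.currentTimeMillis() - entry.storedAt > ttlMillis) {
            cache.remove(serviceName);
            return null; // expired or unknown: fall back to the domain lookup
        }
        return entry.endpoint;
    }

    public void put(String serviceName, String endpoint) {
        cache.put(serviceName, new Entry(endpoint, System.currentTimeMillis()));
    }
}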
> > Looking at future work, it would be nice to have:
> > * membership support that scales up for a domain. A
> > domain is a business environment, and it could be made up of a lot of
> > nodes.
>
> Yes - membership has been ignored until now mainly because others already
> do this kind of thing. But you are right that we should start thinking about
> it. We currently know when nodes are added to the domain, as they register.
> But, as is always the case, the problematic thing is working out when they
> go away, except in the unlikely event that they shut down cleanly. We could
> potentially do something with service references and have them report any
> invocation errors they see to the domain, and then have the domain only ping
> nodes that have had invocation errors reported. We could probably tie this
> into the management function we should be building, as we may at some stage
> need to be able to offer information about the messages that are flowing
> through the system and any errors that are occurring, so using this info to
> test for node problems sounds attractive to me.
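A very small sketch of what that error-driven health check might look like
(invented names, purely illustrative):

// Hypothetical contract: references report invocation failures to the domain,
// and the domain only pings nodes that have been implicated, instead of
// heartbeating every node it knows about.
public interface NodeHealthReporter {
    void reportInvocationError(String nodeUri, Throwable cause);
}

interface DomainHealthChecker {
    // Returns true if the suspect node responds; called only for nodes
    // that have had invocation errors reported against them.
    boolean ping(String nodeUri);
}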
>
> >
> > I looked at Apache Tribes; it needs some work, and using a heartbeat is a
> > scalability issue if there are more than 100 nodes. And when I was
>
> I haven't tried Tribes. How much control do you have over how frequently the
> heartbeat happens?
>
> >
> > looking at P2P systems research, I found a scalable membership protocol
> > called SCAMP. You can find out more on my homepage at CLI:
> > http://www.cli.di.unipi.it/~zoppi/scamp.pdf
> > * the possibility to move a composite from one node to another node
> > in order to achieve load balancing and failover. This requires that a
> > composite can be stopped in a safe way: when all
> > components/services inside it are stopped, you can stop the composite
> > and move it around. This is a lot of work and there are a lot of research
> > papers about it. Just pick the approach that you think is most appropriate
> > for this task, e.g.:
> >
> > http://asna.ewi.utwente.nl/research/Ph.D.%20Theses/wegdam-PhD-dyn-reconf-load-distr-middleware-2003.pdf
> >
>
> Another approach I started investigating (and then stopped to do other
> things) is having many physical nodes running as part of a single logical
> node. You will note I added some interfaces to the node and node factory but
> that is as far as it got. This reflects a typical cluster/grid arrangement
> where many identical processes are started to run the same task. As load on
> the system builds, work is shared out amongst the identical nodes.
>
> For Tuscany I imagined that we could set up a Tomcat cluster behind an Apache
> HTTP server running the load balancing. A node would be run on each Tomcat
> instance in the cluster, all registering with the domain with the same
> logical node URL (but different physical node URLs). The logical URL is the
> URL of the Apache HTTP server. The same composite is started on all of these
> nodes. A reference that is wired to a component in the composite running on
> the cluster adopts the logical URL, and hence messages arrive at the Apache
> HTTP server and are subsequently load balanced across the cluster.
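To make the logical/physical URL split a little more concrete, here is a
hypothetical sketch (invented class and URLs, not the actual node or node
factory interfaces) of the mapping the domain would hold:

import java.util.*;

// Hypothetical sketch of the logical/physical node mapping: many physical
// nodes (Tomcat instances) register under one logical URL (the Apache HTTP
// load balancer), and references are only ever wired to the logical URL.
public class LogicalNodeRegistry {
    private final Map<String, Set<String>> physicalByLogical = new HashMap<>();

    public void register(String logicalUrl, String physicalUrl) {
        physicalByLogical.computeIfAbsent(logicalUrl, k -> new HashSet<>()).add(physicalUrl);
    }

    // References only ever see the logical URL; the load balancer decides
    // which physical node actually services each message.
    public String endpointFor(String logicalUrl) {
        return logicalUrl;
    }

    public static void main(String[] args) {
        LogicalNodeRegistry registry = new LogicalNodeRegistry();
        String logical = "http://lb.example.com";           // the Apache HTTP server
        registry.register(logical, "http://tomcat1.example.com:8080");
        registry.register(logical, "http://tomcat2.example.com:8080");
        System.out.println(registry.endpointFor(logical));  // http://lb.example.com
    }
}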
>
> >
> > Well, these are my thoughts about current limitations in SCA
> > scalability. Maybe I'll discover more after testing. The cool thing
> > about SCA is its inherent simplicity, and it could be used for
> > composing complex software distributed around the Internet. Just one cent,
> > to start a discussion.
>
> This is a good conversation starter, Giorgio. Lots of good stuff here.
>
> >
> >
> >
> > Cheers,
> > Giorgio.
> >
> >
Hi Giorgio

I checked in some code today to remove some of the calls back to the domain
to find service endpoints. I've added some logic to do domain-level wiring
and to subsequently detect which references, and the composites they belong
to, have changed as a result. When a composite is changed the domain finds
which nodes it's deployed to and serializes the composite out to each of those
nodes. Each node compares it against the version it has and updates its running
version as appropriate. At the moment it only worries about references but,
if there were a requirement in the future, I guess it could check other
things as well.
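Roughly, that update flow looks like this (a hypothetical outline with made-up
types, not the code that was actually checked in):

// Hypothetical outline: the domain works out which composites changed after
// wiring, then pushes each one to the nodes it is deployed on; a node only
// updates when its running version differs from the newly wired version.
interface DeployedNode {
    String getRunningCompositeVersion(String compositeName);
    void updateComposite(String compositeName, byte[] serializedComposite, String newVersion);
}

public class DomainWiringSketch {
    void pushChangedComposite(String compositeName,
                              byte[] serializedComposite,
                              String newVersion,
                              Iterable<DeployedNode> nodesRunningIt) {
        for (DeployedNode node : nodesRunningIt) {
            // Skip nodes that already run the wired version; otherwise the
            // node takes the serialized composite and updates its references.
            if (!newVersion.equals(node.getRunningCompositeVersion(compositeName))) {
                node.updateComposite(compositeName, serializedComposite, newVersion);
            }
        }
    }
}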

I've left in the ability for the domain to return service endpoint
information on request, as this is used in the default binding when callable
references are created. I think it would be tricky to notify an unlimited
number of callable references that service endpoint information has become
available or has changed, so instead they can go and look the information up
on demand.

Regards

Simon
