On 6/1/07, Afkham Azeez <[EMAIL PROTECTED]> wrote:



On 5/31/07, Rajith Attapattu <[EMAIL PROTECTED] > wrote:
>
> I introduced distinct commands (add/update/remove) to make it simpler and
> more readable (to avoid the update context handling becoming too complicated).


Please take a look at UpdateServiceGroupContextCommand.java or any other
update context command and let me know of any code that looks complicated.


Azeez, I didn't imply that the current update code is complicated, nor was I
referring to the current code at all. My comment was about the thinking behind
the design decision to introduce distinct events instead of a single one.

When there are thousands of requests, the network will get flooded with
clustering messages. We have to minimize the number of messages we send.


I agree with optimizing the messages we send across the network as much as
possible, so +1 for that. Having distinct commands doesn't mean you need to
send them separately. You can pack the create and update commands together
(or create a single command for this), because a create is always followed
by an update. So the optimisation can be achieved independently of the way we
handle our events.
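Just to make the idea concrete, here is a rough sketch of what I mean by
packing (all class names below are made up for illustration; they don't exist
in the code base today):

    // Hypothetical sketch only -- none of these class names exist in the code base.
    import java.io.Serializable;
    import java.util.ArrayList;
    import java.util.List;

    /** One wire message that carries several context commands at once. */
    public class PackedContextCommand implements Serializable {

        private final List<Serializable> commands = new ArrayList<Serializable>();

        /** Queue a command; nothing goes on the wire yet. */
        public void pack(Serializable command) {
            commands.add(command);
        }

        /** The receiver replays the commands in the order they were packed,
         *  so a create is always applied before the update that follows it. */
        public List<Serializable> unpack() {
            return commands;
        }
    }

The sender would pack the create command together with the first update for
the same context and hand the whole thing to the channel as a single message.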

When I committed the initial code drop it was more of a proof-of-concept
prototype, and I was focused on getting it working rather than on performance
(which was my second goal). However, my bad for not keeping an eye on
performance.

> This also simplifies the clustering implementation, since an updateContext
> message can be received before a contextAdded message due to the order in
> which the threads run at a particular time.
>
> Afkham, I didn't encounter this during my very limited testing. However, an
> update always comes after a context is created, so I am not sure how these
> events get mixed up due to thread ordering. We should be able to guarantee
> causal ordering at the least.


This might not have occurred in a simple unit testing scenario, but does
occur under some load.


I tested the initial implementation on 5 nodes; however, I never load tested
it :(.

The ReplicationHandler sends out the updateContext message before the
createContext message is sent out by the ContextManager.

This is certainly worth investigating. As I said, we need to guarantee causal
ordering at the least.

I'm investigating this. However, I strongly feel that we should not send
messages unnecessarily. Do you see any advantage in sending an addContext
message when a single updateContext message can do the trick?
Also note that I've changed the ReplicationHandler to send out only a single
updateContext message which aggregates all the updates to all the relevant
contexts, instead of sending a message for each property in each context.
Please review the new architecture & code.
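To illustrate the aggregation, something along these lines (this is only a
simplified sketch, not the actual ReplicationHandler code):

    // Simplified illustration only -- not the actual ReplicationHandler code.
    import java.io.Serializable;
    import java.util.HashMap;
    import java.util.Map;

    /** Aggregates every property update for every dirty context into one message. */
    public class AggregatedUpdateMessage implements Serializable {

        // context id -> (property name -> new value)
        private final Map<String, Map<String, Serializable>> updates =
                new HashMap<String, Map<String, Serializable>>();

        public void addPropertyUpdate(String contextId, String name, Serializable value) {
            Map<String, Serializable> props = updates.get(contextId);
            if (props == null) {
                props = new HashMap<String, Serializable>();
                updates.put(contextId, props);
            }
            props.put(name, value);
        }

        public Map<String, Map<String, Serializable>> getUpdates() {
            return updates;
        }
    }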

I don't see an advantage in sending more messages than we have to (see my
comment above). However, replicating distinct events can be an advantage.
When I receive an update, how would I know whether it was a create + update or
a regular update?
The absence of the context in the receiving node doesn't necessarily imply
that it was a create + update; it could be that this node was down when the
real creation happened.

If we have logic which is sensitive to the life cycle of a service, it might
be important to know about the distinct events. However, as I mentioned,
distinct events can be batched (I'd rather use the word packed) together as
opposed to being sent as different messages, thereby achieving the
optimisation you noted.
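For example, each packed entry could carry its event type so the receiver
still sees the distinct life-cycle events (again, just an illustrative sketch,
not existing code):

    // Illustrative only: one message on the wire, but each entry keeps its
    // identity so the receiver can still react to the context life cycle.
    import java.io.Serializable;

    public class ContextEvent implements Serializable {

        public enum Type { CREATED, UPDATED, REMOVED }

        private final Type type;
        private final String contextId;

        public ContextEvent(Type type, String contextId) {
            this.type = type;
            this.contextId = contextId;
        }

        public Type getType() {
            return type;
        }

        public String getContextId() {
            return contextId;
        }
    }

Several such events for the same request can then be packed into one message,
so we still send a single message over the network.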

Similarly, there is a cleanup mechanism which runs whenever a new context
gets added. Hence there is no need to send out a removeContext message.


This is not good enough. We need to remove a context as soon as a client logs
out. Consider this: a client logs into Node A and does some work. All the
information is replicated to Nodes B & C. The client then logs out from Node A.
We don't replicate the remove command, and since no new contexts are added,
the cleanup mechanism is never executed.
Now some other entity can point to Node B or C and continue the session, as
those contexts are still available on those nodes. Further, any cleanup
associated with the lifecycle of the session (e.g. resource release) will not
be executed on Nodes B and C. These kinds of situations will make the cluster
unreliable.

We can't compromise integrity for the sake of performance. We need to achieve
both without compromising either.
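What I am suggesting is that the logout path replicates the removal right
away, roughly like this (the channel interface and command names below are
placeholders for illustration, not the actual APIs):

    // All names here are placeholders for illustration; this is not the actual API.
    import java.io.Serializable;

    interface ClusterChannel {
        void sendToGroup(Serializable command);
    }

    class RemoveContextCommand implements Serializable {
        private final String contextId;

        RemoveContextCommand(String contextId) {
            this.contextId = contextId;
        }

        String getContextId() {
            return contextId;
        }
    }

    public class LogoutHandler {

        /** Replicate the removal as soon as the client logs out, instead of
         *  waiting for a cleanup pass that only runs when a new context is added. */
        public void onLogout(String sessionContextId, ClusterChannel channel) {
            channel.sendToGroup(new RemoveContextCommand(sessionContextId));
        }
    }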

IMHO, these changes will greatly reduce the complexity and improve the
efficiency of the clustering implementation.

Can you explain the problem a bit more? To me the solution seems more of a
> hack due to some other issues in the system.


There may be some problems which need investigation, but this definitely is
not a hack. Think of a node receiving a request: at that point, the relevant
contexts are created. In other words, contexts are created on demand.
Similarly, what is wrong with creating contexts on demand, only when
necessary, when an update message is received? Why send unnecessary cleanup
messages when Axis2 already has a mechanism to do this?
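Concretely, the receiving side would do something like this (only a sketch of
the idea; the class and method names are illustrative, not the actual
ContextManager code):

    // Only a sketch of the "create on demand" idea; not the actual ContextManager code.
    import java.io.Serializable;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    public class OnDemandContextStore {

        private final Map<String, Map<String, Serializable>> contexts =
                new ConcurrentHashMap<String, Map<String, Serializable>>();

        /** A single contextUpdated message is enough: if the context is missing
         *  on this node it is created right here, so no separate contextAdded
         *  message ever needs to be sent. */
        public void contextUpdated(String contextId, Map<String, Serializable> props) {
            Map<String, Serializable> ctx = contexts.get(contextId);
            if (ctx == null) {
                ctx = new ConcurrentHashMap<String, Serializable>();
                contexts.put(contextId, ctx);
            }
            ctx.putAll(props);
        }
    }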


See my comment about cleanup.

Anyway, if you think this change makes things easier, please go ahead with it.
>
> Regards,
>
> Rajith
>
> On 5/30/07, Chamikara Jayalath < [EMAIL PROTECTED]> wrote:
> >
> > Hi Azeez,
> >
> > On 5/30/07, Afkham Azeez < [EMAIL PROTECTED]> wrote:
> > >
> > > I think there is no need for the contextAdded & contextRemoved
> > > methods in the ContextManager interface. contextUpdated should suffice.
> > >
> > > The reason is, when a contextUpdated message is received, if the context
> > > does not exist on a particular node, it can be created. Similarly, the
> > > context cleanup mechanism will take care of cleaning up the contexts on
> > > each node, hence removeContext is also not needed. This also simplifies
> > > the clustering implementation, since an updateContext message can be
> > > received before a contextAdded message due to the order in which the
> > > threads run at a particular time. Otherwise, the sender has to ensure
> > > that a createContext message is always sent to the channel before an
> > > updateContext message is sent.
> > >
> > > Thoughts?
> >
> >
> > Agree. +1 for the change.
> >
> > Chamikara
> >
> >
> > > --
> > > Thanks
> > > Afkham Azeez
> > >
> > > http://www.wso2.org
> > > GPG Fingerprint: 643F C2AF EB78 F886 40C9  B2A2 4AE2 C887 665E 0760
> >
> >
> >
> >
> > --
> > Chamikara Jayalath
> > WSO2 Inc.
> > http://wso2.com/
> > http://wso2.org/ - For your Oxygen needs
>
>
>


--
Thanks
Afkham Azeez

http://www.wso2.org
GPG Fingerprint: 643F C2AF EB78 F886 40C9  B2A2 4AE2 C887 665E 0760
