Re: Enforcing sequencing of asynchronous event handling code

2011-10-20 Thread David
I try to embrace the asynchronous nature of GWT as much as possible.
However, when I can't get around it, I usually just nest my RPC calls:

rpcservice.message1(..., new AsyncCallback() {
    public void onSuccess(Object result) {
        rpcservice.message2(...);  // issued only after message1 completes
    }
    public void onFailure(Throwable caught) { ... }
});
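The same idea can be modeled in plain Java, with AsyncCallback reduced to a minimal interface and the RPC stubbed out, so the sequencing is visible; every name here is a hypothetical stand-in, not GWT's generated service code:

```java
import java.util.ArrayList;
import java.util.List;

// Minimal model of the nested-callback pattern: message2 is only issued
// from message1's success callback, so the two calls cannot overlap.
// Everything here is a stand-in for illustration.
public class NestedCalls {
    interface AsyncCallback<T> { void onSuccess(T result); void onFailure(Throwable caught); }

    static final List<String> log = new ArrayList<>();

    // stand-in for an RPC that completes immediately
    static void rpc(String name, AsyncCallback<String> callback) {
        log.add("call " + name);
        callback.onSuccess(name + "-result");
    }

    static void run() {
        rpc("message1", new AsyncCallback<String>() {
            public void onSuccess(String r1) {
                // the second call starts only after the first has completed
                rpc("message2", new AsyncCallback<String>() {
                    public void onSuccess(String r2) { log.add("done"); }
                    public void onFailure(Throwable caught) { }
                });
            }
            public void onFailure(Throwable caught) { }
        });
    }
}
```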

On Oct 19, 4:39 pm, Jeff Chimene jchim...@gmail.com wrote:
 On 10/19/2011 12:45 PM, Grant Rettke wrote:

  Hi,

  Our Goal:

  On the client side, events on the bus should be processed one at a
  time by their handler, in order, FIFO.


Re: Enforcing sequencing of asynchronous event handling code

2011-10-20 Thread Grant Rettke
Thanks Jeff and David.

We are limited in our ability to explicitly control order, either via
mutexes/bit masks or via firing each event from the previous event's
callback, because we have inherited an application that fires events all
over the place that should have been ordered but are not. We believe
that it is too difficult to clean up, and that a global queue is the
simplest thing... of course history will tell us for sure whether we
were right or not :).

On Thu, Oct 20, 2011 at 9:04 AM, David levy...@gmail.com wrote:
 I try to embrace the asynchronous nature of GWT as much as possible.
 However when I can't get around it, I usually just nest my rpc calls

 rpcservice.message1()...
 onSuccess() {
   rpcservice.message2()...


Enforcing sequencing of asynchronous event handling code

2011-10-19 Thread Grant Rettke
Hi,

Our Goal:

On the client side, events on the bus should be processed one at a
time by their handler, in order, FIFO.

Details:

Just took over a pretty involved GWT/Spring/Hibernate/Gilead system. The
app works great; it is super fast, snappy, and responsive. One problem
sitting in the backlog, though, results from the fact that some
single-threaded server-side code is entered by multiple threads working
on the same piece of data. The end result is that the data gets stomped
on. When we add uniqueness constraints we see a duplicate key exception
occurring, so we know something is happening that should not.

From a user perspective, this should never happen: users move along
using the app, clicking on things in a sequence, and that sequence makes
sense; in doing so they add events to the event bus... basically
EVT_A -> EVT_B -> EVT_C -> ... and so on. EVT_A handling should complete
before EVT_B handling, and EVT_B handling should complete before EVT_C
handling, and so on and so forth.

In practice a problem manifests, though, because on the client side the
handlers all fire off right away (as one would expect), resulting in
multiple threads computing against the same data in the same place. Here
is how it looks:

TIMESTEP 1

EVT_A (DATA_COPY_1) -> HDL_A -> SVR_CALL -> compute_method(DATA_COPY_1)

EVT_B (DATA_COPY_2) -> HDL_B -> SVR_CALL -> compute_method(DATA_COPY_2)

EVT_C (DATA_COPY_3) -> HDL_C -> SVR_CALL -> compute_method(DATA_COPY_3)

On the server side, multiple threads (coming from the handler servlet)
end up entering compute_method at the same time, and behold, things blow
up. So, our desired behavior is that for certain paths of work we want
sequencing. Looking at the app further, we decided that rather than
track down all of the unique flows, the entire app should behave such
that EVT_* are handled in order, period. This makes sense from a user
perspective, and by following a blanket approach we could force the
entire app to just do the right thing, so to speak. The above would look
more like this, where QUEUE is the event bus and HANDLER_CODE_EXECUTING
is all of the work, whether in a client-side presenter or on the server
side, that it takes to satisfy the goal for that event:

TIMESTEP 1
QUEUE{empty} HANDLER_CODE_EXECUTING{none}

TIMESTEP 2
QUEUE{EVT_A} HANDLER_CODE_EXECUTING{none}

TIMESTEP 3
QUEUE{EVT_B} HANDLER_CODE_EXECUTING{EVT_A}

TIMESTEP 4
QUEUE{EVT_C:EVT_B} HANDLER_CODE_EXECUTING{EVT_A}

TIMESTEP 5
QUEUE{EVT_C:EVT_B} HANDLER_CODE_EXECUTING{none}

TIMESTEP 6
QUEUE{EVT_C} HANDLER_CODE_EXECUTING{EVT_B}

TIMESTEP 7
QUEUE{EVT_C} HANDLER_CODE_EXECUTING{none}

TIMESTEP 8
QUEUE{empty} HANDLER_CODE_EXECUTING{EVT_C}

TIMESTEP 9
QUEUE{empty} HANDLER_CODE_EXECUTING{none}
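The timeline above amounts to a FIFO gate in front of the event bus. A minimal sketch (class and method names are hypothetical, not an existing GWT API): fired events queue up, and the next one is dispatched only when the current handler signals completion through a done callback:

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Hypothetical sketch of a sequencing gate: fired events queue up, and
// the next one is handed to the handler only after the current handler
// invokes its 'done' callback (e.g. from an RPC onSuccess).
public class SequencedBus {
    public interface Handler { void handle(Object event, Runnable done); }

    private final Queue<Object> queue = new ArrayDeque<>();
    private final Handler handler;
    private boolean busy = false;

    public SequencedBus(Handler handler) { this.handler = handler; }

    public void fire(Object event) {
        queue.add(event);
        pump();
    }

    private void pump() {
        if (busy || queue.isEmpty()) return;
        busy = true;
        Object event = queue.poll();
        handler.handle(event, () -> { busy = false; pump(); });
    }
}
```

Since GWT client code runs as single-threaded JavaScript, no locking is needed here; the busy flag alone serializes dispatch.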

Talking more, we looked at making this change on the client side vs. the
server side. We felt that queuing up all requests on the server side
would have the benefit of being real Java, but the downside is that
queueing up threads seems to go against the spirit and architecture of
the server itself (Tomcat). The client side seems like an equal amount
of work, but with the added benefit of not having to worry about
out-of-order client requests and so on. At this point we plan to make
the change on the client side. One option we decided against was using
the UI to block the user from doing things out of sequence or before
they should be doing them; if we had been able to do this from the start
we would have, but at this point we feel that adding the global queuing
would be much less work than identifying all of the code paths that can
result in out-of-order execution errors.
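For contrast, the rejected server-side route would amount to serializing entry into compute_method, for example with a per-key lock. A sketch under assumed names (the computation is stood in by a Runnable):

```java
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of the server-side alternative: entry into the
// computation is serialized per piece of data, so concurrent RPC
// threads cannot stomp on the same row.
public class ComputeGate {
    private final ConcurrentHashMap<String, Object> locks = new ConcurrentHashMap<>();

    public void compute(String dataKey, Runnable work) {
        Object lock = locks.computeIfAbsent(dataKey, k -> new Object());
        synchronized (lock) {   // at most one thread per key at a time
            work.run();
        }
    }
}
```

This is exactly the "queueing up threads" objected to above: blocked Tomcat request threads sit waiting on the monitor.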

Our goal: on the client side, events should be processed one at a time
by their handler, in order, FIFO.

Not sure of the approach yet; we haven't dug in deep. Curious to know
what you think about this problem we are facing, our proposed solution,
what the standard GWT way to manage this is, and how you might do it.

While this feels like a big change, we have to imagine that our
situation is not totally unique, because indeed sometimes you need
sequencing.

Look forward to your thoughts and advice.

Best wishes,

Grant

-- 
You received this message because you are subscribed to the Google Groups 
Google Web Toolkit group.
To post to this group, send email to google-web-toolkit@googlegroups.com.
To unsubscribe from this group, send email to 
google-web-toolkit+unsubscr...@googlegroups.com.
For more options, visit this group at 
http://groups.google.com/group/google-web-toolkit?hl=en.



Re: Enforcing sequencing of asynchronous event handling code

2011-10-19 Thread Jeff Chimene
On 10/19/2011 12:45 PM, Grant Rettke wrote:
 Hi,
 
 Our Goal:
 
 On the client side, events on the bus should be processed one at a
 time by their handler, in order, FIFO.

Some (sort-of mutually exclusive) solutions:

1) Implement condition variables
(http://en.wikipedia.org/wiki/Condition_variable)

2) Only fire Event B after Event A's processing completes. You might
need extra events, e.g. Event A prepare, Event B prepare, ... Event N
prepare. After handling these events, the secondary events Event A,
Event B, ... Event N are fired. Some implementations create three basic
events, Prepare, Acknowledge, Commit, and pass identity information in
the event cargo.
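Option (2) in its simplest form can be sketched as handlers that fire their successor themselves once their own work is done (a hypothetical minimal bus, not GWT's real EventBus):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Minimal sketch: ordering comes from the handlers themselves, because
// Event A's handler fires Event B only after its own work is finished.
public class ChainedEvents {
    private final Map<String, Consumer<ChainedEvents>> handlers = new HashMap<>();
    final List<String> handled = new ArrayList<>();

    void on(String event, Consumer<ChainedEvents> handler) { handlers.put(event, handler); }

    void fire(String event) {
        handled.add(event);
        Consumer<ChainedEvents> h = handlers.get(event);
        if (h != null) h.accept(this);   // the handler may fire the next event
    }
}
```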

I /think/ (it's hard to tell, even though your description is quite
detailed) that (1) will be your best bet. Consider a scheme that
implements an event-specific bit mask that's checked against the
condition variable. Each event handler delays via
Scheduler.get().scheduleIncremental(new Scheduler.RepeatingCommand(){})
until the bit(s) corresponding to the predecessor event(s) are set in
the condition variable.

There are other ideas, such as a central event handler. However, the
problem with that and related solutions is that they don't scale well
and have a tendency to tightly couple unrelated code. The advantage of
(1) is that the event handling code remains in place, but it's gated by
logic that checks a condition variable before proceeding. After event
processing completes, the condition variable is set to the appropriate
value.

In your scenario, Event A is bit 0, Event B is bit 1, Event C is bit 2.

A is not gated, and sets bit 0.
B is gated by bit 0 being set, and sets bit 1.
C is gated by bits 0 and 1 being set, and sets bit 2.
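The gating can be sketched without any GWT machinery; runWhenReady below plays the role of a RepeatingCommand's execute() body, returning true to be rescheduled until the predecessor bits are set (class and method names are hypothetical):

```java
// Sketch of the bit-mask gating: each handler waits on predecessor bits
// and sets its own bit when it completes. runWhenReady mimics the body
// of a Scheduler.RepeatingCommand: return true to be rescheduled.
public class ConditionBits {
    private int bits = 0;

    // true when every bit in 'required' is already set
    public boolean ready(int required) { return (bits & required) == required; }

    public void set(int bit) { bits |= bit; }

    // Returns true while predecessors are missing (keep rescheduling),
    // false once the handler has run and its own bit has been set.
    public boolean runWhenReady(int required, int ownBit, Runnable handler) {
        if (!ready(required)) return true;
        handler.run();
        set(ownBit);
        return false;
    }
}
```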

 