Before I explain an OK first-order approximation, ask yourself whether it's
really worth the time to worry about the serialization/deserialization
cost.  I can tell you from firsthand experience that it's minimal compared
with the typical code I've written.  Now, if you're passing in large
amounts of data (think megabytes), the story might be different.

At which point you've got to wonder why you actually need to transfer that
much data & whether it wouldn't be better to split the result into chunks -
it's highly unlikely the user is going to need that much data at once.  And
at that point your bigger cost is probably going to be actually
transferring the data across the network.

Now that I've had my spiel about the dangers of micro-optimizing, I'm giving
you the tools to shoot yourself in the foot (in terms of wasting your time).

I dunno if it's strictly possible without actually modifying the code.  You
could always add instrumentation to the GWT code & rebuild your own flavour
of it: take the timestamp difference surrounding the (de)serialization, add
it to a global running total, & increment a counter of how many times it's
been called.  Then you can keep a running average of the cost that you can
query at any time.
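The accumulator I'm describing could look something like this - a minimal
sketch, with made-up names, of what you'd patch into the serialization call
sites:

```java
// Hypothetical global stats holder you could add when rebuilding GWT.
// Wrap each (de)serialization call site with System.nanoTime() before &
// after, then feed the difference into record().
public class SerializationStats {
    private static long totalNanos = 0;
    private static long callCount = 0;

    // Add one measured (de)serialization to the running total.
    public static synchronized void record(long elapsedNanos) {
        totalNanos += elapsedNanos;
        callCount++;
    }

    // Running average cost per call, in milliseconds; queryable any time.
    public static synchronized double averageMillis() {
        return callCount == 0 ? 0.0 : (totalNanos / 1e6) / callCount;
    }

    public static synchronized long calls() {
        return callCount;
    }
}
```

The call site would then be along the lines of
`long start = System.nanoTime(); /* serialize */ SerializationStats.record(System.nanoTime() - start);`.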

If that scares you, you could try to estimate it instead, but only if your
client & server are on the same machine.

Right before making the RPC call, store the current timestamp in a local
variable.
On entry into the RPC call on the server, store the timestamp in a local
variable & put a breakpoint right after.

The difference between the two times is the cost of going through the
client communication stack (serializing parameters, making the AJAX
request, creating the network packets, sending them to the server, handling
the packets on the server, handling the AJAX request in your app server, &
deserializing your parameters).

Working on the assumption that the most expensive part of that process is
serialization/deserialization (which is reasonable because transmission
time should be ~0, and since it's socket-to-socket communication on a local
machine, you shouldn't see any significant latency overhead in the kernel -
maybe 100 us at most), you can estimate how much time that took.

Repeat the process by recording the exit time from the RPC call (this would
have to be in a global variable) & the time on the client when the call
returned.  This will give you an idea of the cost of serializing the return
value.
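The four timestamps above can be sketched as follows - purely illustrative
names, and remember the cross-process differences only mean anything
because client & server share a clock (same machine):

```java
// Hypothetical holder for the four timestamps described above
// (all in milliseconds from System.currentTimeMillis()).
public class RpcTimestamps {
    public long clientCallStart;  // just before making the RPC call
    public long serverEntry;      // on entry into the service method
    public long serverExit;       // just before the service method returns
    public long clientReturn;     // when the callback fires on the client

    // Client -> server leg: parameter serialization, AJAX request,
    // network stack, & server-side deserialization.
    public long requestLegMillis() {
        return serverEntry - clientCallStart;
    }

    // Server -> client leg: return-value serialization, response
    // handling, & client-side deserialization.
    public long responseLegMillis() {
        return clientReturn - serverExit;
    }
}
```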

This is a good-enough first-order approximation.  Note that if you want
this done in a logging fashion, you'll have to include the timestamps in
your RPC request parameters & result, which might skew your results (do not
print the timestamps, as that definitely screws things up - printing to
screen is relatively expensive when you're dealing with these kinds of
microbenchmarks).
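Carrying the timestamps inside the payload could look like this - a
hypothetical result wrapper, with made-up field names:

```java
// Hypothetical wrapper that carries the server-side timestamps back in the
// RPC result itself, as suggested above.  GWT-RPC needs the class to be
// serializable with a no-arg constructor.
public class TimedResult<T> implements java.io.Serializable {
    public T value;           // the real return value
    public long serverEntry;  // millis when the service method was entered
    public long serverExit;   // millis just before the method returned

    // Time spent server-side, excluding (de)serialization on either end.
    public long serverSideMillis() {
        return serverExit - serverEntry;
    }
}
```

The client can then subtract `serverSideMillis()` from the total round-trip
time it observed to isolate the communication-stack cost.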

On Mon, Mar 16, 2009 at 8:33 AM, Sonar <[email protected]> wrote:

>
> Hi All,
>
> I'd like to learn how to log the time GWT spends on serialization/
> deserialization of remote method parameters / return types.
>
> Thanks in advance
>
> Alex
>
>
>

--~--~---------~--~----~------------~-------~--~----~
You received this message because you are subscribed to the Google Groups 
"Google Web Toolkit" group.
To post to this group, send email to [email protected]
To unsubscribe from this group, send email to 
[email protected]
For more options, visit this group at 
http://groups.google.com/group/Google-Web-Toolkit?hl=en
-~----------~----~----~----~------~----~------~--~---
