Quite interesting.

I tried to understand the quota and performance measurements as well,
though the only real meaning I got out of the profiling was a rough
relationship between the profiling numbers and the log cpu_ms /
api_cpu_ms values.
That's enough to get an understanding of what's going on in your
program, but not enough to be reliably exact.
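For what it's worth, the rough relationship I saw looks consistent with the quota API reporting megacycles of a 1.2 GHz reference CPU (the figure App Engine's billing docs use), which would make the cycles-to-milliseconds conversion a simple divide. A minimal sketch, assuming that 1200 megacycles/second figure (my assumption, not something the logs state):

```python
# Sketch: converting quota "cycles" to approximate cpu_ms, assuming the
# quota API reports megacycles of a 1.2 GHz reference CPU
# (1200 megacycles per second -- an assumption, not confirmed by the logs).
MEGACYCLES_PER_MS = 1.2


def megacycles_to_ms(megacycles):
    """Approximate cpu_ms equivalent of a quota megacycle reading."""
    return megacycles / MEGACYCLES_PER_MS
```

That would also explain the stable 1.2 ratio you saw for the API numbers.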

On Jul 23, 6:45 am, Greg Tracy <[email protected]> wrote:
> It's worth pointing out the two other peculiarities...
>
> 1. When a DeadlineExceeded exception is thrown, the API usage is
> reported as zero and the CPU usage is very low - lower than other
> requests that don't hit the deadline.
>
> 2. When a DeadlineExceeded exception is thrown, the "cpu_ms" time
> reported in the log is also oddly low even though the wall clock time
> is at 30 secs.
>
> Just trying to understand what's being measured and where all my time
> is going...
>
> Thanks!
>
> On Jul 22, 10:17 pm, Greg Tracy <[email protected]> wrote:
>
>
>
> > I'd like to better understand the CPU time measurements in app engine.
> > As I've been profiling an app, I've started to see inconsistencies in
> > the various metrics. I'm striving to improve every number that's in my
> > control, but still lack a fundamental understanding of what's being
> > measured, the difference between various "CPU times" and how they are
> > reflected in the quotas.
>
> > For example, I expected to be able to measure cpu time and api time
> > (via the quota package) at the end of a request handler and draw some
> > conclusions about the cpu_ms and api_cpu_ms times reported in the
> > admin log for each handler request. Specifically, I expected the
> > ratio of the quota measurements to the log times to be consistent for
> > every call, but I haven't seen that for the CPU numbers.
>
> > The experiment went as follows...
>
> > class MainHandler(webapp.RequestHandler):
> >
> >   def get(self):
> >
> >       try:
> >           # do all the expensive magic...
> >           pass
> >       finally:
> >           end_api = quota.get_request_api_cpu_usage()
> >           end_cpu = quota.get_request_cpu_usage()
> >           logging.warning("total API cycles %s, total CPU cycles %s"
> >                           % (end_api, end_cpu))
> >
> >       return
>
> > When I check the ratio of these metrics with those reported in the
> > admin log over multiple calls, it's perfectly stable for the API
> > numbers at 1.2.
>
> > When I check the ratio for the CPU metric, it's much more erratic...
>
> > 0.111
> > 0.131
> > 0.124
>
> > 0.050
> > 0.028
> > 0.023
> > 0.024
> > 0.112
>
> > 0.048
> > 0.052
> > 0.147
> > 0.057
> > 0.048
> > 0.040
> > 0.138
>
> > 0.024
> > 0.027
> > 0.030
> > 0.023
> > 0.036
> > 0.025
>
> > Why is that?
>
> > How can I get more granularity with the CPU usage statistics? Can I
> > measure the quotas through the handler and guarantee that I'm
> > measuring just datastore CPU time vs. non-datastore CPU time?
>
> > Thanks!
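On that last question: since the quota readings are cumulative for the request, one way to isolate a single operation's cost is to bracket it with two readings and take the difference. A sketch of the pattern, using a stand-in `read_counter` callable because quota.get_request_cpu_usage() only works inside a live request handler:

```python
# Sketch: isolating one operation's CPU cost by bracketing it with two
# cumulative counter readings. `read_counter` is a stand-in for
# quota.get_request_cpu_usage(); any monotonically increasing counter
# works the same way.

def bracketed_cost(read_counter, operation):
    """Return how much the counter advanced while `operation` ran."""
    start = read_counter()
    operation()
    return read_counter() - start
```

Inside a handler you'd pass quota.get_request_cpu_usage as `read_counter` and a function wrapping just the datastore call as `operation`; everything outside the bracket is then your non-datastore time.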

-- 
You received this message because you are subscribed to the Google Groups 
"Google App Engine" group.
To post to this group, send email to [email protected].
To unsubscribe from this group, send email to 
[email protected].
For more options, visit this group at 
http://groups.google.com/group/google-appengine?hl=en.
