> However I'd like to point out that the math below is misleading (the average 
> time for the non-blocking case is also miscalculated but 
> it's not my point). The number that matters more in real life is throughput. 
> For the blocking case it's 3/30 = 0.1 request per second.

I think it depends on whether you are trying to characterise system performance 
(processing time) or perceived user experience (queuing time + processing 
time). My users are kind of selfish in that they don't care how many 
transactions per second I can get through, just how long it takes them to get 
a response from the moment they submit a request.

Making the DB calls non-blocking does help a little in driving up API server 
utilisation - but my point was that time spent in the DB is such a small part 
of the total time in the API server that it's not the thing that needs to be 
optimised first.

Any queuing system will explode when its utilisation approaches 100%, blocking 
or not. Moving to non-blocking just means that you can hit 100% utilisation in 
the API server with 2 concurrent requests instead of *only* being able to hit 
90+% with one transaction. That's not a great leap forward in my perception.

Phil

-----Original Message-----
From: Yun Mao [mailto:yun...@gmail.com] 
Sent: 03 March 2012 01:11
To: Day, Phil
Cc: openstack@lists.launchpad.net
Subject: Re: [Openstack] eventlet weirdness

First I agree that having blocking DB calls is no big deal given the way Nova 
uses mysql and reasonably powerful db server hardware.

However I'd like to point out that the math below is misleading (the average 
time for the non-blocking case is also miscalculated but it's not my point). The 
number that matters more in real life is throughput. For the blocking case it's 
3/30 = 0.1 request per second.
For the non-blocking case it's 3/27=0.11 requests per second. That means if 
there is a request coming in every 9 seconds constantly, the blocking system 
will eventually explode but the non-blocking system can still handle it. 
Therefore, the non-blocking one should be preferred.
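
As a back-of-envelope check of those numbers (a sketch only; the figures come 
from the worked example quoted below):

    # Service rates from the worked example below (3 requests completed).
    blocking = 3 / 30.0       # 0.100 requests/second
    non_blocking = 3 / 27.0   # ~0.111 requests/second

    # A steady arrival of one request every 9 seconds:
    arrivals = 1 / 9.0        # ~0.111 requests/second

    # The blocking server serves more slowly than requests arrive, so its
    # backlog grows without bound; the non-blocking server can just keep pace.
    print(blocking < arrivals)       # True  -> queue grows forever
    print(non_blocking >= arrivals)  # True  -> keeps up, right at the limit
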
Thanks,

Yun

>
> For example in the API server (before we made it properly multi-threaded) 
> with blocking db calls the server was essentially a serial processing queue - 
> each request was fully processed before the next.  With non-blocking db calls 
> we got a lot more apparent concurrency, but only at the expense of making all 
> of the requests equally bad.
>
> Consider a request takes 10 seconds, where after 5 seconds there is a call to 
> the DB which takes 1 second, and three are started at the same time:
>
> Blocking:
> 0 - Request 1 starts
> 10 - Request 1 completes, request 2 starts
> 20 - Request 2 completes, request 3 starts
> 30 - Request 3 completes
> Request 1 completes in 10 seconds
> Request 2 completes in 20 seconds
> Request 3 completes in 30 seconds
> Ave time: 20 sec
>
>
> Non-blocking
> 0 - Request 1 Starts
> 5 - Request 1 gets to db call, request 2 starts
> 10 - Request 2 gets to db call, request 3 starts
> 15 - Request 3 gets to db call, request 1 resumes
> 19 - Request 1 completes, request 2 resumes
> 23 - Request 2 completes,  request 3 resumes
> 27 - Request 3 completes
>
> Request 1 completes in 19 seconds (+ 9 seconds)
> Request 2 completes in 24 seconds (+ 4 seconds)
> Request 3 completes in 27 seconds (- 3 seconds)
> Ave time: 20 sec
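
A toy Python sketch (not Nova code, just a model of the example quoted above: 
each request is taken as 5s of CPU, a 1s DB call, then 4s of CPU) reproduces 
the two schedules; note that it gives 23s rather than 24s for request 2 in the 
non-blocking case, and an average of 23s rather than 20s, which is the 
miscalculation Yun mentions:

    def blocking(n):
        # Each request holds the worker for the full 10s before the next starts.
        t, finish = 0, []
        for _ in range(n):
            t += 5 + 1 + 4
            finish.append(t)
        return finish

    def non_blocking(n):
        # The 1s DB call yields the worker, so the CPU segments run back to back.
        finish, db_done, t = [], [], 0
        for _ in range(n):          # first 5s CPU segment of each request, in turn
            t += 5
            db_done.append(t + 1)   # its DB call finishes 1s later, off the worker
        for i in range(n):          # final 4s CPU segment, once the DB reply is in
            t = max(t, db_done[i]) + 4
            finish.append(t)
        return finish

    print(blocking(3))      # [10, 20, 30] -> average 20s
    print(non_blocking(3))  # [19, 23, 27] -> average 23s, not 20s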
>
> So instead of worrying about making db calls non-blocking we've been working 
> to make certain eventlets non-blocking - i.e. add sleep(0) calls to long 
> running iteration loops - which IMO has a much bigger impact on the apparent 
> latency of the system.
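
For anyone unfamiliar with the sleep(0) trick mentioned above, a minimal 
sketch of the idiom (the loop body and batch size here are made up, not taken 
from Nova):

    import eventlet

    def process_all(items):
        for i, item in enumerate(items):
            handle(item)           # per-item work that never yields on its own
            if i % 100 == 0:
                eventlet.sleep(0)  # yield to the hub so other green threads run

    def handle(item):
        pass  # placeholder for the real per-item work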

_______________________________________________
Mailing list: https://launchpad.net/~openstack
Post to     : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help   : https://help.launchpad.net/ListHelp
