Hello all.

Thanks a lot for the responses, they are appreciated.

I think I now understand the folly of my loop, and how that was negatively
impacting my "test".

I tried the suggestion Alex and Tom made and changed my loop to wait on
select(), and my results are now very close to those of the non-async version.
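
In case it helps anyone else who trips over the same thing, here is a minimal
sketch of the select()-based pattern (not my exact code, and with the error
handling reduced to early returns): instead of spinning, it blocks in select()
on libpq's socket and only calls PQconsumeInput() when there is data to read.

    #include <sys/select.h>
    #include <libpq-fe.h>

    /* Wait for the result of a query previously started with
     * PQsendQueryParams(), without busy-waiting. */
    static PGresult *wait_for_result(PGconn *conn)
    {
        int sock = PQsocket(conn);

        while (PQisBusy(conn))
        {
            fd_set read_fds;
            FD_ZERO(&read_fds);
            FD_SET(sock, &read_fds);

            /* Sleep until the backend sends something. */
            if (select(sock + 1, &read_fds, NULL, NULL, NULL) < 0)
                return NULL;            /* select() failed */

            if (!PQconsumeInput(conn))  /* pull the available data into libpq */
                return NULL;            /* connection trouble */
        }

        return PQgetResult(conn);       /* no longer blocks; NULL when no more results */
    }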

The main reason for looking at this API is not to support async in our
applications; that is being achieved architecturally in a PG-agnostic way.
It is to give our PG-agnostic layer the ability to cancel queries.
(Admittedly, the queries I mention in these emails are not candidates for
cancelling...)
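
For reference, the cancellation piece I have in mind is along these lines.
This is just a hedged sketch built on libpq's PQgetCancel/PQcancel (the
function name here is made up, not something from our layer):

    #include <stdio.h>
    #include <libpq-fe.h>

    /* Ask the server to abandon the query currently running on "conn".
     * Returns 1 if the cancel request was dispatched; the query may still
     * finish normally before the server acts on it. */
    static int cancel_running_query(PGconn *conn)
    {
        char errbuf[256];
        PGcancel *cancel = PQgetCancel(conn);

        if (cancel == NULL)
            return 0;

        int ok = PQcancel(cancel, errbuf, (int) sizeof(errbuf));
        if (!ok)
            fprintf(stderr, "cancel failed: %s\n", errbuf);

        PQfreeCancel(cancel);
        return ok;
    }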

Again, thanks so much for the help.
Michael.


On Wed, Oct 27, 2010 at 6:10 PM, Tom Lane <t...@sss.pgh.pa.us> wrote:

> Michael Clark <codingni...@gmail.com> writes:
> > In doing some experiments I found that using
> > PQsendQueryParams/PQconsumeInput/PQisBusy/PQgetResult produces slower
> > results than simply calling PQexecParams.
>
> Well, PQconsumeInput involves at least one extra kernel call (to see
> whether data is available) so I don't know why this surprises you.
> The value of those functions is if your application can do something
> else useful while it's waiting.  If the data comes back so fast that
> you can't afford any extra cycles expended on the client side, then
> you don't have any use for those functions.
>
> However, if you do have something useful to do, the problem with
> this example code is that it's not doing that, it's just spinning:
>
> >     while ( ((consume_result = PQconsumeInput(self.db)) == 1) &&
> > ((is_busy_result = PQisBusy(self.db)) == 1) )
> >         ;
>
> That's a busy-wait loop, so it's no wonder you're eating cycles there.
> You want to sleep, or more likely do something else productive, when
> there is no data available from the underlying socket.  Usually the
> idea is to include libpq's socket in the set of files being watched
> by select() or poll(), and dispatch off to something that absorbs
> the data whenever you see some data is available to read.
>
>                        regards, tom lane
>
