On Wed, 2002-08-14 at 05:18, Richard Huxton wrote:
> On Tuesday 13 Aug 2002 9:39 pm, Wei Weng wrote:
> > I have a testing program that uses 30 concurrent connections
> > (max_connections = 32 in my postgresql.conf), each doing 100
> > insertions into a simple table with an index.
> >
> > It took me approximately 2 minutes to finish all of them.
> >
> > But in the same environment (after "DELETE FROM test_table" and
> > "VACUUM ANALYZE"), I queued up those same 30 connections one after
> > another (serialized), and it took only 30 seconds to finish.
> >
> > Why is the performance of the concurrent connections so much worse
> > than serializing them into a single queue?
> 
> What was the limiting factor during the test? Was the CPU maxed, memory, disk 
> I/O?
No, none of the above was maxed. The CPU usage I observed peaked at
about 48%.
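
To make the test concrete: each of the 30 connections does essentially
the following. This is a minimal sketch using libpq directly; the
conninfo string and the table's column layout are assumptions, not the
actual test code.

    #include <stdio.h>
    #include <libpq-fe.h>

    /* One worker: open a connection, insert 100 rows, close. */
    static void run_worker(const char *conninfo)
    {
        PGconn *conn = PQconnectdb(conninfo);
        if (PQstatus(conn) != CONNECTION_OK) {
            fprintf(stderr, "connect failed: %s", PQerrorMessage(conn));
            PQfinish(conn);
            return;
        }
        for (int i = 0; i < 100; i++) {
            PGresult *res = PQexec(conn,
                "INSERT INTO test_table (val) VALUES (1)");
            if (PQresultStatus(res) != PGRES_COMMAND_OK)
                fprintf(stderr, "insert failed: %s",
                        PQerrorMessage(conn));
            PQclear(res);
        }
        PQfinish(conn);
    }

In the concurrent case, 30 threads each run run_worker() at once; in
the serialized case, the same 30 workers run one after another.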

> 
> I take it the insert really *is* simple - no dependencies etc.
> 
> > I was testing with our own (proprietary) scripting engine. The
> > extension library that supports PostgreSQL serializes the queries by
> > simply locking while a query manipulates a PGconn object and
> > unlocking when it is done. (Similarly, it creates a PGconn object on
> > the stack for each concurrent query.)
> 
> I assume you've ruled the application end of things out.
What does this mean?
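
For context, the locking described above amounts to something like
this; a minimal sketch assuming one pthread mutex per PGconn, with
illustrative names rather than the actual extension code:

    #include <pthread.h>
    #include <libpq-fe.h>

    /* One mutex guards each PGconn, since a single PGconn must not be
     * used by two threads at the same time. */
    typedef struct {
        PGconn         *conn;
        pthread_mutex_t lock;
    } GuardedConn;

    static PGresult *run_query(GuardedConn *gc, const char *sql)
    {
        PGresult *res;

        pthread_mutex_lock(&gc->lock);  /* serialize access to this PGconn */
        res = PQexec(gc->conn, sql);
        pthread_mutex_unlock(&gc->lock);
        return res;                     /* caller must PQclear() it */
    }

(If each concurrent query really gets its own PGconn, as described,
these per-connection locks should never actually be contended.)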

Thanks

-- 
Wei Weng
Network Software Engineer
KenCast Inc.


