On Wed, 2002-08-14 at 10:49, Richard Huxton wrote:
> On Wednesday 14 Aug 2002 3:20 pm, Wei Weng wrote:
> > On Wed, 2002-08-14 at 05:18, Richard Huxton wrote:
> > > On Tuesday 13 Aug 2002 9:39 pm, Wei Weng wrote:
> > [30 connections is much slower than 1 connection 30 times]
Yeah, but the problem is, say I have 20 users running selects on the
database at the same time, and each select takes 10 seconds to finish.
I really can't queue them up (or the last user will really have to wait
a long time), can I?

> > > What was the limiting factor during the test? Was the CPU maxed,
> > > memory, disk I/O?
> >
> > No, none of the above was maxed. The CPU usage I paid attention to
> > was at most 48%.
>
> Something must be the limiting factor. One of
> - CPU
> - Memory
> - Disk I/O
> - Database (configuration, or design)
> - Application
>
> If it's not CPU, is the system going into swap or are you seeing a lot
> of disk activity?

I did hear a lot of disk noise when I ran the test. How do I tell
whether the "system is going into swap"? Are there any system settings
I can or should change to make this a little faster?

> > > I assume you've ruled the application end of things out.
> >
> > What does this mean?
>
> I mean if you don't actually run the queries, then 30 separate
> processes is fine?
>
> If you can provide us with an EXPLAIN of the query and the relevant
> schema definitions, we can rule out database design.

This is actually really simple. A table like

-------------------
|       foo       |
-------------------
|ID   VARCHAR(40) | --> primary key
|Name VARCHAR(100)|
-------------------

And I did an

INSERT INTO foo VALUES ('some-unique-guid-here', 'Test Name');

So I don't think it is a matter of the database.

Thanks

--
Wei Weng
Network Software Engineer
KenCast Inc.


---------------------------(end of broadcast)---------------------------
TIP 6: Have you searched our list archives?

http://archives.postgresql.org
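[Editor's note: for readers following the thread, here is a minimal SQL
sketch of the table described above, together with the kind of EXPLAIN
Richard asked for. The CREATE TABLE is reconstructed from the ASCII
diagram (the PRIMARY KEY on ID is assumed from the "--> primary key"
note), and the lookup query is hypothetical, not taken from the original
test.]

-- Minimal reconstruction of the table described in the message;
-- column names and types come from the ASCII diagram, and the
-- PRIMARY KEY on ID is assumed from the "--> primary key" note.
CREATE TABLE foo (
    ID   VARCHAR(40) PRIMARY KEY,
    Name VARCHAR(100)
);

-- The kind of statement being benchmarked in the original test.
INSERT INTO foo VALUES ('some-unique-guid-here', 'Test Name');

-- The sort of output Richard asked for (hypothetical lookup query).
EXPLAIN SELECT * FROM foo WHERE ID = 'some-unique-guid-here';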