Re: [PERFORM] postgres 7.4 at 100%

2004-06-30 Thread Duane Lee - EGOVX
Creating indexes on a table affects insert performance depending on the number of indexes that have to be populated.  From a query standpoint, indexes are a godsend in most cases.
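For illustration, with the calllogs table discussed later in this thread
(column names assumed; "from" and "to" would need double quotes since they
are SQL reserved words), each index speeds up its matching WHERE clause at
the cost of extra work on every INSERT:

  -- helps: SELECT ... WHERE "from" = '...'
  CREATE INDEX calllogs_from ON calllogs ("from");

  -- helps: SELECT ... WHERE "to" = '...'; also adds INSERT overhead
  CREATE INDEX calllogs_to ON calllogs ("to");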

Duane


-Original Message-
From: Chris Cheston [mailto:[EMAIL PROTECTED]]
Sent: Wednesday, June 30, 2004 12:19 AM
To: Gavin M. Roy
Cc: [EMAIL PROTECTED]
Subject: Re: [PERFORM] postgres 7.4 at 100%



Oh my, creating an index has absolutely reduced the time it takes to
query from around 700 ms to less than 1 ms!


Thanks so much for all your help.  You've saved me!


One question:


Why would I or would I not create multiple indexes in a table? I
created another index in the same table and it's improved performance
even more.


Thanks,
Chris


On Tue, 29 Jun 2004 09:03:24 -0700, Gavin M. Roy <[EMAIL PROTECTED]> wrote:
> 
> Is the from field nullable?  If not, try "create index calllogs_from on
> calllogs ( from );" and then do an explain analyze of your query.
> 
> Gavin
> 
> 
> 
> Chris Cheston wrote:
> 
> >ok i just vacuumed it and it's taking slightly longer now to execute
> >(only about 8 ms longer, to around 701 ms).
> >
> >Not using indexes for calllogs(from)... should I?  The values for
> >calllogs(from) are not unique (sorry if I'm misunderstanding your
> >point).
> >
> >Thanks,
> >
> >Chris
> >
> >On Tue, 29 Jun 2004 16:21:01 +0800, Christopher Kings-Lynne
> ><[EMAIL PROTECTED]> wrote:
> >
> >
> >>>live=# explain analyze SELECT id FROM calllogs WHERE from = 'you';
> >>>                                        QUERY PLAN
> >>>----------------------------------------------------------------------------------------------------------
> >>> Seq Scan on calllogs  (cost=0.00..136.11 rows=24 width=4) (actual
> >>>time=0.30..574.72 rows=143485 loops=1)
> >>>   Filter: (from = 'you'::character varying)
> >>> Total runtime: 676.24 msec
> >>>(3 rows)
> >>>
> >>>
> >>Have you got an index on calllogs(from)?
> >>
> >>Have you vacuumed and analyzed that table recently?
> >>
> >>Chris
> >>
> >>
> >>
> >>
> >
> 
>



Re: [PERFORM] postgres 7.4 at 100%

2004-06-30 Thread Christopher Kings-Lynne
> I see - thanks very much. I created an index for column 'oid' which I
> was using in a WHERE.  So rule of thumb- create an index for column(s)
> which I use in WHERE queries.
So to speak.  They can also sometimes assist in sorting.  The OID column 
is special.  I suggest adding a unique index to that column.  In 
postgresql it is _possible_ for the oid counter to wrap around, hence if 
you rely on oids (not necessarily a good idea), it's best to put a 
unique index on the oid column.
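A minimal sketch of that suggestion, using the table name from this thread:

  CREATE UNIQUE INDEX calllogs_oid ON calllogs (oid);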

I _strongly_ suggest that you read this:
http://www.postgresql.org/docs/7.4/static/indexes.html
Chris


Re: [PERFORM] postgres 7.4 at 100%

2004-06-30 Thread Chris Cheston
I see - thanks very much. I created an index for column 'oid' which I
was using in a WHERE.  So rule of thumb- create an index for column(s)
which I use in WHERE queries.

Thanks,
Chris

On Wed, 30 Jun 2004 15:30:52 +0800, Christopher Kings-Lynne
<[EMAIL PROTECTED]> wrote:
> 
> 
> > Why would I or would I not create multiple indexes in a table? I
> > created another index in the same table and it's improved performance
> > even more.
> 
> You create indexes when you need indexes.  Indexes are most helpful when
> they match the WHERE clause of your selects.
> 
> So, if you commonly do one query that selects on one column, and another
> query that selects on two other columns - then create one index on the
> first column and another index over the second two columns.
> 
> Chris
>



Re: [PERFORM] postgres 7.4 at 100%

2004-06-30 Thread Christopher Kings-Lynne

> Why would I or would I not create multiple indexes in a table? I
> created another index in the same table and it's improved performance
> even more.
You create indexes when you need indexes.  Indexes are most helpful when 
they match the WHERE clause of your selects.

So, if you commonly do one query that selects on one column, and another 
query that selects on two other columns - then create one index on the 
first column and another index over the second two columns.
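As a sketch (column names here are purely illustrative, and "from"/"to" need 
double quotes because they are reserved words):

  -- matches queries filtering on the single column:
  CREATE INDEX calllogs_from ON calllogs ("from");

  -- matches queries filtering on the other two columns together:
  CREATE INDEX calllogs_to_status ON calllogs ("to", status);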

Chris


Re: [PERFORM] postgres 7.4 at 100%

2004-06-30 Thread Chris Cheston
Oh my, creating an index has absolutely reduced the time it takes to
query from around 700 ms to less than 1 ms!

Thanks so much for all your help.  You've saved me!

One question:

Why would I or would I not create multiple indexes in a table? I
created another index in the same table and it's improved performance
even more.

Thanks,
Chris

On Tue, 29 Jun 2004 09:03:24 -0700, Gavin M. Roy <[EMAIL PROTECTED]> wrote:
> 
> Is the from field nullable?  If not, try "create index calllogs_from on
> calllogs ( from );" and then do an explain analyze of your query.
> 
> Gavin
> 
> 
> 
> Chris Cheston wrote:
> 
> >ok i just vacuumed it and it's taking slightly longer now to execute
> >(only about 8 ms longer, to around 701 ms).
> >
> >Not using indexes for calllogs(from)... should I?  The values for
> >calllogs(from) are not unique (sorry if I'm misunderstanding your
> >point).
> >
> >Thanks,
> >
> >Chris
> >
> >On Tue, 29 Jun 2004 16:21:01 +0800, Christopher Kings-Lynne
> ><[EMAIL PROTECTED]> wrote:
> >
> >
> >>>live=# explain analyze SELECT id FROM calllogs WHERE from = 'you';
> >>>                                        QUERY PLAN
> >>>----------------------------------------------------------------------------------------------------------
> >>> Seq Scan on calllogs  (cost=0.00..136.11 rows=24 width=4) (actual
> >>>time=0.30..574.72 rows=143485 loops=1)
> >>>   Filter: (from = 'you'::character varying)
> >>> Total runtime: 676.24 msec
> >>>(3 rows)
> >>>
> >>>
> >>Have you got an index on calllogs(from)?
> >>
> >>Have you vacuumed and analyzed that table recently?
> >>
> >>Chris
> >>
> >>
> >>
> >>
> >
> 
>



Re: [PERFORM] postgres 7.4 at 100%

2004-06-29 Thread Gavin M. Roy
Is the from field nullable?  If not, try "create index calllogs_from on 
calllogs ( from );" and then do an explain analyze of your query.
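Spelled out as a sketch (note that if the column really is named "from" it
has to be double-quoted, since FROM is a reserved word):

  CREATE INDEX calllogs_from ON calllogs ("from");
  ANALYZE calllogs;
  EXPLAIN ANALYZE SELECT id FROM calllogs WHERE "from" = 'you';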

Gavin
Chris Cheston wrote:

> ok i just vacuumed it and it's taking slightly longer now to execute
> (only about 8 ms longer, to around 701 ms).
>
> Not using indexes for calllogs(from)... should I?  The values for
> calllogs(from) are not unique (sorry if I'm misunderstanding your
> point).
>
> Thanks,
> Chris
>
> On Tue, 29 Jun 2004 16:21:01 +0800, Christopher Kings-Lynne
> <[EMAIL PROTECTED]> wrote:
>
>> live=# explain analyze SELECT id FROM calllogs WHERE from = 'you';
>>                                           QUERY PLAN
>> ----------------------------------------------------------------------------------------------------------
>>  Seq Scan on calllogs  (cost=0.00..136.11 rows=24 width=4) (actual
>> time=0.30..574.72 rows=143485 loops=1)
>>    Filter: (from = 'you'::character varying)
>>  Total runtime: 676.24 msec
>> (3 rows)
>
>> Have you got an index on calllogs(from)?
>>
>> Have you vacuumed and analyzed that table recently?
>>
>> Chris




Re: [PERFORM] postgres 7.4 at 100%

2004-06-29 Thread Tom Lane
Chris Cheston <[EMAIL PROTECTED]> writes:
> Wow, this simple query is taking 676.24 ms to execute! It only takes
> 18 ms on our other machine.

> This table has 150,000 rows. Is this normal?

> live=# explain analyze SELECT id FROM calllogs WHERE from = 'you';
>                                           QUERY PLAN
> ----------------------------------------------------------------------------------------------------------
>  Seq Scan on calllogs  (cost=0.00..136.11 rows=24 width=4) (actual
> time=0.30..574.72 rows=143485 loops=1)
>Filter: (from = 'you'::character varying)
>  Total runtime: 676.24 msec
> (3 rows)

So the query is pulling 140K+ rows out of a table with 150K entries?
No chance that an index will help for that.  You're fortunate that the
thing did not try to use an index though, because it thinks there are
only 24 rows matching 'you', which is one of the more spectacular
statistical failures I've seen lately.  I take it you haven't ANALYZEd
this table in a long time?

It is hard to believe that your other machine can pull 140K+ rows in
18 msec, though.  Are you sure the table contents are the same in both
cases?

If they are, the only reason I can think of for the discrepancy is a
large amount of dead space in this copy of the table.  What does VACUUM
VERBOSE show for it, and how does that compare to what you see on the
other machine?  Try a CLUSTER or VACUUM FULL to see if you can shrink
the table's physical size (number of pages).
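In 7.4 syntax those checks might look like this (the index name assumes the
calllogs_from index suggested earlier in the thread):

  VACUUM VERBOSE calllogs;            -- report page and dead-tuple counts
  VACUUM FULL VERBOSE calllogs;       -- compact the table in place
  CLUSTER calllogs_from ON calllogs;  -- or rewrite it in index order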

regards, tom lane



Re: [PERFORM] postgres 7.4 at 100%

2004-06-29 Thread Bruno Wolff III
On Tue, Jun 29, 2004 at 01:37:30 -0700,
  Chris Cheston <[EMAIL PROTECTED]> wrote:
> ok i just vacuumed it and it's taking slightly longer now to execute
> (only about 8 ms longer, to around 701 ms).
> 
> Not using indexes for calllogs(from)... should I?  The values for
> calllogs(from) are not unique (sorry if I'm misunderstanding your
> point).

If you are hoping for some plan other than a sequential scan through
all of the records, you are going to need an index. You can have an
index on a column (or function) that isn't unique for all rows.
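For example (a sketch only; lower() simply stands in for whatever function
the queries actually use):

  CREATE INDEX calllogs_from_lower ON calllogs (lower("from"));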



Re: [PERFORM] postgres 7.4 at 100%

2004-06-29 Thread Chris Cheston
ok i just vacuumed it and it's taking slightly longer now to execute
(only about 8 ms longer, to around 701 ms).

Not using indexes for calllogs(from)... should I?  The values for
calllogs(from) are not unique (sorry if I'm misunderstanding your
point).

Thanks,

Chris

On Tue, 29 Jun 2004 16:21:01 +0800, Christopher Kings-Lynne
<[EMAIL PROTECTED]> wrote:
> 
> > live=# explain analyze SELECT id FROM calllogs WHERE from = 'you';
> >                                         QUERY PLAN
> > ----------------------------------------------------------------------------------------------------------
> >  Seq Scan on calllogs  (cost=0.00..136.11 rows=24 width=4) (actual
> > time=0.30..574.72 rows=143485 loops=1)
> >Filter: (from = 'you'::character varying)
> >  Total runtime: 676.24 msec
> > (3 rows)
> 
> Have you got an index on calllogs(from)?
> 
> Have you vacuumed and analyzed that table recently?
> 
> Chris
> 
>



Re: [PERFORM] postgres 7.4 at 100%

2004-06-29 Thread Christopher Kings-Lynne
> live=# explain analyze SELECT id FROM calllogs WHERE from = 'you';
>                                           QUERY PLAN
> ----------------------------------------------------------------------------------------------------------
>  Seq Scan on calllogs  (cost=0.00..136.11 rows=24 width=4) (actual
> time=0.30..574.72 rows=143485 loops=1)
>    Filter: (from = 'you'::character varying)
>  Total runtime: 676.24 msec
> (3 rows)
Have you got an index on calllogs(from)?
Have you vacuumed and analyzed that table recently?
Chris


Re: [PERFORM] postgres 7.4 at 100%

2004-06-29 Thread Chris Cheston
Wow, this simple query is taking 676.24 ms to execute! It only takes
18 ms on our other machine.

This table has 150,000 rows. Is this normal?

No, the machine is not running software RAID.  Anyone have any ideas
as to what I should do next to debug this? I'm really wondering if the
SMP Linux kernel is the cause.

Thanks,
Chris

live=# explain analyze SELECT id FROM calllogs WHERE from = 'you';
                                          QUERY PLAN
----------------------------------------------------------------------------------------------------------
 Seq Scan on calllogs  (cost=0.00..136.11 rows=24 width=4) (actual
time=0.30..574.72 rows=143485 loops=1)
   Filter: (from = 'you'::character varying)
 Total runtime: 676.24 msec
(3 rows)

explain analyze for inserts is fast too.


On Mon, 28 Jun 2004 09:47:59 -0700, Josh Berkus <[EMAIL PROTECTED]> wrote:
> 
> Tom,
> 
> > So while he surely should not go back to 40, it seems there's another
> > factor involved here that we've not recognized yet.
> 
> I'd agree.  Actually, the first thing I'd do, were it my machine, is reboot it
> and run memtest86 overnight.  CPU thrashing like that may indicate bad RAM.
> 
> If the RAM checks out, I'd like to see the EXPLAIN ANALYZE for some of the
> longest-running queries, and for those INSERTS.
> 
> Also, is the new machine running Software RAID?
> 
> --
> Josh Berkus
> Aglio Database Solutions
> San Francisco
> 
>



Re: [PERFORM] postgres 7.4 at 100%

2004-06-28 Thread Josh Berkus
Frank,

> I understand tuning PG is almost an art form, yet it should be based on
> actual usage patterns, not just on system dimensions, don't you agree?
 
Well, it's both.   It's more that available RAM determines your *upper* limit; 
that is, on Linux, you don't really want to have more than 20% allocated to 
the shared_buffers or you'll be taking memory away from the kernel.   

Within that limit, data size, query complexity and volume, and whether or not 
you have long-running procedures tell you whether you're at the low end or 
the high end.

To further complicate things, these calculations are all going to change with 
7.5.

-- 
-Josh Berkus
 Aglio Database Solutions
 San Francisco




Re: [PERFORM] postgres 7.4 at 100%

2004-06-28 Thread Frank Knobbe
On Mon, 2004-06-28 at 14:40, Josh Berkus wrote:
> As one of the writers of that article, let me point out:
> 
> " -- Medium size data set and 256-512MB available RAM: 16-32MB (2048-4096) 
> -- Large dataset and lots of available RAM (1-4GB): 64-256MB (8192-32768) "
> 
> While this is probably a little conservative, it's still way bigger than 40.

I agree that 40 is a bit weak :)  Chris' system has only 512 MB of RAM
though. I thought the quick response "..for any kind of production
server, try 5000-10000..." -- without considering how much memory he has
-- was a bit... uhm... eager.

Besides, if the shared memory is used to queue client requests,
shouldn't that memory be sized according to workload (i.e. amount of
clients, transactions per second, etc) instead of just taking a
percentage of the total amount of memory? If there are only a few
connections, why waste shared memory on that when the memory could be
better used as file system cache to prevent PG from going to the disk so
often? 

I understand tuning PG is almost an art form, yet it should be based on
actual usage patterns, not just on system dimensions, don't you agree?

Regards,
Frank








Re: [PERFORM] postgres 7.4 at 100%

2004-06-28 Thread Josh Berkus
Frank,

> Doug said the same, yet the PG Tuning article recommends not making this
> too large, as it is just temporarily used by the query queue or so. (I
> guess the system would benefit from using more memory for file system cache)

As one of the writers of that article, let me point out:

" -- Medium size data set and 256-512MB available RAM: 16-32MB (2048-4096) 
-- Large dataset and lots of available RAM (1-4GB): 64-256MB (8192-32768) "

While this is probably a little conservative, it's still way bigger than 40.

I would disagree with the folks who suggest 32,000 as a setting for you.   On 
Linux, that's a bit too large; I've never seen performance improvements with 
shared_buffers greater than 18% of *available* RAM.
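As a rough worked example for the 512 MB box in this thread: 18% of 512 MB
is about 92 MB, which at 8 KB per buffer is roughly 11800 shared_buffers --
an upper bound, not a target.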

-- 
-Josh Berkus
 Aglio Database Solutions
 San Francisco




Re: [PERFORM] postgres 7.4 at 100%

2004-06-28 Thread Josh Berkus
Tom,

> So while he surely should not go back to 40, it seems there's another
> factor involved here that we've not recognized yet.

I'd agree.  Actually, the first thing I'd do, were it my machine, is reboot it 
and run memtest86 overnight.  CPU thrashing like that may indicate bad RAM.

If the RAM checks out, I'd like to see the EXPLAIN ANALYZE for some of the 
longest-running queries, and for those INSERTS.

Also, is the new machine running Software RAID?

-- 
Josh Berkus
Aglio Database Solutions
San Francisco



Re: [PERFORM] postgres 7.4 at 100%

2004-06-27 Thread Tom Lane
Frank Knobbe <[EMAIL PROTECTED]> writes:
> On Sun, 2004-06-27 at 00:33, Christopher Kings-Lynne wrote:
>>> I upped effective_cache to 16000 KB and I could only up the
>>> shared_buffers to 3000.  Anything more and postgres would not start.

>> You need to greatly increase the shared memory max setting on your
>> machine so that you can use, at the very least, 10000 shared buffers.

> Doug said the same, yet the PG Tuning article recommends not making this
> too large, as it is just temporarily used by the query queue or so.

The original report was that the guy had it set to 40 (!?), which is
clearly far below the minimum reasonable value.  But I'd not expect a
huge difference between 3000 and 10000 --- in my experience, 1000 is
enough to get you over the "knee" of the performance curve and into
the domain of marginal improvements.

So while he surely should not go back to 40, it seems there's another
factor involved here that we've not recognized yet.

regards, tom lane



Re: [PERFORM] postgres 7.4 at 100%

2004-06-27 Thread Frank Knobbe
On Sun, 2004-06-27 at 00:33, Christopher Kings-Lynne wrote:
> > I upped effective_cache to 16000 KB and I could only up the
> > shared_buffers to 3000.  Anything more and postgres would not start.
> 
> You need to greatly increase the shared memory max setting on your
> machine so that you can use, at the very least, 10000 shared buffers.


Doug said the same, yet the PG Tuning article recommends not making this
too large, as it is just temporarily used by the query queue or so. (I
guess the system would benefit from using more memory for file system cache)

So who is correct? The tuning article or big-honking-shared-mem
proponents?

FWIW: I have a box with 512 MB RAM and see no difference between 4096 and
32758 shared buffers...

Regards,
Frank





Re: [PERFORM] postgres 7.4 at 100%

2004-06-26 Thread Christopher Kings-Lynne
> I upped effective_cache to 16000 KB and I could only up the
> shared_buffers to 3000.  Anything more and postgres would not start.

You need to greatly increase the shared memory max setting on your
machine so that you can use, at the very least, 10000 shared buffers.
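A sketch of the two pieces involved (values illustrative for a 512 MB Linux
box running 7.4, not a recommendation):

  # /etc/sysctl.conf -- raise the kernel shared memory ceiling,
  # e.g. to 128 MB, then apply it with `sysctl -p`:
  kernel.shmmax = 134217728

  # postgresql.conf -- larger settings will then start:
  shared_buffers = 10000          # 8 KB pages, so about 80 MB
  effective_cache_size = 16000    # 8 KB pages; a planner hint, allocates nothing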

Chris


Re: [PERFORM] postgres 7.4 at 100%

2004-06-26 Thread Chris Cheston
Hi all,

I upped effective_cache to 16000 KB and I could only up the
shared_buffers to 3000.  Anything more and postgres would not start.

Postmaster is still using lots of CPU.  pg_stat_activity shows only
one query happening at a time, so the requests are probably queueing on
this one connection. Is this the right way to go?
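As an aside, something like this shows just the busy backends (assuming
stats_command_string is on, which 7.4 requires before current_query is
populated):

  SELECT procpid, query_start, current_query
  FROM pg_stat_activity
  WHERE current_query NOT IN ('', '<IDLE>');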

Any other suggestions for me to figure out why Postmaster is using so much CPU?

Thanks in advance,
Chris

numnet=# select * from pg_stat_activity;
 datid | datname | procpid | usesysid | usename | current_query | query_start
-------+---------+---------+----------+---------+---------------+-------------------------------
 17144 | numnet  |   26120 |      103 | numnet  |               | 2004-06-26 18:01:24.02042-04
 17144 | numnet  |   26121 |      103 | numnet  |               | 2004-06-26 18:01:24.026025-04
 17144 | numnet  |   26122 |      103 | numnet  |               | 2004-06-26 18:01:24.030917-04
 17144 | numnet  |   26123 |      103 | numnet  |               | 2004-06-26 18:01:24.036266-04
 17144 | numnet  |   26124 |      103 | numnet  |               | 2004-06-26 18:01:24.041551-04
 17144 | numnet  |   26125 |      103 | numnet  |               | 2004-06-26 18:01:24.046449-04
 17144 | numnet  |   26126 |      103 | numnet  |               | 2004-06-26 18:01:24.051666-04
 17144 | numnet  |   26127 |      103 | numnet  |               | 2004-06-26 18:01:24.057398-04
 17144 | numnet  |   26128 |      103 | numnet  |               | 2004-06-26 18:01:24.06225-04
 17144 | numnet  |   26129 |      103 | numnet  | SELECT id,name,number,systemid,pin,last,to,from,lastest,start,end,continue,type,status,duration FROM logs WHERE (((from= 'me') and (to= 'you')) and (serverip= '23.6.6.33 |
 17144 | numnet  |   26147 |      103 | numnet  |               | 2004-06-26 18:03:46.175789-04
(11 rows)




- Original Message -
From: Joshua D. Drake <[EMAIL PROTECTED]>
Date: Sat, 26 Jun 2004 07:11:49 -0700
Subject: Re: [PERFORM] postgres 7.4 at 100%
To: Doug McNaught <[EMAIL PROTECTED]>
Cc: [EMAIL PROTECTED]


Hello,

Not to mention upping your effective_cache.

Re: [PERFORM] postgres 7.4 at 100%

2004-06-26 Thread Joshua D. Drake




Hello,

Not to mention upping your effective_cache.

Doug McNaught wrote:

> Chris Cheston <[EMAIL PROTECTED]> writes:
>
>> shared_buffers = 40 # min 16, at least max_connections*2, 8KB each
>
> This is ridiculously low for any kind of production server.  Try
> something like 5000-10000 for a start.
>
> -Doug



-- 
Command Prompt, Inc., home of Mammoth PostgreSQL - S/ODBC and S/JDBC
Postgresql support, programming shared hosting and dedicated hosting.
+1-503-667-4564 - [EMAIL PROTECTED] - http://www.commandprompt.com
PostgreSQL Replicator -- production quality replication for PostgreSQL




Re: [PERFORM] postgres 7.4 at 100%

2004-06-26 Thread Doug McNaught
Chris Cheston <[EMAIL PROTECTED]> writes:

> shared_buffers = 40 # min 16, at least max_connections*2, 8KB each

This is ridiculously low for any kind of production server.  Try
something like 5000-10000 for a start.

-Doug




Re: [PERFORM] postgres 7.4 at 100%

2004-06-26 Thread Chris Cheston
Hi Richard,
Thanks so much for replying.  Pls see below.
Thanks in advance for any advice,
Chris

> People are going to need more information. Are you talking about
> CPU/disk IO/memory?

## CPU is at 100%.

> 
> > My settings are default on both boxes I think.
> 
> Doubtful - PG crawls with the default settings. Check your old
> postgresql.conf file and compare. Also, read the tuning article at:
>   http://www.varlena.com/varlena/GeneralBits/Tidbits/index.php
> 
> > There are only about 20 inserts per second, which is really low.
> > Anyone have any ideas as to something I have to do to Postgres 7.4 to
> > change it from the default so that it's not eating up all my CPU?  I
> > have no clue how to debug this...
> 
> What does top/vmstat/iostat show during heavy usage?
> 
TOP:

137 processes: 135 sleeping, 2 running, 0 zombie, 0 stopped
CPU states:  81.4% user  17.9% system   0.0% nice   0.0% iowait   0.5% idle
Mem:   507036k av,  500244k used,    6792k free,       0k shrd,   68024k buff
                    133072k active,  277368k inactive
Swap:  787176k av,   98924k used,  688252k free                  232500k cached

  PID USER      PRI  NI  SIZE  RSS SHARE STAT %CPU %MEM   TIME CPU COMMAND
20734 postgres   15   0  4028 4028  3244 R    83.6  0.7 348:03   0 postmaster
21249 numnet      9   0 78060  76M  8440 S     7.5 15.3  32:51   0 myapp
18478 user       12   0  1224 1224   884 S     5.7  0.2  57:01   0 top

[EMAIL PROTECTED] root]# vmstat
   procs                      memory    swap          io     system         cpu
 r  b  w   swpd   free   buff  cache   si   so   bi   bo   in   cs   us  sy  id
 1  0  0  98924   5980  68024 23352843 622   2810 13  6 24

iostat:

Time: 03:18:18 AM
Device:    tps  kB_read/s  kB_wrtn/s  kB_read  kB_wrtn
dev3-0   11.00       0.80     142.40        4      712

Time: 03:18:23 AM
Device:    tps  kB_read/s  kB_wrtn/s  kB_read  kB_wrtn
dev3-0   10.60       0.00     143.20        0      716

or blocks:

Time: 03:20:58 AM
Device:    tps  Blk_read/s  Blk_wrtn/s  Blk_read  Blk_wrtn
dev3-0   25.40        3.20      756.80        16      3784

Time: 03:21:03 AM
Device:    tps  Blk_read/s  Blk_wrtn/s  Blk_read  Blk_wrtn
dev3-0   30.20        3.20      841.60        16      4208

extended:

Device:    rrqm/s  wrqm/s   r/s    w/s  rsec/s  wsec/s  rkB/s   wkB/s  avgrq-sz  avgqu-sz  await  svctm  %util
/dev/hda3    0.00   79.20  0.60  30.60    4.80  878.40   2.40  439.20     28.31      0.36   1.15   0.45   1.40

avg-cpu:  %user  %nice   %sys  %idle
          31.00   0.00  18.20  50.80

Device:    rrqm/s  wrqm/s   r/s    w/s  rsec/s  wsec/s  rkB/s   wkB/s  avgrq-sz  avgqu-sz  await  svctm  %util
/dev/hda3    0.00   45.80  0.00  10.80    0.00  452.80   0.00  226.40     41.93      0.08   0.74   0.37   0.40

avg-cpu:  %user  %nice   %sys  %idle
          83.20   0.00  16.60   0.20

Device:    rrqm/s  wrqm/s   r/s    w/s  rsec/s  wsec/s  rkB/s   wkB/s  avgrq-sz    avgqu-sz  await  svctm   %util
/dev/hda     0.00   28.20  0.00  10.10    0.00  315.20   0.00  157.60     31.21  4294917.33   0.30  99.01  100.00
/dev/hda1    0.00    0.00  0.00   0.00    0.00    0.00   0.00    0.00      0.00        0.00   0.00   0.00    0.00
/dev/hda2    0.00    0.00  0.00   0.00    0.00    0.00   0.00    0.00      0.00        0.00   0.00   0.00    0.00
/dev/hda3    0.00   28.20  0.00  10.10    0.00  315.20   0.00  157.60     31.21        0.03   0.30   0.20    0.20

My conf file:

[EMAIL PROTECTED] setup]# cat /var/lib/pgsql/data/postgresql.conf
# -
# PostgreSQL configuration file
# -
#
# This file consists of lines of the form:
#
#   name = value
#
# (The '=' is optional.) White space may be used. Comments are introduced
# with '#' anywhere on a line. The complete list of option names and
# allowed values can be found in the PostgreSQL documentation. The
# commented-out settings shown in this file represent the default values.
#
# Any option can also be given as a command line switch to the
# postmaster, e.g. 'postmaster -c log_connections=on'. Some options
# can be changed at run-time with the 'SET' SQL command.
#
# This file is read on postmaster startup and when the postmaster
# receives a SIGHUP. If you edit the file on a running system, you have 
# to SIGHUP the postmaster for the changes to take effect, or use 
# "pg_ctl reload".


#---
# CONNECTIONS AND AUTHENTICATION
#---

# - Connection Settings -

tcpip_socket = true
max_connections = 20
# note: increasing max_connections costs about 500 bytes of shared
# memory per connection slot, in addition to costs from shared_buffers
# and max_locks_per_transaction.
#superuser_reserved_connections = 2
#port = 5432
#unix

Re: [PERFORM] postgres 7.4 at 100%

2004-06-25 Thread Richard Huxton
Chris Cheston wrote:

> Hi all,
> I was running Postgres 7.3 and it was running at about 15% with my
> application. On Postgres 7.4 on another box, it was running at 100%...

People are going to need more information. Are you talking about
CPU/disk IO/memory?

> My settings are default on both boxes I think.

Doubtful - PG crawls with the default settings. Check your old
postgresql.conf file and compare. Also, read the tuning article at:
 http://www.varlena.com/varlena/GeneralBits/Tidbits/index.php

> There are only about 20 inserts per second, which is really low.
> Anyone have any ideas as to something I have to do to Postgres 7.4 to
> change it from the default so that it's not eating up all my CPU?  I
> have no clue how to debug this...

What does top/vmstat/iostat show during heavy usage?

> Help please  Should I downgrade to 7.3 to see what happens?  BTW
> I'm running Postgres 7.3.2 on:
> Linux box 2.4.25-040218 #1 SMP Wed Feb 18 17:59:29 CET 2004 i686 i686
> i386 GNU/Linux
> on a single processor P4 1.4GHz, 512 MB RAM.  Does the SMP kernel do
> something with the single processor CPU? or should this not affect
> psql?

Don't know about the SMP thing. Unlikely that one of the big
distributions would mess that up much though.

--
  Richard Huxton
  Archonet Ltd