[EMAIL PROTECTED] wrote on Sat, 06.09.2003 at 00:58:
> Hi,
>
> I'm having _serious_ issues with postgres hogging the CPU over time. A graph
> showing this can be seen at http://andri.estpak.ee/cpu0.png .
>
> The database is running on Redhat 9 (stock 2.4.20-8 kernel), on a reiserfs
> partition (~8% usage - no problem there), and this problem has shown up with
> both PostgreSQL 7.3.2 (both the package provided by Redhat and a self-rebuilt
> package) and 7.3.4 (I used the 7.3.4 SRPM available at the postgres ftp site).
>
> A VACUUM FULL is a remedy to this problem, but a simple VACUUM isn't.
Could it be that the FSM is too small for your vacuum interval? (There is
a postgresql.conf sketch at the end of this mail.)

Also, you could try running REINDEX (instead of, or in addition to, plain
VACUUM) and see if this is an index issue (example at the end of this mail).

> This can be reproduced, I think, by a simple UPDATE command:
>
> database=# EXPLAIN ANALYZE UPDATE table SET random_int_field = 1, last_updated
> = NOW() WHERE primary_key = 3772;
>                                   QUERY PLAN
> ---------------------------------------------------------------------------------------------------------------------------------
>  Index Scan using table_pkey on table  (cost=0.00..6.81 rows=1 width=83)
>    (actual time=0.09..0.10 rows=1 loops=1)
>    Index Cond: (primary_key = 3772)
>  Total runtime: 0.37 msec
>
> When I repeat this command using a simple <up><enter>, I can see the "Total
> runtime" grow ever so slightly - creeping from 0.37 to 0.38, then 0.39, etc.
> It would probably get higher if I had the patience. :)
>
> The table "table" used in this example has 2721 rows, so size isn't an issue
> here.

Due to MVCC, the raw table size (file size) can be much bigger than those
2721 rows would suggest if you don't VACUUM often enough - every UPDATE
leaves behind a dead row version that only VACUUM reclaims. (There is a
query for checking the real sizes at the end of this mail.)

> Any comments or suggestions are welcome. If more information is needed, let me
> know and I'll post the needed details.

1. What types of queries do you run, and how often?

2. How is your database tuned (postgresql.conf settings)?

3. How much memory does your machine have?

BTW, are you sure that it is postgres that is using up the CPU?

I've read that reiserfs is a CPU hog, so this may be something that does
intensive disk access; some IO stats would be useful, as well as the real
data and index file sizes.

You could also set up logging and then check whether there are some
pathological queries that run for several hours doing nested seqscans ;)
(Logging settings are sketched at the end of this mail.)

-----------------------
Hannu
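
P.S. A few concrete sketches of the above - untested and with made-up numbers,
so adjust them to your installation. For the free space map, the two relevant
lines in postgresql.conf are (a postmaster restart is needed for them to take
effect):

    max_fsm_relations = 1000     # relations tracked in the shared free space map
    max_fsm_pages = 100000       # pages of freed space that VACUUM can remember

If the FSM fills up, space freed by a plain VACUUM in the remaining pages is
simply forgotten and the table keeps growing; that would fit your observation
that VACUUM FULL helps but plain VACUUM does not.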
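
For the REINDEX, using the names from your EXPLAIN output (keep in mind that
REINDEX takes an exclusive lock on the table while it runs):

    database=# REINDEX TABLE "table";
    database=# REINDEX INDEX table_pkey;

REINDEX TABLE rebuilds all indexes on the table, so the second command is only
needed if you want to rebuild the primary key index alone.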
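
To see the real data and index sizes from inside the database (relpages and
reltuples are only refreshed by VACUUM or ANALYZE, hence the first command):

    database=# VACUUM ANALYZE "table";
    database=# SELECT relname, relpages, reltuples
    database-#   FROM pg_class
    database-#  WHERE relname IN ('table', 'table_pkey');

relpages is counted in 8 kB blocks, so 2721 rows of width 83 should only need
a few dozen pages; anything in the thousands means the table or its index is
badly bloated.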
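
And for the logging, in postgresql.conf (these are the 7.3 parameter names as
far as I remember, and a pg_ctl reload should be enough to pick them up):

    log_statement = true      # write every query text to the server log
    log_duration = true       # log how long each query took
    log_timestamp = true      # prefix log lines with a timestamp

Then grep the log for statements whose duration keeps climbing or that run far
more often than expected.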