On Sun, 6 Jul 2003, Martin Foster wrote:

> Shridhar Daithankar wrote:
> > 
> > It gives a hint to postgresql about how much file system cache is 
> > available in the system. 
> > 
> > You have 1GB memory and your application requirement does not exceed 
> > 400MB, so the OS can use roughly 600MB for file system cache. In that 
> > case you can set this parameter to around 400MB to leave room for other 
> > applications in the FS cache.
> > 
> > IIRC, BSD needs sysctl tuning to make more memory available for the FS 
> > cache; otherwise it maxes out at 300MB.
> > 
> > Roughly, this setting should be (total memory - application 
> > requirement) * 0.7 to 0.8.
> > 
> > I guess the high kernel load you are seeing is due to increased 
> > interaction between postgresql and the OS when data is swapped to and 
> > from shared memory. If the OS cache does well, postgresql should reduce 
> > this interaction as well.
> > 
> > 
> > BTW, since you have IDE disks, heavy disk activity can eat CPU as well. Is 
> > your disk bandwidth totally maxed out? Check with vmstat or whatever 
> > equivalent you have on BSD.
> > 
> >  Shridhar
> > 
> > 
> > ---------------------------(end of broadcast)---------------------------
> > TIP 6: Have you searched our list archives?
> > 
> >                http://archives.postgresql.org
> I changed the value of effective_cache_size to 512, which seems 
> interesting.  The database restarted without any problems and load 
> averages seem to be a bit lower as a result.

I would try a few things.  First off, effective_cache_size is measured in 
8k blocks, so 512 would be a setting of only 4 megs.  Probably a little 
low.  If you average 512 megs free, that would be a setting of 65536.
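The arithmetic above can be sketched in SQL (a hedged example: the 512MB figure is taken from the paragraph above, and SET only lasts for the current session -- edit postgresql.conf for a permanent change):

```sql
-- effective_cache_size is measured in 8k blocks, so:
--   512 MB free = (512 * 1024 kB) / 8 kB per block = 65536 blocks
SET effective_cache_size = 65536;

-- confirm the new value for this session
SHOW effective_cache_size;
```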

Note that the higher the effective_cache_size, the more the planner will 
favor index scans, and the lower it is, the more it will favor sequential 
scans.

Generally speaking, index scans cost CPU time, while seq scans cost I/O 
time.

Since you're reporting low CPU usage, I'm guessing you're getting a lot of 
seq scans.

Do you have any type mismatches anywhere that could be the culprit?  Are 
you running vacuum and analyze regularly?  Any tables that are good 
candidates for clustering?
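That routine maintenance looks roughly like this (a sketch only: the index name test_pkey is hypothetical, assuming the default name for the primary key of the test table shown below):

```sql
-- Reclaim dead space and refresh the planner's statistics:
VACUUM ANALYZE;

-- Physically reorder a table along one of its indexes
-- (syntax of this era: CLUSTER indexname ON tablename):
CLUSTER test_pkey ON test;
```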

A common problem is a table like this:

create table test (info text, id int8 primary key);
insert into test values ('ted',1);
-- .. a few thousand more inserts
vacuum full;
select * from test where id=1;

will always result in a seq scan, because the bare 1 is autoconverted to 
int4, which doesn't match int8 automatically.  This query:

select * from test where id=1::int8;

will cast the 1 to an int8 so the index can be used.
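To see which plan you are actually getting, EXPLAIN shows it directly (a sketch; the exact output wording varies by version):

```sql
EXPLAIN select * from test where id=1;        -- expect a Seq Scan on test
EXPLAIN select * from test where id=1::int8;  -- expect an Index Scan
```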
