On Friday 16 April 2004 5:12 pm, Tom Lane wrote:
> Chris Kratz <[EMAIL PROTECTED]> writes:
> > ... Or if worse comes to worse to actually kill long running
> > processes without taking down the whole db as we have had to do on
> > occasion.
>
> A quick "kill -INT" suffices to issue a query cancel, which I think is
> what you want here.  You could also consider putting an upper limit on
> how long things can run by means of statement_timeout.

Wow, that's exactly what I've been looking for.  I thought I had scoured the 
manuals, but must have missed that one.  I also need to think about 
statement_timeout; that might be a good idea to use as well.
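
For the archives, here's roughly what I plan to try (the pid and the 
timeout value below are just placeholders, untested as yet):

    # find the offending backend's pid (e.g. with ps, or pg_stat_activity),
    # then cancel just its current query -- the backend itself stays up and
    # the postmaster is untouched:
    kill -INT 12345

    # to cap how long any statement may run (value is in milliseconds,
    # 5 minutes here), a client can issue:
    #   SET statement_timeout = 300000
    # or it can be set for all sessions in postgresql.conf.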

> Those are just band-aids though.  Not sure about the underlying problem.
> Ordinarily I'd guess that the big-hog queries are causing trouble by
> evicting everything the other queries need from cache.  But since your
> database fits in RAM, that doesn't seem to hold water.

That makes some sense; perhaps there is some other cache somewhere that is 
causing the problems.  I am doing some tuning and have set the following 
items in our postgresql.conf:

shared_buffers = 4096
max_fsm_relations = 1000
max_fsm_pages = 20000
sort_mem = 2048
effective_cache_size = 64000
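
If I have the units right, those work out to roughly:

    shared_buffers        4096  * 8kB = 32MB
    sort_mem              2048kB per sort
    effective_cache_size  64000 * 8kB = 500MB   (planner estimate only)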

I believe these are the only performance-related items we've modified.  One 
thing I did today was raise effective_cache_size, since we seem to run with 
about 600M of memory available for file caches.  It used to be set much 
lower, so perhaps that was causing some of the problems.
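
The back-of-the-envelope math, assuming the default 8k page size, was:

    600MB of file cache / 8kB per page  =  ~76800 pages

so 64000 seemed like a reasonable, slightly conservative figure to give 
the planner.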

> What PG version are you running?

7.3.4 with grand hopes to move to 7.4 this summer.

>                       regards, tom lane

-- 
Chris Kratz
Systems Analyst/Programmer
VistaShare LLC
www.vistashare.com
