On 6 Aug 2003 at 15:42, Yaroslav Mazurak wrote:
> >>sort_mem = 131072
> > This sort_mem value is *very* large - that's 131MB for *each sort* that gets
> > done. I'd suggest trying something in the range 1,000-10,000. What's probably
> > happening with the error above is that PG is allocating ridiculous amounts of
> > memory, the machine goes into swap, and everything eventually grinds to a
> > halt.
> What does "each sort" mean? Each query with a SORT clause, or some
> internal (invisible-to-user) sorts too (I can't imagine: indexed
> search or whatever else)?
> I've reduced sort_mem to 16MB.
Good call. I would say start with 4MB if you have time to experiment.
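To see why 131072 (131MB, since sort_mem is in 1kB units) is dangerous: the limit applies per sort or hash operation, each backend can run several at once, and many backends can be active. A back-of-envelope worst case, with purely illustrative numbers (not measured from the OP's box):

```shell
# Rough worst-case sort memory (all numbers are illustrative assumptions):
clients=32           # concurrent backends
sorts_per_query=2    # e.g. an ORDER BY plus a merge join
sort_mem_kb=4096     # the suggested 4MB setting, in kB
echo "$(( clients * sorts_per_query * sort_mem_kb / 1024 ))MB"
```

Even at the modest 4MB setting this can reach a quarter gigabyte; at 131MB per sort the same arithmetic swaps the machine to death.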
> >>enable_seqscan = false
> > Don't tinker with these in a live system, they're only really for
> > testing/debugging.
> This is another strange behaviour of PostgreSQL - it doesn't use some
> of the created indexes (seq scan only), even after ANALYZE. OK, I've
> turned this option back on.
At times the planner is correct as well - an index scan can be costlier than a
sequential scan. It does not hurt to leave this option on. If your performance
improves by turning it off, the real problem is usually somewhere else.
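If you want to see what the planner would do without sequential scans, a safer pattern is a session-local experiment rather than a postgresql.conf change (the table and column names below are made up for illustration):

```sql
-- Session-local experiment only; never set this in postgresql.conf
-- on a live system.
SET enable_seqscan = off;   -- planner avoids seq scans where possible
EXPLAIN SELECT * FROM orders WHERE customer_id = 42;  -- hypothetical table
SET enable_seqscan = on;    -- restore the default before doing real work
```

If the index-scan plan really is faster, that usually points at stale statistics or cost settings rather than a reason to keep seq scans disabled globally.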
> >>effective_cache_size = 65536
> > So you typically get about 256MB cache usage in top/free?
> No, top shows 12-20MB.
> I've reduced effective_cache_size to 4K blocks (16M?).
Are you on Linux? (I lost the OP.) Don't trust top. Use free to find out how
much truly free memory you have - look at the second line of free's output.
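For example, free's output on kernels of that era looks roughly like the sample below (the numbers are made up, not from the OP's machine). The "-/+ buffers/cache" line is the one to read, because it counts the kernel's buffers and page cache as reclaimable:

```shell
# Extract the "truly free" figure (free + buffers + cached) from sample
# `free -k` output. The sample is illustrative; on 2.4-era kernels the
# second data line is labelled "-/+ buffers/cache:".
sample='             total       used       free     shared    buffers     cached
Mem:        515588     503688      11900          0      13592     256432
-/+ buffers/cache:     233664     281924
Swap:      1048568       3856    1044712'
truly_free_kb=$(echo "$sample" | awk '/buffers\/cache/ {print $4}')
echo "${truly_free_kb} kB really available"
```

Here top would suggest only ~12MB free, while in fact ~275MB is available once cache is excluded - exactly the distinction that matters when picking effective_cache_size.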