On Thursday 07 August 2003 08:05, Yaroslav Mazurak wrote:
> Hi All!
>
> Richard Huxton wrote:
> >>>On Wednesday 06 August 2003 08:34, Yaroslav Mazurak wrote:
> >>>>sort_mem = 131072
> >>>
> >>>This sort_mem value is *very* large - that's 131MB for *each sort* that
>
> It's not TOO large *for PostgreSQL*. When I'm inserting a large amount
> of data into tables, sort_mem helps. A value of 192M speeds up inserts
> significantly (verified :))!
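[For reference, a minimal sketch of the settings under discussion as they would appear in postgresql.conf. The values are the ones mentioned in this thread, shown only for illustration; in 7.3, sort_mem is specified in kilobytes and applies per sort operation, not per query or per connection:]

```ini
# postgresql.conf sketch - values taken from this thread, illustration only
# sort_mem is in KB and is allocated per sort step, so a plan with three
# "sort" nodes may claim three times this amount
sort_mem = 131072        # ~131MB per sort - very large
#sort_mem = 16384        # ~16MB - the reduced value discussed below
```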
And what about every other operation?

> >> What does "each sort" mean? Each query with a SORT clause, or some
> >> internal (invisible to the user) sorts too (I can't imagine: indexed
> >> search or whatever else)?
> >>
> >> I've reduced sort_mem to 16M.
> >
> > It means each sort - if you look at your query plan and see three "sort"
> > clauses, that means that query might allocate 48MB to sorting. Now,
> > that's good because sorting items on disk is much slower. It's bad
> > because that's 48MB less for everything else that's happening.
>
> OK, I'm preparing to fix this value. :)
> IMHO this is a lack of memory management in PostgreSQL. I think that
> PostgreSQL should be able to allocate enough memory by itself! :-E

But this parameter controls how much memory can be allocated to sorts - I
don't see how PG can figure out a reasonable maximum by itself.

> >> This is another strange behaviour of PostgreSQL - it doesn't use some
> >> created indexes (seq_scan only) after ANALYZE either. OK, I've turned
> >> this option back on.
> >
> > Fair enough, we can work on those. With 7.3.x you can tell PG to examine
> > some tables more thoroughly to get better plans.
>
> You mean EXPLAIN ANALYZE?

No - I meant altering the number of rows used to gather stats (ALTER
TABLE...SET STATISTICS) - this controls how many rows PG looks at when
deciding the "shape" of the data in the table.

[snip]

> > Don't forget that any memory PG is using, the operating system can't.
> > The OS will cache frequently accessed disk blocks for you, so it's a
> > question of finding the right balance.
>
> PostgreSQL is the primary task for me on this PC - I don't worry about
> other tasks except the OS. ;)

You still want the OS to cache your database files. If you try to allocate
too much memory to PG you will only hurt performance.

> >> Might setting effective_cache_size to 192M (48K blocks) be better? I
> >> don't understand exactly: does effective_cache_size tell PostgreSQL
> >> about the OS cache size or about available free RAM?
> >
> > It needs to reflect how much cache the system is using - try the "free"
> > command to see figures.
>
> I can't find a "free" utility on FreeBSD 4.7. :(

Sorry - I don't know what the equivalent is in FreeBSD.

> > If you could post the output of EXPLAIN ANALYSE rather than EXPLAIN,
> > I'll take a look at it this evening (London time). There's also plenty
> > of other people on this list who can help too.
>
> I'm afraid that this may be too long. :-(((
> Yesterday I re-executed my query with all the changes... after 700 (!)
> minutes the query failed with: "ERROR: Memory exhausted in
> AllocSetAlloc(104)". I don't understand: the result is actually only 8K
> rows long, but PostgreSQL failed! Why?!! The function showcalc is
> recursive, but in my query it is used at level 1 depth only (I know this
> exactly).

I must say I'm puzzled as to how this can happen. In fact, if the last
EXPLAIN output was accurate, it couldn't run out of memory, not with the
settings you've got now.

> Again: I think that this is a lack of quality memory management in
> PostgreSQL. :-(

If it's allocating all that memory (do you see the memory usage going up
in top?) then there's something funny going on.

Well sir, I can only think of two options now:
1. Simplify the query until it works, and then build it back up again -
   that should identify where the problem is.
2. If you can put together a pg_dump with a small amount of sample data, I
   can take a look at it here.

-- 
Richard Huxton
Archonet Ltd

---------------------------(end of broadcast)---------------------------
TIP 6: Have you searched our list archives?

http://archives.postgresql.org
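[In practice, option 2 above might look something like the following sketch. The database and table names are hypothetical placeholders, not taken from the thread; the flags are standard pg_dump options of this era:]

```shell
# Option 2 sketch: build a small, shareable test case.
# "mydb" and "some_table" are hypothetical placeholders.
pg_dump -s mydb > schema.sql             # schema only, no data
pg_dump -t some_table mydb > sample.sql  # one table including its data
```

[For option 1, the usual approach is to strip the failing query down inside psql - remove joins, function calls, and sort clauses one at a time under EXPLAIN ANALYSE - until the "Memory exhausted" error disappears, then re-add pieces to isolate the culprit.]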