Richard Huxton wrote:
On Wednesday 06 August 2003 08:34, Yaroslav Mazurak wrote:
sort_mem = 131072
This sort_mem value is *very* large - that's 128MB for *each sort* that gets performed.
It's not TOO large *for PostgreSQL*. When I'm inserting a large amount of data into tables, sort_mem helps. A value of 192MB speeds up inserting significantly (verified :))!
What does "each sort" mean? Each query with an ORDER BY clause, or some internal (invisible to the user) sorts too (I can't imagine which: indexed search or whatever else)?
I've reduced sort_mem to 16MB.
It means each sort - if you look at your query plan and see three "Sort" nodes, that query might allocate 48MB to sorting. Now, that's good because sorting items on disk is much slower. It's bad because that's 48MB less for everything else that's happening.
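To illustrate (the table and column names below are placeholders, and the actual figure depends on the plan), sort_mem is a per-sort-operation limit, set in kilobytes:

```sql
SET sort_mem = 16384;  -- 16MB, in KB, for *each* sort operation

-- Every "Sort" node in the resulting plan may claim up to sort_mem;
-- a plan with three Sort nodes could therefore use up to 48MB
-- in a single backend:
EXPLAIN SELECT * FROM a JOIN b ON a.id = b.a_id ORDER BY a.name;
```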
OK, I'm preparing to fix this value. :)
IMHO this shows PostgreSQL's lack of memory management. I think PostgreSQL should be able to allocate enough memory by itself! :-E
This is another strange behaviour of PostgreSQL - it doesn't use some existing indexes (seq scan only) even after ANALYZE. OK, I've turned this option back on.
Fair enough, we can work on those. With 7.3.x you can tell PG to examine some tables more thoroughly to get better plans.
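For reference, the knob in question is per-column statistics; a sketch with placeholder table and column names:

```sql
-- Tell ANALYZE to sample this column more thoroughly
-- (the default statistics target is 10):
ALTER TABLE some_table ALTER COLUMN some_column SET STATISTICS 100;
ANALYZE some_table;  -- re-gather statistics so the planner sees the change
```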
You mean EXPLAIN ANALYZE?
effective_cache_size = 65536
So you typically get about 256MB cache usage in top/free?
No, top shows 12-20MB. I've reduced effective_cache_size to 4K blocks (16MB?).
Cache size is in blocks of 8KB (usually) - it's a way of telling PG what the chances are of disk blocks being already cached by Linux.
PostgreSQL is running on FreeBSD; the memory page size is actually 4KB, though in most cases the documentation talks about 8KB... I don't know the real disk block size exactly, but I suspect it's 4KB. :)
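For what it's worth, the arithmetic is just blocks times block size, and the block size is whatever BLCKSZ PostgreSQL was compiled with (8KB in a stock build), regardless of the OS page size - an illustrative fragment:

```
# effective_cache_size is measured in disk blocks of BLCKSZ bytes
# (8KB unless PostgreSQL was compiled differently):
#    4096 blocks x 8KB = 32MB
#   65536 blocks x 8KB = 512MB
effective_cache_size = 4096
```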
I think this is an important remark. Can "JOIN" significantly reduce the performance of a SELECT statement relative to ", WHERE"? OK, I've changed the VIEW to this text:
It can sometimes. What it means is that PG will follow whatever order you write the joins in. If you know joining a to b to c is the best order, that can be a good thing. Unfortunately, it means the planner can't make a better guess based on its statistics.
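A minimal sketch of the two styles (placeholder tables):

```sql
-- Comma style: the planner chooses the join order itself.
SELECT * FROM a, b, c
 WHERE a.id = b.a_id AND b.id = c.b_id;

-- Explicit JOIN syntax: in 7.3 the planner keeps the written order
-- (a to b first, then the result to c).
SELECT * FROM a
  JOIN b ON a.id = b.a_id
  JOIN c ON b.id = c.b_id;
```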
At the moment this doesn't help. :(
Well the cost estimates look much more plausible. You couldn't post EXPLAIN ANALYSE could you? That actually runs the query.
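For clarity, the difference is one keyword: EXPLAIN prints only the planner's estimates, while EXPLAIN ANALYZE actually executes the query and adds real times and row counts next to them (the view name below is only a placeholder):

```sql
-- Estimates only - the query is not executed:
EXPLAIN SELECT * FROM some_view;

-- Runs the query, then shows "actual time" and real row counts
-- next to each estimate (spelled ANALYSE or ANALYZE - both work):
EXPLAIN ANALYZE SELECT * FROM some_view;
```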
Now (2K shared_buffers blocks, 16K effective_cache_size blocks, 16MB sort_mem) PostgreSQL uses much less memory, about 64MB... that's not good; I want to use all available RAM if possible - PostgreSQL is the main task on this PC.
Don't forget that any memory PG is using, the operating system can't. The OS will cache frequently accessed disk blocks for you, so it's a question of finding the right balance.
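One way to sketch that balance (the numbers below are purely illustrative, not a recommendation for this machine):

```
# Hypothetical 512MB machine where PostgreSQL is the main task:
shared_buffers       = 8192    # 8192 x 8KB = 64MB held by PostgreSQL itself
sort_mem             = 16384   # 16MB *per sort* - multiply by concurrent sorts
effective_cache_size = 32768   # 32768 x 8KB = 256MB - an estimate of how much
                               # the OS will cache on PostgreSQL's behalf
```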
PostgreSQL is the primary task for me on this PC - I don't worry about any tasks other than the OS. ;)
Would setting effective_cache_size to 192MB (48K blocks) be better? I don't understand exactly: does effective_cache_size tell PostgreSQL the OS cache size, or the available free RAM?
It needs to reflect how much cache the system is using - try the "free" command to see figures.
I couldn't find a "free" utility on FreeBSD 4.7. :(
If you could post the output of EXPLAIN ANALYSE rather than EXPLAIN, I'll take a look at it this evening (London time). There's also plenty of other people on this list who can help too.
I'm afraid that this may be too long. :-(((
Yesterday I re-executed my query with all the changes... after 700 (!) minutes the query failed with: "ERROR: Memory exhausted in AllocSetAlloc(104)".
I don't understand: the result is actually only 8K rows, but PostgreSQL failed! Why?!! The function showcalc is recursive, but in my query it is used with depth 1 only (I know this for certain).
Again: I think this shows PostgreSQL's lack of quality memory management. :-(
- Richard Huxton
With best regards Yaroslav Mazurak.