On Thu, 2003-12-04 at 09:12, Rob Fielding wrote:
> > I've tweaked shared buffers to 8192, pushed sort memory to 2048, vacuum
> > memory to 8192, and effective cache size to 10000.
> > /proc/sys/kernel/shmmax is set to 1600000000 and /proc/sys/fs/file-max
> > is set to 65536. Ulimit -n 3192.
> Your shared memory is too high, and not even being used effectively. Your
> other settings are too low.
> Ball park guessing here, but I'd say first read (and understand) this:
I've read it many times, understanding is slower :-)
> Then make shared memory about 10-20% available ram, and set:
> ((shmmax/1024) - ( 14.2 * max_connections ) - 250 ) / 8.2 = shared_buffers
> decrease random_page_cost to 0.3 and whack up sort mem by 16 times,
> effective cache size to about 50% RAM (depending on your other settings)
> and try that for starters.
Following this, I've done:
- set shared memory to 15% of available RAM
- applied the formula above (divide by 8.2) to get shared_buffers
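To make the arithmetic concrete, here's the formula quoted above worked through with made-up inputs (2 GB of RAM, 15% of it given to shared memory, 32 connections -- these are assumptions for illustration, not figures from this thread):

```python
# Hypothetical inputs -- substitute your own RAM size and max_connections.
ram_bytes = 2 * 1024**3            # assume 2 GB of physical RAM
shmmax = int(ram_bytes * 0.15)     # 15% of RAM for shared memory, in bytes
max_connections = 32               # assumed postgresql.conf setting

# The formula from the quoted advice:
# ((shmmax/1024) - (14.2 * max_connections) - 250) / 8.2 = shared_buffers
shared_buffers = ((shmmax / 1024) - (14.2 * max_connections) - 250) / 8.2
print(int(shared_buffers))         # → 38276
```

With those inputs you'd set shared_buffers to roughly 38000 (8 KB pages), i.e. about 300 MB of shared memory.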
Performance is unchanged for the 18M job -- pg continues to use ~285-300M,
and system load and memory usage stay the same. I killed that, deleted
from the affected tables, inserted a 6M job, and started a
vacuumdb --analyze. It's been running for 20 minutes now...
Getting the SQL query better optimized for PG is on my to-do list, but
not something I can do right now -- this application is designed to be
cross-platform with MS-SQL, PG, and Oracle, so tweaking SQL is a touchy
subject.
The pgavd conversation is intriguing, but I don't really understand the
role of vacuuming. Would this be a correct statement: "PG needs to
regularly re-evaluate the database in order to adjust itself?" I'm
imagining that it continues to treat the table as a small one until
vacuum informs it that the table is now large?
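That's roughly the right intuition: the planner relies on statistics that only change when ANALYZE runs, so until then it keeps planning as if the table were still small. A small sketch of the idea, using SQLite as a stand-in for PostgreSQL (the mechanism differs, but both planners depend on ANALYZE-refreshed statistics; the table and index names are made up for the demo):

```python
# Illustration only: SQLite used as a stand-in for PostgreSQL, since both
# planners rely on statistics that are refreshed only when ANALYZE runs.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE jobs (id INTEGER PRIMARY KEY, payload TEXT)")
conn.execute("CREATE INDEX jobs_payload ON jobs (payload)")
conn.executemany("INSERT INTO jobs (payload) VALUES (?)",
                 [("row %d" % i,) for i in range(10000)])

# Before ANALYZE no statistics table exists, so the planner falls back
# on built-in default assumptions about table size.
stats = conn.execute(
    "SELECT name FROM sqlite_master WHERE name = 'sqlite_stat1'").fetchall()
print("statistics before ANALYZE:", stats)        # → []

conn.execute("ANALYZE")

# Afterwards, sqlite_stat1 records the real row count, and the planner
# can tell a 10,000-row table from a tiny one.
row = conn.execute(
    "SELECT stat FROM sqlite_stat1 WHERE tbl = 'jobs'").fetchone()
print("index statistics after ANALYZE:", row[0])  # e.g. '10000 1'
```

In PostgreSQL the equivalent statistics live in pg_statistic and are what VACUUM ANALYZE (or vacuumdb --analyze) refreshes.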
Jack Coates, Lyris Technologies Applications Engineer
510-549-4350 x148, [EMAIL PROTECTED]
"Interoperability is the keyword, uniformity is a dead end."