From: Hervé Piedvache [EMAIL PROTECTED]
Sent: Tuesday, July 13, 2004 11:42 PM
effective_cache_size = 5000000
looks like madness to me.
my (modest) understanding of this, is that
you are telling postgres to assume a 40Gb disk
cache (correct me if I am wrong).
btw, how much effect does this setting actually have on the planner?
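For reference, effective_cache_size is counted in disk pages of BLCKSZ bytes (8 kB unless PostgreSQL was compiled otherwise), which is where the 40 Gb figure comes from:

    -- 5,000,000 pages at the default 8 kB page size:
    SELECT 5000000 * 8192.0 / 1024 / 1024 / 1024 AS assumed_cache_gib;
    -- => ~38.1 GiB, i.e. the "40Gb" (decimal) worked out above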
gnari wrote:
is there a recommended procedure to estimate
the best value for effective_cache_size on a
dedicated DB server?
Rule of thumb (on Linux): on a typically loaded machine, observe the cache memory of
the machine and allocate a good chunk of it as the effective cache.
To define a good chunk of it, take into account whatever else runs on the machine; on a dedicated DB server most of the kernel cache can be counted.
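As a sketch of that rule of thumb (the cache figure and the 80% fraction below are illustrative assumptions, not numbers from this thread):

    -- Say free(1) reports ~6 GB under "cached" on a dedicated server.
    -- Counting roughly 80% of it, in 8 kB pages:
    --   6 GB * 0.8 / 8 kB = ~600000 pages
    SET effective_cache_size = 600000;  -- test per-session first; make it
                                        -- permanent in postgresql.conf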
Hervé Piedvache wrote:
In my case it's a PostgreSQL dedicated server ...
effective_cache_size = 5000000
For me, I'm giving the planner the information that the kernel is able to cache
5,000,000 disk pages in RAM
That is what? 38GB of RAM?
free
             total       used       free     shared    buffers     cached
Herve'
I forgot to ask about your hardware. How much RAM, and what's your disk
setup? CPU?
sort_mem = 512000
Huh? Sort_mem is in K. The above says that you've allocated 512MB sort
mem. Is this process the *only* thing going on on the machine?
vacuum_mem = 409600
Again, 409.6MB
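Both of those settings are specified in kilobytes, so a quick sanity check from psql (the 7.x names shown here became work_mem and maintenance_work_mem in 8.0):

    SHOW sort_mem;           -- 512000 KB = 512 MB, and that is per sort, per backend
    SET sort_mem = 16384;    -- a hypothetical, far more conservative 16 MB
    SHOW vacuum_mem;         -- 409600 KB = ~400 MB for tracking dead tuples during VACUUM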
Josh,
On Wednesday 14 July 2004 18:28, Josh Berkus wrote:
I forgot to ask about your hardware. How much RAM, and what's your disk
setup? CPU?
8 GB of RAM
Dual Intel Xeon 2.00GHz
Hard drives in SCSI RAID 5
Filesystem            Size  Used Avail Use% Mounted on
/dev/sdb6             101G   87G  8.7G  91% /usr/local/pgsql/data
/dev/sda7
Josh,
On Wednesday 14 July 2004 18:28, Josh Berkus wrote:
checkpoint_segments = 3
You should probably increase this if you have the disk space. For massive
insert operations, I've found it useful to have as much as 128 segments
(although this means about 2GB of disk space)
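The disk arithmetic, for reference (assumes the standard 16 MB WAL segment size and the 7.x rule that at most 2 * checkpoint_segments + 1 segment files are kept):

    -- checkpoint_segments = 128
    --   WAL written between forced checkpoints: 128 * 16 MB = 2 GB
    --   worst-case segment files on disk: (2*128 + 1) * 16 MB ~= 4.1 GB
    SHOW checkpoint_segments;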
Should I be concerned that my vacuum process has taken upwards of 100+
minutes to complete? I dropped all indexes before starting and also
increased the vacuum_mem before starting.
Looking at the output below, it appears that a vacuum full hasn't been done
on this table for quite some time.
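A minimal sketch of that preparation (the table name is a hypothetical stand-in; vacuum_mem is the 7.x setting):

    SET vacuum_mem = 409600;        -- in KB, so ~400 MB for dead-tuple bookkeeping
    VACUUM FULL VERBOSE mytable;    -- VERBOSE reports pages and tuples as it works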
Answered my own question. I gave up the vacuum full after 150 mins. I was
able to export to a file, vacuum full the empty table, and reimport in less
than 10 mins. I suspect the empty item pointers and the sheer number of
removable rows were causing an issue.
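A sketch of that dump-and-reload sequence in psql (table and file path are hypothetical stand-ins):

    \copy mytable TO '/tmp/mytable.dat'
    TRUNCATE TABLE mytable;         -- drops the bloated heap pages outright
    VACUUM FULL mytable;            -- near-instant on the now-empty table
    \copy mytable FROM '/tmp/mytable.dat'
    VACUUM ANALYZE mytable;         -- refresh planner statistics after the reload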
On Thu, Jul 08, 2004 at 12:19:13PM +0200, Steinar H. Gunderson wrote:
I'm trying to find out why one of my queries is so slow -- I'm primarily
using PostgreSQL 7.2 (Debian stable), but I don't really get much better
performance with 7.4 (Debian unstable). My prototype table looks like this:
Patrick,
Answered my own question. I gave up the vacuum full after 150 mins. I was
able to export to a file, vacuum full the empty table, and reimport in less
than 10 mins. I suspect the empty item pointers and the sheer number of
removable rows were causing an issue.
Yeah. If you've a table full of dead rows, dumping and reloading it is often much faster than letting VACUUM FULL grind through them.
Steinar,
- The subquery scan o12 phase outputs 1186 rows, yet 83792 are sorted. Where do the other ~82000 rows come from? And why would it take ~100ms to sort the rows at all? (In earlier tests, this was _one full second_ but somehow that seems to have improved, yet without really
Josh Berkus [EMAIL PROTECTED] writes:
- The subquery scan o12 phase outputs 1186 rows, yet 83792 are sorted. Where do the other ~82000 rows come from?
I'm puzzled by the 83792 rows as well. I've a feeling that Explain
Analyze is failing to output a step.
No, it's not missing anything.
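One hedged reading of how that can happen (my gloss, not a quote from the thread): EXPLAIN ANALYZE counts every fetch from a node, and when a sort feeds the inner side of a merge join, rows are re-fetched after mark/restore whenever the outer side has duplicate merge keys. A contrived illustration:

    -- Hypothetical tables: if the planner picks a merge join, the inner Sort
    -- node's row count includes re-fetches, so it can exceed the number of
    -- distinct tuples the sort actually holds.
    EXPLAIN ANALYZE
    SELECT *
    FROM big_outer o
    JOIN small_inner i ON o.k = i.k;  -- duplicate k values in big_outer force rescans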