Merlin Moncure [EMAIL PROTECTED] writes:
One thing that might interest you is that the penalty in 8.1 for
stats_command_string=true in this type of access pattern is very high: I
was experimenting to see if the new CPU efficiency gave me enough of a
budget to start using this. This more than …
That seems quite peculiar; AFAICS the pgstat code shouldn't be any
slower than before. At first I thought it might be because we'd
increased PGSTAT_ACTIVITY_SIZE, but actually that happened before
8.0 release, so it shouldn't be a factor in this comparison.
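For anyone reproducing the comparison, the setting under discussion lives in postgresql.conf; a minimal fragment (8.0/8.1-era syntax) looks like this:

```
# postgresql.conf (8.0/8.1-era): per-query command-string reporting.
# Each query sends a message to the stats collector while this is on,
# which is the overhead being discussed above.
stats_command_string = false
```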
Just FYI, the last time I looked at …
On Mon, 8 Aug 2005, Tom Lane wrote:
What that sounds like to me is a machine with inadequate disk I/O bandwidth.
Your earlier comment that checkpoint drives the machine into the ground
fits right into that theory, too. You said there is almost no IO-wait,
but are you sure you are measuring that …
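The measurement question is a fair one: under Linux accounting (an assumption here; the thread doesn't say which tool was used), iowait is only charged when a CPU would otherwise be idle with I/O outstanding, so a saturated box can under-report it. The share can be read from a /proc/stat "cpu" line; the sample line below is invented:

```shell
# Compute the iowait share from a "cpu" line as found in /proc/stat
# (fields: user nice system idle iowait irq softirq).  The sample line
# is invented; substitute `head -1 /proc/stat` on a live box.
printf '%s\n' 'cpu  10000 200 5000 80000 400 50 50' |
awk '/^cpu /{ total = $2+$3+$4+$5+$6+$7+$8
              printf "iowait: %.2f%% of CPU time\n", 100*$6/total }'
```

With the sample numbers this prints `iowait: 0.42% of CPU time`; note these counters are cumulative since boot, unlike the interval numbers iostat or vmstat report.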
Kari Lavikka [EMAIL PROTECTED] writes:
However, those configuration changes didn't have a significant effect on
the oprofile results. AtEOXact_CatCache consumes even more cycles.
I believe I've fixed that for 8.1.
Merlin Moncure [EMAIL PROTECTED] writes:
Relative to 8.0, I am seeing a dramatic, almost miraculous reduction in
CPU load times in 8.1devel. This is for ISAM style access patterns over
the parse/bind interface. (IOW one record at a time, 90% read, 10%
write).
Relative to commercial …
Cool --- we've done a fair amount of work on squeezing out internal
inefficiencies during this devel cycle, but it's always hard to predict
just how much anyone will notice in the real world.
Care to do some oprofile or gprof profiles to see where it's still bad?
Since release of 8.0, we …
I think I have a solution for you.
You have posted that you presently have these RAID volumes and behaviors:
sda: data (10 spindles, raid10)
sdb: xlog clog (2 spindles, raid1)
sdc: os and other stuff
Usually iostat (2 second interval) says:
avg-cpu:  %user  %nice  %sys  %iowait  %idle …
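The avg-cpu numbers themselves were trimmed from the archive, but reading such a line boils down to the comparison below (the sample figures are invented for illustration):

```shell
# Parse one iostat "avg-cpu" sample (numbers invented) and flag
# whether the box looks I/O-bound, CPU-bound, or neither.
# Columns, in order: %user %nice %sys %iowait %idle
printf '%s\n' '  2.50  0.00  1.20  45.30  51.00' |
awk '{ user=$1; sys=$3; iowait=$4
       if (iowait > 20)        print "I/O-bound: iowait=" iowait "%"
       else if (user+sys > 70) print "CPU-bound"
       else                    print "headroom left" }'
```

With the invented sample this prints `I/O-bound: iowait=45.30%`, which is the pattern the inadequate-bandwidth theory above predicts during a checkpoint.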
Kari Lavikka [EMAIL PROTECTED] writes:
samples   %        symbol name
13513390  16.0074  AtEOXact_CatCache
That seems quite odd --- I'm not used to seeing that function at the top
of a profile. What is the workload being profiled, exactly?
He is running a commit_delay of 8. Could …
Kari Lavikka [EMAIL PROTECTED] writes:
Disk configurations looks something like this:
sda: data (10 spindles, raid10)
sdb: xlog clog (2 spindles, raid1)
sdc: os and other stuff
That's definitely wrong. Put clog on the data disk. The entire point
of giving xlog its own spindle is …
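Concretely, undoing the current layout might look like the sketch below. The real paths depend on how the cluster was laid out, so temp directories stand in for the volumes here, and on a live cluster the move must happen with the postmaster stopped:

```shell
#!/bin/sh
# Sketch of moving pg_clog back beside the data, per the advice above.
# Temp directories stand in for the real volumes.
XLOG_VOL=$(mktemp -d)        # stands in for sdb (xlog, raid1)
DATA_VOL=$(mktemp -d)        # stands in for sda (data, raid10)
mkdir "$XLOG_VOL/pg_clog"    # pretend clog currently lives with xlog

mv "$XLOG_VOL/pg_clog" "$DATA_VOL/pg_clog"
# keep a symlink only if something still references the old location
ln -s "$DATA_VOL/pg_clog" "$XLOG_VOL/pg_clog"
```

The payoff is that the xlog spindle pair goes back to doing nothing but sequential WAL writes, while clog's small random reads ride along with the data array.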
Hello,
we recently upgraded our dual Xeon Dell to a brand new Sun v40z with 4
Opterons, 16 GB of memory and a MegaRAID with enough disks. OS is Debian
Sarge amd64, PostgreSQL is 8.0.3. The size of the database is something
like 80 GB and our website performs about 600 selects and several
updates/inserts …
Hi,
On Thu, 28 Jul 2005, Kari Lavikka wrote:
--8<-- Relevant rows from postgresql.conf --8<--
shared_buffers = 15000 # min 16, at least max_connections*2, 8KB each
work_mem = 1536 # min 64, size in KB
As an aside, I'd increase work_mem -- but …
effective_cache_size = 1000000 # typically 8KB each
I have this setting on PostgreSQL 7.4.8 on FreeBSD with 4 GB RAM:
effective_cache_size = 27462
So even though your machine runs Debian and you have four times as much
RAM as mine, your effective_cache_size is 36 times larger. You could
try …
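The units matter in this comparison: effective_cache_size is counted in 8 KB pages. A common rule of thumb (an assumption here, not something stated in the thread) is to budget roughly half of RAM as the OS cache; for the 4 GB FreeBSD box that works out as:

```shell
#!/bin/sh
# effective_cache_size is measured in 8 KB pages.  Budgeting half of
# RAM as OS cache is a rule-of-thumb assumption, not from the thread.
RAM_KB=$((4 * 1024 * 1024))   # 4 GB expressed in KB
PAGES=$((RAM_KB / 2 / 8))     # half of RAM, divided into 8 KB pages
echo "effective_cache_size = $PAGES"
```

This prints `effective_cache_size = 262144`; the poster's 27462 (about 214 MB of a 4 GB machine) is far more conservative than that rule of thumb.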
Kari Lavikka wrote:
shared_buffers = 15000
you can play around with this one but in my experience it doesn't make
much difference anymore (it used to).
work_mem = 1536 # min 64, size in KB
this seems low. are you sure you are not getting sorts swapped to disk?
fsync …
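One way to test the sorts-to-disk suspicion without touching the global config is to raise the value just for the session running the suspect queries; the figure below is illustrative, not a tuned recommendation:

```
# postgresql.conf -- illustrative only; 1536 KB is small for 16 GB RAM
work_mem = 8192        # KB per sort/hash step, per backend: raise with care
```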
On 7/28/05 2:21 AM, Kari Lavikka [EMAIL PROTECTED] wrote:
There's a new profiling tool called oprofile:
http://oprofile.sourceforge.net/download/
that can be run without instrumenting the binaries beforehand. To actually
find out what the code is doing during these stalls, oprofile can show …
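A typical session with the 2005-era opcontrol interface might look like the sketch below; it needs root, and the postmaster path is an assumption (Debian's layout), so treat it as an outline rather than something to paste blindly:

```
opcontrol --init            # load the oprofile kernel module
opcontrol --no-vmlinux      # userspace-only profiling is enough here
opcontrol --start
#   ... run the workload that shows the stalls ...
opcontrol --dump
opreport -l /usr/lib/postgresql/bin/postmaster | head -20
opcontrol --shutdown
```

`opreport -l` gives the per-symbol breakdown, which is where entries like AtEOXact_CatCache in the earlier profile came from.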