On Thu, Jan 22, 2009 at 7:44 PM, Greg Smith gsm...@gregsmith.com wrote:
The next fine-tuning bit I'd normally apply in this situation is to see if
increasing checkpoint_completion_target from the default (0.5) to 0.9 does
anything to flatten out that response time graph. I've seen a modest
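A hedged sketch of the change under discussion, as it would appear in postgresql.conf (0.5 is the default; raising it toward 0.9 spreads checkpoint writes over more of the checkpoint interval, which is what can flatten the response-time graph):

```ini
# postgresql.conf -- illustrative fragment only
# default is 0.5; 0.9 spreads checkpoint I/O over more of the
# interval between checkpoints, smoothing out write bursts
checkpoint_completion_target = 0.9
```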
On Thu, 5 Feb 2009, Mark Wong wrote:
One of the problems is that my scripts are listing the OS of the main
driver system, as opposed to the db system.
That's not a fun problem to deal with. Last time I ran into it, I ended
up writing a little PL/PerlU function that gathered all the
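The PL/PerlU approach mentioned above might look roughly like this (a hypothetical sketch, not the function from the thread; the point is that untrusted Perl runs on the database server itself, so the reported OS is the DB host's rather than the driver machine's):

```sql
-- Hypothetical sketch: runs `uname` on the database server,
-- so the result describes the DB host, not the driver system.
CREATE OR REPLACE FUNCTION server_os() RETURNS text AS $$
    my $uname = `uname -a`;
    chomp $uname;
    return $uname;
$$ LANGUAGE plperlu;

-- Usage: SELECT server_os();
```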
On Thu, Jan 22, 2009 at 10:10 PM, Mark Wong mark...@gmail.com wrote:
On Thu, Jan 22, 2009 at 7:44 PM, Greg Smith gsm...@gregsmith.com wrote:
On Thu, 22 Jan 2009, Mark Wong wrote:
I'm also capturing the PostgreSQL parameters as suggested so we can
see what's set in the config file, default,
On Mon, Dec 22, 2008 at 12:59 AM, Greg Smith gsm...@gregsmith.com wrote:
On Sat, 20 Dec 2008, Mark Wong wrote:
Here are links to how the throughput changes when increasing
shared_buffers: http://pugs.postgresql.org/node/505 My first glance
tells me that the system performance is quite
On Thu, 22 Jan 2009, Mark Wong wrote:
I'm also capturing the PostgreSQL parameters as suggested so we can
see what's set in the config file, default, command line etc. It's
the Settings link in the System Summary section on the report web
page.
Those look good, much easier to pick out the
Mark Wong mark...@gmail.com wrote:
It appears to peak around 220 database connections:
http://pugs.postgresql.org/node/514
Interesting. What did you use for connection pooling?
My tests have never stayed that flat as the connections in use
climbed. I'm curious why we're seeing such
On Tue, Jan 13, 2009 at 7:40 AM, Kevin Grittner
kevin.gritt...@wicourts.gov wrote:
Mark Wong mark...@gmail.com wrote:
It appears to peak around 220 database connections:
http://pugs.postgresql.org/node/514
Interesting. What did you use for connection pooling?
It's a fairly dumb but custom
On Mon, Dec 22, 2008 at 7:27 AM, Kevin Grittner
kevin.gritt...@wicourts.gov wrote:
Mark Wong mark...@gmail.com wrote:
The DL380 G5 is an 8 core Xeon E5405 with 32GB of
memory. The MSA70 is a 25-disk 15,000 RPM SAS array, currently
configured as a 25-disk RAID-0 array.
number of connections
Hi Mark,
Good to see you producing results again.
On Sat, 2008-12-20 at 16:54 -0800, Mark Wong wrote:
Here are links to how the throughput changes when increasing shared_buffers:
http://pugs.postgresql.org/node/505
The only strange thing here is the result at 22528MB. It's the only normal
one
Mark Wong wrote:
Hrm, tracking just the launcher process certainly doesn't help. Are
the spawned processes short-lived? I take a snapshot of
/proc/pid/io data every 60 seconds.
The worker processes can be short-lived, but if they are, obviously they
are not vacuuming the large tables.
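One way around short-lived workers is to snapshot /proc/&lt;pid&gt;/io for every postgres backend found at sampling time, not just the launcher. A minimal Python sketch of the parsing step (the field names are the standard /proc/&lt;pid&gt;/io keys; pid discovery and the 60-second sampling loop are left out):

```python
def parse_proc_io(text):
    """Parse the 'key: value' lines of /proc/<pid>/io into integers."""
    stats = {}
    for line in text.splitlines():
        key, sep, value = line.partition(":")
        if sep and value.strip().isdigit():
            stats[key.strip()] = int(value.strip())
    return stats

# Example input in the format the kernel emits:
sample = "read_bytes: 4096\nwrite_bytes: 8192\ncancelled_write_bytes: 0"
stats = parse_proc_io(sample)
print(stats["read_bytes"], stats["write_bytes"])  # 4096 8192
```

Aggregating these counters across all worker pids at each sample would capture vacuum I/O even when individual workers exit between samples.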
On Sun, Dec 21, 2008 at 10:56 PM, Gregory Stark st...@enterprisedb.com wrote:
Mark Wong mark...@gmail.com writes:
On Dec 20, 2008, at 5:33 PM, Gregory Stark wrote:
Mark Wong mark...@gmail.com writes:
To recap, dbt2 is a fair-use derivative of the TPC-C benchmark. We
are using a 1000
On Sat, 20 Dec 2008, Mark Wong wrote:
Here are links to how the throughput changes when increasing
shared_buffers: http://pugs.postgresql.org/node/505 My first glance
tells me that the system performance is quite erratic when
increasing the shared_buffers.
If you smooth that curve out
Mark Wong mark...@gmail.com writes:
I'm not sure how bad that is for the benchmarks. The only effect that comes
to mind is that it might exaggerate the effects of some i/o intensive operations
that under normal conditions might not cause any noticeable impact like wal
log file switches or
Mark Wong mark...@gmail.com writes:
Thanks for the input.
In a more constructive vein:
1) autovacuum doesn't seem to be properly tracked. It looks like you're just
tracking the autovacuum process and not the actual vacuum subprocesses
which it spawns.
2) The response time graphs would
Mark Wong mark...@gmail.com wrote:
The DL380 G5 is an 8 core Xeon E5405 with 32GB of
memory. The MSA70 is a 25-disk 15,000 RPM SAS array, currently
configured as a 25-disk RAID-0 array.
number of connections (250):
Moving forward, what other parameters (or combinations of) do people
On Mon, Dec 22, 2008 at 2:56 AM, Gregory Stark st...@enterprisedb.com wrote:
Mark Wong mark...@gmail.com writes:
Thanks for the input.
In a more constructive vein:
1) autovacuum doesn't seem to be properly tracked. It looks like you're just
tracking the autovacuum process and not the
On Mon, 22 Dec 2008, Mark Wong wrote:
The shared_buffers are the default, 24MB. The database parameters are
saved, probably unclearly, here's an example link:
http://207.173.203.223/~markwkm/community6/dbt2/baseline.1000.1/db/param.out
That's a bit painful to slog through to find what was
Mark Wong mark...@gmail.com writes:
On Dec 20, 2008, at 5:33 PM, Gregory Stark wrote:
Mark Wong mark...@gmail.com writes:
To recap, dbt2 is a fair-use derivative of the TPC-C benchmark. We
are using a 1000 warehouse database, which amounts to about 100GB of
raw text data.
Really? Do you
Hi all,
So after a long hiatus after running this OLTP workload at the OSDL,
many of you know the community has had some equipment donated by HP: a
DL380 G5 and an MSA70 disk array. We are currently using the hardware
to do some tuning exercises to show the effects of various GUC
parameters. I
Mark Wong mark...@gmail.com writes:
To recap, dbt2 is a fair-use derivative of the TPC-C benchmark. We
are using a 1000 warehouse database, which amounts to about 100GB of
raw text data.
Really? Do you get conforming results with 1,000 warehouses? What's the 95th
percentile response time?