Michael Stone wrote:
> On Sat, Jul 22, 2006 at 10:26:53AM -0700, Craig A. James wrote:
> > This causes massive file-system activity and flushes all files that the
> > kernel has cached. If you run this between each Postgres test (let it
> > run for a couple minutes), it gives you an apples-to-apples comparison
> > between [...]
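The method Craig used is outside this excerpt; his approach apparently generated enough file-system activity to evict the cache. On Linux there is a more direct way to get the same cold-cache starting condition between runs. A minimal sketch (a generic technique, not necessarily what Craig described):

```shell
# Start each benchmark run with a cold file-system cache (Linux, needs root).
# Generic technique; not necessarily the exact method discussed above.
sync                                 # write out dirty pages first
echo 3 > /proc/sys/vm/drop_caches    # drop page cache, dentries, and inodes
```

Running this between runs means every test starts from the same cache state, which is the apples-to-apples condition the quote is after.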
Robert Lor [EMAIL PROTECTED] writes:
> Here is the breakdown between exclusive and shared LWLocks. Do the
> numbers look reasonable to you?

Yeah, those seem plausible, although the hold time for
CheckpointStartLock seems awfully high --- about 20 msec
per transaction. Are you using a nonzero commit_delay?
At EnterpriseDB we make extensive use of the OSDB OLTP benchmark. We also use the Java-based benchmark called BenchmarkSQL from SourceForge. Both of these benchmarks are update-intensive OLTP tests that closely mimic the Transaction Processing Performance Council's TPC-C benchmark.

Postgres also ships [...]
Tatsuo Ishii [EMAIL PROTECTED] writes:
> Interesting. We (some Japanese companies including SRA OSS,
> Inc. Japan) did some PG scalability testing using a Unisys big 16
> (physical) CPU machine and found PG scales up to 8 CPUs. However
> beyond 8 CPUs PG does not scale anymore. The result can be [...]
Tom Lane wrote:
> Yeah, those seem plausible, although the hold time for
> CheckpointStartLock seems awfully high --- about 20 msec
> per transaction. Are you using a nonzero commit_delay?

I didn't change commit_delay, which defaults to zero.
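For readers following along, these are the group-commit settings involved; the values shown are simply the defaults that Robert is therefore running with:

```
# postgresql.conf (defaults, which is what an unchanged setup uses)
commit_delay = 0      # microseconds to sleep before flushing WAL at commit
commit_siblings = 5   # min. concurrent open transactions for the delay to apply
```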
Regards,
-Robert
Robert Lor [EMAIL PROTECTED] writes:
> Tom Lane wrote:
> > Yeah, those seem plausible, although the hold time for
> > CheckpointStartLock seems awfully high --- about 20 msec
> > per transaction. Are you using a nonzero commit_delay?
> I didn't change commit_delay, which defaults to zero.

Hmmm ... AFAICS this must mean that flushing the WAL data to disk
at transaction commit time takes (most of) 20 msec on your hardware.
Which still seems high --- on most modern disks that'd be at least two
disk revolutions, maybe more. What's the disk hardware you're testing [...]
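Tom's revolution figure is easy to sanity-check: one revolution at 7200 rpm takes 60000/7200 ≈ 8.3 ms, so a 20 ms flush spans about 2.4 revolutions, and faster spindles make the number look even worse. A quick back-of-envelope check:

```shell
# How many disk revolutions fit in a 20 ms commit-time WAL flush,
# for common spindle speeds.
for rpm in 7200 10000 15000; do
  awk -v rpm="$rpm" 'BEGIN {
    rev = 60000 / rpm                                   # ms per revolution
    printf "%5d rpm: %.2f ms/rev, %.1f revs in 20 ms\n", rpm, rev, 20 / rev
  }'
done
```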