Tom Lane wrote:
It's certainly true that hint-bit updates cost something, but
quantifying how much isn't easy.
Maybe we can instrument the code with DTrace probes to quantify the
actual costs. I'm not familiar with the code, but if I know where to
place the probes, I can easily do a quick
Tom Lane wrote:
Hmm, the problem would be trying to figure out what percentage of writes
could be blamed solely on hint-bit updates and not any other change to
the page. I don't think that the bufmgr currently keeps enough state to
know that, but you could probably modify it easily enough,
Tom Lane wrote:
That path would be taking CLogControlLock ... so you're off by at least
one. Compare the script to src/include/storage/lwlock.h.
Indeed, the indexing was off by one due to the removal of
BufMappingLock from src/include/storage/lwlock.h between 8.1 and 8.2,
which was not
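The off-by-one is easy to reproduce with a toy mapping: if a script's lock-name table was generated from the 8.1 lwlock.h but run against an 8.2 server (where BufMappingLock was removed from the enum), every lock after the removed slot gets reported under its neighbor's name. A minimal sketch, with illustrative enum subsets rather than the full lwlock.h contents:

```python
# Illustrative subsets of the LWLock enum; not the complete 8.1/8.2 lists.
LOCK_NAMES_81 = ["BufMappingLock", "BufFreelistLock", "LockMgrLock",
                 "OidGenLock", "XidGenLock", "ProcArrayLock",
                 "SInvalLock", "FreeSpaceLock", "WALInsertLock",
                 "WALWriteLock", "ControlFileLock", "CheckpointLock",
                 "CheckpointStartLock", "CLogControlLock"]

# In this toy example, one entry disappears from the head of the enum.
LOCK_NAMES_82 = LOCK_NAMES_81[1:]

def name_for(lock_id, table):
    # Decode a numeric lock ID (as a probe would report it) to a name.
    return table[lock_id] if lock_id < len(table) else "dynamic"

# A probe firing on the 8.2 server emits 8.2 IDs; decoding them with the
# stale 8.1 table shifts every name by one slot.
lock_id = LOCK_NAMES_82.index("CLogControlLock")
print(name_for(lock_id, LOCK_NAMES_82))  # CLogControlLock
print(name_for(lock_id, LOCK_NAMES_81))  # CheckpointStartLock (wrong)
```

This is exactly the symptom Tom flagged: a path that is really taking CLogControlLock shows up in the script's output under a different lock's name.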
Tom Lane wrote:
Yeah, those seem plausible, although the hold time for
CheckpointStartLock seems awfully high --- about 20 msec
per transaction. Are you using a nonzero commit_delay?
I didn't change commit_delay, which defaults to zero.
Regards,
-Robert
Tom Lane wrote:
Hmmm ... AFAICS this must mean that flushing the WAL data to disk
at transaction commit time takes (most of) 20 msec on your hardware.
Which still seems high --- on most modern disks that'd be at least two
disk revolutions, maybe more. What's the disk hardware you're testing
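Tom's "at least two revolutions" estimate follows from simple arithmetic: one platter revolution takes 60/RPM seconds, so for common spindle speeds (assumed values, not Robert's actual hardware):

```python
# Time for one platter revolution at common spindle speeds, and how many
# revolutions fit into the observed 20 ms per-transaction hold time.
for rpm in (7200, 10000, 15000):
    rev_ms = 60.0 / rpm * 1000  # one revolution, in milliseconds
    print(f"{rpm:>6} rpm: {rev_ms:5.2f} ms/rev, "
          f"{20 / rev_ms:4.1f} revolutions in 20 ms")
```

Even a 7200 rpm disk completes a revolution in about 8.3 ms, so 20 ms for a commit-time WAL flush is roughly 2.4 revolutions there, and 5 on a 15000 rpm drive.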
Tom Lane wrote:
Tatsuo Ishii [EMAIL PROTECTED] writes:
18% in s_lock is definitely bad :-(. Were you able to determine which
LWLock(s) are accounting for the contention?
Sorry for the delay. Finally I got the oprofile data. It's
huge (34 MB). If you are interested, I can put
Tom Lane wrote:
Those numbers look a bit suspicious --- I'd expect to see some of the
LWLocks being taken in both shared and exclusive modes, but you don't
show any such cases. You sure your script is counting correctly?
I'll double check to make sure no stupid mistakes were made!
Tom Lane wrote:
Also, it'd be interesting to count time spent holding shared lock
separately from time spent holding exclusive.
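The breakdown Tom asks for amounts to keying the hold-time aggregation by (lock, mode) instead of by lock alone; in DTrace that is one extra aggregation key. A Python stand-in for that bookkeeping, with made-up event records in place of real probe output:

```python
from collections import defaultdict

# Each record: (lock_name, mode, hold_time_us), as a probe might emit
# on LWLockRelease. The values here are invented for illustration.
events = [
    ("WALInsertLock", "exclusive", 120),
    ("WALInsertLock", "exclusive", 95),
    ("ProcArrayLock", "shared", 15),
    ("ProcArrayLock", "exclusive", 200),
    ("ProcArrayLock", "shared", 22),
]

# Aggregate total hold time per (lock, mode) -- the moral equivalent of
# a DTrace aggregation @held[lockname, mode] = sum(timestamp - self->ts).
held = defaultdict(int)
for lock, mode, us in events:
    held[(lock, mode)] += us

for (lock, mode), total in sorted(held.items()):
    print(f"{lock:<16} {mode:<10} {total:>6} us")
```

With both modes tallied separately, a lock that is healthy under shared access but contended under exclusive access stands out immediately.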
Tom,
Here is the breakdown between exclusive and shared LWLocks. Do the
numbers look reasonable to you?
Regards,
-Robert
bash-3.00# time ./Tom_lwlock_acquire.d
contact Josh Berkus.
Regards,
Robert Lor
Sun Microsystems, Inc.
01-510-574-7189
Arjen van der Meijden wrote:
I can already confirm very good scalability (with our workload) on
PostgreSQL on that machine. We've been testing a 32-thread/16 GB version
and it shows near-linear scaling when enabling 1, 2, 4, 6 and 8 cores
(with all four threads per core enabled).
The threads are a
Bruce Momjian wrote on 04/13/06 01:39 AM:
Yes, if someone wants to give us a clear answer on which wal_sync method
is best on all versions of Solaris, we can easily make that change.
We're doing tests to see how various parameters in postgresql.conf
affect performance on Solaris and will
Chris Mair wrote:
Ok, so I did a few runs for each of the sync methods, keeping all the
rest constant and got this:
open_datasync        0.7
fdatasync            4.6
fsync                4.5
fsync_writethrough   not supported
open_sync            0.6
in arbitrary units - higher is faster.
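For reference, the method under test is selected via wal_sync_method in postgresql.conf; a sketch (the set of available values varies by platform and PostgreSQL version):

```
# postgresql.conf
wal_sync_method = open_datasync   # also: fdatasync, fsync, open_sync,
                                  # fsync_writethrough (platform-dependent)
```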
Quite
Tom is right. Unless your workload can generate lots of simultaneous
queries, you will not reap the full benefit of the Sun Fire T2000
system. I have tested 8.1.3 with an OLTP workload on an 8 cores system.
With 1500-2000 client connections, the CPU was only about 30% utilized.
The UltraSPARC
capabilities (e.g. DTrace) specifically for PostgreSQL. I'll be posting
a Solaris performance tuning guide in a few weeks.
Regards,
Robert Lor