Re: [PERFORM] CLOG Patch
I tried with CLOG 24 also, and I got linear performance up to 1,250 users, after which it started to tank. 32 got us to 1,350 users before some other bottleneck overtook it.

Based on what Tom said earlier, it might then make sense to make it a tunable, with a default of 8 but something one can change for a high number of users.

Thanks.

Regards,
Jignesh

Simon Riggs wrote:
> On Fri, 2007-08-03 at 16:09 -0400, Jignesh K. Shah wrote:
>> This patch seems to work well (both with the 32 and 64 values, but
>> not with 16 and the default 8).
>
> Could you test at 24 please also? Tom has pointed out the additional
> cost of setting this higher, even in workloads that don't benefit
> from the reduction in I/O-induced contention.
>
>> Is there a way we can integrate this in 8.3?
>
> I just replied to Josh's thread on -hackers about this.
>
>> This will improve out-of-box performance quite a bit for a high
>> number of users (at least 30% in my OLTP test).
>
> Yes, that's good. Will this have a dramatic effect on a particular
> benchmark, or for what reason might we need this? Tom has questioned
> the use case here, so I think it would be good to explain a little
> more for everyone.
>
> Thanks.
Re: [PERFORM] CLOG Patch
On Fri, 2007-08-10 at 13:54 -0400, Jignesh K. Shah wrote:
> I tried with CLOG 24 also, and I got linear performance up to 1,250
> users, after which it started to tank. 32 got us to 1,350 users
> before some other bottleneck overtook it.

Jignesh,

Thanks for testing that. It's not very clear to everybody why an extra
100 users is useful, and it would certainly help your case if you can
explain.

--
Simon Riggs
EnterpriseDB   http://www.enterprisedb.com
[PERFORM] CLOG Patch
Hi Simon,

This patch seems to work well (both with the 32 and 64 values, but not
with 16 and the default 8). Is there a way we can integrate this in
8.3? This will improve out-of-box performance quite a bit for a high
number of users (at least 30% in my OLTP test).

Regards,
Jignesh

Simon Riggs wrote:
> On Thu, 2007-07-26 at 11:27 -0400, Jignesh K. Shah wrote:
>> However, at 900 users, where the big drop in throughput occurs, it
>> gives a different top consumer of time:
>>
>>   postgres`LWLockAcquire+0x1c8
>>   postgres`SimpleLruReadPage+0x1ac
>>   postgres`TransactionIdGetStatus+0x14
>>   postgres`TransactionLogFetch+0x58
>
> TransactionIdGetStatus doesn't directly call SimpleLruReadPage().
> Presumably the compiler has been rearranging things??
>
> Looks like you're out of clog buffers. It seems like the clog buffers
> aren't big enough to hold clog pages for long enough, and the SELECT
> FOR SHARE processing is leaving lots of additional read locks that
> are increasing the number of clog requests for older xids.
>
> Try the enclosed patch.
>
> Index: src/include/access/clog.h
> ===================================================================
> RCS file: /projects/cvsroot/pgsql/src/include/access/clog.h,v
> retrieving revision 1.19
> diff -c -r1.19 clog.h
> *** src/include/access/clog.h	5 Jan 2007 22:19:50 -	1.19
> --- src/include/access/clog.h	26 Jul 2007 15:44:58 -
> ***************
> *** 29,35 ****
>
>   /* Number of SLRU buffers to use for clog */
> ! #define NUM_CLOG_BUFFERS	8
>
>   extern void TransactionIdSetStatus(TransactionId xid, XidStatus status);
> --- 29,35 ----
>
>   /* Number of SLRU buffers to use for clog */
> ! #define NUM_CLOG_BUFFERS	64
>
>   extern void TransactionIdSetStatus(TransactionId xid, XidStatus status);