Re: [PERFORM] planner/optimizer question
Tom Lane wrote:
> Manfred Koizar [EMAIL PROTECTED] writes:
>> Yes, the visible-to-all flag would be set as a by-product of an index
>> scan, if the heap tuple is found to be visible to all active
>> transactions. This update is non-critical
>
> Oh really? I think you need to think harder about the transition
> conditions. Dead-to-all is reasonably safe to treat as a hint bit
> because *it does not ever need to be undone*. Visible-to-all does not
> have that property.

Yes, really :-)

When a tuple is inserted, the visible-to-all flag is set to false. The effect of this is that every index scan that finds this tuple has to visit the heap to verify visibility. If it turns out the tuple is visible not only to the current transaction but to all current transactions, the visible-to-all flag can be set to true. This is non-critical, because if it is set to false scans will not miss the tuple; they will just visit the heap to verify visibility.

The moment the heap tuple is updated or deleted, the visible-to-all flag needs to be set to false again in all indexes. This is critical, and the I/O and (dead)lock costs of unsetting the visible-to-all flag are unknown and might be big enough to offset any advantage on the selects. But I believe that for applications with a load, select, drop usage pattern (warehouses, archives etc.) having this visible-to-all flag would be a clear winner.

Jochem

--
I don't get it: immigrants don't work and steal our jobs - Loesje
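Jochem's proposed life cycle for the flag can be sketched as a toy model. This is purely illustrative Python, not PostgreSQL internals; every name below (HeapTuple, IndexEntry, the callback parameters) is an assumption invented for the sketch, and real backends would of course do this under page locks.

```python
# Toy model of the proposed visible-to-all index flag (illustrative only).

class HeapTuple:
    def __init__(self):
        self.deleted = False

class IndexEntry:
    def __init__(self, heap_tuple):
        self.heap = heap_tuple
        self.visible_to_all = False   # false on insert: scans must check the heap

def index_scan(entry, heap_visits, snapshot_sees):
    """Return the tuple if visible; count heap visits in heap_visits[0]."""
    if entry.visible_to_all:
        return entry.heap             # heap visit avoided entirely
    heap_visits[0] += 1               # flag unset: verify visibility in the heap
    if not entry.heap.deleted and snapshot_sees(entry.heap):
        return entry.heap
    return None

def maybe_set_flag(entry, visible_to_all_xacts):
    # Non-critical: setting the flag late only costs extra heap visits.
    if visible_to_all_xacts(entry.heap):
        entry.visible_to_all = True

def delete_tuple(indexes, heap_tuple):
    # Critical: the flag must be cleared in *every* index, or a later
    # index scan could skip the heap and return a dead tuple.
    for entry in indexes:
        if entry.heap is heap_tuple:
            entry.visible_to_all = False
    heap_tuple.deleted = True
```

The asymmetry Jochem describes is visible here: maybe_set_flag can be skipped or delayed with only a performance cost, while delete_tuple's clearing step is correctness-critical.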
Re: [PERFORM] planner/optimizer question
Jochem van Dieten [EMAIL PROTECTED] writes:
> The moment the heap tuple is updated/deleted the visible-to-all flag
> needs to be set to false again in all indexes. This is critical,

Exactly. This gets you out of the hint-bit semantics and into a ton of interesting problems, such as race conditions. (Process A determines that tuple X is visible-to-all, and goes to update the index tuple. Before it can reacquire the lock on the index page, process B updates the heap tuple and visits the index to clear the flag bit. Once A obtains the lock it will set the flag bit. Oops.)

Basically what you are buying into with such a thing is multiple copies of critical state. It may be only one bit rather than several words, but updating it is no less painful than if it were a full copy of the tuple's commit status.

regards, tom lane
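The interleaving Tom describes can be replayed deterministically. This is an illustrative sketch, not real backend code: the two "processes" are inlined as ordered steps so the lost-update outcome is reproducible.

```python
# Deterministic replay of the race described above (illustrative sketch;
# the A/B "process" steps are inlined rather than run as concurrent backends).

class IndexTuple:
    def __init__(self):
        self.visible_to_all = False

def replay_race():
    it = IndexTuple()
    # A: scans, finds tuple X visible-to-all, then releases the index
    #    page lock before it can update the flag bit.
    a_saw_visible_to_all = True
    # B: updates the heap tuple and clears the flag bit in the index.
    it.visible_to_all = False
    # A: reacquires the lock and applies its now-stale decision.
    if a_saw_visible_to_all:
        it.visible_to_all = True      # Oops: stale hint on an updated tuple
    return it.visible_to_all
```

replay_race() ends with the flag set even though the tuple was just updated, which is exactly the wrong-answer state Tom points out: the flag is a second copy of visibility state, and the two copies have diverged.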
Re: [PERFORM] Wierd context-switching issue on Xeon
No, don't go away and be quiet. Keep testing; it may be that under normal operation the context switching goes up, but under the conditions where you were seeing the high CS rate it may not be as bad. As others have mentioned, the real solution to this is to rewrite the buffer management so that the lock isn't quite as coarse-grained.

Dave

On Sat, 2004-05-01 at 00:03, Robert Creager wrote:
> When grilled further on (Thu, 29 Apr 2004 11:21:51 -0700), Josh Berkus
> [EMAIL PROTECTED] confessed:
>
>> spins_per_delay was not beneficial. Instead, try increasing them, one
>> step at a time (take a baseline measurement at 100): 250, 500, 1000,
>> 1500, 2000, 3000, 5000 ... until you find an optimal level. Then
>> report the results to us!
>
> Some results. The patch mentioned is what Dave Cramer posted to the
> Performance list on 4/21. A Perl script monitored vmstat 1 for 120
> seconds and generated max and average values. Unfortunately, I am not
> present on site, so I cannot physically change the device under test
> to increase the db load to where it hit about 10 days ago. That will
> have to wait till the 'real' work week on Monday.
>
> Context switches          avg      max
> Default 7.4.1 code      10665    69470
> Default patch - 10      17297    21929
> patch at 100            26825    87073
> patch at 1000           37580   110849
>
> Now granted, the db isn't showing the CS swap problem in a bad way (at
> all), but should the numbers be trending the way they are with the
> patched code? Or will these numbers potentially change dramatically
> when I can load up the db? And, presuming I can reproduce what I was
> seeing previously (200K CS/s), do you folks want me to carry on with
> more testing of the patch and report the results? Or just go away and
> be quiet...
>
> The information is provided from an HP ProLiant DL380 G3 with 2x 2.4
> GHz Xeons (with HT enabled), 2 GB RAM, running a 2.4.22-26mdkenterprise
> kernel, with a RAID controller w/128 MB battery-backed cache: RAID 1 on
> 2x 15K RPM drives for the WAL drive, RAID 0+1 on 4x 10K RPM drives for
> data. The only job this box has is running this db.
> Cheers,
> Rob

--
Dave Cramer
519 939 0336
ICQ # 14675561
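The Perl script Robert mentions isn't shown in the thread; a rough stand-in that samples vmstat and reports the average and maximum of the context-switch ("cs") column might look like the following. Python is used purely for illustration, and the column layout assumed here is the common Linux vmstat header; it can vary by version.

```python
# Approximate stand-in for the unshown Perl vmstat monitor (illustrative).
import subprocess

def cs_stats(vmstat_lines):
    """Return (avg, max) of the "cs" column from captured vmstat output."""
    cs_idx, samples = None, []
    for line in vmstat_lines:
        fields = line.split()
        if "cs" in fields:                       # the column-header line
            cs_idx = fields.index("cs")
        elif cs_idx is not None and fields and fields[0].isdigit():
            samples.append(int(fields[cs_idx]))  # a one-second sample line
    samples = samples[1:]                        # first line is a since-boot average
    if not samples:
        return 0.0, 0
    return sum(samples) / len(samples), max(samples)

def monitor(seconds=120):
    # Ask vmstat for one sample per second over the measurement window.
    out = subprocess.run(["vmstat", "1", str(seconds + 1)],
                         capture_output=True, text=True).stdout
    return cs_stats(out.splitlines())

if __name__ == "__main__":
    avg, peak = monitor(120)
    print("context switches: avg %.0f  max %d" % (avg, peak))
```

Dropping the first sample matters because vmstat's initial report is an average since boot, which would badly skew a 120-second max/avg measurement of a CS spike.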