Re: [HACKERS] Freezing tuples on pages dirtied by vacuum
On Jul 21, 2006, at 9:03 AM, Tom Lane wrote:
> > One possibility is that early freeze is at 1B transactions and we push
> > forced-freeze back to 1.5B transactions (the current forced-freeze at 1B
> > transactions seems rather aggressive anyway, now that the server will
> > refuse to issue new commands rather than lose data due to wraparound).
>
> No, the freeze-at-1B rule is the maximum safe delay.  Read the docs.
> But we could do early freeze at 0.5B and forced freeze at 1B and
> probably still get the effect you want.
>
> However, I remain unconvinced that this is a good idea.  You'll be
> adding very real cycles to regular vacuum processing (to re-scan tuples
> already examined) in hopes of obtaining a later savings that is really
> pretty hypothetical.  Where is your evidence that writes caused solely
> by tuple freezing are a performance issue?

I didn't think vacuum would be a CPU-bound process, but is there any way
to gather that evidence right now? What about adding some verbiage to
VACUUM VERBOSE that reports how many pages were dirtied just to freeze
tuples? That seems like useful information to have, and it would help
establish whether this is worth worrying about.
--
Jim C. Nasby, Sr. Engineering Consultant      [EMAIL PROTECTED]
Pervasive Software      http://pervasive.com    work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf       cell: 512-569-9461
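One way to picture gathering that kind of evidence, sketched as a standalone counter rather than as an actual VACUUM VERBOSE patch: the struct and function names below are hypothetical, the accounting is simplified, and a real patch would report the numbers through ereport(INFO, ...) in vacuumlazy.c rather than printf().

    /* Hypothetical per-vacuum statistics; not an existing PostgreSQL struct. */
    #include <stdbool.h>
    #include <stdio.h>

    typedef struct VacFreezeStats
    {
        long pages_scanned;
        long pages_dirtied;
        long pages_dirtied_only_to_freeze;  /* page had no dead tuples to remove,
                                             * but was dirtied to freeze xmins */
    } VacFreezeStats;

    static void
    note_page(VacFreezeStats *stats, bool removed_dead_tuples, bool froze_tuples)
    {
        stats->pages_scanned++;
        if (removed_dead_tuples || froze_tuples)
            stats->pages_dirtied++;
        if (froze_tuples && !removed_dead_tuples)
            stats->pages_dirtied_only_to_freeze++;
    }

    static void
    report(const VacFreezeStats *stats)
    {
        /* A real patch would emit this via ereport(INFO, ...) under VACUUM VERBOSE. */
        printf("scanned %ld pages, dirtied %ld, %ld dirtied only to freeze tuples\n",
               stats->pages_scanned, stats->pages_dirtied,
               stats->pages_dirtied_only_to_freeze);
    }

    int
    main(void)
    {
        VacFreezeStats stats = {0, 0, 0};
        note_page(&stats, true, false);   /* page dirtied to remove dead tuples */
        note_page(&stats, false, true);   /* page dirtied only to freeze */
        note_page(&stats, false, false);  /* clean page */
        report(&stats);
        return 0;
    }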
Re: [HACKERS] Freezing tuples on pages dirtied by vacuum
"Jim C. Nasby" <[EMAIL PROTECTED]> writes: > For clobbering xmin too early, we could make it so that only tuples > older than some threashold would be subject to 'early freezing'. OK, that might be acceptable. > One > possibility is that early freeze is at 1B transactions and we push > forced-freeze back to 1.5B transactions (the current forced-freeze at 1B > transactions seems rather aggresive anyway, now that the server will > refuse to issue new commands rather than lose data due to wraparound). No, the freeze-at-1B rule is the maximum safe delay. Read the docs. But we could do early freeze at 0.5B and forced freeze at 1B and probably still get the effect you want. However, I remain unconvinced that this is a good idea. You'll be adding very real cycles to regular vacuum processing (to re-scan tuples already examined) in hopes of obtaining a later savings that is really pretty hypothetical. Where is your evidence that writes caused solely by tuple freezing are a performance issue? regards, tom lane ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [HACKERS] Freezing tuples on pages dirtied by vacuum
On Wed, Jul 19, 2006 at 07:45:24PM -0400, Tom Lane wrote:
> "Jim C. Nasby" <[EMAIL PROTECTED]> writes:
> > ISTM that as soon as vacuum dirties a page, it might as well update all
> > tuples it can (any whose Xmin < GetOldestXmin()), since that won't take
> > much time compared to the cost of writing the page out.
>
> Perhaps not, but what it will do is destroy data that you might wish you
> had later.  Check the archives and note how often we ask people for xmin
> values when trying to debug a problem.  I don't think it's a good idea
> for aggressive freezing of tuples to be the default behavior.  Moreover,
> I can't see that there'd be any real gain from having done it --- it
> doesn't look to me like it would save any vacuum-to-prevent-wraparound
> operations, since nothing would happen at non-dirty pages.

For any tables that see even a trivial rate of updates spread through the
table, odds are that all tuples will end up frozen well before 1B
transactions have passed. Yes, you'll still need to vacuum every 1B
transactions, but that vacuum wouldn't need to dirty any pages just to
freeze tuples.

For clobbering xmin too early, we could make it so that only tuples older
than some threshold would be subject to 'early freezing'. One possibility
is that early freeze is at 1B transactions and we push forced-freeze back
to 1.5B transactions (the current forced-freeze at 1B transactions seems
rather aggressive anyway, now that the server will refuse to issue new
commands rather than lose data due to wraparound).

BTW, the freeze limits for vacuum and autovac are currently defined in
different places; should I submit a patch to refactor that into one place?
(Presumably vacuum.c)
--
Jim C. Nasby, Sr. Engineering Consultant      [EMAIL PROTECTED]
Pervasive Software      http://pervasive.com    work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf       cell: 512-569-9461
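A rough way to picture the "only tuples older than some threshold" refinement, as a standalone sketch: the threshold values and the function name are purely illustrative (not PostgreSQL settings), and tuple age is computed naively, ignoring transaction ID wraparound.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    typedef uint32_t TxId;

    /*
     * Illustrative refinement: opportunistic ("early") freezing applies only
     * to tuples older than an early-freeze threshold, while the forced-freeze
     * limit remains the hard rule.  The thresholds here are placeholders.
     */
    static bool
    should_freeze_by_age(TxId xmin, TxId current_xid, bool page_dirty)
    {
        const TxId forced_freeze_age = 1000000000u; /* ~1B xacts: must freeze */
        const TxId early_freeze_age  =  500000000u; /* candidate early-freeze point */
        TxId age = current_xid - xmin;              /* naive; ignores wraparound */

        if (age >= forced_freeze_age)
            return true;                    /* existing hard rule still enforced */
        return page_dirty && age >= early_freeze_age;  /* only if page is dirty anyway */
    }

    int
    main(void)
    {
        /* A tuple 600M transactions old on an already-dirty page gets early-frozen. */
        printf("%d\n", should_freeze_by_age(400000000u, 1000000000u, true));
        return 0;
    }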
Re: [HACKERS] Freezing tuples on pages dirtied by vacuum
"Jim C. Nasby" <[EMAIL PROTECTED]> writes: > ISTM that as soon as vacuum dirties a page, it might as well update all > tuples it can (any where Xmin < GetOldestXmin()), since that won't take > much time compared to the cost of writing the page out. Perhaps not, but what it will do is destroy data that you might wish you had later. Check the archives and note how often we ask people for xmin values when trying to debug a problem. I don't think it's a good idea for aggressive freezing of tuples to be the default behavior. Moreover, I can't see that there'd be any real gain from having done it --- it doesn't look to me like it would save any vacuum-to-prevent-wraparound operations, since nothing would happen at non-dirty pages. regards, tom lane ---(end of broadcast)--- TIP 6: explain analyze is your friend
[HACKERS] Freezing tuples on pages dirtied by vacuum
Currently, the loop in vacuumlazy.c that scans through the tuples on a page
checks each tuple to see if it needs to be frozen (is its Xmin older than
half-way to wraparound). ISTM that as soon as vacuum dirties a page, it
might as well update all tuples it can (any whose Xmin < GetOldestXmin()),
since that won't take much time compared to the cost of writing the page
out. This would help prevent the need to dirty the page in the distant
future for no reason other than to freeze tuples.

Granted, the old code/checks would still have to stay in place to ensure
that tuples were frozen before they got too old, but that's not much
overhead compared to writing the page to disk.

Comments? If people think this is a good idea I should be able to come up
with a patch.
--
Jim C. Nasby, Sr. Engineering Consultant      [EMAIL PROTECTED]
Pervasive Software      http://pervasive.com    work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf       cell: 512-569-9461
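For illustration, a minimal standalone C sketch of the decision logic being proposed. This is not the actual vacuumlazy.c code: transaction IDs are modeled as plain 32-bit counters with no wraparound handling, must_freeze()/should_freeze() are hypothetical names, and GetOldestXmin() and the freeze limit are reduced to plain parameters.

    /*
     * Standalone illustration of the proposed freeze policy -- NOT the real
     * vacuumlazy.c code.  Transaction IDs are plain counters here; the real
     * code uses PostgreSQL's TransactionId comparison macros.
     */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    typedef uint32_t TxId;

    /* Current rule: freeze only tuples older than the freeze limit. */
    static bool
    must_freeze(TxId xmin, TxId freeze_limit)
    {
        return xmin < freeze_limit;
    }

    /*
     * Proposed rule: once the page has been dirtied anyway, also freeze any
     * tuple whose xmin is older than the oldest running transaction
     * (GetOldestXmin() in the real code).
     */
    static bool
    should_freeze(TxId xmin, TxId freeze_limit, TxId oldest_xmin, bool page_dirty)
    {
        if (must_freeze(xmin, freeze_limit))
            return true;
        return page_dirty && xmin < oldest_xmin;
    }

    int
    main(void)
    {
        TxId tuples[] = {100, 900, 1500};   /* sample tuple xmins */
        TxId freeze_limit = 500;            /* "half-way to wraparound" */
        TxId oldest_xmin = 1200;            /* oldest still-running xact */
        bool page_dirty = true;             /* vacuum already dirtied the page */

        for (int i = 0; i < 3; i++)
            printf("xmin %u: current rule %d, proposed rule %d\n",
                   (unsigned) tuples[i],
                   must_freeze(tuples[i], freeze_limit),
                   should_freeze(tuples[i], freeze_limit, oldest_xmin, page_dirty));
        return 0;
    }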