Good thoughts. Thanks, Ron!
On Wed, Jan 28, 2026 at 10:02 AM Ron Johnson <[email protected]> wrote:
>
> More thoughts:
> 1. In cases where records are huge (bytea storing images), I added an
> inner hourly loop.
> 2. Disable autovacuum on the table you're purging, then run pg_repack
> on it and re-enable autovacuum.
> 3. pg_repack --no-order is a lot faster than having it order by the PK.
> (You might want it ordered by an indexed date field, though.)
>
> On Wed, Jan 28, 2026 at 5:57 AM Gus Spier <[email protected]> wrote:
>>
>> Thanks to all.
>>
>> I'll give the bash loop method a try and let you know how it works out.
>>
>> Regards to all,
>> Gus
>>
>>
>> On Wed, Jan 28, 2026 at 2:32 AM Olivier Gautherot
>> <[email protected]> wrote:
>> >
>> > Hi Gus!
>> >
>> > This reminds me of a costly mistake I made that you want to avoid:
>> > it was a mission-critical database (think physical safety, real
>> > people), and the vacuum froze the DB for 24 hours, until I finally
>> > took it offline.
>> >
>> > If you can take it offline (and you have a couple of hours):
>> > - disconnect the DB
>> > - drop indexes (that's the killer)
>> > - remove unnecessary data
>> > - vacuum manually (or better, copy the relevant data to a new table
>> > and rename it - this saves the DELETE above and defragments the table)
>> > - rebuild indexes
>> > - reconnect the DB
>> >
>> > The better solution would be partitioning:
>> > - choose a metric (for instance a timestamp)
>> > - create partition tables for the period you want to keep
>> > - copy the relevant data to the partitions and create partial indexes
>> > - take the DB offline
>> > - update the last partition with the latest data (should be a fast
>> > update)
>> > - truncate the original table
>> > - connect the partitions
>> > - reconnect the DB
>> >
>> > In the future, deleting historic data will be a simple DROP TABLE.
>> >
>> > Hope it helps
>> > --
>> > Olivier Gautherot
>> > Tel: +33 6 02 71 92 23
>> >
>> >
>> > On Wed, Jan 28, 2026, 5:06 AM, Tom Lane <[email protected]>
>> > wrote:
>> >>
>> >> Ron Johnson <[email protected]> writes:
>> >> > Hmm. Must have been START TRANSACTION, which I remember causing
>> >> > issues in DO blocks.
>> >>
>> >> Too lazy to test, but I think we might reject that. The normal rule
>> >> in a procedure is that the next command after a COMMIT automatically
>> >> starts a new transaction, so you don't need an explicit START.
>> >>
>> >> regards, tom lane
>
>
> --
> Death to <Redacted>, and butter sauce.
> Don't boil me, I'm still alive.
> <Redacted> lobster!
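
For the archives, here is a minimal sketch of the bash loop approach,
assuming a hypothetical table big_table with an indexed timestamptz
column created_at in a database mydb (none of these names come from the
thread; adjust to your schema):

    #!/bin/bash
    # Delete old rows in small batches so each transaction stays short
    # and locks are held only briefly. For very wide rows (e.g. bytea
    # images), shrink the batch size, or add an inner loop over hourly
    # time slices as Ron suggests.
    while true; do
      rows=$(psql -X -At -d mydb -c "
        WITH doomed AS (
          DELETE FROM big_table
          WHERE ctid IN (SELECT ctid FROM big_table
                         WHERE created_at < now() - interval '90 days'
                         LIMIT 10000)
          RETURNING 1)
        SELECT count(*) FROM doomed;")
      echo "deleted ${rows:-0} rows"
      [ "${rows:-0}" -eq 0 ] && break
      sleep 1   # give autovacuum and other sessions some breathing room
    done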
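Ron's pg_repack steps, sketched with the same hypothetical names:

    # Keep autovacuum off the table while pg_repack rewrites it.
    psql -d mydb -c "ALTER TABLE big_table SET (autovacuum_enabled = off);"

    # --no-order rewrites the table without clustering it (much faster
    # than ordering by the PK); use --order-by=created_at instead if you
    # want the rows clustered by an indexed date column.
    pg_repack --no-order --table=big_table -d mydb

    # Re-enable autovacuum afterwards.
    psql -d mydb -c "ALTER TABLE big_table RESET (autovacuum_enabled);"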
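Olivier's partitioning route, as a rough SQL sketch with made-up names
(events, created_at, payload); the exact steps, indexes, and cut-over
locking depend on your schema and PostgreSQL version:

    -- New partitioned table alongside the old one.
    CREATE TABLE events_part (
        id         bigint,
        created_at timestamptz NOT NULL,
        payload    bytea
    ) PARTITION BY RANGE (created_at);

    -- One partition per period you want to keep.
    CREATE TABLE events_2026_01 PARTITION OF events_part
        FOR VALUES FROM ('2026-01-01') TO ('2026-02-01');

    -- Copy the data you are keeping, then index the partitioned table.
    INSERT INTO events_part
        SELECT * FROM events WHERE created_at >= '2026-01-01';
    CREATE INDEX ON events_part (created_at);

    -- Cut over (needs a brief outage): copy any rows that arrived since
    -- the bulk copy, then swap the names.
    BEGIN;
    ALTER TABLE events RENAME TO events_old;
    ALTER TABLE events_part RENAME TO events;
    COMMIT;

    -- From then on, deleting a month of history is an instant
    -- DROP TABLE events_2026_01;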
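And Tom's point about transaction control, as a sketch of the same
batched delete inside a procedure: after COMMIT in a CALLed procedure,
the next statement opens a new transaction automatically, so no explicit
START TRANSACTION is needed (table and column names again hypothetical):

    CREATE OR REPLACE PROCEDURE purge_old_rows()
    LANGUAGE plpgsql
    AS $$
    BEGIN
        LOOP
            DELETE FROM big_table
            WHERE ctid IN (SELECT ctid FROM big_table
                           WHERE created_at < now() - interval '90 days'
                           LIMIT 10000);
            EXIT WHEN NOT FOUND;  -- FOUND is false once nothing was deleted
            COMMIT;               -- next iteration starts a new transaction
        END LOOP;
    END;
    $$;

    -- Run it outside an explicit transaction block, e.g. from psql:
    CALL purge_old_rows();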
