I have a very frequently updated table with 240 million rows (and growing).
Every three hours 1.5 million rows are inserted and 1.5 million are
deleted. When I moved the cluster to an SSD the bulk insert time (using
COPY) was cut from 22 minutes to 2.3 minutes, and the deletion time also
improved. I plan to run this bulk update every two hours or even every
hour.
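For reference, the load is essentially this (table, column and file names
below are only placeholders, and the retention condition is just
illustrative):

    -- load one three-hour batch straight into the table with COPY
    COPY measurements (id, recorded_at, value)
        FROM '/data/incoming/batch.csv' WITH (FORMAT csv);

    -- then delete the rows that have expired
    DELETE FROM measurements
        WHERE recorded_at < now() - interval '3 hours';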

Although the performance now (after the SSD) is compatible with a more
frequent update, I have read some horror stories about SSD death caused by
limited NAND endurance combined with write amplification. As SSDs are
expensive, I would like to push the drive's death as far into the future
as possible. Hence my question: what really happens to the disk file
during a delete and the subsequent vacuum? I guess there are two disk
writes, one to mark the row as deleted and another, during vacuum, to mark
the space as available for reuse. If, instead of deleting and vacuuming, I
partitioned the table, creating and dropping child tables at each bulk
insert/delete (see the sketch below), would that minimize SSD wear?
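Roughly what I have in mind, as a sketch only (inheritance-style
partitioning; all names are placeholders):

    -- a new empty child table for the incoming batch
    CREATE TABLE measurements_batch_0042 () INHERITS (measurements);

    -- bulk load the batch into the child table only
    COPY measurements_batch_0042
        FROM '/data/incoming/batch_0042.csv' WITH (FORMAT csv);

    -- expiring an old batch becomes a plain DROP: the child's files are
    -- simply unlinked, with no per-row dead tuples and no vacuum needed
    DROP TABLE measurements_batch_0036;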

Regards, Clodoaldo
