On 09/05/2024 16:58, Robert Haas wrote:
> As I see it, a lot of the lack of agreement up until now is people
> just not understanding the math. Since I think I've got the right idea
> about the math, I attribute this to other people being confused about
> what is going to happen and would tend to phrase it as: some people
> don't understand how catastrophically bad it will be if you set this
> value too low.

FWIW, I do agree with your math; I found your demonstration convincing. The 500000 default was just a finger-in-the-air guess.

Using the formula I suggested earlier:

vacthresh = Min(vac_base_thresh + vac_scale_factor * reltuples, vac_base_thresh + vac_scale_factor * sqrt(reltuples) * 1000);

your 2.56 billion tuple table would be vacuumed once it accumulates
more than 10 million dead tuples (i.e., about every 28 minutes at the
update rate in your example).
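
To make that concrete, here's a rough sketch of what the capped
computation might look like next to the existing vacthresh calculation
in relation_needs_vacanalyze() (autovacuum.c). The variable names are
the ones already in that function; the 1000 multiplier comes from the
formula above:

    /*
     * Existing computation: the threshold grows linearly with the
     * table size.
     */
    vacthresh = (float4) vac_base_thresh + vac_scale_factor * reltuples;

    /*
     * Proposed cap: beyond a certain size, grow with sqrt(reltuples)
     * instead, so very large tables get vacuumed before bloat piles up.
     */
    vacthresh = Min(vacthresh,
                    (float4) vac_base_thresh +
                    vac_scale_factor * (float4) sqrt((double) reltuples) * 1000);

    /*
     * With the defaults (vac_scale_factor = 0.2, vac_base_thresh = 50)
     * and reltuples = 2.56e9:
     *   sqrt(2.56e9) ~= 50596, and 50596 * 1000 * 0.2 ~= 10.1 million,
     * versus 0.2 * 2.56e9 = 512 million with the linear formula alone.
     */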

If we want to stick with the simple formula, we should probably choose a very high default, maybe 100 million, as you suggested earlier.
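
If my arithmetic is right, the three variants compare as follows for
the same 2.56 billion tuple table (default scale factor of 0.2):

    no cap:          0.2 * 2.56e9           = 512 million dead tuples
    flat 100M cap:   Min(512 million, 100M) = 100 million
    sqrt-based cap:  ~10 million (as computed above)

so the flat cap still allows roughly ten times more bloat than the
sqrt formula, but it's already a five-fold improvement over the
status quo.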

However, it would be nice for the visibility map to be updated more often than once every 100 million dead tuples. I wonder whether that work could be decoupled from the VACUUM process?

