On Mon, Oct 6, 2025 at 6:46 AM David Rowley <[email protected]> wrote:

> On Mon, 6 Oct 2025 at 17:19, wenhui qiu <[email protected]> wrote:
> > I really can't agree more. Many default values are just too
> > conservative, and the documentation doesn't provide best practices. I
> > think we should reduce it to 1.x, or add a tip to the documentation
> > providing recommended values for different SSDs.
>
> Did you read Tomas's email or just the subject line?  I think if
> you're going to propose moving it in the opposite direction from
> what Tomas found to be the more useful direction, then that at least
> warrants providing some evidence to the contrary of what Tomas has
> shown or stating that you think his methodology for his calculation is
> flawed because...
>
> I suspect all you've done here is propagate the typical advice people
> give out around here.  It appears to me that Tomas went to great
> lengths to not do that.
>

+1

The problem will be in estimating the effect of the cache, which can
span a pretty wide range.

I have access to a not-too-small e-shop in the Czech Republic (though it
is not extra big either). It uses today's classic stack: Java (ORM),
Elastic, Postgres. The database size is circa 1.9TB, shared_buffers is
32GB, and it handles about 10-20K logged-in users at a time.

The buffer cache hit ratio is 98.42%, and the code is well optimized.
Note that this ratio does not include the file system cache.
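
For reference, a ratio like this can be read from pg_stat_database. A
minimal sketch (it counts only shared_buffers hits, so, as said above,
it knows nothing about the file system cache):

    -- buffer cache hit ratio for the current database
    SELECT datname,
           round(100.0 * blks_hit / nullif(blks_hit + blks_read, 0), 2)
               AS hit_ratio_pct
      FROM pg_stat_database
     WHERE datname = current_database();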

I believe that for other kinds of applications (OLAP) or for less
well-optimized code, the cache hit ratio can be much, much worse.

Last year I had some experience with customers running Postgres in
clouds, and the common (not extra expensive) disks there don't have
great performance characteristics today. It is a question whether a
single ratio like random_page_cost / seq_page_cost can adequately
describe the dynamic throttling (or, generally, the dynamic behavior of
current cloud I/O) when customers frequently hit those limits.
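
At minimum, these costs can be varied per environment rather than
relying on one global number. A minimal sketch of the knobs involved
(the values and the fast_ssd tablespace name are only illustrative, not
recommendations; the right numbers should come from measurement):

    -- cluster-wide default
    ALTER SYSTEM SET random_page_cost = 1.2;
    SELECT pg_reload_conf();

    -- or per tablespace, when different storage backs different
    -- tablespaces
    ALTER TABLESPACE fast_ssd
        SET (random_page_cost = 1.1, seq_page_cost = 1.0);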

Regards

Pavel



> David
>
>
>
