The default blocksize is currently 8k, which is not necessarily optimal for
all setups, especially with SSDs, where latency is much lower than on HDDs.
I don't think that really follows.
The rationale, which may be proven false, is that with an SSD the latency
penalty for random vs sequential reads and writes is much lower than for
an HDD, so there is less incentive to group data into larger chunks on
that account.
There is a case for different values having a significant impact on
performance (up to a not-to-be-sneezed-at 10% on a pgbench run on SSD, see
http://www.cybertec.at/postgresql-block-sizes-getting-started/), and ISTM
that the ability to align PostgreSQL's block size to the underlying FS/HW
block size would be nice.
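For reference, the block size can already be set at compile time via the
configure option, so experimenting does not require patching. A sketch (the
4 kB value and paths are only examples; a cluster initialized with a
different BLCKSZ is on-disk incompatible, so a fresh initdb is needed):

```shell
# Build PostgreSQL with a 4 kB block size instead of the default 8 kB.
# --with-blocksize takes the size in kilobytes.
./configure --with-blocksize=4
make && make install

# Data directories are tied to the BLCKSZ they were created with,
# so initialize a fresh cluster for the rebuilt server.
initdb -D /path/to/data
```

The running value can then be checked with `SHOW block_size;`.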
I don't think that benchmark is very meaningful. Way too small a scale,
way too short a runtime (there'll be barely any checkpoints, hot pruning,
or vacuum at all).
These benchmarks have the merit of existing and of being consistent (the
smaller the blocksize, the better the performance), and ISTM that the
results suggest this is worth investigating.
Possibly the "small" scale means that the data fit in memory, so the
benchmarks as run emphasize write performance linked to the INSERT/UPDATEs.
What would you suggest as meaningful for scale and run time, say on a
dual-core 8GB memory 256GB SSD laptop?
More advanced features, but with much more impact on the code, would be to
be able to change the size at database/table level.
That'd be pretty horrible because the size of pages in shared_buffers
wouldn't be uniform anymore.
Yep, I also thought of that, so I'm not planning to investigate.