Hello,

A few days ago, I got the question in the subject line during a discussion with a colleague.
In general, a larger i/o size per system call gives wider bandwidth on sequential reads than multiple system calls with a smaller i/o size each; this heuristic is probably well known. PostgreSQL, on the other hand, always reads database files in BLCKSZ (usually 8KB) units when a referenced block is not in shared buffers, and it doesn't seem to me that this can pull the maximum performance out of a modern storage system.

I'm not sure whether this kind of idea has been discussed before, so if similar ideas have been rejected, I'd like to hear the reason why we stick to a fixed i/o size.

The idea I'd like to investigate is this: when a block is referenced by a sequential scan, PostgreSQL allocates a set of contiguous buffers that fit a larger i/o size, then issues one consolidated i/o request into them. This probably makes sense whenever we can expect the upcoming block references to fall on neighboring blocks -- that is, the typical sequential-read workload.

Of course, we would need to solve some complications, such as preventing fragmentation of shared buffers and extending the storage manager's internal APIs to accept larger i/o sizes. Even so, this idea seems worth investigating. I've appended two rough sketches below my signature: a pseudo-benchmark of the underlying heuristic, and the rough shape the consolidated read might take.

Any comments, please.

Thanks,
--
KaiGai Kohei <kai...@kaigai.gr.jp>
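
To make the premise concrete, here is a tiny stand-alone pseudo-benchmark (my own sketch, not PostgreSQL code) that reads a file sequentially with a configurable i/o size per read(). On most storage, running it with 8192 and then with, say, 1048576 against an uncached file shows the bandwidth gap; drop the OS page cache between runs so you measure the device rather than memory.

/*
 * seqread.c -- sketch only: sequential read with a given i/o size.
 * usage: ./seqread <file> <io_size_in_bytes>
 */
#define _POSIX_C_SOURCE 200809L
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

int
main(int argc, char *argv[])
{
	size_t		io_size;
	char	   *buf;
	int			fd;
	ssize_t		nbytes;
	long long	total = 0;
	struct timespec t0, t1;
	double		elapsed;

	if (argc != 3)
	{
		fprintf(stderr, "usage: %s <file> <io_size>\n", argv[0]);
		return 1;
	}
	io_size = strtoul(argv[2], NULL, 10);
	buf = malloc(io_size);
	fd = open(argv[1], O_RDONLY);
	if (fd < 0 || buf == NULL)
	{
		perror("setup");
		return 1;
	}

	clock_gettime(CLOCK_MONOTONIC, &t0);
	while ((nbytes = read(fd, buf, io_size)) > 0)
		total += nbytes;
	clock_gettime(CLOCK_MONOTONIC, &t1);

	elapsed = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
	printf("%lld bytes in %.3f s (%.1f MB/s) at io_size=%zu\n",
		   total, elapsed, total / elapsed / (1024.0 * 1024.0), io_size);
	close(fd);
	free(buf);
	return 0;
}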
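
And here is roughly the shape the consolidated read itself could take. To be clear, this is a hypothetical stand-alone sketch with invented names (read_blocks_consolidated() and so on), not a patch. In PostgreSQL it would presumably live around the smgr/md layer, where today's smgrread() reads exactly one BLCKSZ block per call, and 'dest' would be a run of neighboring shared buffers rather than a malloc'ed region -- which is exactly where the fragmentation problem mentioned above comes in.

/*
 * consolidated_read.c -- sketch only, not PostgreSQL code.
 */
#define _XOPEN_SOURCE 700
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define BLCKSZ	8192

/*
 * Read 'nblocks' consecutive blocks starting at 'blocknum' with a
 * single pread(), instead of nblocks separate BLCKSZ-sized reads.
 * 'dest' must point to nblocks * BLCKSZ bytes of contiguous memory.
 * Returns the number of whole blocks actually read, or -1 on error.
 */
static int
read_blocks_consolidated(int fd, long blocknum, int nblocks, char *dest)
{
	off_t		offset = (off_t) blocknum * BLCKSZ;
	ssize_t		nbytes = pread(fd, dest, (size_t) nblocks * BLCKSZ, offset);

	if (nbytes < 0)
		return -1;
	return (int) (nbytes / BLCKSZ);
}

int
main(int argc, char *argv[])
{
	int			fd;
	int			got;
	char	   *region;

	if (argc != 2)
	{
		fprintf(stderr, "usage: %s <datafile>\n", argv[0]);
		return 1;
	}
	fd = open(argv[1], O_RDONLY);
	region = malloc(16 * BLCKSZ);
	if (fd < 0 || region == NULL)
	{
		perror("setup");
		return 1;
	}

	/* pull blocks 0..15 (128KB) with one system call */
	got = read_blocks_consolidated(fd, 0, 16, region);
	printf("read %d blocks in one call\n", got);

	free(region);
	close(fd);
	return 0;
}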