> Yep.  To take a degenerate case, suppose that you had many small WAL
> records, say 64 bytes each, so more than 100 per 8K block.  If you
> flush those one by one, you're going to rewrite that block 100 times.
> If you flush them all at once, you write that block once.
>
> But even when the range is more than the minimum write size (8K for
> WAL), there are still wins.  Writing 16K or 24K or 32K submitted as a
> single request can likely be done in a single revolution of the disk
> head.  But if you write 8K and wait until it's done, and then write
> another 8K and wait until that's done, the second request may not
> arrive until after the disk head has passed the position where the
> second block needs to go.  Now you have to wait for the drive to spin
> back around to the right position.
>
> The details of course vary with the hardware in use, but there are
> very few I/O operations where batching smaller requests into larger
> chunks doesn't help to some degree.  Of course, the optimal transfer
> size does vary considerably based on the type of I/O and the specific
> hardware in use.
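The degenerate case above can be made concrete with a small back-of-the-envelope simulation. This is a minimal sketch, not PostgreSQL code: it just counts how many physical 8K block writes a sequence of 64-byte record flushes would cost, comparing flushing each record individually against flushing them all in one batch. The record count and flush policy are illustrative assumptions.

```python
BLOCK_SIZE = 8192   # WAL block size (8K)
RECORD_SIZE = 64    # hypothetical small WAL record, as in the example above

def block_writes(num_records, flush_every):
    """Count physical block writes when flushing every `flush_every` records.

    Each flush must write out every 8K block touched by the unflushed
    range, including a partially filled trailing block -- which is why
    flushing one record at a time rewrites the same block over and over.
    """
    writes = 0
    flushed_bytes = 0   # WAL position already durably written
    pending = 0         # records accumulated since the last flush
    for _ in range(num_records):
        pending += 1
        if pending == flush_every:
            new_end = flushed_bytes + pending * RECORD_SIZE
            # blocks spanned by the unflushed range [flushed_bytes, new_end)
            first_block = flushed_bytes // BLOCK_SIZE
            last_block = (new_end - 1) // BLOCK_SIZE
            writes += last_block - first_block + 1
            flushed_bytes = new_end
            pending = 0
    return writes

# 128 records of 64 bytes fill exactly one 8K block.
print(block_writes(128, flush_every=1))    # one by one: 128 block writes
print(block_writes(128, flush_every=128))  # one batch:    1 block write
```

Running it shows the 100-to-1 style write amplification the quoted text describes: the same 8K of WAL costs 128 block writes when flushed record by record, but a single block write when flushed once.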
This makes a lot of sense. I was always under the impression that
batching small requests into larger ones adds I/O latency overhead,
but it turns out to be the other way round, which I now understand.

Thanks a ton.

--
Regards,
Atri
l'apprenant

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers