Hi, Luke,

Luke Lonergan wrote:

>> Do you think that adding some posix_fadvise() calls to the backend to
>> pre-fetch some blocks into the OS cache asynchronously could improve
>> that situation?
> 
> Nope - this requires true multi-threading of the I/O, there need to be
> multiple seek operations running simultaneously.  The current executor
> blocks on each page request, waiting for the I/O to happen before requesting
> the next page.  The OS can't predict what random page is to be requested
> next.

I thought that posix_fadvise() with POSIX_FADV_WILLNEED was exactly
meant for this purpose?

My idea was that the executor could posix_fadvise() the blocks it will
need in the near future, so that later, when it actually issues the
blocking read, the block is already in the OS cache. This could even give
speedups in the single-spindle case, since the I/O scheduler could already
be fetching the next blocks while the executor processes the current one.
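To make the idea concrete, here is a minimal sketch of the pattern I have
in mind (not actual backend code; prefetch_block() and read_block() are
hypothetical helpers, and BLCKSZ stands in for the block size the executor
already knows):

    #include <fcntl.h>
    #include <unistd.h>

    #define BLCKSZ 8192

    static void
    prefetch_block(int fd, off_t blocknum)
    {
        /* Hint to the kernel that we will need this range soon,
         * so it can start the read asynchronously. */
        (void) posix_fadvise(fd, blocknum * BLCKSZ, BLCKSZ,
                             POSIX_FADV_WILLNEED);
    }

    static ssize_t
    read_block(int fd, off_t blocknum, char *buf)
    {
        /* By the time the executor issues the blocking read, the page
         * is hopefully already in the OS cache. */
        return pread(fd, buf, BLCKSZ, blocknum * BLCKSZ);
    }

The executor would call prefetch_block() for blocks it expects to touch a
little ahead of the point where it calls read_block() for them.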

But there must be some details in the executor that prevent this.

> We can implement multiple scanners (already present in MPP), or we could
> implement AIO and fire off a number of simultaneous I/O requests for
> fulfillment.

AIO is much more intrusive to implement, so I'd prefer to first look at
whether posix_fadvise() could improve the situation.

Thanks,
Markus
