On 1/17/14, 2:24 PM, Gregory Smith wrote:
I am skeptical that the database will take over very much of this work and perform better 
than the Linux kernel does.  My take is that our most useful role would be providing test 
cases kernel developers can add to a performance regression suite.  Ugly "we never 
thought that would happen" situations seem to be at the root of many of the kernel 
performance regressions people here get nailed by.

FWIW, there are some scenarios where we could potentially provide additional 
information to the kernel scheduler; information we have that the kernel never will.

For example, if we have a LIMIT clause we can (sometimes) provide a rough 
estimate of how many pages we'll need to read from a relation.

Probably more useful is the case of index scans; if we pre-read more data from 
the index we could hand the kernel a list of base relation blocks that we know 
we'll need.

There are some other things that have been mentioned, such as cases where files 
will only be accessed sequentially.

Outside of that though, the kernel is going to be in a way better position to 
schedule IO than we will ever be. Not only does it understand the underlying 
hardware, it can also see everything else that's going on.
--
Jim C. Nasby, Data Architect                       j...@nasby.net
512.569.9461 (cell)                         http://jim.nasby.net


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)