On 12/8/13 1:49 PM, Heikki Linnakangas wrote:
> On 12/08/2013 08:14 PM, Greg Stark wrote:
>> The whole accounts table is 1.2GB and contains 10 million rows. As
>> expected, with rows_per_block set to 1 it reads 240MB of that,
>> containing nearly 2 million rows (and takes nearly 20s -- doing a full
>> table scan for select count(*) only takes about 5s):

> One simple thing we could do, instead of or in addition to changing the
> algorithm, is to issue posix_fadvise() calls for the blocks we're going
> to read. It should at least be possible to match the speed of a plain
> sequential scan that way.

Hrm... maybe it wouldn't be very hard to use async IO here either? I'm thinking 
the stage 2 work could be done in the callback routine...
--
Jim C. Nasby, Data Architect                       j...@nasby.net
512.569.9461 (cell)                         http://jim.nasby.net


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers