On Oct 12, 2010, at 11:58 AM, Tom Lane wrote:

> Jesper Krogh <jes...@krogh.cc> writes:
>> On 2010-10-12 19:07, Tom Lane wrote:
>>> Anyway, if anyone is hot to make COUNT(*) faster, that's where to look.
> 
>> Just having 32 bytes of "payload" would more or less double
>> the time to count, if I read your test results correctly ... and in the
>> situation where disk access would be needed, way more.
> 
>> Dividing pg_relation_size by the number of tuples in our production
>> system, I end up with no average tuple size less than 100 bytes.
> 
> Well, yeah.  I deliberately tested with a very narrow table so as to
> stress the per-row CPU costs as much as possible.  With any wider table
> you're just going to be I/O bound.
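For reference, Jesper's average-tuple-size estimate can be computed roughly like this (a sketch; the table name is a placeholder, and using the planner's reltuples estimate as the row count is my assumption):

```sql
-- Approximate average tuple size: on-disk relation size divided by the
-- planner's row-count estimate.  'mytable' is a placeholder name.
SELECT pg_relation_size(c.oid) / GREATEST(c.reltuples, 1) AS avg_tuple_bytes
FROM pg_class c
WHERE c.relname = 'mytable';
```

Note that reltuples is only as fresh as the last ANALYZE, so this is an estimate, not an exact figure.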


On a wimpy disk, I/O bound for sure.  But my disks go 1000MB/sec, and no query 
can go fast enough to saturate them.  The best I've gotten is 800MB/sec, on a wide row 
(average 800 bytes).  Most tables scan at 300MB/sec or so.  And with 72GB of RAM, 
many scans are in-memory anyway.
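To put numbers on that, here is a back-of-envelope sketch (my arithmetic, using the figures above; the helper is illustrative, not any PostgreSQL API) of how scan bandwidth translates into rows per second:

```python
# Back-of-envelope: translate sequential-scan bandwidth into rows/sec.
# The 800 MB/sec and 800-byte figures come from the message above.

def rows_per_sec(mb_per_sec: float, avg_row_bytes: float) -> float:
    """Rows scanned per second at a given I/O bandwidth and row width."""
    return mb_per_sec * 1_000_000 / avg_row_bytes

# At 800 MB/sec on 800-byte rows: about 1 million rows/sec.
print(rows_per_sec(800, 800))   # 1000000.0

# The same bandwidth on a narrow 100-byte row would mean 8 million
# rows/sec, which is where per-row CPU cost, not I/O, becomes the limit.
print(rows_per_sec(800, 100))   # 8000000.0
```

The point being: at these bandwidths, per-row CPU overhead matters long before the disks do.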

A single SSD with a supercapacitor will do about 500MB/sec by itself next spring. 
I will easily be able to build a system with 2GB/sec of I/O for under $10k.



-- 
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance