Greg Smith <[EMAIL PROTECTED]> writes:

> On Mon, 22 Sep 2008, Gregory Stark wrote:
>
>> Hm, I'm disappointed with the 48-drive array here. I wonder why it maxed out
>> at only 10x the bandwidth of one drive. I would have expected more like 24x or better.
>
> The ZFS RAID-Z implementation doesn't really scale that linearly.  It's rather
> hard to get the full bandwidth out of an X4500 with any single process, and I
> haven't done any filesystem tuning to improve things; everything is at the
> defaults.

Well, random-access I/O will fall pretty far short of the full sequential
bandwidth. Actually, this is a major issue: our seq_page_cost vs.
random_page_cost dichotomy doesn't really work when we're prefetching pages.

In my experiments, an array capable of supplying about 1.4GB/s of sequential
I/O could muster only about 40MB/s of random I/O with prefetching, and only
about 5MB/s without.

For this machine we would have quite a dilemma setting random_page_cost: do we
set it to 280 (the sequential-to-random throughput ratio without prefetching,
1400/5) or 35 (the ratio with prefetching, 1400/40)?
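
To make the arithmetic explicit, here's a minimal C sketch (this assumes
seq_page_cost = 1, so the candidate random_page_cost is simply the ratio of
measured sequential to random throughput):

    #include <stdio.h>

    int main(void)
    {
        /* Throughput measured in the experiment above, in MB/s */
        double seq = 1400.0;           /* ~1.4GB/s sequential */
        double random_prefetch = 40.0; /* random I/O with prefetching */
        double random_single = 5.0;    /* random I/O without prefetching */

        /* With seq_page_cost = 1, random_page_cost is just the ratio */
        printf("without prefetching: %.0f\n", seq / random_single);   /* 280 */
        printf("with prefetching:    %.0f\n", seq / random_prefetch); /*  35 */
        return 0;
    }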

Perhaps access paths that expect to be able to prefetch most of their
accesses should use random_page_cost / effective_spindle_count for their I/O
costs?
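
In planner terms it might look something like the hypothetical helper below
(effective_spindle_count is the proposed knob, not an existing GUC, and this
isn't actual costsize.c code):

    #include <stdbool.h>

    /*
     * Hypothetical sketch: access paths that expect to prefetch most of
     * their reads (e.g. a bitmap heap scan) would charge a per-page cost
     * scaled down by how many spindles they can keep busy at once.
     */
    double
    prefetched_page_cost(double random_page_cost,
                         int effective_spindle_count,
                         bool can_prefetch)
    {
        if (can_prefetch && effective_spindle_count > 1)
            return random_page_cost / effective_spindle_count;
        return random_page_cost;
    }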

But then if people don't set random_page_cost high enough, they could easily
find random fetches being costed as cheaper than sequential fetches. And I
have a feeling it'll be a hard sell to get people to set random_page_cost in
the double digits, let alone the triple digits.

-- 
  Gregory Stark
  EnterpriseDB          http://www.enterprisedb.com
  Get trained by Bruce Momjian - ask me about EnterpriseDB's PostgreSQL training!
