Quoting Mertol Ozyoney:
Hi,
You may be hitting a bottleneck at your HBA. Try using multiple HBAs or
drive channels.
Mertol
I'm pretty sure it's not an HBA issue. As I commented, my per-disk
write throughput stayed pretty consistent across the 4-, 8- and 12-disk
pools and varied between 80 and 90.
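
For reference, this is roughly how I'm watching the per-disk numbers
while the workload runs (the pool name here is just a placeholder):

   # zpool iostat -v testpool 5
   # iostat -xn 5

zpool iostat -v breaks the throughput out per mirror and per device,
and iostat -xn shows per-device throughput and service times.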
Quoting Bob Friesenhahn:
On Mon, 31 Aug 2009, en...@businessgrade.com wrote:
Hi. I've been doing some simple read/write tests using filebench on
a mirrored pool. Essentially, I've been scaling up the number of
disks in the pool before each test between 4, 8 and 12. I've
noticed that for individual disks, ZFS write performance scales very well [...]
There are around a zillion possible reasons for this. In my experience,
most folks don't or can't create enough load. Make sure you have
enough threads creating work. Other than that, the scientific method
would suggest creating experiments, making measurements, running
regressions, etc.
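
As a rough sketch of what I mean (the personality name and tunables
below are just examples; they vary between filebench versions):

   # filebench
   filebench> load fileserver
   filebench> set $dir=/testpool/fb
   filebench> set $nthreads=64
   filebench> run 60

Crank $nthreads (or run several filebench processes in parallel) until
the disks stay busy; otherwise you are measuring the client, not the pool.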
-- richard
-----Original Message-----
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of
en...@businessgrade.com
Sent: Monday, August 31, 2009 5:16 PM
To: zfs-discuss@opensolaris.org
Subject: [zfs-discuss] ZFS read performance scalability
Hi. I've been doing some simple read/write tests using filebench on a mirrored pool. [...]
Hi. I've been doing some simple read/write tests using filebench on a
mirrored pool. Essentially, I've been scaling up the number of disks
in the pool before each test between 4, 8 and 12. I've noticed that
for individual disks, ZFS write performance scales very well between
4, 8 and 12 disks [...]
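
For concreteness, a mirrored pool of this kind is normally a stripe of
two-disk mirrors, so the 4-disk case would be created roughly like this
(pool and device names are placeholders, not the actual ones):

   # zpool create testpool mirror c0t0d0 c0t1d0 mirror c0t2d0 c0t3d0

and the 8- and 12-disk pools just add more mirror pairs.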