The test case was build 38, Solaris 11: a 2 GB file, initially created
with 1 MB SW, and a recsize of 8 KB, on a pool with two RAID-Z 5+1
vdevs, accessed with 24 threads of 8 KB RW, for 500,000 ops or 40
seconds, whichever came first. The result at the pool level was 78% of the
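For reference, a setup like the one described might be laid out as below. This is a hypothetical sketch only; the pool name and device names are placeholders, not details from the original message:

```shell
# Hypothetical reconstruction of the test layout: one pool built from
# two raid-z 5+1 vdevs, recordsize lowered to 8 KB, and a 2 GB test
# file pre-created with 1 MB writes. Device names are placeholders.
zpool create tank \
  raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 \
  raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0
zfs set recordsize=8k tank
dd if=/dev/zero of=/tank/testfile bs=1024k count=2048
```

Note that recordsize only affects blocks written after it is set, which is why the file is created after the property change.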
Robert Milkowski wrote:
ps. however I'm really concerned about ZFS behavior when a pool is
almost full, there are a lot of write transactions to that pool, and the
server is restarted forcibly or panics. I observed that file systems
on that pool will mount in 10-30 minutes each during zfs mount -a, and
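One way to see which datasets dominate that mount time is to mount them one at a time under a timer; a sketch, assuming the pool is imported but its filesystems are not yet mounted:

```shell
# Time each individual mount instead of the aggregate `zfs mount -a`
# (ptime is the Solaris timing utility).
for fs in $(zfs list -H -o name -t filesystem); do
  printf '%s: ' "$fs"
  ptime zfs mount "$fs" 2>&1 | grep real
done
```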
Hi,
Note that these are page cache rates, and that if the application pushes harder
and exposes the supporting device rates there is another world of performance
to be observed. This is where ZFS gets to be a challenge, as the relationship
between the application-level I/O and the pool level is
Hi Eric,
Thanks for the information.
I am aware of the recsize option and its intended use. However, when I
was exploring it to confirm the expected behavior, what I found was the
opposite!
The test case was build 38, Solaris 11, a 2 GB file, initially
created with 1 MB SW, and a recsize
Hi.
This time, some RAID5/RAID-Z benchmarks.
This time I connected the 3510 head unit with one link to the same server the 3510
JBODs are connected to (using the second link). snv_44 is used; the server is a v440.
I also tried changing the maximum number of pending I/O requests for the HW RAID5 LUN
and checked with DTrace that
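Since the message is cut off here, the following is only a guess at the kind of probe involved: a DTrace io-provider script that tracks outstanding I/Os per device, which is one way to verify whether a throttle change (e.g. sd_max_throttle in /etc/system) actually took effect:

```shell
# Track outstanding I/Os per device with the DTrace io provider and
# report the maximum queue depth observed. Purely illustrative.
dtrace -n '
io:::start { pending[args[1]->dev_statname]++; }
io:::done
{
  this->dev = args[1]->dev_statname;
  @maxq[this->dev] = max(pending[this->dev]);
  pending[this->dev]--;
}'
```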
-Original Message-
From: Robert Milkowski [mailto:[EMAIL PROTECTED]
Sent: Tuesday, August 08, 2006 10:15 AM Eastern Standard Time
To: zfs-discuss@opensolaris.org
Subject: [zfs-discuss] Re: 3510 HW RAID vs 3510 JBOD ZFS SOFTWARE RAID
Hi.
This time some RAID5
Hello Luke,
Tuesday, August 8, 2006, 4:48:38 PM, you wrote:
LL Does snv44 have the ZFS fixes to the I/O scheduler, the ARC and the
prefetch logic?
LL These are great results for random I/O, I wonder how the sequential I/O
looks?
LL Of course you'll not get great results for sequential I/O on
Robert,
On 8/8/06 9:11 AM, Robert Milkowski [EMAIL PROTECTED] wrote:
1. UFS, noatime, HW RAID5 6 disks, S10U2
70MB/s
2. ZFS, atime=off, HW RAID5 6 disks, S10U2 (the same lun as in #1)
87MB/s
3. ZFS, atime=off, SW RAID-Z 6 disks, S10U2
130MB/s
4. ZFS, atime=off, SW RAID-Z 6
Hello Luke,
Tuesday, August 8, 2006, 6:18:39 PM, you wrote:
LL Robert,
LL On 8/8/06 9:11 AM, Robert Milkowski [EMAIL PROTECTED] wrote:
1. UFS, noatime, HW RAID5 6 disks, S10U2
70MB/s
2. ZFS, atime=off, HW RAID5 6 disks, S10U2 (the same lun as in #1)
87MB/s
3. ZFS, atime=off, SW
Luke Lonergan wrote:
Robert,
On 8/8/06 9:11 AM, Robert Milkowski [EMAIL PROTECTED] wrote:
1. UFS, noatime, HW RAID5 6 disks, S10U2
70MB/s
2. ZFS, atime=off, HW RAID5 6 disks, S10U2 (the same lun as in #1)
87MB/s
3. ZFS, atime=off, SW RAID-Z 6 disks, S10U2
130MB/s
4. ZFS,
Robert,
LL Most of my ZFS experiments have been with RAID10, but there were some
LL massive improvements to seq I/O with the fixes I mentioned - I'd expect that
LL this shows that they aren't in snv44.
So where did you get those fixes?
From the fine people who implemented them!
As Mark
On Tue, Aug 08, 2006 at 06:11:09PM +0200, Robert Milkowski wrote:
filebench/singlestreamread v440
1. UFS, noatime, HW RAID5 6 disks, S10U2
70MB/s
2. ZFS, atime=off, HW RAID5 6 disks, S10U2 (the same lun as in #1)
87MB/s
3. ZFS, atime=off, SW RAID-Z 6 disks, S10U2
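For anyone wanting to reproduce these numbers, the workload name above comes from filebench. The invocation below is a sketch from memory; the parameter name and exact command syntax may differ between filebench versions:

```shell
# Drive the singlestreamread personality in filebench's interactive
# shell; $filesize is an assumed parameter name.
filebench <<'EOF'
load singlestreamread
set $filesize=2g
run 60
EOF
```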
Hello Matthew,
Tuesday, August 8, 2006, 7:25:17 PM, you wrote:
MA On Tue, Aug 08, 2006 at 06:11:09PM +0200, Robert Milkowski wrote:
filebench/singlestreamread v440
1. UFS, noatime, HW RAID5 6 disks, S10U2
70MB/s
2. ZFS, atime=off, HW RAID5 6 disks, S10U2 (the same lun as in #1)