What kind of drives are we talking about? Even SATA drives are sold
by application type (desktop, enterprise server, home PVR,
surveillance PVR, etc.). Then there are drives with SAS and Fibre
Channel interfaces. Then you've got Winchester platters vs. SSDs vs.
hybrids. But even before considering all of that and the other system
factors, throughput for direct-attached storage can vary greatly: not
only do interface type and storage technology matter, but even small
differences in on-drive controller firmware can introduce variances.
That's why server manufacturers like HP, Dell, et al. prefer that you
replace failed drives with one of theirs rather than something off
the shelf: their drives usually carry firmware that's been fine-tuned
in house or in conjunction with the drive manufacturer.


On Dec 11, 2011, at 8:25 AM, Edward Ned Harvey
<opensolarisisdeadlongliveopensola...@nedharvey.com> wrote:

>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>> boun...@opensolaris.org] On Behalf Of Nathan Kroenert
>>
>> That reminds me of something I have been wondering about... Why only 12x
>> faster? If we are effectively reading from memory - as compared to a
>> disk reading at approximately 100MB/s (which is about an average PC HDD
>> reading sequentially), I'd have thought it should be a lot faster than
>> 12x.
>>
>> Can we really only pull stuff from cache at only a little over one
>> gigabyte per second if it's dedup data?
>
> Actually, CPUs and memory aren't as fast as you might think.  In a system
> with 12 disks, I had to write my own "dd" replacement, because "dd
> if=/dev/zero bs=1024k" wasn't fast enough to keep the disks busy.  Later, I
> wanted to do something similar using unique data, and it was simply
> impossible to generate random data fast enough.  I had to tweak my "dd"
> replacement to write serial numbers, which still wasn't fast enough, so I
> tweaked it again to write a big block of static data, followed by a serial
> number, followed by another big block (each block always smaller than the
> disk block, so every block hitting the pool would be treated as unique...)
>
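
A minimal sketch of that static-block-plus-serial-number trick might look
like the C below. The names, chunk size, and buffer size are illustrative
guesses on my part, not Edward's actual tool:

/* uniqfill.c - sketch: emit pseudo-unique data fast by stamping an
 * incrementing serial number into every chunk of an otherwise static
 * buffer.  The chunk size must stay below the pool's recordsize so
 * every record contains at least one serial number and never dedups. */
#include <stdint.h>
#include <string.h>
#include <unistd.h>

#define BUF_SIZE   (1024 * 1024)   /* bytes written per write() call     */
#define CHUNK_SIZE (64 * 1024)     /* assumed; keep it below recordsize  */

int main(void)
{
    static char buf[BUF_SIZE];
    uint64_t serial = 0;
    size_t off;

    memset(buf, 'x', sizeof(buf));          /* cheap static filler data   */

    for (;;) {
        /* overwrite the start of each chunk with the next serial number */
        for (off = 0; off < BUF_SIZE; off += CHUNK_SIZE) {
            memcpy(buf + off, &serial, sizeof(serial));
            serial++;
        }
        if (write(STDOUT_FILENO, buf, BUF_SIZE) != (ssize_t)BUF_SIZE)
            return 1;                       /* short write or error: stop */
    }
}

Piped onto a dataset (e.g. ./uniqfill | dd of=/tank/fs/junk bs=1024k, a
hypothetical path), every record carries at least one distinct counter
value, so no two blocks hash the same and dedup gets no help.
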
> 1 typical disk sustains 1Gbit/sec.  In theory, 12 should be able to sustain
> 12 Gbit/sec.  According to Nathan's email, the memory bandwidth might be 25
> Gbit, of which, you probably need to both read & write, thus making it
> effectively 12.5 Gbit...  I'm sure the actual bandwidth available varies by
> system and memory type.
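
For what it's worth, the arithmetic (rounded, and taking the ~25 Gbit
figure above at face value) works out roughly as:

  1 disk    ~ 100 MB/s x 8 bits         ~ 0.8-1 Gbit/s sequential
  12 disks  ~ 12 x 1 Gbit/s             = ~12 Gbit/s aggregate
  memory    ~ 25 Gbit/s / 2 (read+write the same data) = ~12.5 Gbit/s

which is roughly consistent with the ~12x figure in Nathan's question.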
>
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
