> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Nathan Kroenert
> 
> That reminds me of something I have been wondering about... Why only 12x
> faster? If we are effectively reading from memory - as compared to a
> disk reading at approximately 100MB/s (which is about an average PC HDD
> reading sequentially) - I'd have thought it should be a lot faster than
> 12x.
> 
> Can we really only pull stuff from cache at a little over one gigabyte
> per second if it's dedup data?

Actually, CPUs and memory aren't as fast as you might think.  In a system
with 12 disks, I had to write my own "dd" replacement, because "dd
if=/dev/zero bs=1024k" wasn't fast enough to keep the disks busy.  Later, I
wanted to do something similar using unique data, and it was simply
impossible to generate random data fast enough.  I tweaked my "dd"
replacement to write serial numbers, which still wasn't fast enough, so I
tweaked it again to write a big block of static data, followed by a serial
number, followed by another big block (with the static runs kept smaller
than the disk block, so every block contains a serial number and is
treated as unique when it hits the pool...)
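
Something like this minimal sketch, which is my reconstruction of that
approach rather than the actual tool; the 1 MB write size and 128K chunk
size are assumptions (128K happens to match the default ZFS recordsize):

    /* unique-dd.c - emit mostly-static data with a serial number
     * stamped into every chunk, so each block-sized piece hashes as
     * unique without paying the cost of generating random data.
     * Build: cc -O2 -o unique-dd unique-dd.c
     * Use:   ./unique-dd > /pool/testfile   (Ctrl-C to stop)
     */
    #include <string.h>
    #include <stdint.h>
    #include <unistd.h>

    #define WRITE_SIZE (1024 * 1024)  /* 1 MB per write(2), like bs=1024k */
    #define CHUNK_SIZE (128 * 1024)   /* one serial per 128K chunk */

    int main(void)
    {
        static char buf[WRITE_SIZE];
        uint64_t serial = 0;
        size_t off;

        memset(buf, 'x', sizeof(buf));  /* cheap static filler */

        for (;;) {
            /* Overwrite the first 8 bytes of each chunk with a
             * counter, so no two chunks are ever identical. */
            for (off = 0; off < sizeof(buf); off += CHUNK_SIZE) {
                memcpy(buf + off, &serial, sizeof(serial));
                serial++;
            }
            if (write(STDOUT_FILENO, buf, sizeof(buf)) !=
                (ssize_t)sizeof(buf))
                break;  /* stop on short write or error */
        }
        return 0;
    }

The point of the design is that almost all of the output is written once
by memset and never touched again; only 8 bytes per chunk change between
writes, so the generator costs almost nothing in memory bandwidth.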

One typical disk sustains about 1 Gbit/sec, so in theory 12 of them should
be able to sustain 12 Gbit/sec.  According to Nathan's email, the memory
bandwidth might be 25 Gbit/sec, and since you probably need to both read
and write each byte, that's effectively 12.5 Gbit/sec.  I'm sure the
actual bandwidth available varies by system and memory type.
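
Back-of-the-envelope, assuming one memory read plus one memory write per
byte moved:

    12 disks x 1 Gbit/sec        = 12   Gbit/sec needed to feed the pool
    25 Gbit/sec / 2 (read+write) = 12.5 Gbit/sec effective memory bandwidth

So the memory bus and a 12-disk pool end up nearly matched, which would
explain why reading dedup data from cache only looks about 12x faster than
a single disk.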
