Anton Rang wrote:
It's also worth noting that the customers for whom streaming is a real
issue tend to be those who are willing to spend a lot of money for
reliability (think replicating the whole system+storage) rather than
compromising performance; for them, simply the checksumming overhead
Anton wrote:
(For what it's worth, the current 128K-per-I/O policy of ZFS really
hurts its performance for large writes. I imagine this would not be
too difficult to fix if we allowed multiple 128K blocks to be
allocated as a group.)
On May 31, 2006, at 8:56 AM, Roch Bourbonnais - Performance
Engineering wrote:
I'm not taking a stance on this, but if I keep a controller
full of 128K I/Os, and assuming they are targeting
contiguous physical blocks, how different is that from
issuing a very large I/O?
On Wed, 2006-05-31 at 10:48, Anton Rang wrote:
We generally take one interrupt for each I/O
(if the CPU is fast enough), so instead of taking one
interrupt for 8 MB (for instance), we take 64.
Hunh. Gigabit ethernet devices typically implement some form of
interrupt blanking or coalescing.
On Tue, 2006-05-30 at 14:59 -0500, Anton Rang wrote:
On May 30, 2006, at 2:16 PM, Richard Elling wrote:
[assuming we're talking about disks and not hardware RAID arrays...]
It'd be interesting to know how many customers plan to use raw disks,
and how their performance relates to hardware RAID arrays.