On Jan 4, 2007, at 3:25 AM, [EMAIL PROTECTED] wrote:
Is there some reason why a small read on a raidz2 is not statistically very likely to require I/O on only one device? Assuming a non-degraded pool, of course.
ZFS stores its checksums for RAIDZ/RAIDZ2 in such a way that all disks must
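The reply is truncated, but the mechanism can be sketched: ZFS keeps one checksum per logical block and stripes that block, plus parity, across the RAIDZ vdev, so verifying the checksum forces a read of every column the block occupies. A minimal model (my own illustrative helper, not ZFS source; the disk counts and sector size are assumptions):

```python
# Sketch (not ZFS code): how many disks one logical block touches
# on RAIDZ. One checksum covers the whole block, so all of its
# data columns, and the parity columns, must be read to verify it.

def raidz_columns_for_block(block_size, n_disks, n_parity, sector=512):
    """Return how many disks hold a piece of one logical block."""
    sectors = -(-block_size // sector)           # ceil division
    data_cols = min(n_disks - n_parity, sectors)
    return data_cols + n_parity                  # parity columns too

# A 128K block on a 7-disk raidz2 spans all 5 data columns + 2 parity:
print(raidz_columns_for_block(128 * 1024, n_disks=7, n_parity=2))  # 7

# Even a single-sector logical block still carries its parity columns:
print(raidz_columns_for_block(512, n_disks=7, n_parity=2))  # 3
```

This is why small random reads on RAIDZ do not scale with spindle count the way they do on a striped mirror: every block read involves most or all of the vdev.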
On Jan 4, 2007, at 10:26 AM, Roch - PAE wrote:
All filesystems will incur a read-modify-write when an application updates a portion of a block.
For most Solaris file systems it is the page size, rather than the block size, that affects read-modify-write; hence 8K (SPARC) or 4K
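The read-modify-write cycle described above can be shown with a toy example (illustrative code, not any filesystem's implementation; the 8K block size simply matches the SPARC page size mentioned above):

```python
import io

# Illustrative read-modify-write: updating part of a block forces
# the whole block to be read, patched in memory, and written back.

def partial_write(device, block_no, offset, data, block_size=8192):
    """Update `data` at `offset` within one block of a raw device."""
    device.seek(block_no * block_size)
    block = bytearray(device.read(block_size))   # read the full block
    block[offset:offset + len(data)] = data      # modify in memory
    device.seek(block_no * block_size)
    device.write(bytes(block))                   # write the full block back

dev = io.BytesIO(bytes(8192 * 4))                # fake 4-block device
partial_write(dev, block_no=2, offset=100, data=b"hello")
```

The 5-byte application write cost an 8K read plus an 8K write of the containing block.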
On Dec 19, 2006, at 7:14 AM, Mike Seda wrote:
Anton B. Rang wrote:
I have a Sun SE 3511 array with 5 x 500 GB SATA-I disks in a RAID 5. This 2 TB logical drive is partitioned into 10 x 200GB slices. I gave 4 of these slices to a Solaris 10 U2 machine and added each of them to a concat
On Oct 17, 2006, at 12:43 PM, Matthew Ahrens wrote:
Jeremy Teo wrote:
Heya Anton,
On 10/17/06, Anton B. Rang [EMAIL PROTECTED] wrote:
No, the reason to try to match recordsize to the write size is so that a small write does not turn into a large read + a large write. In configurations
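The cost being described is easy to quantify. A back-of-envelope sketch (my numbers; the 128K default recordsize is real, the 8K write is just an example):

```python
# Write amplification when the application write is smaller than
# the filesystem recordsize: the small write becomes a large read
# plus a large write of the full record.

def bytes_moved(write_size, recordsize):
    if write_size >= recordsize:
        return write_size            # whole records: no read needed
    return 2 * recordsize            # read full record + write it back

# An 8K write against the default 128K recordsize:
print(bytes_moved(8 * 1024, 128 * 1024) // (8 * 1024))   # 32x amplification
# With recordsize matched to the write size:
print(bytes_moved(8 * 1024, 8 * 1024) // (8 * 1024))     # 1x
```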
On Aug 9, 2006, at 8:18 AM, Roch wrote:
So while I'm feeling optimistic :-) we really ought to be able to do this in two I/O operations. If we have, say, 500K of data to write (including all of the metadata), we should be able to allocate a contiguous 500K block on disk and
On Aug 11, 2006, at 12:38 PM, Jonathan Adams wrote:
The problem is that you don't know the actual *contents* of the parent block until *all* of its children have been written to their final locations. (This is because the block pointer's value depends on the final location.)
But I know
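The ordering constraint above is the copy-on-write Merkle-tree property: a parent block embeds each child's on-disk address and checksum, so children must reach their final locations before the parent's bytes even exist. A minimal sketch (illustrative data structures, not ZFS's blkptr format):

```python
import hashlib

# Sketch of the bottom-up write ordering: a parent's contents are
# built from each child's (address, checksum) pair, so the parent
# cannot be finalized until every child has been placed on "disk".

disk = {}           # address -> bytes
next_addr = [0]     # simple bump allocator

def write_block(payload: bytes):
    """Place a block and return its (address, checksum) pointer."""
    addr = next_addr[0]
    next_addr[0] += len(payload)
    disk[addr] = payload
    return addr, hashlib.sha256(payload).hexdigest()

# Children must be written first ...
ptr_a = write_block(b"child A data")
ptr_b = write_block(b"child B data")
# ... because the parent's bytes are derived from their pointers.
parent_payload = repr((ptr_a, ptr_b)).encode()
ptr_root = write_block(parent_payload)
```

Changing a child changes its checksum, hence the parent's contents, hence the parent's checksum, all the way to the root, which is why the writes cannot all be issued in one pass.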
On May 31, 2006, at 8:56 AM, Roch Bourbonnais - Performance Engineering wrote:
I'm not taking a stance on this, but if I keep a controller full of 128K I/Os, and assuming they are targeting contiguous physical blocks, how different is that from issuing a very large I/O?
There are
Ok, so let's consider your 2MB read. You have the option of putting it in one contiguous place on the disk or splitting it into 16 x 128K chunks, somewhat spread all over.
Now you issue a read to that 2MB of data.
As you noted, you either have to wait for the head to find the 2MB block and stream
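The trade-off being set up here can be put in rough numbers. A toy service-time model (the seek time and transfer rate are my assumed parameters, not figures from the thread):

```python
# Rough disk service-time model: one contiguous 2MB read versus
# 16 scattered 128K reads. Transfer time is identical; the chunked
# layout pays one positioning delay per chunk.

AVG_SEEK_MS = 8.0          # assumed average seek + rotational delay
XFER_MB_PER_S = 60.0       # assumed sustained media transfer rate

def read_time_ms(total_mb, n_chunks):
    return n_chunks * AVG_SEEK_MS + total_mb / XFER_MB_PER_S * 1000

print(round(read_time_ms(2.0, 1), 1))    # ~41.3 ms: one seek, then stream
print(round(read_time_ms(2.0, 16), 1))   # ~161.3 ms: sixteen seeks
```

Under these assumed parameters the scattered layout is roughly 4x slower, which is the cost the contiguous placement avoids.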
On May 12, 2006, at 11:59 AM, Richard Elling wrote:
CPU cycles and memory bandwidth (which both can be in short supply on a database server).
We can throw hardware at that :-) Imagine a machine with lots of extra CPU cycles [ ... ]
Yes, I've heard this story before, and I won't believe it