[EMAIL PROTECTED] writes:
But you're not answering my question:
How can RAID-Z preserve transactional semantics when a single
FS block write requires writing to multiple physical devices?
Jeff Bonwick [EMAIL PROTECTED] writes:
Since transactions in ZFS are not committed until the ueberblock is written,
this boils down to:
How is the ueberblock committed atomically in a RAID-Z configuration?
RAID-Z isn't even necessary to have this issue; all you need is a disk
that doesn't actually guarantee atomicity of
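For what it's worth, the mechanism ZFS uses here is that the ueberblock is
self-checksummed and stamped with a transaction group number, and a commit
writes it into a rotating ring of slots in the vdev labels; at import, the
valid copy with the highest txg wins, so a torn write simply fails its
checksum and the previous ueberblock stays active. A minimal sketch in
Python (the slot count, field names, and CRC choice here are my own
illustration, not the on-disk format):

```python
import zlib

def make_uberblock(txg, root_ptr):
    """Build a self-checksummed uberblock record (sketch)."""
    payload = f"{txg}:{root_ptr}".encode()
    return {"txg": txg, "payload": payload, "cksum": zlib.crc32(payload)}

def write_uberblock(ring, ub):
    """Commit = overwrite one slot in the ring (txg modulo ring size)."""
    ring[ub["txg"] % len(ring)] = ub

def active_uberblock(ring):
    """On import, pick the valid uberblock with the highest txg."""
    valid = [ub for ub in ring if ub is not None
             and zlib.crc32(ub["payload"]) == ub["cksum"]]
    return max(valid, key=lambda ub: ub["txg"], default=None)

ring = [None] * 4
write_uberblock(ring, make_uberblock(txg=1, root_ptr="blk#100"))
write_uberblock(ring, make_uberblock(txg=2, root_ptr="blk#200"))

# Simulate a torn write of txg 3: the slot holds garbage that fails
# its checksum, so the pool falls back to the txg 2 uberblock.
ring[3 % len(ring)] = {"txg": 3, "payload": b"garbage", "cksum": 0}
ub = active_uberblock(ring)
print(ub["txg"], ub["payload"].decode())
```

The point is that no multi-device write needs to be atomic: each device
either has a valid new copy or a valid old one, and validity is decided
per copy at read time.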
Scott Roberts [EMAIL PROTECTED] writes:
I've been reading through the documentation on ZFS, and was hoping I
could get some clarification and make sure I'm reading everything
right.
I'm looking to build a NAS box, using SATA drives in a double parity
configuration (i.e. raidz2). This is
Just out of curiosity, what is the progress on allowing the addition of
drives to an existing RAIDZ (whether pool or vdev)? Particularly in the
case of vdevs, the ability to add additional drives to expand a vdev is
really useful when adding more JBODs to an existing setup...
--
Erik Trimble
There are two questions here.
1. Can you add a redundant set of vdevs to a pool? Answer: yes.
2. What is the best way for Scott to grow his archive into his disks?
The answer to this is what I discuss below.
David Dyer-Bennet wrote:
On Wed, Jul 12, 2006 at 02:45:40PM -0700, Darren Dunham wrote:
There may be several parity sectors per row, so adding another column
doesn't work.
But presumably it would be possible to use additional columns for future
writes?
I guess that could be made to work, but then the data on the
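The geometry problem can be shown with a toy model (my own sketch, not the
real RAID-Z map function): where a block's data and parity sectors land is a
pure function of the block's address and the number of columns, so data
written under one column count cannot simply be read back under another.

```python
def raidz_map(offset_sectors, nsectors, ncols, nparity=1):
    """Toy RAID-Z layout: lay sectors out row by row across ncols disks,
    reserving the first nparity slots of each row for parity.
    Returns a list of (disk, role) pairs, one per physical sector."""
    layout = []
    cursor = offset_sectors
    remaining = nsectors
    while remaining > 0:
        row_disks = [(cursor + i) % ncols for i in range(ncols)]
        for j, disk in enumerate(row_disks):
            if j < nparity:
                layout.append((disk, "P"))      # parity sector
            else:
                if remaining == 0:
                    break
                layout.append((disk, "D"))      # data sector
                remaining -= 1
        cursor += ncols
    return layout

# The same logical block maps to different disks under 4 vs. 5 columns,
# so adding a column silently invalidates every existing block's layout
# unless all old data is rewritten.
old = raidz_map(offset_sectors=8, nsectors=6, ncols=4)
new = raidz_map(offset_sectors=8, nsectors=6, ncols=5)
print(old != new)
```

Using the extra column only for future writes would mean every block must
remember which geometry it was written under, which is the complication
being alluded to above.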
David Dyer-Bennet [EMAIL PROTECTED] writes:
It does, as you say, take up another whole parity disk (or two in your
raidz2 case), and it requires add-ons to come in units bigger than just
one drive.
I've seen people wondering if ZFS was a scam because the claims just
seemed too good to be true.
It uses extra space in the middle of the write to hold the new data, but
once the write is complete, the space occupied by the old version is
freed for reuse.
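In other words, copy-on-write costs only transient space, not steady-state
space. A toy model of the accounting (names invented for illustration):

```python
class CowStore:
    """Toy copy-on-write store: an overwrite allocates new space first,
    commits by switching the live pointer, then frees the old copy."""
    def __init__(self):
        self.allocated = {}   # block_id -> bytes currently on "disk"
        self.next_id = 0
        self.live = None      # block_id of the current file contents

    def alloc(self, data):
        bid, self.next_id = self.next_id, self.next_id + 1
        self.allocated[bid] = data
        return bid

    def write_file(self, data):
        new = self.alloc(data)            # old and new coexist briefly...
        peak = len(self.allocated)        # ...this is the transient cost...
        old, self.live = self.live, new   # ...commit is a pointer switch...
        if old is not None:
            del self.allocated[old]       # ...then the old space is freed
        return peak

store = CowStore()
store.write_file(b"v1")
peak = store.write_file(b"v2")
print(peak, len(store.allocated))  # 2 blocks mid-write, 1 block afterwards
```

So an overwrite needs room for both versions only while it is in flight;
after commit, usage is the same as with overwrite-in-place.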
ckl
On Jul 12, 2006, at 8:05 PM, Robert Chen wrote:
I still could not understand why Copy on Write does not waste file
Today I attempted to upgrade to S10_U2 and migrate some mirrored UFS SVM
partitions to ZFS.
I used Live Upgrade to migrate from U1 to U2 and that went without a
hitch on my SunBlade 2000. And the initial conversion of one side of the
UFS mirrors to a ZFS pool and subsequent data migration