Boldly plowing forward, I request a few disks/vdevs to be mirrored
all at the same time:
bash-3.2# zpool status zfs0
  pool: zfs0
 state: ONLINE
 scrub: resilver completed with 0 errors on Thu Feb  1 04:17:58 2007
config:

        NAME        STATE     READ WRITE CKSUM
        zfs0
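For anyone trying the same thing, the attach is done one disk pair at a
time; something along these lines should work (the c*t*d* device names
below are placeholders, not the actual disks from this pool):

    # Attach one new disk to each existing single-disk vdev,
    # turning each into a two-way mirror. Each attach starts
    # its own resilver, and they can run concurrently.
    zpool attach zfs0 c1t0d0 c2t0d0
    zpool attach zfs0 c1t1d0 c2t1d0
    zpool attach zfs0 c1t2d0 c2t2d0

    # Watch the resilvers make progress:
    zpool status zfs0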
Neil Perrin wrote:
No, it's not the final version, or even the latest!
The current on-disk format version is 3. However, it hasn't
diverged much, and the znode/ACL stuff hasn't changed.
It will get updated as part of zfs-crypto; I just haven't done so
yet because I'm not finished designing
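A quick way to see which on-disk versions a given system supports, and
what each version added, is zpool upgrade (shown here as a sketch; the
exact output varies by release):

    # List every pool format version this software supports,
    # with a one-line description of each:
    zpool upgrade -v

    # Show which format version the imported pools are using:
    zpool upgrade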
I am trying to understand if zfs checksums apply at a file or a block level.
We know that zfs provides end to end checksum integrity, and I assumed that
when I write a file to a zfs filesystem, the checksum was calculated at a file
level, as opposed to, say, a block level. However, I have
I hope there will be consideration given to providing compatibility with UFS
quotas (except that inode limits would be ignored), at least to the point of
having
edquota(1m)
quot(1m)
quota(1m)
repquota(1m)
rquotad(1m)
and possibly quotactl(7i) work with zfs (with the exception
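For what it's worth, ZFS already provides space limits at the filesystem
level through the quota and reservation properties; a small sketch, using
a made-up tank/home/alice filesystem:

    # Cap how much space this filesystem (and its descendants)
    # may consume:
    zfs set quota=10g tank/home/alice

    # Guarantee it a minimum amount of pool space:
    zfs set reservation=1g tank/home/alice

    # Check the current settings:
    zfs get quota,reservation tank/home/alice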
On 2/1/07, Nathan Essex [EMAIL PROTECTED] wrote:
I am trying to understand if zfs checksums apply at a file or a block level.
We know that zfs provides end to end checksum integrity, and I assumed that
when I write a file to a zfs filesystem, the checksum was calculated at a file
level, as
ZFS checksums are at the block level.
Nathan Essex wrote on 02/01/07 08:27:
I am trying to understand if zfs checksums apply at a file or a block level.
We know that zfs provides end to end checksum integrity, and I assumed that
when I write a file to a zfs filesystem, the checksum was
Nathan Essex wrote:
Thank you, so that means that even if I use something that writes raw I/O to a
zfs emulated volume, I still get the checksum protection, and hence data
corruption protection.
Yes, it does.
Also consider how BAD performance could be if it were actually
calculated on a per
Neil Perrin wrote:
ZFS checksums are at the block level.
This has been causing some confusion lately, so perhaps we could say:
ZFS checksums are at the file system block level, not to be confused with
the disk block level or transport block level.
-- richard
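To make the block-level point concrete: the checksum algorithm is a
per-dataset property and is applied to every filesystem block as it is
written. For example (tank/fs is a stand-in dataset name):

    # Show which checksum algorithm a dataset uses
    # (fletcher2 is the current default):
    zfs get checksum tank/fs

    # Switch to a stronger algorithm; only blocks written
    # after the change use the new checksum:
    zfs set checksum=sha256 tank/fs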
[EMAIL PROTECTED] said:
That is the part of your setup that puzzled me. You took the same 7-disk
raid5 set and split it into 9 LUNs. The Hitachi likely splits the virtual
disk into 9 contiguous partitions, so each LUN maps back to different parts
of the 7 disks. I speculate that ZFS thinks
On Wed, 31 Jan 2007 [EMAIL PROTECTED] wrote:
I understand all the math involved with RAID 5/6 and failure rates,
but it's wise to remember that even if the probabilities are small,
they aren't zero. :)
Agreed. Another thing I've seen is that if you have an A/C (Air
Conditioning) event in the
On Thu, 1 Feb 2007, Tom Buskey wrote:
I got an Addonics eSATA card. SATA 3.0, PCI *or* PCI-X. Works right off the
bat w/ 10u3. No firmware update needed. It was $130. But I don't pull out my
hair, and I can use it if I upgrade my server for PCI-X.
And I'm finding the throughput
On Feb 1, 2007, at 10:51 AM, Richard Elling wrote:
FYI,
here is an interesting blog on using ZFS with a dozen USB drives
from Constantin.
http://blogs.sun.com/solarium/entry/solaris_zfs_auf_12_usb
My German is somewhat rusty, but I see that Google Translate does a
respectable
The ZFS On-Disk specification and other ZFS documentation describe the labeling
scheme used for the vdevs that comprise a ZFS pool. A label entry contains,
among other things, an array of uberblocks, one of which will point to the
active object set of the pool it is a part of at a given
[EMAIL PROTECTED] wrote on 02/01/2007 01:17:15 PM:
The ZFS On-Disk specification and other ZFS documentation describe
the labeling scheme used for the vdevs that comprise a ZFS pool. A
label entry contains, among other things, an array of uberblocks,
one of which will point to the
Recreation of the active uberblock would occur, for example, if we
took a snapshot of the pool and changes were then made anywhere in the
pool.
The uberblock is updated quite often, not just on snapshots.
Since a new uberblock is required in this snapshot scenario,
and since it appears that
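For anyone who wants to look at this on disk, zdb can dump the labels and
the uberblock array directly; a sketch, with an example device path:

    # Dump the four vdev labels (including the uberblock array)
    # from one of the pool's devices:
    zdb -l /dev/dsk/c0t0d0s0

    # Display the currently active uberblock of a pool:
    zdb -u zfs0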
I found this article (http://www.cuddletech.com/blog/pivot/entry.php?id=729)
but I have 2 questions. I am trying the steps on OpenSolaris build 54.
Since you create the filesystem with newfs, isn't that really a ufs filesystem
running on top of zfs? Also I haven't been able to do anything in
As far as I recall, the on-paper number of snapshots you can have in a
filesystem is 2^48.
I found this article
(http://www.cuddletech.com/blog/pivot/entry.php?id=729) but I have 2
questions. I am trying the steps on OpenSolaris build 54.
Since you create the filesystem with newfs, isn't that really a ufs
filesystem running on top of zfs?
In this case, yes. I wonder if you could create a second zfs pool on
the volume. (Starting such pools at boot time might be problematic
though!). The idea is that you have sparse raw storage available to
you. The example placed a UFS filesystem on it, but you could do
otherwise.
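The mechanics, for anyone following along: the article creates an emulated
volume (zvol) and then uses the raw device it exports. Names below are
made up for illustration:

    # Create a 5 GB emulated volume backed by the pool:
    zfs create -V 5g tank/vol1

    # As in the article: put a UFS filesystem on the raw device...
    newfs /dev/zvol/rdsk/tank/vol1
    mount /dev/zvol/dsk/tank/vol1 /mnt

    # ...or build a second ZFS pool on top of it instead:
    zpool create inner /dev/zvol/dsk/tank/vol1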
Followup:
On 2/1/07, Al Hopper [EMAIL PROTECTED] wrote:
On Thu, 1 Feb 2007, Tom Buskey wrote:
I got an Addonics eSATA card. SATA 3.0, PCI *or* PCI-X. Works right off the
bat w/ 10u3. No firmware update needed. It was $130. But I don't pull out my
hair, and I can use it if I upgrade my server for
I had followed with interest the "turn off NV cache flushing" thread, in
regard to doing ZFS-backed NFS on our low-end Hitachi array:
http://www.mail-archive.com/zfs-discuss@opensolaris.org/msg05000.html
In short, if you have non-volatile cache, you can configure the array
to ignore the ZFS
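On builds that have it, there is also a host-side tunable that stops ZFS
from issuing cache flushes altogether; this is only safe when every device
in the pool has non-volatile cache. A sketch of the /etc/system entry,
assuming the tunable exists on your release:

    * Tell ZFS not to send synchronize-cache commands.
    * ONLY safe if all pool devices have NV cache.
    set zfs:zfs_nocacheflush = 1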
Richard Elling wrote:
Neil Perrin wrote:
ZFS checksums are at the block level.
This has been causing some confusion lately, so perhaps we could say:
ZFS checksums are at the file system block level, not to be confused with
the disk block level or transport block level.
Saying that ZFS