On 2010-03-15 16:50, Khyron:
Yeah, this threw me. A 3-disk RAID-Z2 doesn't make sense, because at the
redundancy level, RAID-Z2 looks like RAID 6. That is, there are two levels of
parity for the data. Out of 3 disks, the equivalent of 2 disks will be used
to store redundancy (parity) data and
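For what it's worth, ZFS will happily build such a pool. A quick sketch
with made-up device names; usable space comes out at roughly one disk's
worth, much like a 3-way mirror:

    # 3-disk raidz2: two disks' worth of parity, one of data
    zpool create tank raidz2 c0t0d0 c0t1d0 c0t2d0
    zfs list tank     # AVAIL is roughly the size of a single disk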
Roch wrote:
I don't know what 'enters the txg' means exactly, but ZFS disk-block
allocation is done in the ZIO pipeline at the latest
possible time.
Thanks Roch,
I stand corrected in my assumptions.
Cheers,
Henk
Mario Goebbels wrote:
An introduction to btrfs, from somebody who used to work on ZFS:
http://www.osnews.com/story/21920/A_Short_History_of_btrfs
A *very* interesting article. Not sure why James didn't link to it
directly, but it's courtesy of Valerie Aurora (formerly Henson).
I wonder if this could be a case of bug 6634371 (not-so-atomic
64-bit operations on a 32-bit CPU)? Time to get a new laptop...
Cheers,
Henk Langeveld
Gary Mills wrote:
I've been watching the ZFS ARC cache on our IMAP server while the
backups are running, and also when user activity is high. The two
seem to conflict. Fast response for users seems to depend on their
data being in the cache when it's needed. Most of the disk I/O seems
to be
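For anyone who wants to watch the same thing, the ARC exposes its
counters as kstats; a minimal polling sketch (the 5-second interval is
arbitrary):

    # sample ARC size and hit/miss counters every 5 seconds
    while true; do
        kstat -p zfs:0:arcstats:size zfs:0:arcstats:hits zfs:0:arcstats:misses
        sleep 5
    done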
Dick Hoogendijk wrote:
I'm quite new to ZFS. It is so very easy to create new filesystems
using zfs create zpool/fs that sometimes I'm unsure what to do: create a
directory (like on ufs) or do a zfs create..
Can somebody give some advice on -when- to use a normal directory
and -when- it is
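A rough rule of thumb: use a plain directory when you just need a place
for files, and a filesystem when you want a separate point of
administration. A sketch of the difference (pool and dataset names
invented):

    # plain directory: no per-directory policy, managed like any ufs dir
    mkdir /tank/projects/scratch

    # separate filesystem: gets its own properties and snapshots
    zfs create tank/home/alice
    zfs set quota=10g tank/home/alice
    zfs set compression=on tank/home/alice
    zfs snapshot tank/home/alice@monday

Quotas, reservations, compression and snapshots all operate per
filesystem, which is usually the deciding factor.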
kristof wrote:
Some time ago I experienced the same issue.
Only one target could be connected from an ESX host. Others were shown as
alternative paths to that target.
If I remember correctly, I read on a forum that it has something to do
with the disks' serial numbers.
Steffen
William Fretts-Saxton wrote:
Unfortunately, I don't know the record size of the writes. Is it as
simple as looking at the size of a file, before and after a client
request, and noting the difference in size?
and
The I/O is actually done by RRD4J, [...] a Java version of 'rrdtool'.
If it
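Roughly, yes, though the blocks actually allocated are more telling than
the file length. A sketch, with the dataset and file names invented:

    # recordsize is a per-filesystem property, 128K by default
    zfs get recordsize tank/rrd

    # compare blocks allocated before and after one client request
    ls -ls /tank/rrd/graph.rrd    # first column = allocated blocks
    # ... trigger one request, then repeat ...
    ls -ls /tank/rrd/graph.rrd

    # small RRD-style updates may benefit from a smaller recordsize;
    # the change only affects files written afterwards
    zfs set recordsize=8k tank/rrd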
J.P. King wrote:
Wow, that's a neat idea, and crazy at the same time. But mknod's minor
value can be 0-262143, so it would probably be doable with some loss of
memory and efficiency. But maybe not :) (I would need one lofi device per
filesystem, right?)
Definitely worth remembering if I need to
Richard Elling wrote:
It all depends on the configuration. For a single disk system, copies
should generally be faster than mirroring. For multiple disks, the
performance should be similar as copies are spread out over different
disks.
Here's a crazy idea: could we use ZFS on DVD for s/w
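For reference, the copies mechanism Richard describes above is just a
per-dataset property; a minimal example (dataset name invented):

    # keep two copies of every block, spread over different disks
    # when the pool has more than one
    zfs set copies=2 tank/important
    zfs get copies tank/important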
HL And to clear things up - metadata are also updated in the spirit of COW -
HL so metadata are written to new locations and then the uberblock is
HL atomically updated to point to the new metadata
Victor Latushkin wrote:
Well, to add to this, uberblocks are also updated in COW fashion -
there is a
I'll make an attempt to keep it simple, and describe what is true in 'most'
cases. For some values of 'most' ;-)
The words used are at times confusing. 'Block' mostly refers to
a logical filesystem block, which can be variable in size.
There's also checksum and parity, which are completely
? Is this correct? Or am I completely off course?
Correct.
Henk Langeveld's wonderful character-based diagrams describe what is
basically a standard RAID-5 layout on 4 disks. How is RAID-Z any
different from RAID-5? (except for the ability to use variable stripe
widths, which allows RAID-Z to never
Regards,
--
Henk Langeveld [EMAIL PROTECTED]
First you mark the disk you want to
evict as read-only, then start a rewrite scrub. When it's done, your disk
is free of data and can be taken out.
--
Henk Langeveld [EMAIL PROTECTED]
When ZFS was first announced, one argument was how ZFS complexity and
code size was actually significantly less than for instance, UFS+SVM.
Over a year has passed, and I wonder how code size has grown since, with
all of the features that have been added.
Has anyone kept track of this? Would it
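Nobody seems to publish numbers, but a rough count is easy to redo
against the ON source tree; a sketch, assuming the usual gate layout:

    # count ZFS kernel source lines (adjust paths to your tree)
    find usr/src/uts/common/fs/zfs -name '*.[ch]' | xargs wc -l | tail -1

    # same for UFS, for comparison
    find usr/src/uts/common/fs/ufs -name '*.[ch]' | xargs wc -l | tail -1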
in combination with
a larger corpus representative of a particular source (author, style)
will result in smaller files for samples that better match the source.
--
Henk Langeveld [EMAIL PROTECTED]
flama wrote:
Hi people, is it possible to detach a device from a striped zpool without
destroying the pool? ZFS is similar to domains in Tru64, which have a
'detach device from stripe' operation that reallocates the datasets'
space onto the free disks.
No. Currently ZFS can only replace or add disks. It is not
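What does work today is swapping a device in place, or growing a plain
disk into a mirror; a sketch with made-up device names:

    # replace a disk in place (resilvers onto the new device)
    zpool replace tank c0t2d0 c0t3d0

    # attach makes a mirror out of an existing device;
    # detach only works on mirrors, not on plain stripes
    zpool attach tank c0t0d0 c0t1d0
    zpool detach tank c0t1d0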
Andrew [EMAIL PROTECTED] wrote:
Since ZFS is COW, can I have a read-only pool (on a central file
server, or on a DVD, etc) with a separate block-differential pool on
my local hard disk to store writes?
This way, the pool in use can be read-write, even if the main pool
itself is read-only,
Eric Schrock wrote:
One thing I would pay attention to is the future world of native ZFS
root. On a thumper, you only have two drives which are bootable from
the BIOS. For any application in which reliability is important, you
would have these two drives mirrored as your root filesystem.