Re: [zfs-discuss] 3 disk RAID-Z2 pool

2010-03-20 Thread Henk Langeveld
On 2010-03-15 16:50, Khyron wrote: Yeah, this threw me. A 3-disk RAID-Z2 doesn't make sense, because in terms of redundancy, RAID-Z2 looks like RAID 6. That is, there are 2 levels of parity for the data. Out of 3 disks, the equivalent of 2 disks will be used to store redundancy (parity) data and
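The capacity arithmetic behind that objection can be sketched as follows (an illustration only, not ZFS code; the function name is mine):

```python
def usable_disks(total_disks: int, parity: int) -> int:
    """Rough usable-capacity estimate for a RAID-Z vdev: parity
    consumes the equivalent of `parity` whole disks."""
    if total_disks <= parity:
        raise ValueError("need more disks than parity devices")
    return total_disks - parity

# A 3-disk RAID-Z2 vdev leaves only one disk's worth of data space,
# which is why a 3-way mirror is usually the better choice there.
print(usable_disks(3, 2))  # -> 1
print(usable_disks(6, 2))  # -> 4
```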

Re: [zfs-discuss] article on btrfs, comparison with zfs

2009-08-05 Thread Henk Langeveld
Roch wrote: I don't know what 'enters the txg' exactly is but ZFS disk-block allocation is done in the ZIO pipeline at the latest possible time. Thanks Roch, I stand corrected in my assumptions. Cheers, Henk ___ zfs-discuss mailing list

Re: [zfs-discuss] article on btrfs, comparison with zfs

2009-08-01 Thread Henk Langeveld
Mario Goebbels wrote: An introduction to btrfs, from somebody who used to work on ZFS: http://www.osnews.com/story/21920/A_Short_History_of_btrfs *very* interesting article.. Not sure why James didn't directly link to it, but courtesy of Valerie Aurora (formerly Henson)

[zfs-discuss] opensolaris crash in vn_rele()

2009-04-21 Thread Henk Langeveld
if this could be a case of bug 6634371 (not-so-atomic 64-bit operations on a 32-bit CPU)? Time to get a new laptop... Cheers, Henk Langeveld

Re: [zfs-discuss] Efficient backup of ZFS filesystems?

2009-04-09 Thread Henk Langeveld
Gary Mills wrote: I've been watching the ZFS ARC cache on our IMAP server while the backups are running, and also when user activity is high. The two seem to conflict. Fast response for users seems to depend on their data being in the cache when it's needed. Most of the disk I/O seems to be

Re: [zfs-discuss] zfs create or normal directories

2008-06-07 Thread Henk Langeveld
Dick Hoogendijk wrote: I'm quite new to ZFS. It is so very easy to create new filesystems using zfs create zpool/fs that sometimes I doubt what to do: create a directory (like on ufs) or do a zfs create.. Can somebody give some advice on -when- to use a normal directory and -when- it is

Re: [zfs-discuss] iSCSI targets mapped to a VMWare ESX server

2008-04-10 Thread Henk Langeveld
kristof wrote: Some time ago I experienced the same issue. Only 1 target could be connected from an esx host. Others were shown as alternative paths to that target. If I'm remembering correctly, I read on a forum that it has something to do with the disks' serial numbers. Steffen

Re: [zfs-discuss] ZFS Performance Issue

2008-02-09 Thread Henk Langeveld
William Fretts-Saxton wrote: Unfortunately, I don't know the record size of the writes. Is it as simple as looking at the size of a file, before and after a client request, and noting the difference in size? and The I/O is actually done by RRD4J, [...] a Java version of 'rrdtool' If it

Re: [zfs-discuss] Trial x4500, zfs with NFS and quotas.

2007-12-13 Thread Henk Langeveld
J.P. King wrote: Wow, that's a neat idea, and crazy at the same time. But the mknod's minor value can be 0-262143 so it probably would be doable with some loss of memory and efficiency. But maybe not :) (I would need one lofi dev per filesystem right?) Definitely worth remembering if I need to

Re: [zfs-discuss] ditto blocks

2007-05-24 Thread Henk Langeveld
Richard Elling wrote: It all depends on the configuration. For a single disk system, copies should generally be faster than mirroring. For multiple disks, the performance should be similar as copies are spread out over different disks. Here's a crazy idea: could we use zfs on dvd for s/w

Re: [zfs-discuss] Trying to understand zfs RAID-Z

2007-05-18 Thread Henk Langeveld
HL> And to clear things up - metadata are also updated in the spirit of COW - HL> so metadata are written to new locations and then the uberblock is HL> atomically updated, pointing to the new metadata. Victor Latushkin wrote: Well, to add to this, uber-blocks are also updated in COW fashion - there is a
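That update discipline can be modelled with a toy sketch (my illustration, not actual ZFS code): nothing is modified in place; a fresh copy of the tree is written out, and the single in-place step is switching the "uberblock" pointer to the new root, so the old state stays intact until then.

```python
import copy

class Pool:
    """Toy model of copy-on-write metadata updates."""
    def __init__(self, tree):
        self.uberblock = tree  # active root pointer

    def update(self, path, value):
        new_tree = copy.deepcopy(self.uberblock)  # "write to new locations"
        node = new_tree
        for key in path[:-1]:
            node = node[key]
        node[path[-1]] = value
        self.uberblock = new_tree  # the only in-place change: the root swap

pool = Pool({"home": {"a.txt": "old"}})
old_root = pool.uberblock
pool.update(["home", "a.txt"], "new")
print(pool.uberblock["home"]["a.txt"])  # -> new
print(old_root["home"]["a.txt"])        # -> old (previous state still intact)
```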

Re: [zfs-discuss] Trying to understand zfs RAID-Z

2007-05-17 Thread Henk Langeveld
I'll make an attempt to keep it simple, and tell what is true in 'most' cases. For some values of 'most' ;-) The words used are at times confusing. Block mostly refers to a logical filesystem block, which can be variable in size. There's also checksum and parity, which are completely

Re: [zfs-discuss] Trying to understand zfs RAID-Z

2007-05-17 Thread Henk Langeveld
? Is this correct? Or am I completely off course? Correct. Henk Langeveld's wonderful character-based diagrams describe what is basically a standard RAID-5 layout on 4 disks. How is RAID-Z any different from RAID-5? (except for the ability to stripe different sizes, which allows RAID-Z to never
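The "stripe different sizes" point can be sketched numerically (a simplified illustration of my own, ignoring padding and allocation-alignment details): each logical block gets exactly as many data sectors as it needs plus one parity sector per row, so every write is a full-stripe write and there is no RAID-5-style read-modify-write.

```python
import math

def raidz1_sectors(data_sectors: int, ndisks: int) -> int:
    """Total sectors a RAID-Z1 write consumes, including parity:
    the block is laid out in rows of (ndisks - 1) data sectors,
    and each row carries one parity sector."""
    rows = math.ceil(data_sectors / (ndisks - 1))
    return data_sectors + rows

print(raidz1_sectors(3, 4))  # small 3-sector block: 1 row, 1 parity -> 4
print(raidz1_sectors(9, 4))  # 9 data sectors over 3 rows -> 12
```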

Re: [zfs-discuss] Disk Failure Rates and Error Rates -- ( Off topic: Jim Gray lost at sea)

2007-02-11 Thread Henk Langeveld
Regards, -- Henk Langeveld [EMAIL PROTECTED]

Re: [zfs-discuss] rewrite-style scrubbing...

2007-02-07 Thread Henk Langeveld
First you mark the disk you want to evict as read-only, then start a rewrite scrub. When done, your disk is free of data and can be taken out. -- Henk Langeveld [EMAIL PROTECTED]

[zfs-discuss] question: zfs code size statistics

2007-01-18 Thread Henk Langeveld
When ZFS was first announced, one argument was how ZFS complexity and code size was actually significantly less than for instance, UFS+SVM. Over a year has passed, and I wonder how code size has grown since, with all of the features that have been added. Has anyone kept track of this? Would it

Re: [zfs-discuss] zfs internal working - compression question

2006-12-30 Thread Henk Langeveld
in combination with a larger corpus representative for a particular source (author, style) will result in smaller files for samples that better match the source. -- Henk Langeveld [EMAIL PROTECTED]

Re: [zfs-discuss] zfs+stripe detach

2006-11-10 Thread Henk Langeveld
flama wrote: Hi people, is it possible to detach a device from a ZFS stripe without destroying the pool? ZFS is similar to domains in Tru64, which have a detach operation that reallocates the space of the datasets onto the free disks. No. Currently ZFS can only replace or add disks. It is not

Re: [zfs-discuss] Feature proposal: differential pools

2006-07-27 Thread Henk Langeveld
Andrew [EMAIL PROTECTED] wrote: Since ZFS is COW, can I have a read-only pool (on a central file server, or on a DVD, etc) with a separate block-differential pool on my local hard disk to store writes? This way, the pool in use can be read-write, even if the main pool itself is read-only,
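The proposed split can be sketched with a toy overlay store (my illustration of the idea, not a real ZFS feature): reads fall through to the immutable base unless a block has been overridden, and because a COW design allocates new blocks on every write anyway, those new blocks can simply live in the writable layer.

```python
class DifferentialStore:
    """Toy sketch: an immutable base pool (DVD, central server)
    plus a local block-differential write layer."""
    def __init__(self, base):
        self.base = base   # read-only pool; never modified
        self.diff = {}     # local pool holding all new writes

    def read(self, block):
        # An overridden block shadows the base copy.
        return self.diff.get(block, self.base.get(block))

    def write(self, block, data):
        self.diff[block] = data  # COW: new data goes to the local layer

s = DifferentialStore({"blk0": b"base data"})
s.write("blk1", b"local data")
print(s.read("blk0"))  # -> b'base data'  (falls through to the base)
s.write("blk0", b"override")
print(s.read("blk0"))  # -> b'override'   (base copy shadowed, not changed)
```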

Re: [zfs-discuss] Big JBOD: what would you do?

2006-07-19 Thread Henk Langeveld
Eric Schrock wrote: One thing I would pay attention to is the future world of native ZFS root. On a thumper, you only have two drives which are bootable from the BIOS. For any application in which reliability is important, you would have these two drives mirrored as your root filesystem.