Re: [zfs-discuss] Faster copy from UFS to ZFS

2011-05-05 Thread Joerg Schilling
Erik Trimble erik.trim...@oracle.com wrote: rsync is indeed slower than star; so far as I can tell, this is due almost exclusively to the fact that rsync needs to build an in-memory table of all work being done *before* it starts to copy. After that, it copies at about the same rate as
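For reference, the two invocations being compared look roughly like this (a sketch; paths are hypothetical, and star's single-pass -copy mode contrasts with rsync scanning the whole tree up front):
  # star -copy -p -no-fsync -C /ufs/data . /tank/data
  # rsync -a /ufs/data/ /tank/data/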

Re: [zfs-discuss] Faster copy from UFS to ZFS

2011-05-05 Thread Joerg Schilling
Ian Collins i...@ianshome.com wrote: *ufsrestore works fine on ZFS filesystems (although I haven't tried it with any POSIX ACLs on the original ufs filesystem, which would probably simply get lost). star -copy -no-fsync is typically 30% faster than ufsdump | ufsrestore. Does it
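The pipeline being measured against, as a sketch (source device and target path are hypothetical):
  # ufsdump 0f - /dev/rdsk/c0t0d0s7 | (cd /tank/fs && ufsrestore rf -)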

Re: [zfs-discuss] Deduplication Memory Requirements

2011-05-05 Thread Edward Ned Harvey
From: Garrett D'Amore [mailto:garr...@nexenta.com] We have customers using dedup with lots of vm images... in one extreme case they are getting dedup ratios of over 200:1! I assume you're talking about a situation where there is an initial VM image, and then to clone the machine, the
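The cloning alternative, sketched with hypothetical dataset names -- each guest is a clone of a golden snapshot, so common blocks are shared from the start rather than deduplicated after the fact:
  # zfs snapshot tank/vm/golden@base
  # zfs clone tank/vm/golden@base tank/vm/guest01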

Re: [zfs-discuss] Summary: Dedup and L2ARC memory requirements

2011-05-05 Thread Edward Ned Harvey
From: Erik Trimble [mailto:erik.trim...@oracle.com] Using the standard c_max value of 80%, remember that this is 80% of the TOTAL system RAM, including that RAM normally dedicated to other purposes. So long as the total amount of RAM you expect to dedicate to ARC usage (for all ZFS uses,
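For anyone wanting to cap the ARC below the default c_max, the usual Solaris knob is zfs_arc_max in /etc/system (value in bytes; the 8 GB here is purely illustrative):
  set zfs:zfs_arc_max=8589934592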

Re: [zfs-discuss] Deduplication Memory Requirements

2011-05-05 Thread Constantin Gonzalez
Hi, On 05/ 5/11 03:02 PM, Edward Ned Harvey wrote: From: Garrett D'Amore [mailto:garr...@nexenta.com] We have customers using dedup with lots of vm images... in one extreme case they are getting dedup ratios of over 200:1! I assume you're talking about a situation where there is an initial

Re: [zfs-discuss] Deduplication Memory Requirements

2011-05-05 Thread Garrett D'Amore
On Thu, 2011-05-05 at 09:02 -0400, Edward Ned Harvey wrote: From: Garrett D'Amore [mailto:garr...@nexenta.com] We have customers using dedup with lots of vm images... in one extreme case they are getting dedup ratios of over 200:1! I assume you're talking about a situation where there

Re: [zfs-discuss] Deduplication Memory Requirements

2011-05-05 Thread Joerg Moellenkamp
I assume you're talking about a situation where there is an initial VM image, and then to clone the machine, the customers copy the VM, correct? If that is correct, have you considered ZFS cloning instead? When I said dedup wasn't good for VMs, what I'm talking about is: If there is data

Re: [zfs-discuss] Summary: Dedup and L2ARC memory requirements

2011-05-05 Thread Karl Wagner
so there's an ARC entry referencing each individual DDT entry in the L2ARC?! I had made the assumption that DDT entries would be grouped into at least minimum-block-sized groups (8k?), which would have led to a much more reasonable ARC requirement. It seems like a bad design to me, which leads

Re: [zfs-discuss] Deduplication Memory Requirements

2011-05-05 Thread Garrett D'Amore
We have customers using dedup with lots of vm images... in one extreme case they are getting dedup ratios of over 200:1! You don't need dedup or sparse files for zero filling. Simple zle compression will eliminate those for you far more efficiently and without needing massive amounts of RAM.
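Compression is a per-dataset property, so a minimal sketch (hypothetical dataset name) is:
  # zfs set compression=zle tank/vmimages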

Re: [zfs-discuss] Quick zfs send -i performance questions

2011-05-05 Thread Giovanni Tirloni
On Wed, May 4, 2011 at 9:04 PM, Brandon High bh...@freaks.com wrote: On Wed, May 4, 2011 at 2:25 PM, Giovanni Tirloni gtirl...@sysdroid.com wrote: The problem we've started seeing is that a zfs send -i is taking hours to send a very small amount of data (eg. 20GB in 6 hours) while a zfs
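A typical incremental pipeline of the kind under discussion, as a sketch (snapshot, dataset, and host names are hypothetical; mbuffer smooths out bursts between sender and receiver):
  # zfs send -i tank/fs@snap1 tank/fs@snap2 | mbuffer -s 128k -m 1G | ssh backuphost 'zfs receive -F tank/fs'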

Re: [zfs-discuss] Quick zfs send -i performance questions

2011-05-05 Thread Paul Kraus
On Thu, May 5, 2011 at 2:17 PM, Giovanni Tirloni gtirl...@sysdroid.com wrote: What I find curious is that it only happens with incrementals. Full sends go as fast as possible (monitored with mbuffer). I was just wondering if other people have seen it, if there is a bug (b111 is quite old),

Re: [zfs-discuss] Quick zfs send -i performance questions

2011-05-05 Thread Brandon High
On Thu, May 5, 2011 at 11:17 AM, Giovanni Tirloni gtirl...@sysdroid.com wrote: What I find curious is that it only happens with incrementals. Full sends go as fast as possible (monitored with mbuffer). I was just wondering if other people have seen it, if there is a bug (b111 is quite old),

[zfs-discuss] Permanently using hot spare?

2011-05-05 Thread Ray Van Dolson
Have a failed drive on a ZFS pool (three RAIDZ2 vdevs, one hot spare). The hot spare kicked in and all is well. Is it possible to just make that hot spare disk -- already resilvered into the pool -- a permanent part of the pool? We could then throw in a new disk and mark it as a spare and avoid

Re: [zfs-discuss] Deduplication Memory Requirements

2011-05-05 Thread Brandon High
On Wed, May 4, 2011 at 8:23 PM, Edward Ned Harvey opensolarisisdeadlongliveopensola...@nedharvey.com wrote: Generally speaking, dedup doesn't work on VM images.  (Same is true for ZFS or netapp or anything else.)  Because the VM images are all going to have their own filesystems internally with

Re: [zfs-discuss] multiple disk failures cause zpool hang

2011-05-05 Thread TianHong Zhao
Thanks for the information. I think you're right that the spa_sync thread is blocked in zio_wait while holding scl_lock, which blocks all zpool-related commands (such as zpool status). The question is why zio_wait is blocked forever. If the underlying device is offline, could the zio service just

Re: [zfs-discuss] Permanently using hot spare?

2011-05-05 Thread TianHong Zhao
Just detach the faulty disk, and the spare will become a normal pool disk once it has finished resilvering: #zpool detach pool fault_device_name Then you need to add the new spare: #zpool add pool spare new_spare_device There seems to be a new feature in the illumos project to support a zpool property like spare
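Concretely, with hypothetical pool and device names:
  # zpool detach tank c0t2d0      (drop the faulted disk; the resilvered spare stays in place)
  # zpool add tank spare c0t9d0   (the replacement disk becomes the new hot spare)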

Re: [zfs-discuss] Permanently using hot spare?

2011-05-05 Thread Ian Collins
On 05/ 6/11 09:53 AM, Ray Van Dolson wrote: Have a failed drive on a ZFS pool (three RAIDZ2 vdevs, one hot spare). The hot spare kicked in and all is well. Is it possible to just make that hot spare disk -- already resilvered into the pool -- a permanent part of the pool? We could then throw

Re: [zfs-discuss] multiple disk failures cause zpool hang

2011-05-05 Thread TianHong Zhao
Thanks again. No, I don't see any bio functions, but you have shed very useful light on the issue. My test platform is b147; the pool disks come from a storage system via a QLogic fibre HBA. My test case is: 1. zpool set failmode=continue pool1 2. dd if=/dev/zero
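A sketch of that style of reproduction (pool and file names are hypothetical; the dd command above is truncated in the archive):
  # zpool set failmode=continue pool1
  # dd if=/dev/zero of=/pool1/testfile bs=1M &
  ...then fail the LUNs on the array side and try zpool status.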

Re: [zfs-discuss] Permanently using hot spare?

2011-05-05 Thread Ray Van Dolson
On Thu, May 05, 2011 at 03:13:06PM -0700, TianHong Zhao wrote: Just detach the faulty disk, and the spare will become a normal pool disk once it has finished resilvering: #zpool detach pool fault_device_name Then you need to add the new spare: #zpool add pool spare new_spare_device There seems to be a

Re: [zfs-discuss] Deduplication Memory Requirements

2011-05-05 Thread Richard Elling
On May 5, 2011, at 2:58 PM, Brandon High wrote: On Wed, May 4, 2011 at 8:23 PM, Edward Ned Harvey Or if you're intimately familiar with both the guest host filesystems, and you choose blocksizes carefully to make them align. But that seems complicated and likely to fail. Using a 4k
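Matching the guest's block size means setting it at dataset creation time (names and sizes are hypothetical; volblocksize in particular cannot be changed later):
  # zfs create -o recordsize=4k tank/vmfiles            (file-backed images)
  # zfs create -V 20G -o volblocksize=4k tank/vol01     (zvol-backed images)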

Re: [zfs-discuss] Deduplication Memory Requirements

2011-05-05 Thread Richard Elling
On May 5, 2011, at 6:02 AM, Edward Ned Harvey wrote: Is this a zfs discussion list, or a nexenta sales promotion list? Obviously, this is a Nexenta sales promotion list. And Oracle. And OSX. And BSD. And Linux. And anyone who needs help or can offer help with ZFS technology :-) This list has

Re: [zfs-discuss] Summary: Dedup and L2ARC memory requirements

2011-05-05 Thread Edward Ned Harvey
From: Karl Wagner [mailto:k...@mouse-hole.com] so there's an ARC entry referencing each individual DDT entry in the L2ARC?! I had made the assumption that DDT entries would be grouped into at least minimum-block-sized groups (8k?), which would have led to a much more reasonable ARC
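A rough worked example of why the per-entry overhead matters (all per-entry sizes here are assumptions for illustration and vary by release): 1 TB of unique data in 8 KB blocks is 2^27, or about 134 million, DDT entries; at ~376 bytes per entry that is ~47 GB of DDT in L2ARC, and at ~176 bytes of ARC header per L2ARC record the headers alone consume ~22 GB of RAM.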

Re: [zfs-discuss] Summary: Dedup and L2ARC memory requirements

2011-05-05 Thread Richard Elling
On May 4, 2011, at 7:56 PM, Edward Ned Harvey wrote: This is a summary of a much longer discussion Dedup and L2ARC memory requirements (again) Sorry even this summary is long. But the results vary enormously based on individual usage, so any rule of thumb metric that has been bouncing

Re: [zfs-discuss] Deduplication Memory Requirements

2011-05-05 Thread Edward Ned Harvey
From: Brandon High [mailto:bh...@freaks.com] On Wed, May 4, 2011 at 8:23 PM, Edward Ned Harvey opensolarisisdeadlongliveopensola...@nedharvey.com wrote: Generally speaking, dedup doesn't work on VM images.  (Same is true for ZFS or netapp or anything else.)  Because the VM images are all

Re: [zfs-discuss] Deduplication Memory Requirements

2011-05-05 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Edward Ned Harvey If you have to use the 4k recordsize, it is likely to consume 32x more memory than the default 128k recordsize of ZFS. At this rate, it becomes increasingly difficult to
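The 32x figure is simply the ratio of block counts: DDT entries are per block, so shrinking the recordsize from 128 KB to 4 KB multiplies the number of entries, and hence DDT memory, by 128/4 = 32.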

Re: [zfs-discuss] Deduplication Memory Requirements

2011-05-05 Thread Brandon High
On Thu, May 5, 2011 at 8:50 PM, Edward Ned Harvey opensolarisisdeadlongliveopensola...@nedharvey.com wrote: If you have to use the 4k recordsize, it is likely to consume 32x more memory than the default 128k recordsize of ZFS.  At this rate, it becomes increasingly difficult to get a