Re: [zfs-discuss] Performance of the ZIL

2010-05-06 Thread Pasi Kärkkäinen
On Wed, May 05, 2010 at 11:32:23PM -0400, Edward Ned Harvey wrote: From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Robert Milkowski if you can disable ZIL and compare the performance to when it is off it will give you an estimate of

Re: [zfs-discuss] why both dedup and compression?

2010-05-06 Thread Peter Tribble
On Thu, May 6, 2010 at 2:06 AM, Richard Jahnel rich...@ellipseinc.com wrote: I've googled this for a bit, but can't seem to find the answer. What does compression bring to the party that dedupe doesn't cover already? Compression will reduce the storage requirements for non-duplicate data. As
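Peter's point above — compression helps non-duplicate data, dedup helps duplicate data — is why the two properties are often enabled together. A minimal sketch on a hypothetical dataset `tank/data` (assumes a dedup-capable pool version; note dedup's table also carries a real memory cost):

```shell
# Compression shrinks unique blocks; dedup collapses identical ones.
# Dataset name is hypothetical.
zfs set compression=on tank/data
zfs set dedup=on tank/data
zfs get compression,dedup tank/data
```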

Re: [zfs-discuss] Performance of the ZIL

2010-05-06 Thread Ragnar Sundblad
On 6 maj 2010, at 08.17, Pasi Kärkkäinen wrote: On Wed, May 05, 2010 at 11:32:23PM -0400, Edward Ned Harvey wrote: From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Robert Milkowski if you can disable ZIL and compare the performance to

[zfs-discuss] cannot create snapshot: dataset is busy

2010-05-06 Thread Brandon High
I'm unable to snapshot a dataset, receiving the error dataset is busy. Google and some bug reports suggest it's from a zil that hasn't been completely replayed, and that mounting and unmounting the dataset will fix it. Which is great, except it's a zvol. Any other way to fix it? There's no data

Re: [zfs-discuss] cannot create snapshot: dataset is busy

2010-05-06 Thread Brandon High
On Thu, May 6, 2010 at 1:31 AM, Brandon High bh...@freaks.com wrote: Any other way to fix it? There's no data in the zvol that I can't easily reproduce if it needs to be destroyed. I did a rollback to the most recent snapshot, which seems to have worked. -B -- Brandon High : bh...@freaks.com
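The rollback workaround Brandon describes can be sketched as follows; pool, volume, and snapshot names are hypothetical, and rollback discards any changes made to the zvol since that snapshot:

```shell
# List the zvol's snapshots, then roll back to the most recent one
# (names are hypothetical).
zfs list -t snapshot -r tank/myvol
zfs rollback tank/myvol@latest
```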

[zfs-discuss] Heads Up: zil_disable has expired, ceased to be, ...

2010-05-06 Thread Robert Milkowski
With the put back of: [PSARC/2010/108] zil synchronicity zfs datasets now have a new 'sync' property to control synchronous behaviour. The zil_disable tunable to turn synchronous requests into asynchronous requests (disable the ZIL) has been removed. For systems that use that switch on upgrade
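With the tunable gone, the per-dataset property is the supported knob. A sketch on a hypothetical dataset, using the values introduced by PSARC/2010/108:

```shell
# 'sync' replaces the old global zil_disable tunable (dataset name hypothetical).
zfs set sync=standard tank/fs   # default: POSIX-compliant synchronous semantics
zfs set sync=always tank/fs     # every transaction committed synchronously
zfs set sync=disabled tank/fs   # sync requests treated as async: fast, but
                                # recently ack'd writes can be lost on a crash
```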

Re: [zfs-discuss] Does Opensolaris support thin reclamation?

2010-05-06 Thread Andras Spitzer
Please find this thread for further info about this topic : http://www.opensolaris.org/jive/thread.jspa?threadID=120824&start=0&tstart=0 In short, ZFS doesn't support thin reclamation today, although we have an RFE open to implement it somewhere in the future. Regards, sendai -- This message

Re: [zfs-discuss] Mirroring USB Drive with Laptop for Backup purposes

2010-05-06 Thread Matt Keenan
Based on comments, some people say nay, some say yah. so I decided to give it a spin, and see how I get on. To make my mirror bootable I followed instructions posted here : http://www.taiter.com/blog/2009/04/opensolaris-200811-adding-disk.html I plan to do a quick write up myself of my

Re: [zfs-discuss] Heads Up: zil_disable has expired, ceased to be, ...

2010-05-06 Thread Pawel Jakub Dawidek
On Thu, May 06, 2010 at 11:28:37AM +0100, Robert Milkowski wrote: With the put back of: [PSARC/2010/108] zil synchronicity zfs datasets now have a new 'sync' property to control synchronous behaviour. The zil_disable tunable to turn synchronous requests into asynchronous requests

Re: [zfs-discuss] Heads Up: zil_disable has expired, ceased to be, ...

2010-05-06 Thread Darren J Moffat
On 06/05/2010 12:24, Pawel Jakub Dawidek wrote: I read that this property is not inherited and I can't see why. If what I read is up-to-date, could you tell why? It is inherited, this changed as a result of the PSARC review. -- Darren J Moffat

Re: [zfs-discuss] Another MPT issue - kernel crash

2010-05-06 Thread James C. McPherson
On 5/05/10 10:42 PM, Bruno Sousa wrote: Hi all, I have faced yet another kernel panic that seems to be related to mpt driver. This time i was trying to add a new disk to a running system (snv_134) and this new disk was not being detected...following a tip i ran the lsitool to reset the bus and

Re: [zfs-discuss] [indiana-discuss] image-update doesn't work anymore (bootfs not supported on EFI)

2010-05-06 Thread Christian Thalinger
On Wed, 2010-05-05 at 09:45 -0600, Evan Layton wrote: No that doesn't appear like an EFI label. So it appears that ZFS is seeing something there that it's interpreting as an EFI label. Then the command to set the bootfs property is failing due to that. To restate the problem the BE can't be

Re: [zfs-discuss] Heads Up: zil_disable has expired, ceased to be, ...

2010-05-06 Thread Robert Milkowski
On 06/05/2010 12:24, Pawel Jakub Dawidek wrote: I read that this property is not inherited and I can't see why. If what I read is up-to-date, could you tell why? It is inherited. Sorry for the confusion but there was a discussion if it should or should not be inherited, then we propose
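Since the property is inherited, setting it once on a parent dataset covers all descendants unless they set it locally. A sketch with hypothetical names:

```shell
zfs set sync=always tank       # descendants inherit this value
zfs get -r sync tank           # SOURCE should read "inherited from tank"
                               # for children without a local setting
zfs inherit sync tank/child    # clear a child's local override, if any
```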

Re: [zfs-discuss] Heads Up: zil_disable has expired, ceased to be, ...

2010-05-06 Thread Robert Milkowski
On 06/05/2010 13:12, Robert Milkowski wrote: On 06/05/2010 12:24, Pawel Jakub Dawidek wrote: I read that this property is not inherited and I can't see why. If what I read is up-to-date, could you tell why? It is inherited. Sorry for the confusion but there was a discussion if it should or

Re: [zfs-discuss] Heads Up: zil_disable has expired, ceased to be, ...

2010-05-06 Thread Pawel Jakub Dawidek
On Thu, May 06, 2010 at 01:15:41PM +0100, Robert Milkowski wrote: On 06/05/2010 13:12, Robert Milkowski wrote: On 06/05/2010 12:24, Pawel Jakub Dawidek wrote: I read that this property is not inherited and I can't see why. If what I read is up-to-date, could you tell why? It is inherited.

Re: [zfs-discuss] Performance of the ZIL

2010-05-06 Thread Edward Ned Harvey
From: Pasi Kärkkäinen [mailto:pa...@iki.fi] In neither case do you have data or filesystem corruption. ZFS probably is still OK, since it's designed to handle this (?), but the data can't be OK if you lose 30 secs of writes.. 30 secs of writes that have been ack'd being done to the

Re: [zfs-discuss] Performance of the ZIL

2010-05-06 Thread Edward Ned Harvey
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Ragnar Sundblad But if you have an application, protocol and/or user that demands or expects persistant storage, disabling ZIL of course could be fatal in case of a crash. Examples are mail

Re: [zfs-discuss] Performance of the ZIL

2010-05-06 Thread Ross Walker
On May 6, 2010, at 8:34 AM, Edward Ned Harvey solar...@nedharvey.com wrote: From: Pasi Kärkkäinen [mailto:pa...@iki.fi] In neither case do you have data or filesystem corruption. ZFS probably is still OK, since it's designed to handle this (?), but the data can't be OK if you lose 30

Re: [zfs-discuss] Loss of L2ARC SSD Behaviour

2010-05-06 Thread Bob Friesenhahn
On Wed, 5 May 2010, Edward Ned Harvey wrote: In the L2ARC (cache) there is no ability to mirror, because cache device removal has always been supported. You can't mirror a cache device, because you don't need it. How do you know that I don't need it? The ability seems useful to me. Bob --

Re: [zfs-discuss] Loss of L2ARC SSD Behaviour

2010-05-06 Thread Tomas Ögren
On 06 May, 2010 - Bob Friesenhahn sent me these 0,6K bytes: On Wed, 5 May 2010, Edward Ned Harvey wrote: In the L2ARC (cache) there is no ability to mirror, because cache device removal has always been supported. You can't mirror a cache device, because you don't need it. How do you know

[zfs-discuss] ZFS - USB 3.0 SSD disk

2010-05-06 Thread Bruno Sousa
Hi all, It seems like the market has yet another type of ssd device, this time a USB 3.0 portable SSD device by OCZ. Going on the specs it seems to me that if this device has a good price it might be quite useful for caching purposes on ZFS based storage. Take a look at

Re: [zfs-discuss] Loss of L2ARC SSD Behaviour

2010-05-06 Thread Robert Milkowski
On 06/05/2010 15:31, Tomas Ögren wrote: On 06 May, 2010 - Bob Friesenhahn sent me these 0,6K bytes: On Wed, 5 May 2010, Edward Ned Harvey wrote: In the L2ARC (cache) there is no ability to mirror, because cache device removal has always been supported. You can't mirror a cache

Re: [zfs-discuss] Loss of L2ARC SSD Behaviour

2010-05-06 Thread Brandon High
On Wed, May 5, 2010 at 8:47 PM, Michael Sullivan michael.p.sulli...@mac.com wrote: While it explains how to implement these, there is no information regarding failure of a device in a striped L2ARC set of SSD's.  I have been hard pressed to find this information anywhere, short of testing it

Re: [zfs-discuss] Loss of L2ARC SSD Behaviour

2010-05-06 Thread Michael Sullivan
Everyone, Thanks for the help. I really appreciate it. Well, I actually walked through the source code with an associate today and we found out how things work by looking at the code. It appears that L2ARC is just assigned in round-robin fashion. If a device goes offline, then it goes to
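Michael's reading of the code — L2ARC fill is assigned to cache devices in round-robin order, moving on when a device is offline — can be illustrated with a small self-contained sketch. This is plain Python, not the actual ZFS source, and every name in it is made up:

```python
# Sketch of round-robin placement across L2ARC-style cache devices,
# skipping devices marked offline (one plausible reading of the behaviour
# described in the thread; device names are hypothetical).

class CacheSet:
    def __init__(self, devices):
        self.devices = list(devices)
        self.online = {d: True for d in devices}
        self._next = 0

    def offline(self, dev):
        self.online[dev] = False

    def pick_device(self):
        """Return the next online device in round-robin order, or None."""
        for _ in range(len(self.devices)):
            dev = self.devices[self._next]
            self._next = (self._next + 1) % len(self.devices)
            if self.online[dev]:
                return dev
        return None  # every cache device is offline

cache = CacheSet(["ssd0", "ssd1", "ssd2"])
print([cache.pick_device() for _ in range(4)])  # cycles ssd0, ssd1, ssd2, ssd0
cache.offline("ssd1")
print([cache.pick_device() for _ in range(3)])  # ssd1 is now skipped
```

The key point the sketch makes is that reads of any one buffer come from a single device, so losing a device loses only the cached copies it held, not pool data.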

Re: [zfs-discuss] Loss of L2ARC SSD Behaviour

2010-05-06 Thread Marc Nicholas
Hi Michael, What makes you think striping the SSDs would be faster than round-robin? -marc On Thu, May 6, 2010 at 1:09 PM, Michael Sullivan michael.p.sulli...@mac.com wrote: Everyone, Thanks for the help. I really appreciate it. Well, I actually walked through the source code with an

Re: [zfs-discuss] Loss of L2ARC SSD Behaviour

2010-05-06 Thread Michael Sullivan
Hi Marc, Well, if you are striping over multiple devices then your I/O should be spread over the devices and you should be reading them all simultaneously rather than just accessing a single device. Traditional striping would give 1/n performance improvement rather than 1/1 where n is the

Re: [zfs-discuss] why both dedup and compression?

2010-05-06 Thread Michael Sullivan
This is interesting, but what about iSCSI volumes for virtual machines? Compress or de-dupe? Assuming the virtual machine was made from a clone of the original iSCSI or a master iSCSI volume. Does anyone have any real world data on this? I would think the iSCSI volumes would diverge quite a bit

Re: [zfs-discuss] Loss of L2ARC SSD Behaviour

2010-05-06 Thread Giovanni Tirloni
On Thu, May 6, 2010 at 1:18 AM, Edward Ned Harvey solar...@nedharvey.com wrote: From the information I've been reading about the loss of a ZIL device, What the heck? Didn't I just answer that question? I know I said this is answered in ZFS Best Practices Guide.

[zfs-discuss] dedup ration for iscsi-shared zfs dataset

2010-05-06 Thread eXeC001er
Hi. How can I get this info? Thanks.

Re: [zfs-discuss] Loss of L2ARC SSD Behaviour

2010-05-06 Thread Bob Friesenhahn
On Fri, 7 May 2010, Michael Sullivan wrote: Well, if you are striping over multiple devices then your I/O should be spread over the devices and you should be reading them all simultaneously rather than just accessing a single device. Traditional striping would give 1/n performance improvement

Re: [zfs-discuss] Best practice for full stystem backup - equivelent of ufsdump/ufsrestore

2010-05-06 Thread Cindy Swearingen
Hi Bob, You can review the latest Solaris 10 and OpenSolaris release dates here: http://www.oracle.com/ocom/groups/public/@ocom/documents/webcontent/059542.pdf Solaris 10 release, CY2010 OpenSolaris release, 1st half CY2010 Thanks, Cindy On 05/05/10 18:03, Bob Friesenhahn wrote: On Wed, 5

Re: [zfs-discuss] Loss of L2ARC SSD Behaviour

2010-05-06 Thread Brandon High
On Thu, May 6, 2010 at 11:08 AM, Michael Sullivan michael.p.sulli...@mac.com wrote: The round-robin access I am referring to, is the way the L2ARC vdevs appear to be accessed.  So, any given object will be taken from a single device rather than from several devices simultaneously, thereby

Re: [zfs-discuss] dedup ration for iscsi-shared zfs dataset

2010-05-06 Thread Brandon High
On Thu, May 6, 2010 at 11:31 AM, eXeC001er execoo...@gmail.com wrote: How can i get this info? $ man zpool $ zpool list NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT rpool 111G 15.5G 95.5G 13% 1.00x ONLINE - tank 7.25T 3.16T 4.09T 43% 1.12x ONLINE - $ zpool get
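For the pool-level figure the thread points at, a sketch with a hypothetical pool name (dedup space savings are accounted at the pool level, not per dataset):

```shell
zpool list tank              # the DEDUP column shows the pool-wide ratio
zpool get dedupratio tank    # the same figure as a queryable property
```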

Re: [zfs-discuss] Loss of L2ARC SSD Behaviour

2010-05-06 Thread Robert Milkowski
On 06/05/2010 19:08, Michael Sullivan wrote: Hi Marc, Well, if you are striping over multiple devices then your I/O should be spread over the devices and you should be reading them all simultaneously rather than just accessing a single device. Traditional striping would give 1/n performance

Re: [zfs-discuss] why both dedup and compression?

2010-05-06 Thread Erik Trimble
On Fri, 2010-05-07 at 03:10 +0900, Michael Sullivan wrote: This is interesting, but what about iSCSI volumes for virtual machines? Compress or de-dupe? Assuming the virtual machine was made from a clone of the original iSCSI or a master iSCSI volume. Does anyone have any real world data

Re: [zfs-discuss] Heads Up: zil_disable has expired, ceased to be, ...

2010-05-06 Thread Wes Felter
On 5/6/10 5:28 AM, Robert Milkowski wrote: sync=disabled Synchronous requests are disabled. File system transactions only commit to stable storage on the next DMU transaction group commit which can be many seconds. Is there a way (short of DTrace) to write() some data and get notified when

Re: [zfs-discuss] Heads Up: zil_disable has expired, ceased to be, ...

2010-05-06 Thread Nicolas Williams
On Thu, May 06, 2010 at 03:30:05PM -0500, Wes Felter wrote: On 5/6/10 5:28 AM, Robert Milkowski wrote: sync=disabled Synchronous requests are disabled. File system transactions only commit to stable storage on the next DMU transaction group commit which can be many seconds. Is there a

Re: [zfs-discuss] dedup ration for iscsi-shared zfs dataset

2010-05-06 Thread Cindy Swearingen
Hi-- Even though the dedup property can be set on a file system basis, dedup space usage is accounted for from the pool level by using zpool list command. My non-expert opinion is that it would be near impossible to report space usage for dedup and non-dedup file systems at the file system

Re: [zfs-discuss] Loss of L2ARC SSD Behaviour

2010-05-06 Thread Richard Elling
On May 6, 2010, at 11:08 AM, Michael Sullivan wrote: Well, if you are striping over multiple devices then your I/O should be spread over the devices and you should be reading them all simultaneously rather than just accessing a single device. Traditional striping would give 1/n performance

Re: [zfs-discuss] Heads Up: zil_disable has expired, ceased to be, ...

2010-05-06 Thread Robert Milkowski
On 06/05/2010 21:45, Nicolas Williams wrote: On Thu, May 06, 2010 at 03:30:05PM -0500, Wes Felter wrote: On 5/6/10 5:28 AM, Robert Milkowski wrote: sync=disabled Synchronous requests are disabled. File system transactions only commit to stable storage on the next DMU transaction

Re: [zfs-discuss] Does ZFS use large memory pages?

2010-05-06 Thread Rob
Hi Gary, I would not remove this line in /etc/system. We have been combatting this bug for a while now on our ZFS file system running JES Commsuite 7. I would be interested in finding out how you were able to pin point the problem. We seem to have no worries with the system currently, but

Re: [zfs-discuss] Loss of L2ARC SSD Behaviour

2010-05-06 Thread BM
On Fri, May 7, 2010 at 4:57 AM, Brandon High bh...@freaks.com wrote: I believe that the L2ARC behaves the same as a pool with multiple top-level vdevs. It's not typical striping, where every write goes to all devices. Writes may go to only one device, or may avoid a device entirely while using