Re: [zfs-discuss] ZFS dedup report tool

2009-12-10 Thread Bruno Sousa
Hi, Couldn't agree more... but I just asked if there was such a tool :) Bruno Richard Elling wrote: On Dec 9, 2009, at 11:07 AM, Bruno Sousa wrote: Hi, Despite the fact that I agree in general with your comments, in reality it all comes down to money. So in this case, if I could prove that ZFS

[zfs-discuss] Encryption

2009-12-10 Thread Matthew Carras
So far I'm using file-container encryption with TrueCrypt on the client, but I would seriously like native encryption support in Solaris itself, especially in ZFS. From http://hub.opensolaris.org/bin/view/Project+zfs-crypto/ I see it's hopefully coming in Q1 2010? Are there any alternatives

Re: [zfs-discuss] Encryption

2009-12-10 Thread Darren J Moffat
Matthew Carras wrote: So far I'm using file-container encryption with TrueCrypt on the client, but I would seriously like native encryption support in Solaris itself, especially in ZFS. From http://hub.opensolaris.org/bin/view/Project+zfs-crypto/ I see it's hopefully coming in Q1 2010?

Re: [zfs-discuss] will deduplication know about old blocks?

2009-12-10 Thread Darren J Moffat
Cyril Plisko wrote: On Thu, Dec 10, 2009 at 12:37 AM, James Lever j...@jamver.id.au wrote: On 10/12/2009, at 5:36 AM, Adam Leventhal wrote: The dedup property applies to all writes so the settings for the pool of origin don't matter, just those on the destination pool. Just a quick related

Re: [zfs-discuss] will deduplication know about old blocks?

2009-12-10 Thread Cyril Plisko
BTW, are there any implications of having dedup=on on rpool/dump? I know that compression is turned off explicitly for rpool/dump. It will be ignored, because when you write to the dump ZVOL it doesn't go through the normal ZIO pipeline, so the deduplication code is never run in that
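A quick way to see this for yourself (a minimal sketch; exact output varies by build):

    # Inspect what is set on the dump volume; dedup, like compression,
    # is ignored there because dump writes bypass the normal ZIO pipeline
    zfs get compression,dedup rpool/dump

    # Setting it succeeds but has no effect on dump writes
    zfs set dedup=on rpool/dump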

[zfs-discuss] ZFS incremental receives fail

2009-12-10 Thread Andrew Robert Nicols
We've been using ZFS for about two years now and make a lot of use of zfs send/receive to send our data from one X4500 to another. This has been working well for the past 18 months that we've been doing the sends. I recently upgraded the receiving thumper to Solaris 10 u8 and since then, I've
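For readers following along, the pattern in use is roughly the following; the dataset, snapshot, and host names here are placeholders, not taken from the thread:

    # On the sending X4500: take a new snapshot
    zfs snapshot tank/data@today

    # Send the increment since the last snapshot the receiver holds,
    # piped over ssh to the receiving thumper
    zfs send -i tank/data@yesterday tank/data@today | \
        ssh receiver zfs receive -F tank/data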

Re: [zfs-discuss] will deduplication know about old blocks?

2009-12-10 Thread Darren J Moffat
Cyril Plisko wrote: BTW, are there any implications of having dedup=on on rpool/dump? I know that compression is turned off explicitly for rpool/dump. It will be ignored, because when you write to the dump ZVOL it doesn't go through the normal ZIO pipeline, so the deduplication code is never

Re: [zfs-discuss] ZFS incremental receives fail

2009-12-10 Thread Andrew Robert Nicols
On Thu, Dec 10, 2009 at 09:50:43AM +, Andrew Robert Nicols wrote: We've been using ZFS for about two years now and make a lot of use of zfs send/receive to send our data from one X4500 to another. This has been working well for the past 18 months that we've been doing the sends. I

[zfs-discuss] hard drive choice, TLER/ERC/CCTL

2009-12-10 Thread Nathan
http://en.wikipedia.org/wiki/Time-Limited_Error_Recovery Is there a way, other than buying enterprise (RAID-specific) drives, to use normal drives in an array? Does anyone have any success stories regarding a particular model? The TLER setting cannot be edited on newer drives from Western Digital

Re: [zfs-discuss] hard drive choice, TLER/ERC/CCTL

2009-12-10 Thread Darren J Moffat
Nathan wrote: http://en.wikipedia.org/wiki/Time-Limited_Error_Recovery Is there a way, other than buying enterprise (RAID-specific) drives, to use normal drives in an array? Does anyone have any success stories regarding a particular model? The TLER setting cannot be edited on newer drives from

Re: [zfs-discuss] Changing ZFS drive pathing

2009-12-10 Thread Mike Johnston
Thanks for the info, Alexander... I will test this out. I'm just wondering what it's going to see after I install PowerPath. Since each drive will have four paths, plus the PowerPath pseudo-device... after doing a zpool import, how will I force it to use a specific path? Thanks again! Good to know that this can
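One common trick, sketched here with invented device names: zpool import searches only the directory you give it with -d, so a directory containing links to just the PowerPath pseudo-devices forces the pool onto those paths:

    # Collect only the PowerPath pseudo-devices in one directory
    mkdir /emcdevs
    ln -s /dev/dsk/emcpower0c /emcdevs/
    ln -s /dev/dsk/emcpower1c /emcdevs/

    # Import searching only that directory; the pool's vdev paths
    # are then recorded against the PowerPath devices
    zpool import -d /emcdevs mypool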

Re: [zfs-discuss] hard drive choice, TLER/ERC/CCTL

2009-12-10 Thread Nathan
Sorry, I probably didn't make myself entirely clear. Basically, drives without appropriate TLER settings drop out of RAID arrays seemingly at random. * Error Recovery - This is called various things by various manufacturers (TLER, ERC, CCTL). In a desktop drive, the goal is to do everything possible to recover the
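On drives that support SCT Error Recovery Control, the timeout can sometimes be queried or set with recent smartmontools; a hedged sketch, since support is patchy and many desktop drives reject the command outright (the device path is illustrative):

    # Read the current SCT ERC read/write timeouts (units of 100 ms)
    smartctl -l scterc /dev/rdsk/c0t0d0

    # Ask the drive to give up after 7 seconds instead of retrying for minutes
    smartctl -l scterc,70,70 /dev/rdsk/c0t0d0

Note the setting is typically lost on power cycle, so it has to be reapplied at boot.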

Re: [zfs-discuss] hard drive choice, TLER/ERC/CCTL

2009-12-10 Thread Nathan
http://www.stringliterals.com/?p=77 This guy talks about it too, under Hard Drives.

[zfs-discuss] iSCSI PGR

2009-12-10 Thread Lo Zio
Hi, I created a ZFS volume and shared it as iSCSI with shareiscsi=on. My Windows 2008 servers can map it with no problems, but cluster validation fails, saying that Persistent Reservation is not supported. Here http://hub.opensolaris.org/bin/view/Project+iscsitgt/ I can read that PGR is supported.
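For context, the legacy shareiscsi path looks like this (names invented); whether the resulting target honors SCSI-3 Persistent Group Reservations is exactly the question, and depends on the target stack:

    # Create a ZFS volume and export it via the legacy iscsitgt daemon
    zfs create -V 100G tank/iscsivol
    zfs set shareiscsi=on tank/iscsivol

    # Inspect the target that the daemon created
    iscsitadm list target -v

The newer COMSTAR stack (itadm/stmfadm) was generally the recommended route for cluster features like PGR.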

Re: [zfs-discuss] hard drive choice, TLER/ERC/CCTL

2009-12-10 Thread Mark Grant
Yeah, this is my main concern with moving from my cheap Linux server with no redundancy to ZFS RAID on OpenSolaris; I don't really want to have to pay twice as much for 'enterprise' disks which appear to be exactly the same drives with a firmware flag set to limit read retries,

Re: [zfs-discuss] hard drive choice, TLER/ERC/CCTL

2009-12-10 Thread Andrew Gabriel
Mark Grant wrote: Yeah, this is my main concern with moving from my cheap Linux server with no redundancy to ZFS RAID on OpenSolaris; I don't really want to have to pay twice as much for 'enterprise' disks which appear to be exactly the same drives with a firmware flag set to

Re: [zfs-discuss] hard drive choice, TLER/ERC/CCTL

2009-12-10 Thread Erik Trimble
Mark Grant wrote: Yeah, this is my main concern with moving from my cheap Linux server with no redundancy to ZFS RAID on OpenSolaris; I don't really want to have to pay twice as much for 'enterprise' disks which appear to be exactly the same drives with a firmware flag set to

Re: [zfs-discuss] hard drive choice, TLER/ERC/CCTL

2009-12-10 Thread Mark Grant
From what I remember, the problem with the hardware RAID controller is that the long delay before the drive responds causes the drive to be dropped from the RAID, and then, if you get another error on a different drive while trying to repair the RAID, that disk is also marked failed and your

Re: [zfs-discuss] hard drive choice, TLER/ERC/CCTL

2009-12-10 Thread Richard Elling
On Dec 10, 2009, at 8:36 AM, Mark Grant wrote: From what I remember, the problem with the hardware RAID controller is that the long delay before the drive responds causes the drive to be dropped from the RAID, and then, if you get another error on a different drive while trying to repair the

Re: [zfs-discuss] hard drive choice, TLER/ERC/CCTL

2009-12-10 Thread Mark Grant
Thanks, sounds like it should handle all but the worst faults OK then; I believe the maximum retry timeout is typically set to about 60 seconds in consumer drives.

[zfs-discuss] quotas on zfs at solaris 10 update 9 (10/09)

2009-12-10 Thread Len Zaifman
We have just updated a major file server to Solaris 10 update 9 so that we can control user and group disk usage on a single filesystem. We were using QFS, and one nice thing about samquota was that it told you your soft limit, your hard limit, and your usage, both of disk space and of the number of
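The ZFS user and group quota support in that update is driven by per-dataset properties; a minimal sketch, with user, group, and dataset names invented for illustration:

    # Set a 10 GB quota for one user and 50 GB for a group
    zfs set userquota@alice=10G tank/home
    zfs set groupquota@staff=50G tank/home

    # Report per-user and per-group usage against those quotas
    zfs userspace tank/home
    zfs groupspace tank/home

Unlike samquota, these are hard limits only; ZFS has no notion of a soft limit or grace period.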

Re: [zfs-discuss] ZFS incremental receives fail

2009-12-10 Thread Brandon High
On Thu, Dec 10, 2009 at 1:50 AM, Andrew Robert Nicols andrew.nic...@luns.net.uk wrote: The last snapshot received was named thumperpool/m...@200911301000 and since then we've been completely unable to receive any snapshots -- even if I've literally just snapshotted, removed back to the previous

Re: [zfs-discuss] quotas on zfs at solaris 10 update 9 (10/09)

2009-12-10 Thread Dennis Clarke
We have just updated a major file server to Solaris 10 update 9 so that we can control user and group disk usage on a single filesystem. We were using QFS, and one nice thing about samquota was that it told you your soft limit, your hard limit, and your usage, both of disk space and of the number of

[zfs-discuss] Confusion regarding 'zfs send'

2009-12-10 Thread Brandon High
I'm playing around with snv_128 on one of my systems, and trying to see what kind of benefits enabling dedup will give me. The standard practice for reprocessing data that's already stored, to add compression and now dedup, seems to be a send/receive pipe similar to: zfs send -R old
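Spelled out (dataset names are placeholders), the pattern is a recursive replication stream received where dedup is enabled; as the replies below discuss, -R also carries the source's properties with it, which is where the confusion starts:

    # Enable dedup on the destination, then replay the data through it
    zfs set dedup=on tank
    zfs snapshot -r tank/data@migrate
    zfs send -R tank/data@migrate | zfs receive tank/data-new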

[zfs-discuss] Force file read even with checksum error

2009-12-10 Thread Stefano Pini
Hi guys, I have a pool made of three striped LUNs. After some retryable SCSI messages that occurred during storage activity, zpool status started to report a checksum error on one file only. A zpool scrub finds it but doesn't fix it, and when I try to read the file, I get an I/O error. Again,
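Whatever the recovery route, the first stop is usually zpool status -v, which names the damaged files; note that a pool of striped LUNs has no redundancy, so a scrub can detect but never repair the bad block (pool name invented):

    # List errors together with the affected file paths
    zpool status -v mypool

    # Re-reads everything, but with no mirror or raidz copy
    # there is nothing to rewrite the bad block from
    zpool scrub mypool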

Re: [zfs-discuss] hard drive choice, TLER/ERC/CCTL

2009-12-10 Thread Richard Bruce
Mark Grant wrote: I don't think ZFS does any timing out. It's up to the drivers underneath to time out and send an error back to ZFS - only they know what's reasonable for a given disk type and bus type. I think that is the issue. By my reading, many (if not most) consumer drives don't

Re: [zfs-discuss] Confusion regarding 'zfs send'

2009-12-10 Thread Matthew Ahrens
Brandon High wrote: I'm playing around with snv_128 on one of my systems, and trying to see what kind of benefits enabling dedup will give me. The standard practice for reprocessing data that's already stored, to add compression and now dedup, seems to be a send/receive pipe similar to:

Re: [zfs-discuss] Confusion regarding 'zfs send'

2009-12-10 Thread Brandon High
On Thu, Dec 10, 2009 at 2:15 PM, Tom Erickson tom.erick...@sun.com wrote: After upgrading your pool on the receive side and doing 'zfs receive' once to initialize the new behavior, you can thereafter set a property locally and it will not be overwritten by 'zfs receive'. Maybe I don't

Re: [zfs-discuss] Confusion regarding 'zfs send'

2009-12-10 Thread Brandon High
On Thu, Dec 10, 2009 at 2:53 PM, Matthew Ahrens matthew.ahr...@sun.com wrote: Well, changing the compression property doesn't really interrupt service, but I can understand not wanting to have even a few blocks with the wrong I was thinking of sharesmb or sharenfs settings when I wrote that.

Re: [zfs-discuss] panic when rebooting from snapshot

2009-12-10 Thread Craig S. Bell
You may be interested in PSARC 2009/670: Read-Only Boot from ZFS Snapshot. Here's the description from: http://arc.opensolaris.org/caselog/PSARC/2009/670/20091208_joep.vesseur Allow for booting from a ZFS snapshot. The boot image will be read-only. Early in boot a clone of the root is
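If the interface follows the existing boot -Z convention for picking a boot dataset, usage would presumably look like this on SPARC (an assumption on my part; the case log has the authoritative syntax):

    ok boot -Z rpool/ROOT/mybe@mysnapshot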

Re: [zfs-discuss] ZFS incremental receives fail

2009-12-10 Thread Edward Ned Harvey
We've been using ZFS for about two years now and make a lot of use of zfs send/receive to send our data from one X4500 to another. This has been working well for the past 18 months that we've been doing the sends. I recently upgraded the receiving thumper to Solaris 10 u8 and since then,

Re: [zfs-discuss] ZFS pool unusable after attempting to destroy a dataset with dedup enabled

2009-12-10 Thread Jack Kielsmeier
My import is still going (I hope, as I can't confirm: my system appears to be totally locked except for the little blinking console cursor); it's been well over a day. I'm less hopeful now, but will still let it do its thing for another couple of days.

[zfs-discuss] Doing ZFS rollback with preserving later created clones/snapshot?

2009-12-10 Thread Alexander Skwar
Hi. Is it possible, on Solaris 10 5/09, to roll back to a ZFS snapshot WITHOUT destroying later-created clones or snapshots? Example: --($ ~)-- sudo zfs snapshot rpool/r...@01 --($ ~)-- sudo zfs snapshot rpool/r...@02 --($ ~)-- sudo zfs clone rpool/r...@02 rpool/ROOT-02 --($ ~)-- LC_ALL=C
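The short answer in this situation is usually: rollback is inherently destructive of later snapshots, but cloning the target snapshot achieves the same effect non-destructively. A sketch, writing the thread's obfuscated dataset as a placeholder:

    # zfs rollback -r rpool/fs@01 would destroy @02, and with the
    # clone present you'd need -R, which destroys rpool/ROOT-02 as well;
    # cloning the target snapshot sidesteps all of that
    zfs clone rpool/fs@01 rpool/fs-01

    # @02 and the existing clone rpool/ROOT-02 survive untouched
    zfs list -t snapshot -r rpool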

Re: [zfs-discuss] ZIL corrupt, not recoverable even with logfix

2009-12-10 Thread m...@bruningsystems.com
Hi James, I just spent about a week recovering about 10TB of file data for someone who encountered a (somewhat?) similar problem to what you are seeing. If you are still having problems with this, please contact me off-list. Regards, max James Risner wrote: It was created on AMD64 FreeBSD