Re: [zfs-discuss] How do I protect my zfs pools?

2009-11-02 Thread Ian Collins
Donald Murray, P.Eng. wrote: What steps are _you_ taking to protect _your_ pools? Replication and tape backup. How are you protecting your enterprise data? Replication and tape backup. How often are you losing an entire pool and restoring from backups? Never (since I started using ZFS
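The replication half of that strategy can be sketched with a recursive snapshot plus a replication send stream. This is a hedged sketch, not Ian's actual setup: the pool, snapshot, and host names are illustrative.

```shell
# Hedged sketch: replicate pool 'tank' to 'backuppool' on another host.
# 'zfs send -R' emits a replication stream containing the snapshot and
# all datasets/properties below it.
zfs snapshot -r tank@backup-20091102
zfs send -R tank@backup-20091102 | ssh backuphost zfs recv -d backuppool
```

Subsequent runs can use `zfs send -R -i` with the previous snapshot to send only the incremental changes.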

[zfs-discuss] dedupe is in

2009-11-02 Thread David Magda
Deduplication was committed last night by Mr. Bonwick: Log message: PSARC 2009/571 ZFS Deduplication Properties 6677093 zfs should have dedup capability http://mail.opensolaris.org/pipermail/onnv-notify/2009-November/010683.html Via c0t0d0s0.org.

Re: [zfs-discuss] dedupe is in

2009-11-02 Thread Alex Lam S.L.
Terrific! Can't wait to read the man pages / blogs about how to use it... Alex. On Mon, Nov 2, 2009 at 12:21 PM, David Magda dma...@ee.ryerson.ca wrote: Deduplication was committed last night by Mr. Bonwick: Log message: PSARC 2009/571 ZFS Deduplication Properties 6677093 zfs should have

Re: [zfs-discuss] dedupe is in

2009-11-02 Thread C. Bergström
Why didn't one of the developers from green-bytes do the commit? :P /sarcasm ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

Re: [zfs-discuss] dedupe is in

2009-11-02 Thread Cyril Plisko
On Mon, Nov 2, 2009 at 2:25 PM, Alex Lam S.L. alexla...@gmail.com wrote: Terrific! Can't wait to read the man pages / blogs about how to use it... Alex, you may wish to check PSARC 2009/571 materials [1] for a sneak preview :) [1] http://arc.opensolaris.org/caselog/PSARC/2009/571/ Alex.

Re: [zfs-discuss] dedupe is in

2009-11-02 Thread Victor Latushkin
David Magda wrote: Deduplication was committed last night by Mr. Bonwick: Log message: PSARC 2009/571 ZFS Deduplication Properties 6677093 zfs should have dedup capability http://mail.opensolaris.org/pipermail/onnv-notify/2009-November/010683.html And PSARC 2009/479 zpool recovery

Re: [zfs-discuss] Kernel panic on zfs import (hardware failure)

2009-11-02 Thread Donald Murray, P.Eng.
Hey, On Sat, Oct 31, 2009 at 5:03 PM, Victor Latushkin victor.latush...@sun.com wrote: Donald Murray, P.Eng. wrote: Hi, I've got an OpenSolaris 2009.06 box that will reliably panic whenever I try to import one of my pools. What's the best practice for recovering (before I resort to nuking

Re: [zfs-discuss] How do I protect my zfs pools?

2009-11-02 Thread Donald Murray, P.Eng.
Hey, On Sun, Nov 1, 2009 at 8:48 PM, Donald Murray, P.Eng. donaldm...@gmail.com wrote: Hi, I may have lost my first zpool, due to ... well, we're not yet sure. The 'zpool import tank' causes a panic -- one which I'm not even able to capture via savecore. Looks like I've found the root

Re: [zfs-discuss] marvell88sx2 driver build126

2009-11-02 Thread Orvar Korvar
I have the same card and might have seen the same problem. Yesterday I upgraded to b126 and started to migrate all my data to an 8-disc raidz2 connected to such a card. And suddenly ZFS reported checksum errors. I thought the drives were faulty. But you suggest the problem could have been the

Re: [zfs-discuss] dedupe is in

2009-11-02 Thread Jeff Bonwick
Terrific! Can't wait to read the man pages / blogs about how to use it... Just posted one: http://blogs.sun.com/bonwick/en_US/entry/zfs_dedup Enjoy, and let me know if you have any questions or suggestions for follow-on posts. Jeff

[zfs-discuss] dedup question

2009-11-02 Thread Breandan Dezendorf
Does dedup work at the pool level or the filesystem/dataset level? For example, if I were to do this: bash-3.2$ mkfile 100m /tmp/largefile bash-3.2$ zfs set dedup=off tank bash-3.2$ zfs set dedup=on tank/dir1 bash-3.2$ zfs set dedup=on tank/dir2 bash-3.2$ zfs set dedup=on tank/dir3 bash-3.2$ cp
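Breandan's experiment, completed as a hedged sketch (it assumes a pool named tank with child datasets dir1–dir3 mounted under /tank, as in the commands above; property names per PSARC 2009/571):

```shell
# Continue the experiment: copy the same file into several
# dedup-enabled datasets, then check the pool-wide dedup ratio.
mkfile 100m /tmp/largefile
zfs set dedup=off tank
zfs set dedup=on tank/dir1
zfs set dedup=on tank/dir2
zfs set dedup=on tank/dir3
cp /tmp/largefile /tank/dir1/
cp /tmp/largefile /tank/dir2/
cp /tmp/largefile /tank/dir3/
zpool get dedupratio tank   # a ratio above 1.00x means copies were folded together
```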

Re: [zfs-discuss] dedupe is in

2009-11-02 Thread Mike Gerdts
On Mon, Nov 2, 2009 at 7:20 AM, Jeff Bonwick jeff.bonw...@sun.com wrote: Terrific! Can't wait to read the man pages / blogs about how to use it... Just posted one: http://blogs.sun.com/bonwick/en_US/entry/zfs_dedup Enjoy, and let me know if you have any questions or suggestions for

Re: [zfs-discuss] dedup question

2009-11-02 Thread Enda O'Connor
It works at a pool-wide level, with the ability to exclude at a dataset level, or the converse: if set to off at the top-level dataset, lower-level datasets can then be set to on, i.e. one can include and exclude depending on the datasets' contents. So largefile will get deduped in the example below.
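Since dedup behaves like other inheritable ZFS properties, the include/exclude pattern Enda describes can be inspected directly. A hedged sketch, assuming a pool named tank:

```shell
# dedup is inherited like other ZFS properties; the SOURCE column
# shows which datasets override the top-level setting.
zfs set dedup=off tank          # top-level default: off
zfs set dedup=on tank/dir1      # opt this dataset in
zfs get -r dedup tank           # SOURCE reads 'local' vs 'inherited from tank'
```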

Re: [zfs-discuss] dedup question

2009-11-02 Thread Breandan Dezendorf
On Mon, Nov 2, 2009 at 9:41 AM, Enda O'Connor enda.ocon...@sun.com wrote: it works at a pool wide level with the ability to exclude at a dataset level, or the converse, if set to off at top level dataset can then set lower level datasets to on, ie one can include and exclude depending on the

Re: [zfs-discuss] dedupe is in

2009-11-02 Thread Tristan Ball
This is truly awesome news! What's the best way to dedup existing datasets? Will send/recv work, or do we just cp things around? Regards, Tristan Jeff Bonwick wrote: Terrific! Can't wait to read the man pages / blogs about how to use it... Just posted one:

Re: [zfs-discuss] dedupe is in

2009-11-02 Thread Ross
Double WOHOO! Thanks Victor! -- This message posted from opensolaris.org

Re: [zfs-discuss] dedupe is in

2009-11-02 Thread Victor Latushkin
On 02.11.09 18:38, Ross wrote: Double WOHOO! Thanks Victor! Thanks should go to Tim Haley, Jeff Bonwick and George Wilson ;-)

Re: [zfs-discuss] dedupe is in

2009-11-02 Thread Ross Smith
Ok, thanks everyone then (but still thanks to Victor for the heads up) :-) On Mon, Nov 2, 2009 at 4:03 PM, Victor Latushkin victor.latush...@sun.com wrote: On 02.11.09 18:38, Ross wrote: Double WOHOO! Thanks Victor! Thanks should go to Tim Haley, Jeff Bonwick and George Wilson ;-)

Re: [zfs-discuss] dedup question

2009-11-02 Thread Victor Latushkin
Enda O'Connor wrote: it works at a pool wide level with the ability to exclude at a dataset level, or the converse, if set to off at top level dataset can then set lower level datasets to on, ie one can include and exclude depending on the datasets contents. so largefile will get deduped in

Re: [zfs-discuss] Sun Flash Accelerator F20

2009-11-02 Thread Eric Sproul
Matthias Appel wrote: I am using 2x Gbit Ethernet and 4 Gig of RAM; 4 Gig of RAM for the iRAM should be more than sufficient (0.5 times RAM and 10s worth of IO). I am aware that this RAM is non-ECC, so I plan to mirror the ZIL device. Any considerations for this setup? Will it work as I

Re: [zfs-discuss] dedupe is in

2009-11-02 Thread Dennis Clarke
Terrific! Can't wait to read the man pages / blogs about how to use it... Just posted one: http://blogs.sun.com/bonwick/en_US/entry/zfs_dedup Enjoy, and let me know if you have any questions or suggestions for follow-on posts. Looking at FIPS-180-3 in sections 4.1.2 and 4.1.3 I was

Re: [zfs-discuss] dedupe is in

2009-11-02 Thread Mike Gerdts
On Mon, Nov 2, 2009 at 11:58 AM, Dennis Clarke dcla...@blastwave.org wrote: Terrific! Can't wait to read the man pages / blogs about how to use it... Just posted one: http://blogs.sun.com/bonwick/en_US/entry/zfs_dedup Enjoy, and let me know if you have any questions or suggestions for

Re: [zfs-discuss] dedupe is in

2009-11-02 Thread Nicolas Williams
On Mon, Nov 02, 2009 at 12:58:32PM -0500, Dennis Clarke wrote: Looking at FIPS-180-3 in sections 4.1.2 and 4.1.3 I was thinking that the major leap from SHA256 to SHA512 was a 32-bit to 64-bit step. ZFS doesn't have enough room in blkptr_t for 512-bit hashes. Nico
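The size gap Nico points at is easy to see from the digests themselves. This sketch uses GNU coreutils `sha256sum`/`sha512sum`; on Solaris the rough equivalent is `digest -a sha256`:

```shell
# A SHA-256 digest is 64 hex chars (256 bits), which fits the 256-bit
# checksum field in blkptr_t; SHA-512 is 128 hex chars (512 bits), which
# does not.
h256=$(printf 'hello' | sha256sum | awk '{print $1}')
h512=$(printf 'hello' | sha512sum | awk '{print $1}')
echo "sha256: ${#h256} hex chars = $(( ${#h256} * 4 )) bits"
echo "sha512: ${#h512} hex chars = $(( ${#h512} * 4 )) bits"
```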

Re: [zfs-discuss] dedup question

2009-11-02 Thread Jeremy Kitchen
On Nov 2, 2009, at 9:07 AM, Victor Latushkin wrote: Enda O'Connor wrote: it works at a pool wide level with the ability to exclude at a dataset level, or the converse, if set to off at top level dataset can then set lower level datasets to on, ie one can include and exclude depending on

Re: [zfs-discuss] dedup question

2009-11-02 Thread Cyril Plisko
On Mon, Nov 2, 2009 at 9:01 PM, Jeremy Kitchen kitc...@scriptkitchen.com wrote: forgive my ignorance, but what's the advantage of this new dedup over the existing compression option? Wouldn't full-filesystem compression naturally de-dupe? No, the compression works on the block level. If

Re: [zfs-discuss] dedup question

2009-11-02 Thread roland
forgive my ignorance, but what's the advantage of this new dedup over the existing compression option? it may provide another space saving advantage. depending on your data, the savings can be very significant. Wouldn't full-filesystem compression naturally de-dupe? no. compression doesn't

Re: [zfs-discuss] CR6894234 -- improved sgid directory compatibility with non-Solaris NFS clients

2009-11-02 Thread Paul B. Henson
On Thu, 29 Oct 2009 casper@sun.com wrote: Do you have the complete NFS trace output? My reading of the source code says that the file will be created with the proper gid so I am actually believing that the client over corrects the attributes after creating the file/directory. Just

Re: [zfs-discuss] sub-optimal ZFS performance

2009-11-02 Thread Miles Nordin
hj == Henrik Johansson henr...@henkis.net writes: hj A überquota property for the whole pool would have been nice hj [to get out-of-space errors instead of fragmentation] just make an empty filesystem with a reservation. That's what I do. NAME
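Miles's trick in command form (pool name and size are illustrative): an empty filesystem with a reservation keeps that much space permanently spoken for, so writers elsewhere in the pool hit out-of-space before the pool is truly full and fragmented.

```shell
# Hedged sketch: hold back 10G so the pool never fills to 100%.
zfs create -o reservation=10G tank/spare
# To reclaim the space in an emergency:
#   zfs set reservation=none tank/spare
```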

Re: [zfs-discuss] CR6894234 -- improved sgid directory compatibility with non-Solaris NFS clients

2009-11-02 Thread Paul B. Henson
On Sat, 31 Oct 2009, Al Hopper wrote: Kudos to you - nice technical analysis and presentation, Keep lobbying your point of view - I think interoperability should win out if it comes down to an arbitrary decision. Thanks; but so far that doesn't look promising. Right now I've got a cron job

Re: [zfs-discuss] dedup question

2009-11-02 Thread Victor Latushkin
Jeremy Kitchen wrote: On Nov 2, 2009, at 9:07 AM, Victor Latushkin wrote: Enda O'Connor wrote: it works at a pool wide level with the ability to exclude at a dataset level, or the converse, if set to off at top level dataset can then set lower level datasets to on, ie one can include and

Re: [zfs-discuss] dedup question

2009-11-02 Thread Nicolas Williams
On Mon, Nov 02, 2009 at 11:01:34AM -0800, Jeremy Kitchen wrote: forgive my ignorance, but what's the advantage of this new dedup over the existing compression option? Wouldn't full-filesystem compression naturally de-dupe? If you snapshot/clone as you go, then yes, dedup will do little

Re: [zfs-discuss] dedup question

2009-11-02 Thread Mike Gerdts
On Mon, Nov 2, 2009 at 2:16 PM, Nicolas Williams nicolas.willi...@sun.com wrote: On Mon, Nov 02, 2009 at 11:01:34AM -0800, Jeremy Kitchen wrote: forgive my ignorance, but what's the advantage of this new dedup over the existing compression option? Wouldn't full-filesystem compression

Re: [zfs-discuss] dedupe is in

2009-11-02 Thread Daniel Streicher
Okay, nice to hear ZFS can now use dedup. But how can I update my current OpenSolaris (2009.06) or Solaris 10 (5/09) to use this? Or do I have to wait for a new stable release of Solaris 10 / OpenSolaris? -- Daniel

Re: [zfs-discuss] dedupe is in

2009-11-02 Thread James Lever
On 03/11/2009, at 7:32 AM, Daniel Streicher wrote: But how can I update my current OpenSolaris (2009.06) or Solaris 10 (5/09) to use this? Or do I have to wait for a new stable release of Solaris 10 / OpenSolaris? For OpenSolaris, you change your repository and switch to the development
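For OpenSolaris 2009.06, the switch James describes is a publisher change followed by an image update. A hedged sketch (the dev repository URL is as commonly cited at the time; verify before use):

```shell
# Point the image at the development repository and upgrade.
pkg set-publisher -O http://pkg.opensolaris.org/dev opensolaris.org
pkg image-update
# Then reboot into the new boot environment.
```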

Re: [zfs-discuss] dedupe is in

2009-11-02 Thread Alex Lam S.L.
Looks great - and by the time an OpenSolaris build has it, I will have a brand new laptop to put it on ;-) One question though - I have a file server at home with 4x750GB on raidz1. When I upgrade to the latest build and set dedup=on, given that it does not have an offline mode, there is no way to

Re: [zfs-discuss] dedupe is in

2009-11-02 Thread Nigel Smith
ZFS dedup will be in snv_128, but putbacks to snv_128 will not likely close till the end of this week. The OpenSolaris dev repository was updated to snv_126 last Thursday: http://mail.opensolaris.org/pipermail/opensolaris-announce/2009-October/001317.html So it looks like about 5 weeks before

Re: [zfs-discuss] dedupe is in

2009-11-02 Thread Enda O'Connor
James Lever wrote: On 03/11/2009, at 7:32 AM, Daniel Streicher wrote: But how can I update my current OpenSolaris (2009.06) or Solaris 10 (5/09) to use this? Or do I have to wait for a new stable release of Solaris 10 / OpenSolaris? For OpenSolaris, you change your repository and switch to the

[zfs-discuss] Location of ZFS documentation (source)?

2009-11-02 Thread Alex Blewitt
The man page documentation from the old Apple port (http://github.com/alblue/mac-zfs/tree/master/zfs_documentation/man8/) doesn't seem to have a corresponding source file in the onnv-gate repository (http://hub.opensolaris.org/bin/view/Project+onnv/WebHome), although I've found the text on-line

Re: [zfs-discuss] Location of ZFS documentation (source)?

2009-11-02 Thread Cindy Swearingen
Hi Alex, I'm checking with some folks on how we handled this handoff for the previous project. I'll get back to you shortly. Thanks, Cindy On 11/02/09 16:07, Alex Blewitt wrote: The man pages documentation from the old Apple port

Re: [zfs-discuss] dedupe is in

2009-11-02 Thread Darren J Moffat
Mike Gerdts wrote: On Mon, Nov 2, 2009 at 7:20 AM, Jeff Bonwick jeff.bonw...@sun.com wrote: Terrific! Can't wait to read the man pages / blogs about how to use it... Just posted one: http://blogs.sun.com/bonwick/en_US/entry/zfs_dedup Enjoy, and let me know if you have any questions or

Re: [zfs-discuss] Solaris disk confusion ?

2009-11-02 Thread Marion Hakanson
zfs...@jeremykister.com said: # format -e c12t1d0 selecting c12t1d0 [disk formatted] /dev/dsk/c3t11d0s0 is part of active ZFS pool dbzpool. Please see zpool(1M). It is true that c3t11d0 is part of dbzpool. But why is Solaris upset about c3t11 when I'm working with c12t1? So I checked

Re: [zfs-discuss] dedup question

2009-11-02 Thread Craig S. Bell
I just stumbled across a clever visual representation of deduplication: http://loveallthis.tumblr.com/post/166124704 It's a flowchart of the lyrics to Hey Jude. =-) Nothing is compressed, so you can still read all of the words. Instead, all of the duplicates have been folded together.

Re: [zfs-discuss] dedupe is in

2009-11-02 Thread Craig S. Bell
Great stuff, Jeff and company. You all rock. =-) A potential topic for the follow-up posts: auto-ditto, and the philosophy behind choosing a default threshold for creating a second copy.

Re: [zfs-discuss] automate zpool scrub

2009-11-02 Thread Craig S. Bell
On a related note, it looks like Constantin is developing a nice SMF service for auto scrub: http://blogs.sun.com/constantin/entry/new_opensolaris_zfs_auto_scrub This is an adaptation of the well-tested auto snapshot service. Amongst other advantages, this approach means that you don't have
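For comparison, the cron-based approach that such an SMF service replaces is a one-liner (pool name and schedule are illustrative):

```shell
# crontab fragment: scrub pool 'tank' every Sunday at 03:00.
# (The SMF service above adds dependency handling and easier management.)
0 3 * * 0 /usr/sbin/zpool scrub tank
```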

[zfs-discuss] More Dedupe Questions...

2009-11-02 Thread Tristan Ball
I'm curious as to how send/recv intersects with dedupe... if I send/recv a deduped filesystem, is the data sent in its de-duped form, i.e. just sent once, followed by the pointers for subsequent dupe data, or is the data sent in expanded form, with the recv side system then having to redo

Re: [zfs-discuss] More Dedupe Questions...

2009-11-02 Thread Craig S. Bell
Tristan, there's another dedup system for zfs send in PSARC 2009/557. This can be used independently of whether the in-pool data was deduped. Case log: http://arc.opensolaris.org/caselog/PSARC/2009/557/ Discussion: http://www.opensolaris.org/jive/thread.jspa?threadID=115082 So I believe your
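Per the PSARC 2009/557 materials, dedup for the send stream itself surfaces as a flag on zfs send. A hedged sketch (snapshot and host names are illustrative):

```shell
# '-D' requests a deduplicated send stream: duplicate blocks are sent
# once, independently of whether the source pool has dedup=on.
zfs snapshot tank/fs@now
zfs send -D tank/fs@now | ssh backuphost zfs recv -d backuppool
```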