My wife and I both have our iTunes libraries on ZFS on the basement server, and each of our systems' user data is also on ZFS, which backs up every 20 minutes to the basement server. This has been running for years under OS X, on both the current/stable and the old MacZFS. That server then forwards all the snapshots to another location just in case; losing family photos is bad!
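The 20-minute snapshot-and-forward cycle described above could be scripted roughly as follows. This is only a sketch: the dataset (`tank/users`), backup host (`basement`), and target dataset (`tank/backup/users`) are all hypothetical names, and it is shown in dry-run form so the logic can be read without a live pool.

```shell
#!/bin/sh
# Sketch of the 20-minute snapshot-and-send cycle described above, meant to
# be run from cron or launchd. All names (tank/users, basement,
# tank/backup/users) are hypothetical. RUN=echo makes this a dry run that
# prints the zfs commands instead of executing them; set RUN="" on a real pool.
RUN=echo

DATASET="tank/users"
REMOTE="basement"
TARGET="tank/backup/users"

# Stamp each snapshot with the current date and time.
NOW=$(date +%Y%m%d-%H%M)
$RUN zfs snapshot "${DATASET}@${NOW}"

# Send only the blocks changed since the previous snapshot. In real use,
# look PREV up with 'zfs list -H -t snapshot -o name -s creation'; the very
# first run needs a full (non-incremental) 'zfs send' instead.
PREV="${DATASET}@previous"
$RUN sh -c "zfs send -i '${PREV}' '${DATASET}@${NOW}' | ssh '${REMOTE}' zfs receive '${TARGET}'"
```

Once the initial full transfer is done, each subsequent incremental send only moves the 20 minutes of changes, which is what makes such a short cycle practical.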
Currently, anything that must have HFS+ (development builds) is being tested in a ZVOL formatted for HFS+, with ZFS underneath. So far this has been quite good for Mail and seems to be Spotlight friendly, though no guarantees yet. For those who want to try it.

--
Jason Belec
Sent from my iPad

> On Mar 17, 2014, at 7:46 AM, Jason Belec <jasonbe...@belecmartin.com> wrote:
>
> Good man.
>
> --
> Jason Belec
> Sent from my iPad
>
>> On Mar 17, 2014, at 3:35 AM, Dave Cottlehuber <d...@jsonified.com> wrote:
>>
>> On 17 March 2014 at 05:00:25, roemer (uwe.ro...@gmail.com) wrote:
>>
>>> Thanks for the detailed example!
>>>
>>>> On Monday, 17 March 2014 07:34:45 UTC+11, dch wrote:
>>>>
>>>> I've been a happy maczfs and also zfsosx user for several years now.
>>>> [...]
>>>> zfs send is a very easy way to do a very trustable backup, once you
>>>> get past the first potentially large transfers.
>>>
>>> Can this happen bi-directionally? Or is it only applicable for creating
>>> 'read-only' replicas of a master filesystem onto some clients?
>>> I mean, what happens once you have cloned one file system, sent it to
>>> your laptop, then edited on both the laptop and your ZFS server?
>>
>> Then you're screwed :-). It's not duplicity or some other low-level sync
>> tool. I find it works best when you have a known master that you're
>> working off.
>>
>> Slightly OT, but in FreeBSD with HAST you can do some gonzo crazy stuff:
>> http://www.aisecure.net/2012/02/07/hast-freebsd-zfs-with-carp-failover/
>>
>>>> All my source code & work lives in a zfs case-sensitive, noatime,
>>>> copies=2 filesystem, and I replicate that regularly to my other boxes
>>>> as required.
>>>
>>> How does a 'copies=2' filesystem play together with a 'RAIDZ1' (or even
>>> RAIDZ2) pool?
>>> RAIDZ would have all data stored redundantly already, so would 'copies=2'
>>> not end up quadrupling the storage requirement if used on a raidz pool?
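A filesystem with the properties dch describes above (case sensitive, noatime, copies=2) would be created roughly like this; `tank/repos` is a hypothetical dataset name, and the command is shown in dry-run form (`RUN=echo`). Note that casesensitivity can only be set at creation time, not changed later.

```shell
#!/bin/sh
# Sketch of creating the kind of dataset dch describes: case sensitive,
# no access-time updates, and two copies of every block. 'tank/repos' is a
# hypothetical name. RUN=echo prints the command instead of executing it;
# set RUN="" on a real pool.
RUN=echo

$RUN zfs create \
    -o casesensitivity=sensitive \
    -o atime=off \
    -o copies=2 \
    tank/repos
```

copies=2 stores every block twice within the same vdevs, independently of any RAIDZ parity, which is why the question above about stacking it on RAIDZ is a fair one.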
>>
>> Yes, but in this case the laptop isn't redundant, and my data is precious.
>> IIRC the whole repos dataset, even with history, is < 40 GB, so that's
>> reasonable IMO.
>>
>>>> For most customer projects I will have 3 or more VMs running different
>>>> configs or operating systems under VMware Fusion. These each live in
>>>> their own zfs filesystem: compressed lz4, noatime, case sensitive. I
>>>> snapshot these after creation using vagrant install, again after
>>>> config, and the changes are replicated using zfs snapshots again to
>>>> the other OSX system, and also to the remote FreeBSD box.
>>>
>>> I can see that zfs is really good for handling multiple virtual machines.
>>
>> Yup, zfs rollback for testing deployments or upgrades is simply bliss.
>>
>>>> In summary, I'm more than happy with the performance once I used
>>>> ashift=12 and moved past 8GB of RAM. Datasets, once you get used to
>>>> them, are extraordinarily useful -- snapshot your config just before a
>>>> critical upgrade.
>>>
>>> I'm starting to see the potential in snapshots. In fact, I just realised
>>> that I have already been doing manual 'snapshots' on some of my repeating
>>> projects for quite some time, with annual clones of the previous
>>> directory structure. So ZFS snapshots would be a natural fit here.
>>>
>>> But regarding the memory consumption:
>>> What makes ZFS so memory hungry in your case?
>>
>> I don't think it's very hungry, actually. 4GB (under the old MacZFS 74.1)
>> simply wasn't enough and I'd get crashes. With 8GB that went away. Bearing
>> in mind that with 16GB of RAM I can run a web browser (oink, at least
>> 1GB), a 20GB VM that's been compressed into a 10GB RAM disk, plus 1GB of
>> RAM for the VM, that seems pretty reasonable. That would leave roughly
>> 4GB for ZFS and the normal OSX baseline stuff.
>>
>> I'm happy to report back with RAM usage if somebody tells me what z*
>> incantation is needed.
>>
>>> Do you use deduplication?
>>
>> Never.
>> But I do use cloned datasets a fair bit, which probably helps the
>> situation a bit.
>>
>> The 2nd law of ZFS is not to use deduplication, even if you think you
>> need it. IIRC the rough numbers are 1GB of RAM per TB of storage, and
>> I'd want ECC RAM for that.
>>
>> BTW, pretty sure the 1st law of ZFS is not to trust USB devices with
>> your data.
>>
>> --
>> Dave Cottlehuber
>> Sent from my PDP11

--
---
You received this message because you are subscribed to the Google Groups "zfs-macos" group.
To unsubscribe from this group and stop receiving emails from it, send an email to zfs-macos+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.
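Dave's rough dedup sizing rule works out as a simple back-of-envelope calculation. The 12 TB pool size below is a made-up example, and the 1 GB/TB ratio is his from-memory figure, not a guaranteed OpenZFS number:

```shell
#!/bin/sh
# Back-of-envelope application of the "1GB of RAM per TB of storage" rule
# of thumb quoted above for ZFS dedup tables. The 12 TB pool size is a
# made-up example; the ratio itself is the poster's rough recollection.
POOL_TB=12
RAM_PER_TB_GB=1
DEDUP_RAM_GB=$((POOL_TB * RAM_PER_TB_GB))
echo "A ${POOL_TB} TB pool would want roughly ${DEDUP_RAM_GB} GB of RAM for dedup tables alone."
```

That overhead, on top of everything else competing for RAM, is why the thread's advice is to skip dedup and lean on cheaper tools like compression and clones instead.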