On Sep 18, 2012, at 10:40 AM, Dan Swartzendruber wrote:

> On 9/18/2012 10:31 AM, Eugen Leitl wrote:
>> I'm currently thinking about rolling a variant of
>> http://www.napp-it.org/napp-it/all-in-one/index_en.html
>> with remote backup (via snapshot and send) to 2-3
>> other (HP N40L-based) zfs boxes for production in
>> our organisation. The systems themselves would
>> be either Dell or Supermicro (latter with ZIL/L2ARC
>> on SSD, plus SAS disks (pools as mirrors) all with
>> hardware pass-through).
>> The idea is to use zfs for data integrity and
>> backup via snapshots (especially important data
>> will also be backed up to conventional DLT tapes).
>> Before I test this --
>> Is anyone using this in production? Any caveats?
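For reference, a rough sketch of the kind of layout and replication described
above. The pool, device, and host names here are made up, not from Eugen's
hardware; adjust for the real disks and network:

  # mirrored data vdevs with an SSD log device (ZIL) and an SSD cache
  # device (L2ARC); the cNtMd0 names are placeholders
  zpool create tank mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0 \
      log c2t0d0 cache c2t1d0

  # take a recursive snapshot, then send it incrementally to one of the
  # N40L backup boxes over ssh ("backup1" and the snapshot names are
  # examples only)
  zfs snapshot -r tank@2012-09-18
  zfs send -R -i tank@2012-09-17 tank@2012-09-18 | \
      ssh backup1 zfs receive -d -F backup/tank

The first full send (without -i) has to complete before incrementals will
work, and in practice you'd want the log device mirrored rather than a
single SSD.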
> I run an all-in-one and it works fine. Supermicro X9SCL-F with 32GB ECC RAM,
> 20GB of which goes to the OpenIndiana SAN VM, with an IBM M1015 passed
> through via VMDirectPath (PCI passthrough). 4 nearline SAS drives in a 2x2
> mirror config in a JBOD chassis, and 2 Samsung 830 128GB SSDs as L2ARC. The
> main caveat is to order the VMs properly for auto-start (assuming you use
> that, as I do). The OI VM goes first, and I give it a good 120 seconds before
> starting the other VMs. For auto-shutdown, all VMs except OI suspend; OI
> shuts down. The big caveat: do NOT use iSCSI for the datastore, use NFS.
> Maybe there's a way to fix this, but I found that on startup, ESXi would time
> out the iSCSI datastore mount before the virtualized SAN VM was up and
> serving the share - bad news. NFS seems to be more resilient there. vmxnet3
> vNICs should work fine for the OI VM, but you might want to stick with e1000.
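To make the NFS route concrete, this is roughly what sharing the datastore
from the OI VM and mounting it on ESXi looks like (the dataset, subnet, and
address values are examples, not from Dan's setup):

  # on the OI VM: create the datastore filesystem and share it over NFS,
  # restricting access to the storage network (placeholder subnet)
  zfs create tank/vmstore
  zfs set sharenfs='rw=@192.168.1.0/24,root=@192.168.1.0/24' tank/vmstore

  # on the ESXi host (or do the same through the vSphere client):
  # mount the share as an NFS datastore named "vmstore"
  esxcfg-nas -a -o 192.168.1.10 -s /tank/vmstore vmstore

The root= option matters because ESXi mounts the datastore as root.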
>> Can I actually have a year's worth of snapshots in
>> zfs without too much performance degradation?
> Dunno about that.
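If you keep that many snapshots, retention is the part you have to script
(unless you let OI's zfs-auto-snapshot / Time Slider services manage it).
A bare-bones sketch, assuming date-named snapshots like tank@YYYY-MM-DD --
the pool name and cutoff date are placeholders:

  # destroy snapshots whose date part sorts before the cutoff;
  # YYYY-MM-DD names compare correctly as plain strings
  cutoff=2011-09-18
  zfs list -H -t snapshot -o name -s creation -r tank | \
  while read snap; do
      if expr "${snap#*@}" \< "$cutoff" >/dev/null; then
          zfs destroy "$snap"
      fi
  done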

I did something similar:  

Works great… need to bump up the RAM to 32GB.