[zfs-discuss] OT FS : STEC ZeusRAM devices

2013-01-21 Thread Matt Breitbach
of 2011, put into service in September, used for approx. 1 year. We have 6x disks available - part number Z4RZF3D-8UC. If anyone is interested, please email me off-list. -Matt Breitbach

Re: [zfs-discuss] what have you been buying for slog and l2arc?

2012-08-06 Thread Matt Breitbach
STEC ZeusRAM for slog - it's expensive and small, but it's the best out there. OCZ Talos C for L2ARC.
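For context, attaching these two device classes to an existing pool looks roughly like this (pool and device names here are hypothetical):

    # slog: mirror it, since it holds not-yet-committed synchronous writes
    zpool add tank log mirror c4t0d0 c4t1d0

    # L2ARC: cache devices are always striped; losing one costs nothing but cache warmth
    zpool add tank cache c5t0d0 c5t1d0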

Re: [zfs-discuss] IO load questions

2012-07-25 Thread Matt Breitbach
NFS - iSCSI and FC/FCoE to come once I get it into the proper lab.

Re: [zfs-discuss] IO load questions

2012-07-25 Thread Matt Breitbach
Quoting Trey Palmer: BTW these SSDs are 480GB Talos 2's.

Re: [zfs-discuss] IO load questions

2012-07-24 Thread Matt Breitbach
Pool is 6x striped STEC ZeusRAM as ZIL, 6x OCZ Talos C 230GB drives as L2ARC, and 24x 15k SAS drives striped (no parity, no mirroring) - I know, terrible for reliability, but I just want to see what kind of IO I can hit. Checksum is ON - can't recall what the default is right now. Compression is off.
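A sketch of building such a benchmark-only layout (device names invented for illustration; a 24-wide stripe has no redundancy whatsoever):

    # 24 data disks striped - shown abbreviated, continue through c0t23d0
    zpool create bench c0t0d0 c0t1d0 c0t2d0
    zpool add bench log c1t0d0 c1t1d0      # 6x ZeusRAM as striped slog
    zpool add bench cache c2t0d0 c2t1d0    # 6x Talos C as L2ARC
    zfs set checksum=on bench
    zfs set compression=off bench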

Re: [zfs-discuss] Occasional storm of xcalls on segkmem_zio_free

2012-06-12 Thread Matt Breitbach
arc_c and resulted in significantly fewer xcalls. -Matt Breitbach
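For reference, the persistent equivalent of tuning arc_c on an illumos-family kernel is the zfs_arc_max tunable in /etc/system; the 8 GB value below is an arbitrary example, not a recommendation:

    * /etc/system - cap the ARC at 8 GB (example value); takes effect at next boot
    set zfs:zfs_arc_max = 0x200000000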

Re: [zfs-discuss] Two disks giving errors in a raidz pool, advice needed

2012-04-23 Thread Matt Breitbach
So this is a point of debate that probably deserves being brought to the floor (probably for the umpteenth time, but indulge me). I've heard from several people that I'd consider experts that once per year scrubbing is sufficient, once per quarter is _possibly_ excessive, and once a week is
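For anyone who settles on a cadence, scheduling is trivial; a sketch with a hypothetical pool name, run monthly here purely as an example:

    # root crontab - scrub at 02:00 on the first of every month
    0 2 1 * * /usr/sbin/zpool scrub tank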

Re: [zfs-discuss] Cannot remove slog device

2012-03-16 Thread Matt Breitbach
How long have you let the box sit? I had to offline the slog device, and it took quite a while for it to come back to life after removing the device (4-5 minutes). It's a painful process, which is why ever since I've used mirrored slog devices.
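The sequence being described is roughly the following (pool and device names hypothetical); expect the pool to stall while the removal completes:

    # take the slog offline, then remove it from the pool
    zpool offline tank c4t0d0
    zpool remove tank c4t0d0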

Re: [zfs-discuss] Very poor pool performance - no zfs/controller errors?!

2011-12-18 Thread Matt Breitbach
I'd look at iostat -En. It will give you a good breakdown of disks that have seen errors. I've also spotted failing disks just by watching an iostat -nxz and looking for the one that's showing a higher %busy than the rest, or exhibiting longer than normal service times. -Matt
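For illustration, the two invocations mentioned above (the -xnz flags are equivalent to -nxz):

    # per-device error counters (soft/hard/transport) accumulated since boot
    iostat -En

    # extended stats every 5 seconds, skipping idle devices; watch %b and asvc_t
    iostat -xnz 5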

Re: [zfs-discuss] does log device (ZIL) require a mirror setup?

2011-12-11 Thread Matt Breitbach
I would say that it's highly recommended. If you have a pool that needs to be imported and it has a faulted, unmirrored log device, you risk data corruption. -Matt Breitbach
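Depending on the ZFS release in use, an import can be forced without the log device at the cost of any uncommitted synchronous writes; a mirrored slog avoids ever facing that trade. A sketch, assuming a release that supports -m:

    # -m: import even though a log device is missing or faulted
    zpool import -m tank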

Re: [zfs-discuss] does log device (ZIL) require a mirror setup?

2011-12-11 Thread Matt Breitbach
of it. Quoting Garrett D'Amore: Loss only.

Re: [zfs-discuss] Compression

2011-11-28 Thread Matt Breitbach
that I needed. Thanks to all that took the time to reply. -Matt Breitbach

Re: [zfs-discuss] Compression

2011-11-23 Thread Matt Breitbach
Currently using NFS to access the datastore. -Matt

[zfs-discuss] Compression

2011-11-22 Thread Matt Breitbach
, or the compressed filesize? My gut tells me that since they inflated _so_ badly when I Storage vMotioned them, they are the compressed values, but I would love to know for sure. -Matt Breitbach
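One way to check this from the shell (dataset and file names hypothetical; ls reports logical size while du reports blocks actually allocated, i.e. post-compression):

    ls -l /tank/vmstore/guest-flat.vmdk
    du -h /tank/vmstore/guest-flat.vmdk

    # achieved compression ratio for the dataset as a whole
    zfs get compressratio tank/vmstore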

Re: [zfs-discuss] Wanted: sanity check for a clustered ZFS idea

2011-11-08 Thread Matt Breitbach
) there are some additional tweaks that bring the failover time down significantly. Depending on pool configuration and load, failover can be done in under 10 seconds based on some of my internal testing. -Matt Breitbach
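At its core the failover described here is an export on the failed head (when it can still be reached) followed by a forced import on the survivor; the tweaks are about fencing and shortening the time spent in import. A minimal sketch with a hypothetical pool name:

    # on the surviving node, after the failed head has been fenced
    zpool import -f tank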