Re: [OmniOS-discuss] Pliant/Sandisk SSD ZIL

2014-02-17 Thread Derek Yarnell
On 2/17/14, 7:31 PM, Richard Elling wrote: On Feb 17, 2014, at 2:48 PM, Derek Yarnell de...@umiacs.umd.edu wrote: Hi, So we bought a new Dell R720xd with 2 Dell SLC SSDs, which show up via format as Pliant-LB206S-D323-186.31GB. Pliant SSDs (note: Pliant was purchased by Sandisk
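For context, format(1M) on illumos identifies disks by their SCSI inquiry string; a Pliant drive would show up along these lines (illustrative listing only; the controller/target names here are hypothetical):

    # format
    Searching for disks...done
    AVAILABLE DISK SELECTIONS:
           0. c1t5000A7203007AB12d0 <Pliant-LB206S-D323-186.31GB>
              /scsi_vhci/disk@g5000a7203007ab12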

Re: [OmniOS-discuss] Pliant/Sandisk SSD ZIL

2014-02-18 Thread Derek Yarnell
On 2/18/14, 9:41 PM, Marion Hakanson wrote: No better than with the ZIL running on the pool's HDDs? Really? Yes, it would seem I was getting just as bad svc_t from these Pliant SSDs as from the pool's spinners. Out of curiosity, what HBA is being used for these drives (and slog) in this
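The svc_t figures referenced here come from iostat; a quick way to watch per-device service times on illumos is something like the following (output abridged, values illustrative):

    # iostat -x 5
                      extended device statistics
    device    r/s    w/s   kr/s   kw/s wait actv  svc_t  %w  %b
    sd0       0.0  812.4    0.0 3249.6  0.0  0.9    1.1   0  45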

Re: [OmniOS-discuss] Pliant/Sandisk SSD ZIL

2014-02-19 Thread Derek Yarnell
On 2/18/14, 10:54 PM, Marion Hakanson wrote: I will actually test some spare DC S3700 drives that we have for a Ceph cluster as slog devices here in the next few days, and will report back on this thread. Cool, I look forward to seeing what you find out. For a 44MB tar file containing 1,981 files.
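For anyone wanting to reproduce this kind of test: the usual pattern is to attach the SSD as a dedicated log vdev and then time a sync-heavy workload such as untarring many small files over NFS. A minimal sketch, with pool and device names made up:

    # zpool add tank log c2t1d0      # attach the SSD as a dedicated slog
    # zpool status tank              # verify the log vdev shows up and is ONLINE
    $ time tar xf smallfiles.tar     # run on an NFS client; extracting ~2000
                                     # small files generates the synchronous
                                     # writes that exercise the ZIL/slog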

[OmniOS-discuss] NFSv4 client soft lockups

2014-09-12 Thread Derek Yarnell
Hi, I am wondering if anyone else has experience running NFSv4 from OmniOS to RHEL (6.5) clients. We are running it with sec=krb5p and are getting the errors (soft lockups) shown at the end of this mail across all the nodes, causing them to become unresponsive afterward. The OmniOS server doesn't report
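For reference, a RHEL 6 client mount of this sort would look roughly like the following /etc/fstab line (server and export names are placeholders):

    nfs.example.com:/export/home  /home  nfs4  sec=krb5p,hard  0  0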

Re: [OmniOS-discuss] NFSv4 client soft lockups

2014-09-12 Thread Derek Yarnell
On 9/12/14 11:05 AM, Schweiss, Chip wrote: Is your RHEL 6.5 client a virtual machine? If so, this message is a red herring with respect to your problem. See the VMware KB article: http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1009996 I run all NFSv4 from
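One quick way to confirm whether a client is physical or virtual, if there is any doubt (virt-what may need to be installed on RHEL):

    # virt-what                          # prints e.g. "vmware" on a VMware guest,
                                         # nothing on bare metal
    # dmidecode -s system-product-name   # e.g. "VMware Virtual Platform"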

Re: [OmniOS-discuss] ZFS crash/reboot loop

2015-07-13 Thread Derek Yarnell
> ff0d4071ca98::print arc_buf_t b_hdr | ::print arc_buf_hdr_t b_size
b_size = 0
Ouch. There's your zero. I'm going to forward this very note to the illumos ZFS list. I see ONE possible bugfix post-r151014 that might help: commit 31c46cf23cd1cf4d66390a983dc5072d7d299ba2 Author:
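A pipeline like this is run from mdb against the expanded crash dump, roughly as follows (the buffer address comes from earlier inspection of the panic stack; unix.0 and vmcore.0 are produced by savecore):

    # mdb unix.0 vmcore.0
    > ff0d4071ca98::print arc_buf_t b_hdr | ::print arc_buf_hdr_t b_size
    b_size = 0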

Re: [OmniOS-discuss] ZFS crash/reboot loop

2015-07-13 Thread Derek Yarnell
Hi Dan, Sorry, I have not dealt with dumpadm/savecore that much, but it looks like this is what you want: https://obj.umiacs.umd.edu/derek_support/vmdump.0 Thanks, derek On 7/13/15 12:55 AM, Dan McDonald wrote: On Jul 12, 2015, at 9:18 PM, Richard Elling richard.ell...@richardelling.com
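For reference, savecore writes the compressed dump as vmdump.N; to get something mdb can read, it has to be expanded first. A sketch (paths illustrative):

    # dumpadm                      # shows the dump device and savecore directory
    # savecore -f vmdump.0 /var/crash/expanded
                                   # expands the compressed dump into unix.0
                                   # and vmcore.0 for use with mdb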

Re: [OmniOS-discuss] ZFS crash/reboot loop

2015-07-13 Thread Derek Yarnell
On 7/13/15 12:02 PM, Dan McDonald wrote: On Jul 13, 2015, at 11:56 AM, Derek Yarnell de...@umiacs.umd.edu wrote: I don't need to hot patch (a cold patch would be fine), so any update that I can apply and then reboot into would work. We have a second OmniOS r14 copy running that we are happy
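On OmniOS the usual cold-patch path is via boot environments, which always leaves a fallback; a sketch (the BE name is made up):

    # beadm create pre-fix-backup   # keep the current BE as a fallback
    # pkg update                    # pulls the fix; typically creates and
                                    # activates a new boot environment
    # init 6                        # reboot into the updated BE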

[OmniOS-discuss] ZFS crash/reboot loop

2015-07-11 Thread Derek Yarnell
Hi, We have just had a catastrophic event on one of our OmniOS r14 file servers. It panics, seemingly triggered by the weekly scrub of its one large ZFS pool (~100T). This put it into a continual reboot loop, and we have installed a second copy of OmniOS r14 in the meantime.
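When a pool panics the box on import, one commonly suggested way to break the reboot loop is to keep the pool from being imported automatically at boot and then import it by hand read-only. A rough sketch, with a made-up pool name (boot from install media or to a minimal milestone first):

    # mv /etc/zfs/zpool.cache /etc/zfs/zpool.cache.bad   # disable auto-import
    # reboot
    # zpool import -o readonly=on tank                   # manual, read-only import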

Re: [OmniOS-discuss] ZFS crash/reboot loop

2015-07-12 Thread Derek Yarnell
On 7/12/15 3:21 PM, Günther Alka wrote: First action: if you can mount the pool read-only, update your backup. We are currently securing all the non-scratch data before messing with the pool any further. We had backups as recent as the night before, but it is still going to be faster to pull the
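A sketch of that kind of rescue copy, assuming the pool imports read-only (pool, snapshot, and host names are placeholders):

    # zpool import -o readonly=on -R /mnt tank
    # zfs send tank/data@latest | ssh backuphost zfs recv -F backup/data
                                  # or rsync from under /mnt for a plain file copy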

Re: [OmniOS-discuss] ZFS crash/reboot loop

2015-07-12 Thread Derek Yarnell
The ongoing scrub automatically restarts, apparently even in read-only mode. You should run 'zpool scrub -s poolname' ASAP after boot (if you can) to stop the ongoing scrub. We have tried to stop the scrub, but it seems you cannot cancel a scrub when the pool is imported read-only. -- Derek T.
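For completeness, the stop command and the status check, with a made-up pool name; stopping a scrub has to update on-disk state, which is presumably why it fails on a read-only import:

    # zpool scrub -s tank     # cancels a running scrub; needs a read-write import
    # zpool status tank       # the scan: line shows whether a scrub is in progress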