Re: [zfs-discuss] zfs destroy snapshot runs out of memory bug

2011-10-31 Thread Paul Kraus
On Sun, Oct 30, 2011 at 5:13 PM, Jim Klimov jimkli...@cos.ru wrote: I know there was (is?) a bug where a zfs destroy of a large snapshot would run a system out of kernel memory, but searching the … Symptoms are like what you've described, including the huge scan rate just before the system

Re: [zfs-discuss] zfs destroy snapshot runs out of memory bug

2011-10-31 Thread Jim Klimov
2011-10-31 16:28, Paul Kraus wrote: How big is/was the snapshot and dataset? I am dealing with a 7 TB dataset and a 2.5 TB snapshot on a system with 32 GB RAM. I had a smaller-scale problem, with datasets and snapshots sized several hundred GB, but on an 8 GB RAM system. So

Re: [zfs-discuss] zfs destroy snapshot runs out of memory bug

2011-10-31 Thread Paul Kraus
On Mon, Oct 31, 2011 at 9:07 AM, Jim Klimov jimkli...@cos.ru wrote: 2011-10-31 16:28, Paul Kraus wrote: Oracle has provided a loaner system with 128 GB RAM and it took 75 GB of RAM to destroy the problem snapshot). I had not yet posted a summary as we are still working through the overall
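
The memory exhaustion above (75 GB of RAM consumed destroying a 2.5 TB snapshot) can be watched as it happens. A minimal sketch, assuming Solaris/illumos tooling and a hypothetical snapshot name; the admin commands are echoed rather than executed, since they need a live pool and root, so the sketch itself runs anywhere:

```shell
# Hypothetical snapshot name; substitute your own.
SNAP="tank/data@before-migration"

# Commands are echoed, not run: they require a live pool and root.
echo "zfs list -t snapshot -o name,used,referenced $SNAP"
echo "zfs destroy $SNAP &"
echo "echo ::memstat | mdb -k    # kernel vs. ARC vs. free page breakdown"
echo "vmstat 5                   # watch the scan rate ('sr') column climb"
```

Polling `::memstat` while the destroy runs shows whether kernel allocations (rather than the ARC) are consuming RAM, which would match the huge scan rate reported just before the lockup.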

Re: [zfs-discuss] Poor relative performance of SAS over SATA drives

2011-10-31 Thread Paul Kraus
A couple points in line below ... On Wed, Oct 26, 2011 at 10:56 PM, weiliam.hong weiliam.h...@gmail.com wrote: I have a fresh installation of OI151a: - SM X8DTH, 12GB RAM, LSI 9211-8i (latest IT-mode firmware) - pool_A : SG ES.2 Constellation (SAS) - pool_B : WD RE4 (SATA) - no settings in

Re: [zfs-discuss] (Incremental) ZFS SEND at sub-snapshot level

2011-10-31 Thread Paul Kraus
On Sat, Oct 29, 2011 at 1:57 PM, Jim Klimov jimkli...@cos.ru wrote:  I am catching up with some 500 posts that I skipped this summer, and came up with a new question. In short, is it possible to add restartability to ZFS SEND, for example by adding artificial snapshots (of configurable
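
One way to approximate the restartability Jim asks about, using only stock ZFS features: replace a single large send with a chain of incremental sends between intermediate snapshots, so an interrupted transfer only loses the increment in flight. A sketch with hypothetical pool, host, and snapshot names; the zfs/ssh commands are echoed so the sketch runs anywhere:

```shell
# Hypothetical names; each increment is small, so a dropped connection
# costs only the increment currently in flight.
PREV="tank/data@step0"
for N in 1 2 3; do
    CUR="tank/data@step$N"
    echo "zfs snapshot $CUR"
    echo "zfs send -i $PREV $CUR | ssh backuphost zfs receive -F backup/data"
    PREV="$CUR"
done
```

This is coarser than the sub-snapshot restart points proposed in the thread, but it bounds the retry cost without any new ZFS machinery.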

Re: [zfs-discuss] Poor relative performance of SAS over SATA drives

2011-10-31 Thread weiliam.hong
Thanks for the reply. Some background.. The server is fresh installed. Right before running the tests, the pools are newly created. Some comments below On 10/31/2011 10:33 PM, Paul Kraus wrote: A couple points in line below ... On Wed, Oct 26, 2011 at 10:56 PM,
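
To make the SAS-vs-SATA comparison repeatable, a simple sequential-write sketch against the two pools named in the thread (file size arbitrary; the dd/iostat lines are echoed since they need the live pools, while the MB/s arithmetic below them is runnable):

```shell
# Note: /dev/zero compresses to nothing if compression is enabled on the pool.
echo "dd if=/dev/zero of=/pool_A/testfile bs=1M count=1024"
echo "dd if=/dev/zero of=/pool_B/testfile bs=1M count=1024"
echo "zpool iostat -v pool_A pool_B 5   # per-disk bandwidth, live"

# Converting a dd result to MB/s, e.g. 1 GiB written in 12 seconds:
BYTES=1073741824
SECS=12
MBPS=$((BYTES / SECS / 1000000))
echo "${MBPS} MB/s"
```

For the example numbers this prints 89 MB/s, roughly the sustained figure under discussion for the SAS drives before they degrade.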

Re: [zfs-discuss] Log disk with all ssd pool?

2011-10-31 Thread Karl Rossing
On 10/28/2011 01:04 AM, Mark Wolek wrote: before the forum closed. Did I miss something? Karl

[zfs-discuss] Solaris Based Systems Lock Up - Possibly ZFS/memory related?

2011-10-31 Thread Lachlan Mulcahy
Hi Folks, I have been having issues with Solaris kernel based systems locking up and am wondering if anyone else has observed a similar symptom before. Some information/background... Systems the symptom has presented on: NFS server (Nexenta Core 3.01) and a MySQL Server (Sol 11 Express). The

Re: [zfs-discuss] Solaris Based Systems Lock Up - Possibly ZFS/memory related?

2011-10-31 Thread Marion Hakanson
lmulc...@marinsoftware.com said: . . . The MySQL server is: Dell R710 / 80G Memory with two daisy chained MD1220 disk arrays - 22 Disks each - 600GB 10k RPM SAS Drives Storage Controller: LSI, Inc. 1068E (JBOD) I have also seen similar symptoms on systems with MD1000 disk arrays containing

Re: [zfs-discuss] Solaris Based Systems Lock Up - Possibly ZFS/memory related?

2011-10-31 Thread Lachlan Mulcahy
Hi Marion, Thanks for your swift reply! Have you got the latest firmware on your LSI 1068E HBAs? These have been known to have lockups/timeouts when used with SAS expanders (disk enclosures) with incompatible firmware revisions, and/or with older mpt drivers. I'll need to check that out

Re: [zfs-discuss] Solaris Based Systems Lock Up - Possibly ZFS/memory related?

2011-10-31 Thread Lachlan Mulcahy
Hi All/Marion, A small update... known to have lockups/timeouts when used with SAS expanders (disk enclosures) with incompatible firmware revisions, and/or with older mpt drivers. I'll need to check that out -- I'm 90% sure that these are fresh out of box HBAs. Will try an upgrade there

Re: [zfs-discuss] Poor relative performance of SAS over SATA drives

2011-10-31 Thread Richard Elling
On Oct 26, 2011, at 7:56 PM, weiliam.hong wrote: Questions: 1. Why do the SG SAS drives degrade to 10 MB/s while the WD RE4 remains consistent at 100 MB/s after 10-15 min? 2. Why do the SG SAS drives show only 70+ MB/s when the published figures are 100 MB/s (refer here)? Are the SAS drives

Re: [zfs-discuss] Poor relative performance of SAS over SATA drives

2011-10-31 Thread weiliam.hong
Thanks for the reply. On 11/1/2011 11:03 AM, Richard Elling wrote: On Oct 26, 2011, at 7:56 PM, weiliam.hong wrote: Questions: 1. Why do the SG SAS drives degrade to 10 MB/s while the WD RE4 remains consistent at 100 MB/s after 10-15 min? 2. Why do the SG SAS drives show only 70+ MB/s when the

Re: [zfs-discuss] Solaris Based Systems Lock Up - Possibly ZFS/memory related?

2011-10-31 Thread Lachlan Mulcahy
Hi All, We did not have the latest firmware on the HBA - through a lot of pain I managed to boot into an MS-DOS disk and run the firmware update. We're now running the latest on this card from the LSI.com website. (both HBA BIOS and Firmware) No joy.. the system seized up again within a few
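
With the firmware update ruled out, a next step is to look for driver-level evidence around the time of the seizure. A sketch of standard Solaris/illumos diagnostics (commands echoed, since they need the affected host, so the sketch itself runs anywhere):

```shell
LOG="/var/adm/messages"
echo "grep -i mpt $LOG | tail -20   # mpt timeout/bus-reset events"
echo "fmdump -e | tail -20          # FMA error-report telemetry"
echo "iostat -En                    # per-device soft/hard/transport error counts"
```

Repeated mpt timeout or reset messages just before each hang would point at the HBA/enclosure path rather than ZFS memory pressure.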

Re: [zfs-discuss] Solaris Based Systems Lock Up - Possibly ZFS/memory related?

2011-10-31 Thread Richard Elling
FWIW, we recommend disabling C-states in the BIOS for NexentaStor systems. C-states are evil. -- richard On Oct 31, 2011, at 9:46 PM, Lachlan Mulcahy wrote: Hi All, We did not have the latest firmware on the HBA - through a lot of pain I managed to boot into an MS-DOS disk and run the
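
After disabling C-states in the BIOS, it is worth confirming what the OS actually sees. A sketch, assuming the illumos `cpu_info` kstat module reports C-state statistics on this platform (an assumption; the command is echoed rather than executed):

```shell
CHECK="kstat -m cpu_info"
echo "$CHECK | grep -i cstate   # current/supported C-states, if reported"
```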