Sun has seen all of this during various problems over the past year and a half,
but:
CX600 FLARE code 02.07.600.5.027
CX500 FLARE code 02.19.500.5.044
Brocade Fabric, relevant switch models are 4140 (core), 200e (edge), 3800
(edge).
Sun Branded Emulex HBAs in the following models:
I've been running ZFS against EMC Clariion CX-600s and CX-500s in various
configurations, mostly exported-disk setups, and have hit a number of kernel
flatlines. Most of these crashes show Page83 data errors in /var/adm/messages.
As we're outgrowing the
Shows up as lpfc (is that Emulex?)
lpfc (or fibre-channel) is the driver for Emulex-branded Emulex cards;
Sun-branded Emulex uses the emlxs driver.
I run ZFS (v2 and v3) on Emulex and Sun-branded Emulex on SPARC with PowerPath
4.5.0 (and MPxIO in other cases) and Clariion arrays and have never
Solaris 10 u3, ZFS v3, kernel 118833-36.
Running into a weird problem where I attach a mirror to a large, existing
filesystem. The attach occurs and then the system starts swallowing all the
available memory and system performance chokes while the filesystems sync.
In some cases all memory
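For reference, the attach that triggers this looks something like the following (pool and device names here are hypothetical):

```shell
# Attach a second device as a mirror of an existing top-level vdev;
# resilvering starts immediately and generates heavy background IO.
zpool attach datapool c1t0d0 c2t0d0

# Watch resilver progress and pool health while it syncs.
zpool status datapool
```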
Why did you choose to deploy the database on ZFS ?
- On-disk consistency was big - one of our datacenters was having power problems
and the systems would sometimes drop live. I had a couple of instances of data
errors with VxVM/VxFS and we had to restore from tape.
- zfs snapshot saves us many
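The snapshot workflow being credited here is the usual ZFS one; a quick sketch with hypothetical dataset names:

```shell
# Take an instant, space-efficient snapshot before risky maintenance.
zfs snapshot datapool/oradata@pre-maint

# Roll back to it if something goes wrong...
zfs rollback datapool/oradata@pre-maint

# ...or clone it to get a writable copy for testing.
zfs clone datapool/oradata@pre-maint datapool/oradata-test
```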
What's the maximum filesystem size you've used in production environment? How
did the experience come out?
I have a 26tb pool that will be upgraded to 39tb in the next couple of months.
This is the backend for Backup images. The ease of managing this sort of
expanding storage is a little
We are currently recommending separate (ZFS) file systems for redo logs.
Did you try that? Or did you go straight to a separate UFS file system for
redo logs?
I'd answered this directly in email originally.
The answer was that yes, I tested using zfs for logpools among a number of
disk
Try throttling back the max # of IOs. I saw a number of errors similar to this
on Pillar and EMC.
In /etc/system, set:
set sd:sd_max_throttle=20
and reboot.
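If you want to confirm the throttle actually took effect after the reboot, the live value can be read with mdb (standard Solaris procedure, not something specific to this thread):

```shell
# Print the current sd_max_throttle value from the running kernel.
echo "sd_max_throttle/D" | mdb -k
```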
This message posted from opensolaris.org
___
zfs-discuss mailing list
The thought is to start throttling and possibly tune up or down, depending on
errors or lack of errors. I don't know of a specific NexSAN throttle preference
(we use SATABoy, and go with 20).
Any chance these fixes will make it into the normal Solaris RS patches?
For the particular HDS array you're working on, or also on NexSAN storage?
The big problem is that if you don't do your redundancy in the zpool, then the
loss of a single device flatlines the system. This occurs in single device
pools or stripes or concats. Sun support has said in support calls and Sunsolve
docs that this is by design, but I've never seen the loss of
I currently run 6 Oracle 9i and 10g dbs using 8GB SGA apiece in containers on a
v890 and find no difficulties starting Oracle (though we don't start all the
dbs truly simultaneously). The ARC cache doesn't ramp up until a lot of IO has
passed through after a reboot (typically a steady rise
General Oracle zpool/zfs tuning, from my tests with Oracle 9i, the APS
Memory Based Planner, and filebench. All tests completed using Solaris 10
update 2 and update 3:
- use ZFS filesystems with an 8k recordsize for data
- don't use ZFS for redo logs - use UFS with directio and noatime. Building
redo
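As a rough sketch of the layout described above (device names, mount points, and the 8k figure matching a db_block_size of 8k are all assumptions for illustration):

```shell
# Data filesystem: 8k recordsize to match the Oracle block size.
zpool create orapool c1t0d0
zfs create -o recordsize=8k orapool/data

# Redo logs: UFS with forced directio and noatime.
newfs /dev/rdsk/c1t1d0s0
mount -o forcedirectio,noatime /dev/dsk/c1t1d0s0 /oralog
```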
My biggest concern has been making sure that Oracle doesn't have to fight
to get memory, which it does now. There's a definite performance hit while the
ARC cache releases memory to let Oracle get what it's asking for, and this is
passed on to the application. The problem
I've been seeing this failure to cap on a number of (Solaris 10 update 2 and 3)
machines since the script came out (ARC hogging is a huge problem for me, esp.
on Oracle). This is probably a red herring, but my v490 testbed seemed to
actually cap on 3 separate tests, while my t2000 testbed doesn't
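The cap under discussion is presumably the zfs_arc_max tunable, which on Solaris 10 goes in /etc/system alongside the sd throttle setting above (the 4 GB value below is just an example):

```
set zfs:zfs_arc_max=4294967296
```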
I thought I'd share some lessons learned testing Oracle APS on Solaris 10 using
ZFS as backend storage. I just got done running 2 months worth of performance
tests on a v490 (32GB/4x1.8Ghz dual core proc system with 2xSun 2G HBAs on
separate fabrics) and varying how I managed storage. Storage
Yes. Works fine, though it's an interim solution until I can get rid of
PowerPath.
For a quick overview of setting up MPxIO and the other configs:
# fcinfo hba-port
HBA Port WWN: 1000c952776f
OS Device Name: /dev/cfg/c8
Manufacturer: Sun Microsystems, Inc.
Model: LP1-S
Type: N-port
State: online
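For completeness, flipping MPxIO on under Solaris 10 is normally done with stmsboot rather than by hand-editing driver confs (standard procedure, not quoted from the thread):

```shell
# Enable MPxIO (STMS) for fibre-channel devices; prompts for a reboot.
stmsboot -e

# After the reboot, list the old-to-new device name mappings.
stmsboot -L
```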
Actually, I'm using ZFS in a SAN environment, often importing LUNs to save
management overhead and make snapshots easily available, among other things. I
would love zfs remove because it would allow me, in conjunction with containers,
to build up a single manageable pool for a number of local host
I'm using ZFS on both EMC and Pillar arrays, with PowerPath and MPxIO
respectively. Both work fine - the only caveat is to drop your sd queue depth
(sd_max_throttle) to around 20 or so, otherwise you can run into an ugly
display of bus resets.