Hi all,
I have a bad bad problem with our brand new server!
The lengthy details are below but to cut a long story short, on the same
hardware (3 x LSI 9240-8i, 20 x 3TB 6Gb/s HDDs) I am getting ZFS
sequential writes of 1.4GB/s on Solaris 10 (20 disks, 10 mirrors) and
only 200-240MB/s on latest
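A rough sketch of the layout and test being compared here, with placeholder device names (c0t0d0 etc.) and assuming compression is off so the zeros are actually written:

    # Sketch only: build 10 two-way mirrors out of 20 disks
    # (device names are placeholders, not the real topology)
    vdevs=""
    for d in $(seq 0 2 18); do
      vdevs="$vdevs mirror c0t${d}d0 c0t$((d+1))d0"
    done
    zpool create tank $vdevs

    # Crude sequential-write test: 32 GB of zeros in 1 MB blocks
    dd if=/dev/zero of=/tank/seqtest bs=1M count=32768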
On May 4, 2012, at 5:25 AM, Roman Matiyenko wrote:
Hi all,
I have a bad bad problem with our brand new server!
The lengthy details are below but to cut a long story short, on the same
hardware (3 x LSI 9240-8i, 20 x 3TB 6Gb/s HDDs) I am getting ZFS
sequential writes of 1.4GB/s on Solaris 10
Hi,
S11 comes with its own driver for some LSI SAS HBAs, but on the HCL
I only see the LSI SAS 9200-8e
http://www.oracle.com/webfolder/technetwork/hcl/data/components/details/lsi_logic/sol_11_11_11/9409.html
and the LSI MegaRAID SAS 9260-8i.
Roman,
If I were you, I would not use the 9240-8i.
I would use the 9211-8i as a pure HBA with IT FW for ZFS.
Rocky
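For what it's worth, a sketch of the usual IT-mode flash procedure with LSI's sas2flash utility; the image names below are examples for a 9211-8i and vary by card and firmware release:

    # Example only: write IT firmware plus boot BIOS to adapter 0.
    # 2118it.bin / mptsas2.rom are 9211-8i IT-mode image names;
    # substitute the images for your card and firmware release.
    sas2flash -o -f 2118it.bin -b mptsas2.rom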
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Richard Elling
Sent: Friday, May 04, 2012 8:00 AM
To: Roman Matiyenko
On Fri, 4 May 2012, Rocky Shek wrote:
If I were you, I would not use the 9240-8i.
I would use the 9211-8i as a pure HBA with IT FW for ZFS.
Is there IT FW for the 9240-8i?
They seem to use the same SAS chipset.
My next system will have a 9211-8i with IT FW. Playing it safe. Good
enough for Nexenta
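One way to check which chip and which firmware personality (IT vs IR) a card is actually running is sas2flash's list output (a sketch; output fields vary slightly by version):

    # Summary of all LSI SAS2 adapters: chip type and firmware version
    sas2flash -listall
    # Details for adapter 0; the Firmware Product ID line shows IT or IR
    sas2flash -list -c 0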
Hi,
We had several bad experiences with LSI cards (LSI 3081E, LSI SAS84016E),
even with the official Solaris drivers provided by LSI.
Finally we used the LSI SAS9201-16i card.
http://www.lsi.com/channel/france/products/storagecomponents/Pages/LSISAS9201-16i.aspx
This one works as expected on
On Thu, May 03, 2012 at 07:35:45AM -0700, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Ray Van Dolson
System is a 240x2TB (7200RPM) system in 20 Dell MD1200 JBODs. 16 vdevs of
15 disks each -- RAIDZ3.
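For concreteness, a sketch of the layout Ray describes, with a hypothetical naming scheme standing in for the real MD1200 device paths:

    # Sketch only: 16 top-level raidz3 vdevs of 15 disks each (240 disks).
    # c${g}t${d}d0 is a placeholder naming scheme, not the real topology.
    vdevs=""
    for g in $(seq 0 15); do
      disks=""
      for d in $(seq 0 14); do
        disks="$disks c${g}t${d}d0"
      done
      vdevs="$vdevs raidz3 $disks"
    done
    zpool create tank $vdevs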
On Thu, May 3, 2012 at 3:35 PM, Edward Ned Harvey
opensolarisisdeadlongliveopensola...@nedharvey.com wrote:
I think you'll get better, both performance and reliability, if you break each
of those 15-disk raidz3's into three 5-disk raidz1's. Here's why:
Incorrect on reliability; see below.
Now,
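The performance half of Harvey's argument is presumably that random IOPS scale with the number of top-level vdevs (each raidz vdev delivers roughly one disk's worth of random IOPS), so going from 16 vdevs to 48 roughly triples random throughput. A sketch of the proposed alternative, using the same placeholder names as above:

    # Sketch only: split each 15-disk shelf into three 5-disk raidz1 vdevs,
    # giving 48 top-level vdevs instead of 16.
    vdevs=""
    for g in $(seq 0 15); do
      for s in 0 5 10; do
        disks=""
        for d in $(seq $s $((s + 4))); do
          disks="$disks c${g}t${d}d0"
        done
        vdevs="$vdevs raidz1 $disks"
      done
    done
    zpool create tank $vdevs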
On 5/4/2012 1:24 PM, Peter Tribble wrote:
On Thu, May 3, 2012 at 3:35 PM, Edward Ned Harvey
opensolarisisdeadlongliveopensola...@nedharvey.com wrote:
I think you'll get better, both performance and reliability, if you break each
of those 15-disk raidz3's into three 5-disk raidz1's. Here's why:
On a Solaris 11 (SR3) system I have a zfs destroy process that appears
to be doing nothing and can't be killed. It has used 5 seconds of CPU
in a day and a half, but truss -p won't attach. No data appears to have
been removed. The dataset (but not the pool) is busy.
I thought this was an
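When truss won't attach, one hedged way to see where such a process is stuck is to pull its kernel thread stacks with mdb (1234 below is a placeholder PID):

    # Kernel stacks for every thread of PID 1234 (run as root)
    echo "0t1234::pid2proc | ::walk thread | ::findstack -v" | mdb -k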