The HBA I use is an LSI MegaRAID 1038E-R, but I guess it doesn't really matter,
as most OEM manufacturers such as Dell, Intel, HP, and IBM use the LSI
1068e/1078e or the newer 2008e/2018e MegaRAID chips, which I believe use pretty
much the same firmware.
So I guess I could change these settings in
On Dec 21, 2010, at 4:41 PM, Robin Axelsson wrote:
There's nothing odd about the physical mounting of the hard drives. All
drives are firmly attached and secured in their casings, no loose connections
etc. There is some dust but not more than the hardware should be able to
handle.
I
On Tue, 21 Dec 2010, Robin Axelsson wrote:
There's nothing odd about the physical mounting of the hard drives. All drives
are firmly attached and secured in their casings, no loose connections etc.
There is some dust but not more than the hardware should be able to handle.
I replaced the
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Charles J. Knipe
Some more information about our configuration: We're running OpenSolaris
snv_134. ZFS is at version 22. Our disks are 15k RPM 300 GB Seagate
Cheetahs,
mounted in Promise
There's nothing odd about the physical mounting of the hard drives. All drives
are firmly attached and secured in their casings, no loose connections etc.
There is some dust but not more than the hardware should be able to handle.
I replaced the hard drive with another one of the same size, I
I have now upgraded to OpenIndiana b148, which should fix those bugs that you
mentioned. I lost the picture on the monitor, but ssh'ing in from another
computer shows the system running fine.
The problems have become worse now and I get a freeze every time I try to
access the 8-disk raidz2
On Sun, 19 Dec 2010, Robin Axelsson wrote:
To conclude this (in case you don't view this message using a
monospace font) all drives in the affected storage pool (c9t0d0 -
c9t7d0) report 2 Illegal Requests (save c9t3d0 that reports 5
illegal requests). There is one drive (c9t3d0) that looks
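The per-drive counts quoted above can be pulled out of `iostat -En` with a short sketch like this (the function name is mine, and it assumes the usual Solaris layout where each device block starts with a "<dev> Soft Errors: ..." line and later contains an "Illegal Request: N" line):

```shell
# Tally Illegal Request counts per device from `iostat -En` output.
iostat_illegal() {
  awk '/Soft Errors:/   { dev = $1 }          # remember current device
       /Illegal Request:/ { print dev, $3 }'  # print its Illegal Request count
}
# Usage on the live system:
#   iostat -En | iostat_illegal
```

On the pool described above this would print one line per drive, e.g. `c9t0d0 2` for most drives and `c9t3d0 5` for the outlier.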
I also have this problem on my system which consists of an AMD Phenom 2 X4 with
system pools on various hard drives connected to the SB750 controller and a
larger raidz2 storage pool connected to an LSI 1068e controller (using IT
mode). The storage pool is also used to share files using CIFS.
On 28/09/10 09:22 PM, Robin Axelsson wrote:
I also have this problem on my system which consists of an AMD Phenom 2
X4 with system pools on various hard drives connected to the SB750
controller and a larger raidz2 storage pool connected to an LSI 1068e
controller (using IT mode). The storage
I have now run some hardware tests as suggested by Cindy. 'iostat -En' indicates
no errors, i.e. after carefully checking the output from this command, every
error field is followed by a zero.
The only messages found in /var/adm/messages are the following:
timestamp opensolaris scsi: [ID 365881
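To see how often that (truncated) scsi message ID recurs, the log can be summarized with something like this sketch (function name is mine; note `grep -o` is a GNU option, available as /usr/gnu/bin/grep on OpenSolaris/OpenIndiana):

```shell
# Count occurrences of each "scsi: [ID NNNNNN" message ID in a log file.
scsi_msg_summary() {
  grep -o 'scsi: \[ID [0-9]*' "$1" | sort | uniq -c | sort -rn
}
# Usage on the live system:
#   scsi_msg_summary /var/adm/messages
```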
I am using a zpool for swap that is located in the rpool (i.e. not in the
storage pool). The system disk contains four primary partitions: the first
contains the system volume (c7d0s0), two are Windows partitions (c7d0p2 and
c7d0p3), and the fourth (c7d0p4) is a zfs pool dedicated for
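For reference, a dedicated swap zvol in rpool like the one described is usually set up along these lines (the 4G size is a placeholder; the sketch prints the commands rather than running them, so it is safe anywhere):

```shell
# Print the usual Solaris-family commands for a swap zvol in rpool.
swap_setup_cmds() {
  size=${1:-4G}
  printf 'zfs create -V %s rpool/swap\n' "$size"     # backing zvol
  printf 'swap -a /dev/zvol/dsk/rpool/swap\n'        # add as swap device
  printf 'swap -l\n'                                 # verify
}
swap_setup_cmds 4G
```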
If one was sticking with OpenSolaris for the short term, is something older
than 134 more stable/less buggy? Not using de-dupe.
-J
On Thu, Sep 23, 2010 at 6:04 PM, Richard Elling richard.ell...@gmail.com wrote:
Hi Charles,
There are quite a few bugs in b134 that can lead to this. Alas, due to
So, I'm still having problems with intermittent hangs on write with my ZFS
pool. Details from my original post are below. Since posting that, I've gone
back and forth with a number of you, and gotten a lot of useful advice, but I'm
still trying to get to the root of the problem so I can
Hi Charles,
There are quite a few bugs in b134 that can lead to this. Alas, due to the new
regime, there was a period of time where the distributions were not being
delivered. If I were in your shoes, I would upgrade to OpenIndiana b147 which
has 26 weeks of maturity and bug fixes over b134.
At first we blamed de-dupe, but we've disabled that. Next we suspected
the SSD log disks, but we've seen the problem with those removed, as
well.
Did you have dedup enabled and then disabled it? If so, data can (or will) be
deduplicated on the drives. Currently the only way of de-deduping
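A quick way to see whether dedup is still in effect is to check the property and the pool's dedup ratio, which stays above 1.00x as long as previously-deduplicated blocks remain on disk even after the property is switched off. "tank" is a placeholder pool name, and the sketch prints the commands rather than executing them:

```shell
# Print the dedup checks for a given pool (default: tank).
dedup_check_cmds() {
  pool=${1:-tank}
  printf 'zfs get -r dedup %s\n' "$pool"        # property, per dataset
  printf 'zpool get dedupratio %s\n' "$pool"    # >1.00x: deduped blocks remain
}
dedup_check_cmds tank
```

As the post notes, turning the property off does not rewrite anything; existing data only stops being deduplicated when it is rewritten (e.g. copied, or sent/received into a dataset with dedup=off).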
Charles,
Just like UNIX, there are several ways to drill down on the problem. I
would probably start with a live crash dump (savecore -L) when you see
the problem. Another method would be to grab multiple stats commands
during the problem to see where you can drill down later. I would
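Capturing several stats commands during the hang can be scripted roughly like this (directory, interval, and command list are arbitrary choices of mine; each Solaris-specific command is only run if present):

```shell
# snap <outdir> <cmd...>: append one timestamped run of cmd to a log file.
snap() {
  dir=$1; shift
  name=$(echo "$1" | tr ' /' '__')
  { date; "$@"; echo; } >> "$dir/$name.log" 2>&1
}

# collect [outdir]: one burst of iostat/mpstat/prstat snapshots.
collect() {
  dir=${1:-/var/tmp/hangstats}; mkdir -p "$dir"
  for cmd in "iostat -xn 1 5" "mpstat 1 5" "prstat -n 10 1 1"; do
    set -- $cmd                                   # split command into words
    command -v "$1" >/dev/null 2>&1 && snap "$dir" "$@"
  done
}
# Usage during a hang:
#   collect /var/tmp/hangstats
```

Running `collect` a few times while writes are blocked leaves timestamped logs to drill into later, alongside the live crash dump from `savecore -L`.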
Howdy,
We're having a ZFS performance issue over here that I was hoping you guys could
help me troubleshoot. We have a ZFS pool made up of 24 disks, arranged into 7
raid-z devices of 4 disks each. We're using it as an iSCSI back-end for VMWare
and some Oracle RAC clusters.
Under normal
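For the record, a pool built from several small raid-z vdevs like the one described looks like this on the command line. The post says 7 raid-z devices of 4 disks across 24 disks; 24/4 gives six vdevs, so six are sketched here, with invented device names, and the command is printed rather than executed:

```shell
# Build (but only print) a zpool create line for 24 disks in 4-disk raidz vdevs.
build_zpool_cmd() {
  cmd="zpool create tank"
  i=0
  while [ $i -lt 24 ]; do
    cmd="$cmd raidz c0t${i}d0 c0t$((i+1))d0 c0t$((i+2))d0 c0t$((i+3))d0"
    i=$((i+4))
  done
  echo "$cmd"
}
build_zpool_cmd
```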
Charles,
Did you check for any HW issues reported during the hangs? fmdump -ev
and the like?
..Remco
On 8/30/10 6:02 PM, Charles J. Knipe wrote:
Howdy,
We're having a ZFS performance issue over here that I was hoping you guys could
help me troubleshoot. We have a ZFS pool made up of 24
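The `fmdump -ev` check suggested above can be boiled down to a per-class event count; the parsing lives in a function so it can be exercised anywhere, while fmdump itself is Solaris-specific and stays in the usage comment:

```shell
# Summarize fault-management error events by class.
fmdump_classes() {
  # Input: `fmdump -e` output (header line, then one event per line with
  # the event class in the last column); prints a count per class.
  awk 'NR > 1 { print $NF }' | sort | uniq -c | sort -rn
}
# Usage on the live system:
#   fmdump -e  | fmdump_classes    # quick summary
#   fmdump -eV                     # full detail for each event
```

A burst of `ereport.io.*` events clustered around the hangs would point at the HW/driver side rather than ZFS itself.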
Charles,
Is it just ZFS hanging (or what appears to be slowing down or blocking), or
does the whole system hang?
A couple of questions
What does iostat show during the time period of the slowdown?
What does mpstat show during the time of the slowdown?
You can look at the metadata
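For the iostat question above, the interesting part is which devices are pegged during the slowdown; a filter like this (function name mine; assumes the `iostat -xn` layout where the device name is the last column and %b the one before it) narrows the output to the busy ones:

```shell
# Print devices whose %b exceeds a threshold (default 60) from `iostat -xn`.
busy_devices() {
  awk -v thr="${1:-60}" \
    'NF >= 11 && $NF != "device" && $(NF-1)+0 > thr { print $NF, $(NF-1) "%" }'
}
# Usage during the slowdown:
#   iostat -xn 5 | busy_devices 80
```

One device sitting at ~100%b while the rest idle suggests a single slow disk stalling the vdev; all devices quiet while writes block points away from the disks entirely.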
David,
Thanks for your reply. Answers to your questions are below.
Is it just ZFS hanging (or what appears to be slowing down or blocking), or
does the whole system hang?
Only the ZFS storage is affected. Any attempt to write to it blocks until the
issue passes. Other than