Have you tried wrapping your disks inside LVM metadevices and then using those
for your ZFS pool?
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
What type of disks are you using?
I was just looking to see if it is a known problem before I submit it as a bug.
What would be the best category to submit the bug under? I am not sure if it is
a driver or kernel issue. I would be more than glad to help. One of the machines is
a test environment, so I can run any dumps/debugging needed.
On Wed, Nov 11, 2009 at 10:25 PM, James C. McPherson
<j...@opensolaris.org> wrote:
The first step towards acknowledging that there is a problem
is you logging a bug in bugs.opensolaris.org. If you don't, we
don't know that there might be a problem outside of the ones
that we identify.
I submitted a bug on this issue. It looks like you can reference other bugs
when you submit one, so everyone having this issue could link mine and
submit their own hardware config. It sounds like it's widespread, though, so I'm
not sure if that would help or hinder. I'd hate to bury the report.
What type of disks are you using?
I'm using SATA disks with SAS-to-SATA breakout cables. I've tried different cables,
as I have a couple of spares.
mpt0 has 4x 1.5TB Samsung Green drives.
mpt1 has 4x 400GB Seagate 7200 RPM drives.
I get errors from both adapters. Each adapter has an unused SAS port.
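One way to narrow down which side is reporting the errors is with the standard Solaris tools below (a sketch; exact output format varies by build, and the grep pattern assumes the driver logs under "mpt"):

```shell
# Per-device soft/hard/transport error counters accumulated since boot
iostat -En

# Fault management telemetry - look for mpt/scsi ereports around the hangs
fmdump -eV | less

# Driver warnings logged around the resets
grep -i mpt /var/adm/messages | tail -20
```

If the error counters climb on random targets rather than one fixed disk, that would support the driver theory over a single bad drive.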
Have you tried wrapping your disks inside LVM
metadevices and then using those for your ZFS pool?
I have not tried that. I could try it with my spare disks, I suppose. I avoided
LVM as it didn't seem to offer me anything ZFS/zpool didn't.
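For reference, a rough sketch of what that test might look like with Solaris Volume Manager (the slice names are placeholders; substitute whatever your spare disks show up as):

```shell
# SVM needs state database replicas before any metadevice can be created
# (c1t0d0s7 / c1t1d0s7 are placeholder slices)
metadb -a -f -c 2 c1t0d0s7 c1t1d0s7

# One simple one-to-one concat metadevice per spare disk
metainit d10 1 1 c1t0d0s0
metainit d11 1 1 c1t1d0s0

# Build a throwaway test pool on the metadevices instead of the raw disks
zpool create testpool mirror /dev/md/dsk/d10 /dev/md/dsk/d11
zpool status testpool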
Yours
Markus Kovero
-----Original Message-----
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of M P
Sent: 11 November 2009 18:08
To: zfs-discuss@opensolaris.org
Subject: [zfs-discuss] ZFS on JBOD storage, mpt driver issue - server not responding
I've experienced behavior similar to this several times; each time it
was a single bad drive - in this case, it looks like target 0. For
whatever reason (buggy Solaris/mpt driver?), some of the other drives
get wind of it, then hide from their respective buses in fear. :-)
I already swapped some of the drives; no difference. The failing target seems to
be random - most likely it's not the drives.
Hi, you could try the LSI itmpt driver as well; it seems
to handle this better, although I think it only
supports 8 devices at once or so.
You could also try a more recent build of OpenSolaris
(123 or even 126), as there seem to be a lot of fixes
regarding the mpt driver (which still seems to have issues).
Have you tried another SAS cable?
I have. Two identical SAS cards, different cables, different disks (brand, size,
etc.) - and I get the errors on random disks in the pool. I don't think it's hardware
related, as there have been a few reports of this issue already.