I am trying to get a commitment to get this fixed. If you have a server with a
whole bunch of SAN-attached disks and then use the internal disks for some sort
of temp space, and one of those el cheapo SAS disks dies, it takes down the
whole lot; not good. This problem is enough to prevent a rollout of
Why does update 6 have to be out before a patch can be produced for this? This
is a show-stopper for putting ZFS into production on anything other than local
disks; a production box that panics when a single disk goes offline is worse
than useless. I cannot see why this is not a high priority
Is there any possibility that PSARC 2007/567 can be made available as a patch
to Solaris 10 U5? We are planning to dispose of Veritas as quickly as possible,
but since all storage on production machines is on EMC Symmetrix with back-end
mirroring, this panic is a showstopper for us. Or is it so
The fix is already in Solaris 10 U6. A patch for S10U5 will only be
available when S10U6 is released.
--
Prabahar.
Veltror wrote:
Is there any possibility that PSARC 2007/567 can be made available as a patch
to Solaris 10 U5? We are planning to dispose of Veritas as quickly as possible
but since
Hi,
I just found out that ZFS triggers a kernel panic while switching a mounted
volume into read-only mode:
The system is attached to a Symmetrix, all zfs-io goes through Powerpath:
I ran some I/O-intensive stuff on /tank/foo and switched the device into
read-only mode at the same time (symrdf
You want:
PSARC 2007/567 zpool failmode property
Which went back into build 77 of nevada.
- Eric
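Eric's pointer above refers to the pool-level failmode property; a minimal
sketch of setting it (the pool name tank is taken from the thread, used here
for illustration):

```shell
# failmode controls how ZFS reacts when a device becomes unavailable:
#   wait     - block I/O until the device returns (the default)
#   continue - return EIO to new writes instead of panicking
#   panic    - the old behaviour complained about in this thread
zpool set failmode=continue tank

# Verify the current setting
zpool get failmode tank
```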
On Thu, Mar 20, 2008 at 04:44:43PM +0100, Adrian Ulrich wrote:
Hi,
I just found out that ZFS triggers a kernel panic while switching a mounted
volume into read-only mode:
The system is
Hi Eric,
PSARC 2007/567 zpool failmode property
Thanks, that's exactly what I've been looking for :-)
Which went back into build 77 of nevada.
Any chance to see this in Solaris 10?
We are currently using VxFS on all LUNs (15 TB Maildir) and I'd like
to give ZFS a try on a live system...
I have a problem with one of my zfs pools: every time I import it I get the
error below. I cannot destroy it because it will not allow me to import. I have
tried trashing the cache file, but that did not help. Is there a way to destroy
the config so I can start over? Also up to date on patches,
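For what it's worth, a hedged sketch of the usual recovery sequence when a
damaged pool's cached config keeps being picked up at import time (the pool
name mypool and the backup path are assumptions for illustration):

```shell
# Move the cached config aside so the pool is no longer auto-imported
mv /etc/zfs/zpool.cache /etc/zfs/zpool.cache.bak

# See which pools zpool can actually find on the devices
zpool import

# Force an import of a pool that was not cleanly exported,
# then destroy it to start over (assumed pool name: mypool)
zpool import -f mypool
zpool destroy mypool
```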
Hi,
I have been wrestling with ZFS issues since yesterday when one of my disks sort
of died. After much wrestling with zpool replace I managed to get the new
disk in and got the pool to resilver, but since then I have one error left that
I can't clear:
pool: data
state: ONLINE
status: One
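A hedged sketch of the usual way to clear a lingering post-resilver error on
the data pool described above:

```shell
# Show exactly which files or objects the remaining error refers to
zpool status -v data

# Reset the error counters, then scrub to re-verify the whole pool
zpool clear data
zpool scrub data
```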
Bertrand Sirodot wrote:
I am trying to back up the pool, but when I tar some of the filesystems, the
kernel panics with the following message:
This error is occurring because a critical piece of metadata can't be
read while we are trying to write out changes. Try ensuring that you
aren't
Hi,
I'm looking for assistance troubleshooting an x86 laptop that I upgraded
from Solaris 10 6/06 to 11/06 using a standard upgrade.
The upgrade went smoothly, but all attempts to boot it since then have
failed. Every time, it panics, leaving a partial stack trace on the
screen for a few
How I managed to make this happen, I'm now no longer sure of.
After upgrading my workstation to Solaris 10, Update 2, I could
not find any ZFS pools to import where I thought they were.
Whether this is due to the partitioning not being correctly preserved
or some other problem remains a
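When pools go missing after an upgrade like this, zpool import can usually be
pointed at the devices directly (a sketch; tank is an assumed pool name):

```shell
# Scan the default device directory for importable pools
zpool import

# Scan an explicit directory if device paths changed during the upgrade
zpool import -d /dev/dsk

# Import a specific pool once it shows up in the listing
zpool import tank
```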
Hi,
my box has started panicking in zpool.
I'm using bits around a year old (which doesn't help) on S10 FCS; when I
can get a DVD with S10U2, I'll try that, but...
But my concern here is that this panic pops up at boot and the only way
around this has been to rename /kernel/drv/amd64/zpool to