From Oracle Support we got the following info:
Bug ID: 6992124 - reboot of Sol10 u9 host makes zpool FAULTED when the zpool
uses iSCSI LUNs
This is a duplicate of:
Bug ID: 6907687 - zfs pool is not automatically fixed when disks are brought
back online or after boot
An IDR patch already exists, but no
After a while I was able to track down the problem.
During the boot process the service filesystem/local gets enabled long before
iscsi/initiator. The start method of filesystem/local mounts ufs, swap and
everything else from /etc/vfstab.
Some recent patch added a zfs mount -a to the filesystem/local start method,
so pools on iSCSI LUNs are touched before the initiator has brought the LUNs
online.
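A possible workaround, only a sketch and untested here: add an SMF dependency
so filesystem/local waits for the iSCSI initiator. Verify the exact FMRIs with
svcs -a on your box first; the property group name iscsi-wait is arbitrary.
# make svc:/system/filesystem/local depend on the iSCSI initiator
svccfg -s svc:/system/filesystem/local <<'EOF'
addpg iscsi-wait dependency
setprop iscsi-wait/grouping = astring: require_all
setprop iscsi-wait/restart_on = astring: none
setprop iscsi-wait/type = astring: service
setprop iscsi-wait/entities = fmri: svc:/network/iscsi/initiator:default
EOF
svcadm refresh svc:/system/filesystem/local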
Hello Matt,
you wrote about the panics in u3/u4:
These stack traces look like 6569719 (fixed in s10u5).
Then I suppose it's also fixed by 127127-11 because that patch mentions 6569719.
According to my zfs-hardness-test script this is true.
Instead of crashing with a panic, with 127127-11 these
Could you provide the panic message and stack trace,
plus the stack traces of when it's hung?
--matt
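(For collecting that info on Solaris 10: the panic message and stack can be
read from the saved crash dump with mdb, and thread stacks of a hung box from
the live kernel. The dump directory and dump number below are only examples.)
# panic message and stack from crash dump 0 (adjust path/number)
cd /var/crash/`uname -n`
mdb -k unix.0 vmcore.0 <<'EOF'
::status
::stack
EOF
# thread stacks on a live, hung system
echo '::threadlist -v' | mdb -k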
Hello Matt,
here is the info and stack trace of a server running Update 3:
$ uname -a
SunOS qacpp03 5.10 Generic_127111-05 sun4us sparc FJSV,GPUSC-M
$ head -1 /etc/release
Maybe this is the same as what I've described in the thread
http://www.opensolaris.org/jive/thread.jspa?threadID=81613&tstart=0
I've written a quick-and-dirty shell script to reproduce a race condition
which forces Update 3/4 to panic and leaves Update 5 with hanging zfs
commands.
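In outline the reproducer does something like the following. This is only a
sketch of the idea, not the actual script; the pool name tank, the src/dst
filesystem names and the loop count are placeholders.
#!/bin/sh
# sketch: repeatedly kill a running zfs send | zfs receive pipe and
# check whether zfs still answers afterwards
# (assumes filesystem $POOL/src exists and contains some data)
POOL=tank
zfs snapshot $POOL/src@s1
i=0
while [ $i -lt 100 ]; do
    zfs send $POOL/src@s1 | zfs receive $POOL/dst &
    sleep 1                          # let the transfer get going
    kill -9 $! 2>/dev/null           # kill the receiving side mid-stream
    wait
    zfs list > /dev/null || break    # on a broken kernel this hangs/panics
    zfs destroy -r $POOL/dst 2>/dev/null
    i=`expr $i + 1`
done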
- Andreas
Hello,
occasionally we get some Solaris 10 servers to panic in ZFS code while doing
zfs send -i [EMAIL PROTECTED] [EMAIL PROTECTED] | ssh remote zfs receive
poolname.
The race condition(s) get triggered by a broken data transmission or by
killing the sending zfs or ssh command.
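Spelled out (the snapshot names above were mangled by the list archive; the
names s1/s2, tank/fs and the host name here are placeholders), the pipe and
one way of triggering the race look like:
# incremental send into a remote pool
zfs send -i tank/fs@s1 tank/fs@s2 | ssh remote zfs receive tank/fs
# trigger: break the transmission, e.g. from a second terminal
pkill -x ssh          # or kill the sending zfs process instead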
The panic or hanging zfs
[I apologise for reposting this... but no one replied to my post from Dec 4th.]
Hello all,
while experimenting with zfs send and zfs receive mixed with cloning on the
receiver side I found the following...
On server A there is a zpool with snapshots created on a regular basis via
cron. Server B gets updated by a zfs-send-ssh-zfs-receive command pipe.
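Such a pipe is of this shape (a sketch; the host, pool and snapshot names are
placeholders):
# on server A: push the newest snapshot increment to server B
zfs send -i pool/fs@prev pool/fs@last | ssh serverB zfs receive pool/fs
# on server B: experiments then happen on a clone of a received snapshot
zfs clone pool/fs@last pool/experiment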
Sometimes I want to do some
I forgot to mention: we are running Solaris 10 Update 4 (08/07)...
- Andreas
It seems my script got lost while editing/posting the message.
I'll try attaching it again...
- Andreas
test-zfs-clone.sh
Description: Bourne shell script
Which one is more performant: copies=2 or a zfs mirror?
What type of copies are you talking about?
Mirrored data in the underlying storage subsystem, or the (new) copies
feature in ZFS?
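For reference, the two ZFS-level setups being compared would look like this
(pool and device names made up):
# ditto blocks: one disk, zfs keeps two copies of every block
zpool create tank c0t0d0
zfs set copies=2 tank
# classic mirror: redundancy at the vdev level, two disks
zpool create tank mirror c0t0d0 c0t1d0
Note that copies=2 places both copies inside the same vdev (possibly on the
same disk), while a mirror spreads them across devices.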
- Andreas
What does 'zpool status database' say?
Hello,
The Solaris Internals wiki contains many interesting things about ZFS.
But I have no clue about the reasons for this entry:
In the section ZFS Storage Pools Recommendations - Storage Pools you can read:
For all production environments, set up a redundant ZFS storage pool, such
as a raidz,
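For illustration, such a redundant pool is created along these lines (pool
and device names made up):
# single-parity raidz across three disks: survives one disk failure
zpool create tank raidz c0t0d0 c0t1d0 c0t2d0
# or a two-way mirror
zpool create tank mirror c0t0d0 c0t1d0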