Re: [zfs-discuss] zpool does not like iSCSI ?

2010-11-09 Thread Andreas Koppenhoefer
From Oracle Support we got the following info: Bug ID 6992124, "reboot of Sol10 u9 host makes zpool FAULTED when zpool uses iscsi LUNs". This is a duplicate of Bug ID 6907687, "zfs pool is not automatically fixed when disk are brought back online or after boot". An IDR patch already exists, but no
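As a stopgap until the IDR is in place, manual recovery along these lines should work once the iSCSI LUNs are reachable again (pool and device names here are hypothetical):

    zpool status tank          # devices show FAULTED/UNAVAIL after boot
    zpool clear tank           # clear the errors once the LUNs are back
    zpool online tank c2t0d0   # or bring devices online individually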

Re: [zfs-discuss] zpool does not like iSCSI ?

2010-11-03 Thread Andreas Koppenhoefer
After a while I was able to track down the problem. During the boot process the service filesystem/local gets enabled long before iscsi/initiator. The start method of filesystem/local mounts ufs, swap and all the other entries from /etc/vfstab. Some recent patch added a "zfs mount -a" to the filesystem/local start method
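The ordering is easy to confirm from the SMF start times; a quick check along these lines (FMRIs abbreviated as above; adjust to the exact names "svcs -a" reports on your release) shows filesystem/local coming up first:

    svcs -o stime,fmri filesystem/local iscsi/initiator   # compare start times
    svcs -d filesystem/local                              # list its declared dependencies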

Re: [zfs-discuss] Race condition yields to kernel panic (u3, u4) or hanging zfs commands (u5)

2008-11-25 Thread Andreas Koppenhoefer
Hello Matt, you wrote about the panic in u3 & u4: > These stack traces look like 6569719 (fixed in s10u5). Then I suppose it's also fixed by 127127-11, because that patch mentions 6569719. According to my zfs-hardness-test script this is true. Instead of crashing with a panic, with 127127-11 these se
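To verify whether that kernel patch is installed on a given host, standard Solaris 10 tooling suffices:

    showrev -p | grep 127127   # lists the installed revision, if any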

Re: [zfs-discuss] Race condition yields to kernel panic (u3, u4) or hanging zfs commands (u5)

2008-11-14 Thread Andreas Koppenhoefer
> Could you provide the panic message and stack trace,
> plus the stack traces of when it's hung?
>
> --matt

Hello Matt, here is the info and stack trace of a server running Update 3:

$ uname -a
SunOS qacpp03 5.10 Generic_127111-05 sun4us sparc FJSV,GPUSC-M
$ head -1 /etc/release

Re: [zfs-discuss] Possible ZFS panic on Solaris 10 Update 6

2008-11-12 Thread Andreas Koppenhoefer
Maybe this is the same issue as I've described in an earlier post. I've written a quick & dirty shell script to reproduce a race condition which forces Update 3 & 4 to panic and leaves Update 5 with hanging zfs commands. - Andreas

[zfs-discuss] Race condition yields to kernel panic (u3, u4) or hanging zfs commands (u5)

2008-11-05 Thread Andreas Koppenhoefer
Hello, occasionally some of our Solaris 10 servers panic in zfs code while doing "zfs send -i [EMAIL PROTECTED] [EMAIL PROTECTED] | ssh remote zfs receive poolname". The race condition(s) get triggered by a broken data transmission or by killing the sending zfs or ssh command. The panic or hanging zfs
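A minimal sketch of the trigger, with hypothetical pool and snapshot names (the real ones are redacted above): start the pipe in the background, then kill it mid-transfer:

    zfs send -i tank/fs@snap1 tank/fs@snap2 | ssh remote zfs receive pool/fs &
    sleep 5    # let some data flow
    kill $!    # abort mid-stream and leave the receiver in the racy state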

[zfs-discuss] clones bound too tightly to its origin

2008-01-08 Thread Andreas Koppenhoefer
[I apologise for reposting this... but no one replied to my post from Dec 4th.] Hello all, while experimenting with "zfs send" and "zfs receive" mixed with cloning on the receiver side I found the following... On server A there is a zpool with snapshots created on a regular basis via cron. Server B

Re: [zfs-discuss] clones bound too tightly to its origin

2007-12-04 Thread Andreas Koppenhoefer
It seems my script got lost while editing/posting the message. I'll try attaching it again... - Andreas

Attachment: test-zfs-clone.sh (Bourne shell script)

Re: [zfs-discuss] clones bound too tightly to its origin

2007-12-04 Thread Andreas Koppenhoefer
I forgot to mention: we are running Solaris 10 Update 4 (08/07)... - Andreas

[zfs-discuss] clones bound too tightly to its origin

2007-12-04 Thread Andreas Koppenhoefer
Hello all, while experimenting with "zfs send" and "zfs receive" mixed with cloning on the receiver side I found the following... On server A there is a zpool with snapshots created on a regular basis via cron. Server B gets updated by a zfs-send-ssh-zfs-receive command pipe. Sometimes I want to do s
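The binding in question is the ordinary clone/origin dependency: a clone cannot outlive the snapshot it was created from unless it is promoted first. A minimal sketch with hypothetical names:

    zfs snapshot tank/data@snap1
    zfs clone tank/data@snap1 tank/clone1
    zfs destroy tank/data@snap1   # refused: snapshot has dependent clones
    zfs promote tank/clone1       # snap1 now belongs to tank/clone1,
                                  # and tank/data becomes the dependent clone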

[zfs-discuss] Re: ZFS not utilizing all disks

2007-05-10 Thread Andreas Koppenhoefer
What does "zpool status database" say?

[zfs-discuss] Re: ZFS Storage Pools Recommendations for Productive Environments

2007-05-10 Thread Andreas Koppenhoefer
> which one is the most performant: copies=2 or zfs-mirror?

What type of copies are you talking about? Mirrored data in the underlying storage subsystem or the (new) copies feature in zfs? - Andreas
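For reference, the two mechanisms usually meant here look like this (hypothetical names; copies=2 duplicates each block within the same pool, while a mirror duplicates whole devices at the vdev level):

    zfs set copies=2 tank/important          # per-dataset ditto blocks
    zpool create tank mirror c0t0d0 c0t1d0   # device-level redundancy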

[zfs-discuss] ZFS Storage Pools Recommendations for Productive Environments

2007-05-09 Thread Andreas Koppenhoefer
Hello, the Solaris Internals wiki contains many interesting things about zfs. But I have no clue about the reasons for this entry: in the section "ZFS Storage Pools Recommendations - Storage Pools" you can read: "For all production environments, set up a redundant ZFS storage pool, such as a raidz, ra
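The redundant layouts that recommendation refers to are the standard vdev types; a sketch with hypothetical disk names:

    zpool create tank mirror c0t0d0 c0t1d0                # two-way mirror
    zpool create tank raidz c0t0d0 c0t1d0 c0t2d0          # single parity
    zpool create tank raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0  # double parity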