[zfs-discuss] renumbering and its potential side effects.
i am forced to reinstall s10u3 on my x4500 (SP 1.1.1). i exported the zpool, and discovered during the reinstall that the controller numbers have changed: what used to be c5t0d0 is now c6t0d0. as it happens the exported zpool is using only half the disks and has no reference to c6t0d0, but it is still a disconcerting situation, as i have no idea what the effects of this controller renumbering may be. i may have had a pool with a 5x9+1 layout, which would have used c6t0d0 and c6t4d0...

puzzled and concerned...

oz
--
ozan s. yigit | [EMAIL PROTECTED] | o: 416-348-1540
if you want to have your head in the clouds you need
to keep your feet on the ground. -- terry pratchett
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
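[a note for the archives: zfs identifies pool members by the GUIDs in their on-disk labels, not by device path, so an exported pool should import cleanly even after controller renumbering. a minimal sketch of checking this, assuming a pool named "backup" (the pool and device names here are made up):]

```shell
# scan for importable pools; zfs reads the device labels, so it finds
# the pool even though what was c5t0d0 now shows up as c6t0d0
zpool import

# import by pool name (or by the numeric pool GUID from the listing,
# which disambiguates if two pools share a name)
zpool import backup

# verify which device names the pool resolved to after the rename
zpool status backup
```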
Re: [zfs-discuss] boot disks controller layout...
ah, good stuff. thanks. oz

Richard Elling [in response to my question] wrote:
> ozan s. yigit wrote:
>> ... is there any reason why factory install comes with C5T0 and C5T4?
>> a limitation of the bios or some other reason i am missing?
>> (i may need to RTFM harder... :)
>
> BIOS limitation. For a discussion on why you don't need to worry:
> http://blogs.sun.com/relling/entry/zfs_raid_recommendations_space_vs
>  -- richard

--
ozan s. yigit | [EMAIL PROTECTED] | o: 416-348-1540
if you want to have your head in the clouds you need
to keep your feet on the ground. -- terry pratchett
[zfs-discuss] s10u3 query
can someone please confirm whether hot spares are supported in s10u3? thanks.

oz
--
ozan s. yigit | [EMAIL PROTECTED] | http://nextbit.blogspot.com
an open mind is no substitute for hard work -- nelson goodman
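[for reference, on bits that do support them, hot spares are managed with the `spare` vdev keyword; a quick sketch, with made-up pool and device names:]

```shell
# add a disk as a hot spare to an existing pool
zpool add tank spare c2t3d0

# or create a pool with a spare from the start
zpool create tank raidz c0t0d0 c0t1d0 c0t2d0 spare c2t3d0

# spares appear in their own section of the status output
zpool status tank
```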
Re: [zfs-discuss] adding to a raidz pool and its discontents
eric kustarz wrote [in part]:
> What bits are you running?

s10r2.

    thumper-12tb# zpool add backup c7t7d0
    invalid vdev specification
    use '-f' to override the following errors:
    mismatched replication level: pool uses raidz and new vdev is disk

interesting. that helps.

> Did you use the -f flag when you added the single disk vdev?

nope. thanks for the demo from nevada build.

> eric

--
ozan s. yigit | [EMAIL PROTECTED] | http://nextbit.blogspot.com
you take a banana, you get a lunar landscape. -- j. van wijk
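[for the archives: the unforced way to grow a raidz pool is to add a whole new raidz vdev rather than a bare disk, so the replication level stays consistent. a sketch, with made-up device names:]

```shell
# this fails without -f: a single-disk vdev would mismatch
# the pool's raidz replication level
zpool add backup c7t7d0

# adding a matching raidz vdev keeps the pool uniformly redundant
zpool add backup raidz c7t1d0 c7t2d0 c7t3d0 c7t4d0
```

[forcing a bare disk in with -f works, but it leaves an unreplicated vdev in an otherwise raidz pool, which is rarely what you want.]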
[zfs-discuss] q: zfs on das
actually, zfs going over a vtrak 200i promise array with 12x250g, as scsi das. i have each disk on its own volume; has anyone had any experience running zfs on top of such a setup? any links and/or notes on a similar setup, esp. performance and reliability, would be helpful. [if no one has done this, i will be glad to share my notes in a few weeks.]

oz
--
ozan s. yigit | [EMAIL PROTECTED] | 416 977 1414 x 1540
an open mind is no substitute for hard work -- nelson goodman
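[roughly the layout i have in mind: with each array volume exported as its own scsi lun, zfs sees twelve plain disks and can do its own redundancy on top. a sketch, with made-up controller/target numbers:]

```shell
# one raidz2 vdev across the twelve luns presented by the array;
# zfs handles redundancy, the array just passes disks through
zpool create tank raidz2 c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0 \
                         c3t6d0 c3t7d0 c3t8d0 c3t9d0 c3t10d0 c3t11d0
```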
[zfs-discuss] no automatic clearing of zoned eh?
s10u2: once zoned, always zoned? i see that the zoned property is not cleared after removing the dataset from a zone cfg, or even after uninstalling the entire zone... [right, i know how to clear it by hand, but maybe i am missing a bit of magic in the otherwise anodyne zonecfg et al.]

oz
--
ozan s. yigit | [EMAIL PROTECTED]
don't be afraid to find the rhinoceros
to pick fleas from. -- richard gabriel
[patterns of software]
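[for completeness, the by-hand clearing i mean is just resetting the property from the global zone; the dataset name here is made up:]

```shell
# from the global zone, once the dataset is out of the zone config
zfs set zoned=off tank/zonedata

# confirm the property is cleared
zfs get zoned tank/zonedata
```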