Re: [zfs-discuss] Replacing a failed/failed mirrored root disk

2010-03-12 Thread David L Kensiski
On Mar 11, 2010, at 3:08 PM, Cindy Swearingen wrote: Hi David, In general, an I/O error means that slice 0 doesn't exist or some other problem exists with the disk. Which makes complete sense, because the partition table on the replacement didn't have anything specified for slice 0.
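
A minimal sketch of the usual recovery on a ZFS root mirror, assuming hypothetical device names (c1t0d0 is the healthy side, c1t1d0 is the replacement) and the default pool name rpool: copy the label from the good disk so slice 0 exists, re-issue the replace, and reinstall the SPARC boot block.

  # clone the partition table from the healthy disk onto the replacement
  prtvtoc /dev/rdsk/c1t0d0s2 | fmthard -s - /dev/rdsk/c1t1d0s2
  # re-issue the replace against slice 0, which now exists
  zpool replace rpool c1t1d0s0
  # make the new side bootable (SPARC, ZFS root)
  installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c1t1d0s0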

Re: [zfs-discuss] Replacing a failed/failed mirrored root disk

2010-03-11 Thread David L Kensiski
On Wed, 10 Mar 2010 15:28:40 -0800, Cindy Swearingen wrote: Hey list, Grant says his system is hanging after the zpool replace on a V240 running Solaris 10 5/09 with 4 GB of memory and no ongoing snapshots. No errors from zpool replace, so it sounds like the disk was physically replaced
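
A quick sketch of how one might tell a hung replace from a slow resilver, assuming the root pool is named rpool: zpool status shows whether the resilver counter is moving, and iostat shows whether the replacement disk is actually taking I/O or logging errors.

  # resilver progress; re-run a minute later and compare the percentage
  zpool status -v rpool
  # per-disk I/O every 5 seconds: is the new disk doing any writes?
  iostat -xn 5
  # cumulative soft/hard/transport error counts per device
  iostat -En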

[zfs-discuss] abysmal performance using zfs iSCSI target

2009-11-04 Thread David L Kensiski
I have created an iSCSI target using ZFS on host k01:

  k01# zfs create -V 100g kpool_k01/k01tgt-i21-solotest
  k01# zfs set shareiscsi=on kpool_k01/k01tgt-i21-solotest

And attached it statically to an initiator node i21:

  i21# iscsiadm add static-config iqn.1986-03.com.sun:
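
A minimal sketch of the steps that typically follow on Solaris 10, using the same host names; the full IQN is whatever the target reports, and kpool_k01 is the pool named above:

  k01# iscsitadm list target -v          (confirm the target IQN and that it is online)
  i21# iscsiadm modify discovery --static enable
  i21# devfsadm -i iscsi                 (create device nodes for the new LUN)
  i21# format                            (the iSCSI LUN should now appear)
  k01# zpool iostat -v kpool_k01 5       (watch target-side throughput while testing)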