Re: [zfs-discuss] Replacing a failed/failed mirrored root disk

2010-03-12 Thread David L Kensiski

On Mar 11, 2010, at 3:08 PM, Cindy Swearingen wrote:


Hi David,

In general, an I/O error means that the slice 0 doesn't exist
or some other problem exists with the disk.



Which makes complete sense because the partition table on the  
replacement didn't have anything specified for slice 0.
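
For reference, a non-interactive way to give the replacement the same
slice layout as the surviving half of the mirror is to copy the VTOC
from the good disk. This is only a sketch: c1t1d0 is assumed here to
be the healthy mirror disk, so check the real device name with zpool
status first.

# prtvtoc /dev/rdsk/c1t1d0s2 | fmthard -s - /dev/rdsk/c1t0d0s2
(prtvtoc prints the good disk's slice table; fmthard -s - writes it to
the new c1t0d0, so slice 0 then exists and zpool replace can open it)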





The installgrub command is like this:

# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t0d0s0



I reran it like that. For curiosity's sake, what is the zfs_stage1_5 file?





Thanks,

Cindy




Thank you!

--Dave


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Replacing a failed/failed mirrored root disk

2010-03-12 Thread Cindy Swearingen

Hi David,

Make sure you test booting from the newly attached disk and update the
BIOS, if necessary.

The boot process occurs over several stages. I'm not sure what the
zfs_stage1_5 file provides, but it is the first two stages (stage1 and
stage2) that we need for booting.

For more details, go to this site:

http://hub.opensolaris.org/bin/view/Community+Group+zfs/boot

And review Lori's slides at the bottom of this page.

Thanks,

cindy

On 03/12/10 08:41, David L Kensiski wrote:

On Mar 11, 2010, at 3:08 PM, Cindy Swearingen wrote:


Hi David,

In general, an I/O error means that the slice 0 doesn't exist
or some other problem exists with the disk.



Which makes complete sense because the partition table on the 
replacement didn't have anything specified for slice 0.





The installgrub command is like this:

# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t0d0s0



I reran it like that. For curiosity's sake, what is the zfs_stage1_5 file?





Thanks,

Cindy




Thank you!

--Dave



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Replacing a failed/failed mirrored root disk

2010-03-11 Thread David L Kensiski

At Wed, 10 Mar 2010 15:28:40 -0800 Cindy Swearingen wrote:
 Hey list,

 Grant says his system is hanging after the zpool replace on a v240,
 running Solaris 10 5/09, 4 GB of memory, and no ongoing snapshots.
 No errors from zpool replace so it sounds like the disk was physically
 replaced successfully.

 If anyone else can comment or help Grant diagnose this issue, please
 feel free...

 Thanks,

 Cindy


I had a similar problem -- swapped out the drive and now when I tried
the zpool replace, I got an I/O error:


k01_dlk$ sudo zpool replace rpool c1t0d0s0
cannot open '/dev/dsk/c1t0d0s0': I/O error

I ran format and made sure the partition table matched the good root  
mirror, then was able to rerun the zpool replace:


k01_dlk$ sudo zpool replace -f rpool c1t0d0s0 c1t0d0s0
Please be sure to invoke installgrub(1M) to make 'c1t0d0s0' bootable.

Can I assume zfs_stage1_5 is correct?

k01_dlk$ cd /boot/grub/
k01_dlk$ ls
bin               fat_stage1_5      jfs_stage1_5     nbgrub             stage1           ufs_stage1_5
capability        ffs_stage1_5      menu.lst         pxegrub            stage2           vstafs_stage1_5
default           install_menu      menu.lst.orig    reiserfs_stage1_5  stage2_eltorito  xfs_stage1_5
e2fs_stage1_5     iso9660_stage1_5  minix_stage1_5   splash.xpm.gz      ufs2_stage1_5    zfs_stage1_5


Installgrub succeeded, but I'm a tad nervous about rebooting until I  
know for sure.


k01_dlk$ sudo installgrub zfs_stage1_5 stage2 /dev/rdsk/c1t0d0s0
stage1 written to partition 0 sector 0 (abs 16065)
stage2 written to partition 0, 272 sectors starting at 50 (abs 16115)
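
A quick sanity check before rebooting (just a sketch, assuming the pool
is named rpool as above) is to confirm that the resilver has finished
and that both halves of the mirror are healthy:

# zpool status rpool
(the new c1t0d0s0 and its mirror partner should both report ONLINE,
with the resilver shown as complete and no errors listed)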

Thanks,
--Dave


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Replacing a failed/failed mirrored root disk

2010-03-11 Thread Cindy Swearingen

Hi David,

In general, an I/O error means that the slice 0 doesn't exist
or some other problem exists with the disk.

The installgrub command is like this:

# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t0d0s0

Thanks,

Cindy

On 03/11/10 15:45, David L Kensiski wrote:

At Wed, 10 Mar 2010 15:28:40 -0800 Cindy Swearingen wrote:

Hey list,

Grant says his system is hanging after the zpool replace on a v240,
running Solaris 10 5/09, 4 GB of memory, and no ongoing snapshots.

No errors from zpool replace so it sounds like the disk was physically
replaced successfully.

If anyone else can comment or help Grant diagnose this issue, please
feel free...

Thanks,

Cindy




I had a similar problem -- swapped out the drive and now when I tried the zpool 
replace, I got an I/O error:


k01_dlk$ sudo zpool replace rpool c1t0d0s0
cannot open '/dev/dsk/c1t0d0s0': I/O error


I ran format and made sure the partition table matched the good root mirror, 
then was able to rerun the zpool replace:


k01_dlk$ sudo zpool replace -f rpool c1t0d0s0 c1t0d0s0
Please be sure to invoke installgrub(1M) to make 'c1t0d0s0' bootable.


Can I assume zfs_stage1_5 is correct?


k01_dlk$ cd /boot/grub/

k01_dlk$ ls
bin               fat_stage1_5      jfs_stage1_5     nbgrub             stage1           ufs_stage1_5
capability        ffs_stage1_5      menu.lst         pxegrub            stage2           vstafs_stage1_5
default           install_menu      menu.lst.orig    reiserfs_stage1_5  stage2_eltorito  xfs_stage1_5
e2fs_stage1_5     iso9660_stage1_5  minix_stage1_5   splash.xpm.gz      ufs2_stage1_5    zfs_stage1_5


Installgrub succeeded, but I'm a tad nervous about rebooting until I know for 
sure.


k01_dlk$ sudo installgrub zfs_stage1_5 stage2 /dev/rdsk/c1t0d0s0
stage1 written to partition 0 sector 0 (abs 16065)
stage2 written to partition 0, 272 sectors starting at 50 (abs 16115)


Thanks,

--Dave






___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss



Re: [zfs-discuss] Replacing a failed/failed mirrored root disk

2010-03-10 Thread Cindy Swearingen

Hi Grant,

I don't have a v240 to test but I think you might need to unconfigure
the disk first on this system.

So I would follow the more complex steps.

If this is a root pool, then yes, you would need to use the slice
identifier, and make sure it has an SMI disk label.

After the zpool replace operation and the disk resilvering is
complete, apply the boot blocks.

The steps would look like this:

# zpool offline rpool c2t1d0
# cfgadm -c unconfigure c1::dsk/c2t1d0
(physically replace the drive)
(confirm an SMI label and a s0 exists; a quick check is sketched below)
# cfgadm -c configure c1::dsk/c2t1d0
# zpool replace rpool c2t1d0s0
# zpool online rpool c2t1d0s0
# zpool status rpool /* to confirm the replacement/resilver is complete
# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c2t1d0s0
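
For the "confirm an SMI label and a s0 exists" step, a minimal check is
sketched here; it assumes c2t1d0 is the newly inserted disk, as in the
steps above:

# prtvtoc /dev/rdsk/c2t1d0s2
(the printed slice table should show a slice 0 of non-zero size; if the
disk carries an EFI label rather than SMI, relabel it first, e.g. via
the label command in format -e, before running zpool replace)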

Thanks,

Cindy


On 03/10/10 13:28, Grant Lowe wrote:

Please help me out here. I've got a V240 with the root drive, c2t0d0 mirrored 
to c2t1d0. The mirror is having problems, and I'm unsure of the exact procedure 
to pull the mirrored drive. I see in various googling:

zpool replace rpool c2t1d0 c2t1d0

or I've seen simply:

zpool replace rpool c2t1d0

or I've seen the much more complex:

zpool offline rpool c2t1d0
cfgadm -c unconfigure c1::dsk/c2t1d0
(replace the drive)
cfgadm -c configure c1::dsk/c2t1d0
zpool replace rpool c2t1d0s0
zpool online rpool c2t1d0s0

So which is it? Also, do I need to include the slice as in the last example?

Thanks.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss



Re: [zfs-discuss] Replacing a failed/failed mirrored root disk

2010-03-10 Thread Grant Lowe
Well, this system is Solaris 05/09, with patches from November. No snapshots 
running and no internal controllers. It's a file server attached to an HDS 
disk array. Help, and please respond ASAP as this is production! Even an IM 
would be helpful.

--- On Wed, 3/10/10, Cindy Swearingen cindy.swearin...@sun.com wrote:

 From: Cindy Swearingen cindy.swearin...@sun.com
 Subject: Re: [zfs-discuss] Replacing a failed/failed mirrored root disk
 To: Grant Lowe gl...@sbcglobal.net
 Cc: zfs-discuss@opensolaris.org
 Date: Wednesday, March 10, 2010, 1:09 PM
 Hi Grant,
 
 I don't have a v240 to test but I think you might need to unconfigure
 the disk first on this system.
 
 So I would follow the more complex steps.
 
 If this is a root pool, then yes, you would need to use the slice
 identifier, and make sure it has an SMI disk label.
 
 After the zpool replace operation and the disk resilvering is
 complete, apply the boot blocks.
 
 The steps would look like this:
 
 # zpool offline rpool c2t1d0
 # cfgadm -c unconfigure c1::dsk/c2t1d0
 (physically replace the drive)
 (confirm an SMI label and a s0 exists)
 # cfgadm -c configure c1::dsk/c2t1d0
 # zpool replace rpool c2t1d0s0
 # zpool online rpool c2t1d0s0
 # zpool status rpool /* to confirm the replacement/resilver is complete
 # installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c2t1d0s0
 
 Thanks,
 
 Cindy
 
 
 On 03/10/10 13:28, Grant Lowe wrote:
  Please help me out here. I've got a V240 with the root
 drive, c2t0d0 mirrored to c2t1d0. The mirror is having
 problems, and I'm unsure of the exact procedure to pull the
 mirrored drive. I see in various googling:
  
  zpool replace rpool c2t1d0 c2t1d0
  
  or I've seen simply:
  
  zpool replace rpool c2t1d0
  
  or I've seen the much more complex:
  
  zpool offline rpool c2t1d0
  cfgadm -c unconfigure c1::dsk/c2t1d0
  (replace the drive)
  cfgadm -c configure c1::dsk/c2t1d0
  zpool replace rpool c2t1d0s0
  zpool online rpool c2t1d0s0
  
  So which is it? Also, do I need to include the slice
 as in the last example?
  
  Thanks.
  
  ___
  zfs-discuss mailing list
  zfs-discuss@opensolaris.org
  http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
 
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Replacing a failed/failed mirrored root disk

2010-03-10 Thread Cindy Swearingen

Hey list,

Grant says his system is hanging after the zpool replace on a v240, 
running Solaris 10 5/09, 4 GB of memory, and no ongoing snapshots.


No errors from zpool replace so it sounds like the disk was physically
replaced successfully.

If anyone else can comment or help Grant diagnose this issue, please
feel free...

Thanks,

Cindy

On 03/10/10 16:19, Grant Lowe wrote:

Well, this system is Solaris 05/09, with patches from November. No snapshots 
running and no internal controllers. It's a file server attached to an HDS 
disk array. Help, and please respond ASAP as this is production! Even an IM 
would be helpful.

--- On Wed, 3/10/10, Cindy Swearingen cindy.swearin...@sun.com wrote:


From: Cindy Swearingen cindy.swearin...@sun.com
Subject: Re: [zfs-discuss] Replacing a failed/failed mirrored root disk
To: Grant Lowe gl...@sbcglobal.net
Cc: zfs-discuss@opensolaris.org
Date: Wednesday, March 10, 2010, 1:09 PM
Hi Grant,

I don't have a v240 to test but I think you might need to unconfigure
the disk first on this system.

So I would follow the more complex steps.

If this is a root pool, then yes, you would need to use the slice
identifier, and make sure it has an SMI disk label.

After the zpool replace operation and the disk resilvering is
complete, apply the boot blocks.

The steps would look like this:

# zpool offline rpool c2t1d0
# cfgadm -c unconfigure c1::dsk/c2t1d0
(physically replace the drive)
(confirm an SMI label and a s0 exists)
# cfgadm -c configure c1::dsk/c2t1d0
# zpool replace rpool c2t1d0s0
# zpool online rpool c2t1d0s0
# zpool status rpool /* to confirm the replacement/resilver is complete
# installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c2t1d0s0

Thanks,

Cindy


On 03/10/10 13:28, Grant Lowe wrote:

Please help me out here. I've got a V240 with the root
drive, c2t0d0 mirrored to c2t1d0. The mirror is having
problems, and I'm unsure of the exact procedure to pull the
mirrored drive. I see in various googling:

zpool replace rpool c2t1d0 c2t1d0

or I've seen simply:

zpool replace rpool c2t1d0

or I've seen the much more complex:

zpool offline rpool c2t1d0
cfgadm -c unconfigure c1::dsk/c2t1d0
(replace the drive)
cfgadm -c configure c1::dsk/c2t1d0
zpool replace rpool c2t1d0s0
zpool online rpool c2t1d0s0

So which is it? Also, do I need to include the slice
as in the last example?

Thanks.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
