Re: GEOM corrupt or invalid GPT detected on ZFS raid on FreeBSD 8.0 x64

2010-01-15 Thread Kirk Strauser

On 01/08/10 09:56, Derrick Ryalls wrote:

> After not getting daily system mails for a while and then suddenly
> getting them again, I took a closer look and noticed this message
> appears after a boot:
>
> +GEOM: ad4: corrupt or invalid GPT detected.
> +GEOM: ad4: GPT rejected -- may not be recoverable.
> +GEOM: label/disk1: corrupt or invalid GPT detected.
> +GEOM: label/disk1: GPT rejected -- may not be recoverable.
>
> label/disk1 should be the same device as ad4, and it is part of a
> 4-disk raidz.


My guess is that ZFS overwrote the label. The two aren't very
compatible, to the best of my knowledge.
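
If you want to check, a GPT keeps its primary header in sector 1 (LBA 1)
and a backup copy in the very last sector of the disk -- the same sector
glabel(8) uses for its metadata -- so the two really can clobber each
other. Here is a rough sketch of how I'd peek at the primary header, and
zero it only if you're certain the table is a leftover from the disk's
previous life (the backup file path is just an example; double-check the
device name before writing anything):

# read-only look at LBA 1 for the "EFI PART" signature
dd if=/dev/ad4 bs=512 skip=1 count=1 | hexdump -C | head -4

# only if the table is definitely stale: save the sector, then zero it
dd if=/dev/ad4 of=/root/ad4-lba1.bak bs=512 skip=1 count=1
dd if=/dev/zero of=/dev/ad4 bs=512 seek=1 count=1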


--
Kirk Strauser



GEOM corrupt or invalid GPT detected on ZFS raid on FreeBSD 8.0 x64

2010-01-08 Thread Derrick Ryalls
Greetings,

After not getting daily system mails for a while and then suddenly
getting them again, I took a closer look and noticed this message
appears after a boot:

+GEOM: ad4: corrupt or invalid GPT detected.
+GEOM: ad4: GPT rejected -- may not be recoverable.
+GEOM: label/disk1: corrupt or invalid GPT detected.
+GEOM: label/disk1: GPT rejected -- may not be recoverable.

label/disk1 should be the same device as ad4, and it is part of a
4-disk raidz.  When I check the status of my pools, all is reported
fine:


[r...@frodo ~]# zpool status
  pool: backup
 state: ONLINE
 scrub: none requested
config:

NAME            STATE     READ WRITE CKSUM
backup          ONLINE       0     0     0
  label/backup  ONLINE       0     0     0

errors: No known data errors

  pool: storage
 state: ONLINE
 scrub: none requested
config:

NAME             STATE     READ WRITE CKSUM
storage          ONLINE       0     0     0
  raidz1         ONLINE       0     0     0
    label/disk1  ONLINE       0     0     0
    label/disk2  ONLINE       0     0     0
    label/disk3  ONLINE       0     0     0
    label/disk4  ONLINE       0     0     0
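
glabel also still shows each label attached to the device I expect.
From memory the output looks something like this (only disk1 shown; the
Components column is the raw device):

[r...@frodo ~]# glabel status
       Name  Status  Components
label/disk1     N/A  ad4
...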


Checking back through the logs, it looks like this started to occur
after I did a disk replacement test for ZFS.  Going from memory, I
performed the following steps (sketched as commands after the list):

* Took the disk offline
* Powered down the system
* Replaced the physical disk
* Powered up the system
* Used glabel to label the new disk with the same name as old disk
* Told ZFS to replace the disk
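
In terms of commands, the label and replace steps were roughly the
following (reconstructed from memory; disk1 and ad4 stand in for
whichever slot actually got swapped):

zpool offline storage label/disk1
  (power down, swap the physical disk, power up)
glabel label disk1 ad4
zpool replace storage label/disk1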

The operation appeared to be a success, in that the drive resilvered and
the pool is listed as online.  Following the advice in this thread
http://forums.freebsd.org/showthread.php?t=8920&page=3 I tried:

[r...@frodo ~]# zdb -l /dev/ad4
--------------------------------------------
LABEL 0
--------------------------------------------
    version=13
    name='storage'
    state=0
    txg=509115
    pool_guid=3832644769924830246
    hostid=400837641
    hostname='myhost'
    top_guid=7378337929137643727
    guid=8898281456854820018
    vdev_tree
        type='raidz'
        id=0
        guid=7378337929137643727
        nparity=1
        metaslab_array=23
        metaslab_shift=36
        ashift=9
        asize=8001576501248
        is_log=0
        children[0]
                type='disk'
                id=0
                guid=8898281456854820018
                path='/dev/label/disk1'
                whole_disk=0
                DTL=122
        children[1]
                type='disk'
                id=1
                guid=1353516608832566
                path='/dev/label/disk2'
                whole_disk=0
                DTL=126
        children[2]
                type='disk'
                id=2
                guid=2985688821708093695
                path='/dev/label/disk3'
                whole_disk=0
                DTL=125
        children[3]
                type='disk'
                id=3
                guid=16498259053924061255
                path='/dev/label/disk4'
                whole_disk=0
                DTL=124
--------------------------------------------
LABEL 1
--------------------------------------------
    version=13
    name='storage'
    state=0
    txg=509115
    pool_guid=3832644769924830246
    hostid=400837641
    hostname='myhost'
    top_guid=7378337929137643727
    guid=8898281456854820018
    vdev_tree
        type='raidz'
        id=0
        guid=7378337929137643727
        nparity=1
        metaslab_array=23
        metaslab_shift=36
        ashift=9
        asize=8001576501248
        is_log=0
        children[0]
                type='disk'
                id=0
                guid=8898281456854820018
                path='/dev/label/disk1'
                whole_disk=0
                DTL=122
        children[1]
                type='disk'
                id=1
                guid=1353516608832566
                path='/dev/label/disk2'
                whole_disk=0
                DTL=126
        children[2]
                type='disk'
                id=2
                guid=2985688821708093695
                path='/dev/label/disk3'
                whole_disk=0
                DTL=125
        children[3]
                type='disk'
                id=3
                guid=16498259053924061255
                path='/dev/label/disk4'
                whole_disk=0
                DTL=124
--------------------------------------------
LABEL 2
--------------------------------------------
    version=13
    name='storage'
    state=0
    txg=509115
    pool_guid=3832644769924830246
    hostid=400837641
    hostname='myhost'
    top_guid=7378337929137643727
    guid=8898281456854820018
    vdev_tree
        type='raidz'
        id=0
        guid=7378337929137643727
        nparity=1
        metaslab_array=23
        metaslab_shift=36
        ashift=9
        asize=8001576501248
        is_log=0
        children[0]