Oops, meant to reply-all...

---------- Forwarded message ----------
From: Terry Heatlie <[EMAIL PROTECTED]>
Date: Wed, Oct 29, 2008 at 8:14 PM
Subject: Re: [zfs-discuss] zpool import problem
To: Eric Schrock <[EMAIL PROTECTED]>


Well, this does seem to be the case:

bash-3.2# dtrace -s raidz_open2.d
run 'zpool import' to generate trace

1145357764648 BEGIN RAIDZ OPEN
1145357764648 config asize = 1600340623360
1145357764648 config ashift = 9
1145358131986 child[0]: asize = 320071851520, ashift = 9
1145358861331 child[1]: asize = 400088457216, ashift = 9
1145396437606 child[2]: asize = 400088457216, ashift = 9
1145396891657 child[3]: asize = 320072933376, ashift = 9
1145397584944 child[4]: asize = 400087375360, ashift = 9
1145397920504 child[5]: asize = 400087375360, ashift = 9
1145398947963 asize = 1600335380480
1145398947963 ashift = 9
1145398947963 END RAIDZ OPEN
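
So the asize computed at open time is smaller than the asize recorded in the
pool config:

# echo $(( 1600340623360 - 1600335380480 ))
5242880

That's a 5 MiB shortfall, which looks consistent with one of the devices
having shrunk, as you suggested.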

But I still don't see any difference between the partition map of the drive
with only two labels and that of a good one (c2 is the bad drive, c4 is good):

# prtvtoc /dev/dsk/c2d0p0 > /tmp/vtoc_c2
# prtvtoc /dev/dsk/c4d0p0 > /tmp/vtoc_c4
# diff /tmp/vtoc_c2 /tmp/vtoc_c4
1c1
< * /dev/dsk/c2d0p0 partition map
---
> * /dev/dsk/c4d0p0 partition map
#
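
Maybe the VTOC isn't where a shrink would show up anyway, since the ZFS labels
(and the asize recorded in them) live inside the slice rather than in the
partition table. If it helps, dumping the labels directly with zdb -l should
show the asize recorded on each drive, though I'm guessing at the right device
node here:

# zdb -l /dev/dsk/c2d0p0   # p0 is a guess; point it at whatever node the vdev actually uses
# zdb -l /dev/dsk/c4d0p0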


On Tue, Oct 28, 2008 at 3:53 AM, Eric Schrock <[EMAIL PROTECTED]> wrote:

> These are the symptoms of a shrinking device in a RAID-Z pool.  You can
> try to run the attached script during the import to see if this is the
> case.  There's a bug filed on this, but I don't have it handy.
>
> [...]