Robert Noland wrote:
> Olivier Smedts wrote:
>> 2010/4/1 Bartosz Stec <[email protected]>:
>>> Hello ZFS and GPT hackers :)
>>>
>>> I'm sending this message to both freebsd-current and freebsd-fs
>>> because it doesn't seem to be a CURRENT-specific issue.
>>>
>>> Yesterday I tried to migrate my mixed UFS/RAIDZ config to a clean
>>> RAIDZ with GPT boot. I mostly followed this guide:
>>> http://wiki.freebsd.org/RootOnZFS/GPTZFSBoot/RAIDZ1
>>>
>>> I'm using CURRENT on 3x40GB HDDs (ad0-ad3), and an additional 250GB
>>> HDD (ad4) was used for the data migration. Data was copied from the
>>> RAIDZ to the 250GB HDD, a GPT scheme was created on the 40GB HDDs,
>>> then a new zpool on top of them, and finally the data went back to
>>> the RAIDZ. Booting from the RAIDZ was successful; so far so good.
>>>
>>> After a while I noticed some SMART errors on ad1, so I booted the
>>> machine with SeaTools for DOS and ran a long test. One bad sector was
>>> found and reallocated; nothing to worry about.
>>>
>>> Since I was in SeaTools already, I decided to adjust the LBA size on
>>> that disk (SeaTools can do that), because it was about 30MB larger
>>> than the other two, and because of that I had had to adjust the size
>>> of the freebsd-zfs partition on that disk to match the exact size of
>>> the others (otherwise 'zpool create' would complain). So the LBA size
>>> was adjusted and the system rebooted.
>>
>> I don't understand why you adjusted the LBA size. You're using GPT
>> partitions, so couldn't you just make the zfs partition the same size
>> on all disks by sizing it to the smallest disk, and leave free space
>> at the end of the bigger ones?
>
> For that matter, my understanding is that ZFS just doesn't care. If you
> have disks of different sizes in a raidz, the pool size will be limited
> by the size of the smallest device. If those devices are replaced with
> larger ones, the pool just grows to take advantage of the additional
> available space.
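[Editor's note: Olivier's approach, capping every freebsd-zfs partition at the smallest disk's capacity, can be sketched as below. The disk names (ad0-ad2) and the overhead figures (34 sectors of GPT metadata at the front, 33 at the back, a 128-sector boot partition) are illustrative assumptions, not commands taken from the thread.]

```shell
#!/bin/sh
# Sketch, not from the thread: give every disk an identically sized
# freebsd-zfs partition by capping it at the smallest disk's capacity.
# Disk names and the GPT overhead figures are illustrative assumptions.

# find the smallest disk, in sectors (field 4 of stock diskinfo output
# is the media size in sectors)
smallest=$(for d in ad0 ad1 ad2; do
    diskinfo /dev/$d | awk '{print $4}'
done | sort -n | head -1)

# usable = total - primary GPT (34) - backup GPT (33) - boot partition (128)
zfs_size=$((smallest - 34 - 33 - 128))

for d in ad0 ad1 ad2; do
    gpart create -s gpt $d
    gpart add -t freebsd-boot -s 128 $d
    gpart add -t freebsd-zfs -s $zfs_size $d
done
```

[For a 1 GiB device (2097152 sectors) this arithmetic gives 2096957 sectors, which matches the freebsd-zfs size in the gpart output quoted later in the thread.]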
> balrog% gpart show
> =>     34  2097085  md0  GPT  (1.0G)
>        34      128    1  freebsd-boot  (64K)
>       162  2096957    2  freebsd-zfs  (1.0G)
>
> =>     34  2097085  md1  GPT  (1.0G)
>        34      128    1  freebsd-boot  (64K)
>       162  2096957    2  freebsd-zfs  (1.0G)
>
> =>     34  4194237  md2  GPT  (2.0G)
>        34      128    1  freebsd-boot  (64K)
>       162  4194109    2  freebsd-zfs  (2.0G)
>
> balrog% zpool status
>   pool: test
>  state: ONLINE
>  scrub: none requested
> config:
>
>         NAME        STATE   READ WRITE CKSUM
>         test        ONLINE     0     0     0
>           raidz1    ONLINE     0     0     0
>             md0p2   ONLINE     0     0     0
>             md1p2   ONLINE     0     0     0
>             md2p2   ONLINE     0     0     0
>
> errors: No known data errors
>
> balrog% zpool list
> NAME   SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
> test  2.98G   141K  2.98G   0%  ONLINE  -
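[Editor's note: Robert's mixed-size demonstration can be reproduced with file-backed md(4) devices. A rough sketch follows; it requires root on FreeBSD, and the file paths and md unit numbers are arbitrary choices, not taken from the thread.]

```shell
#!/bin/sh
# Sketch: rebuild the mixed-size raidz1 shown above from file-backed
# md(4) devices. Requires root on FreeBSD; names are arbitrary.
truncate -s 1g /tmp/disk0 /tmp/disk1
truncate -s 2g /tmp/disk2

for i in 0 1 2; do
    mdconfig -a -t vnode -f /tmp/disk$i -u $i
    gpart create -s gpt md$i
    gpart add -t freebsd-boot -s 128 md$i
    gpart add -t freebsd-zfs md$i          # takes the rest of the disk
done

# raidz1 across unequal members; capacity is bounded by the 1G partitions
zpool create test raidz md0p2 md1p2 md2p2
```

[The 2 GiB member ends up with a 4194109-sector freebsd-zfs partition, as in the gpart output above, yet the pool size is governed by the two smaller members.]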
> robert.
Yes, I was aware that changing the disk size would probably end with a corrupted GPT and data loss, but that didn't seem like a big deal to me, because as long as 2/3 of the zpool was alive I could always recreate the GPT and resilver ad1.

Unfortunately it wasn't so easy. First of all, the system booted and, as I expected, a kernel message showed a GPT error on ad1. The zpool was degraded but alive and kicking. However, when I tried to execute any gpart command on ad1, it returned:

ad1: no such geom

ad1 was present under /dev, and it could be accessed by sysinstall/fdisk, but not with gpart. I created a BSD slice on ad1 with sysinstall and rebooted, hoping that afterwards I could access ad1 with gpart and recreate the GPT scheme. Another surprise: the system didn't boot at all, resetting after a couple of seconds in the loader (changing the boot device didn't make a difference).

The only way I could boot the system at that point was to connect the 250GB HDD, which fortunately still held the data from the zpool migration, and boot from it. Another surprise: the kernel was still complaining about GPT corruption on ad1. I had no other ideas, so I ran

dd if=/dev/zero of=/dev/ad1 bs=512 count=512

to clear the beginning of the disk. After that the disk was still inaccessible from gpart, so I used sysinstall/fdisk again to create a standard BSD partitioning scheme and rebooted the system. After that, gpart finally started talking to ad1, and the GPT scheme and zpool have been recreated and work as they should.

Still, how can we clear broken GPT data after it gets corrupted? Why was gpart reporting "ad1: no such geom", and how can we deal with this problem? Finally, why did gptzfsboot fail, with the GPT corrupted on the other disk, after I tried to fix it, when it had booted in the first place?

Or maybe changing the LBA size of an already-partitioned HDD is an extreme case, and the only way these problems can be triggered ;)?

Cheers!
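[Editor's note: one likely reason the dd above didn't help is that GPT keeps a second copy of its header and partition table at the very end of the disk, so zeroing only the first sectors leaves the backup copy in place. A hedged sketch of wiping both copies follows; the device name is an example, the 34/33-sector figures assume 512-byte sectors, and dd here is destructive, so verify the target first.]

```shell
#!/bin/sh
# Sketch: zero BOTH GPT copies. The PMBR plus the primary GPT header
# and table occupy the first 34 sectors; the backup table and header
# occupy the last 33. Device name is an example; double-check it,
# because dd against the wrong disk destroys data.
DISK=/dev/ad1
sectors=$(diskinfo $DISK | awk '{print $4}')   # media size in sectors

# wipe the PMBR and the primary GPT header/table
dd if=/dev/zero of=$DISK bs=512 count=34

# wipe the backup GPT table and header at the end of the disk
dd if=/dev/zero of=$DISK bs=512 seek=$((sectors - 33)) count=33
```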
--
Bartosz Stec

_______________________________________________
[email protected] mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-fs
To unsubscribe, send any mail to "[email protected]"

_______________________________________________
[email protected] mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-current
To unsubscribe, send any mail to "[email protected]"
_______________________________________________ [email protected] mailing list http://lists.freebsd.org/mailman/listinfo/freebsd-current To unsubscribe, send any mail to "[email protected]"
