Re: zfs boot size
> We always put swap directly after it so if a resize is needed it's easy
> without any resilvering.

i am an idiot

    raid0.dfw.rg.net:/root# gpart backup da0
    GPT 128
    1   freebsd-boot        34        128
    2   freebsd-swap       162   33554432
    3    freebsd-zfs  33554594 3873308477

so something such as

    # swapoff
    # gpart delete -i 1
    # gpart delete -i 2
    # gpart add -t freebsd-boot -i 1 -b 40 -s 256
    # gpart add -t freebsd-swap -i 2
    # some incantation to install zfs boot blocks
    # swapon

thanks for clue bat
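That boot-block incantation is the usual gptzfsboot install; a minimal sketch, assuming da0 is the disk and the new freebsd-boot partition is index 1 (as in the layout above):

    # gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 da0

Note that swapoff/swapon want the device unless run with -a, so with this layout the last step would be something like swapon /dev/da0p2.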
Re: zfs boot size
On Thu, Aug 16, 2018, 4:22 PM Steven Hartland wrote:
> The recommended size for a boot partition has been 512K for a while.
>
> We always put swap directly after it so if a resize is needed it's easy
> without any resilvering.
>
> If your pool is made up of partitions which are only 34 blocks smaller
> than your zfs partition you're likely going to need to dump and restore
> the entire pool as it won't accept vdevs smaller than the original.

Adding "-a 1M" to your gpart command when partitioning disks, regardless of their use, is very handy for this. It starts the first partition at 1 MB, which gives you enough slack to increase the size of the freebsd-boot partition as needed. :)

You can even add a freebsd-boot partition to make a data pool bootable as root with that amount of slack. :D Went through that at home.

And ZFS has reserved a few MB from the end of the device you give it, to allow for replacement drives that aren't the exact same size (in sectors or bytes), for a while now (maybe since the 9.x days?).

Cheers,
Freddie

Typos courtesy of my phone's keyboard.
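One way that layout could look as a sketch (ada1 and the sizes are placeholders, not from the thread):

    # gpart create -s gpt ada1
    # gpart add -t freebsd-boot -b 40 -s 512k ada1
    # gpart add -a 1M -t freebsd-swap -s 16g ada1
    # gpart add -a 1M -t freebsd-zfs ada1

Here freebsd-boot occupies 512K starting at LBA 40, and because swap is aligned to 1 MB it doesn't begin until LBA 2048, leaving roughly half a megabyte of slack for freebsd-boot to grow into without moving anything else. Likewise, a data-only disk whose first partition is 1 MB-aligned leaves room at the front to add a freebsd-boot partition later.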
Re: zfs boot size
The recommended size for a boot partition has been 512K for a while.

We always put swap directly after it so if a resize is needed it's easy without any resilvering.

If your pool is made up of partitions which are only 34 blocks smaller than your zfs partition, you're likely going to need to dump and restore the entire pool, as it won't accept vdevs smaller than the original.

Regards
Steve

On 16/08/2018 23:07, Randy Bush wrote:
> so the number of blocks one must reserve for zfs boot has gone from 34
> to 40.  is one supposed to, one at a time, drop each disk out of the
> pool, repartition, re-add, and resilver?  luckily, there are only 16
> drives, and resilvering a drive only takes a couple of days.  so we
> might be done with it this calendar year.
>
> and what is the likelihood we make it through this without some sort of
> disaster?
>
> clue bat, please?
>
> randy
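Should it come to dumping and restoring the whole pool, the usual route is a recursive snapshot plus zfs send/receive; a minimal sketch, with tank and newtank purely as placeholder pool names:

    # zfs snapshot -r tank@migrate
    # zfs send -R tank@migrate | zfs receive -F -d newtank

The -R flag sends the whole dataset tree with its properties, and -d on the receive side recreates the same dataset names under the new pool.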
zfs boot size
so the number of blocks one must reserve for zfs boot has gone from 34 to 40.  is one supposed to, one at a time, drop each disk out of the pool, repartition, re-add, and resilver?  luckily, there are only 16 drives, and resilvering a drive only takes a couple of days.  so we might be done with it this calendar year.

and what is the likelihood we make it through this without some sort of disaster?

clue bat, please?

randy
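For the record, a rough sketch of that per-disk cycle, using tank, da0, and the da0pN partition names purely as placeholders:

    # zpool offline tank da0p3
    # swapoff /dev/da0p2
    # gpart delete -i 2 da0
    # gpart delete -i 1 da0
    # gpart add -t freebsd-boot -i 1 -b 40 -s 512k da0
    # gpart add -t freebsd-swap -i 2 da0
    # gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 da0
    # swapon /dev/da0p2
    # zpool online tank da0p3

Because the freebsd-zfs partition itself is never touched, the zpool online step only resilvers what was written while the disk was offline rather than rebuilding the whole vdev.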