Re: LLD: man pages missing?
On 25 December 2017 at 15:25, Dimitry Andric wrote:
> Since lld is now approaching a quite usable state, maybe it is time for a request to upstream to provide [a manpage]. ;)

Yes, it would have been nice if an upstream man page had been created early on and kept up to date as features were added. In any case, lld is otherwise close to being ready to be installed as /usr/bin/ld on FreeBSD/amd64, and I suspect we might have to just create the man page ourselves (and send it upstream).

Other than the man page, I'm aware of some issues with ifunc support discovered by kib@ while working on kernel ifunc. This doesn't affect FreeBSD-HEAD yet (as we don't use ifunc today) but will soon be important.

There's also additional work needed for the ports tree, although right now ~99.5% of the ports collection builds when lld is /usr/bin/ld. Most ports that were failing with lld have either been fixed, or worked around via LLD_UNSAFE so that the port continues using ld.bfd.

___
freebsd-current@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-current
To unsubscribe, send any mail to "freebsd-current-unsubscr...@freebsd.org"
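For anyone wanting to try lld as the system linker ahead of the default flip, a minimal sketch. The WITH_LLD_IS_LD spelling of the src.conf(5) knob is an assumption (the thread only says "LLD_IS_LD=YES"); check your tree's src.conf(5) before relying on it:

```shell
# /etc/src.conf -- assumed knob name, verify against src.conf(5)
# WITH_LLD_IS_LD=yes

# After the next buildworld/installworld, confirm which linker
# /usr/bin/ld actually is; lld identifies itself as "LLD":
ld --version | head -1
```

If the first line of output does not mention LLD, the knob did not take effect and ld.bfd is still installed.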
Re: ZFS: alignment/boundary for partition type freebsd-zfs
Yes, it does know how to figure it out based on stripe size.

On Tue, 26 Dec 2017 at 20:25, O. Hartmann wrote:
> [...]
>
> I never used any installation scripts so far.
>
> Before I replaced the pool's drives, I tried to search for information on how to do it. This important tiny fact must have slipped through - or it is very badly documented. I didn't find a hint in tuning(7), which is the man page I consulted first.
>
> Luckily, as Allan Jude stated, the disk recognition was correct (I guess stripesize instead of blocksize is taken?).
>
> --
> O. Hartmann
>
> I object to the use or transmission of my data for advertising purposes or for market or opinion research (§ 28 Abs. 4 BDSG).
Re: ZFS: alignment/boundary for partition type freebsd-zfs
On Tue, 26 Dec 2017 09:31:53 -0800 (PST), "Rodney W. Grimes" wrote:
> [...]
>
> And more than likely, if you used the bsdinstall from one of the distributions to set up the system you created the ZFS pool from, it has the sysctl in /boot/loader.conf. The default for all (recent?) bsdinstalls is that 4k is used, and the sysctl gets written to /boot/loader.conf at install time, so from then on all pools you create shall also be 4k. You have to change a default during the system install to change this to 512.

I never used any installation scripts so far.

Before I replaced the pool's drives, I tried to search for information on how to do it. This important tiny fact must have slipped through - or it is very badly documented. I didn't find a hint in tuning(7), which is the man page I consulted first.

Luckily, as Allan Jude stated, the disk recognition was correct (I guess stripesize instead of blocksize is taken?).
Re: ZFS: alignment/boundary for partition type freebsd-zfs
You only need to set the minimum if the drives hide their true sector size, as Allan mentioned. camcontrol identify is one of the easiest ways to check this. If the pool reports ashift 12, then ZFS correctly detected the drives as 4k, so that part is good.

On Tue, 26 Dec 2017 at 20:15, Rodney W. Grimes <freebsd-...@pdx.rh.cn85.dnsmgr.net> wrote:
> [...]
>
> And more than likely, if you used the bsdinstall from one of the distributions to set up the system you created the ZFS pool from, it has the sysctl in /boot/loader.conf. The default for all (recent?) bsdinstalls is that 4k is used, and the sysctl gets written to /boot/loader.conf at install time, so from then on all pools you create shall also be 4k. You have to change a default during the system install to change this to 512.
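As a concrete illustration of the camcontrol check mentioned above (the device name ada0 is an assumption; substitute your drive). A 512e drive that reports its geometry honestly shows a 512-byte logical and 4096-byte physical sector size:

```shell
# Show the sector-size line from the drive's IDENTIFY data (FreeBSD).
# "logical 512, physical 4096" indicates a 4Kn drive with 512e emulation.
camcontrol identify ada0 | grep -i 'sector size'
```

If the physical size is (mis)reported as 512 here, that is exactly the case where vfs.zfs.min_auto_ashift=12 (or a quirk) is needed.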
Re: ZFS: alignment/boundary for partition type freebsd-zfs
> On Tue, Dec 26, 2017 at 10:04 AM, O. Hartmann wrote:
> > [...]
> >
> > I didn't do the sysctl vfs.zfs.min_auto_ashift=12 :-(( when I created the vdev. What is the consequence of that for the pool? I lived under the impression that this is necessary for "native 4k" drives.
> >
> > How can I check what ashift is in effect for a specific vdev?
>
> It's only necessary if your drive stupidly fails to report its physical sector size correctly, and no other FreeBSD developer has already written a quirk for that drive. Do "zdb -l /dev/adaXXXpY" for any one of the partitions in the ZFS raid group in question. It should print either "ashift: 12" or "ashift: 9".

And more than likely, if you used the bsdinstall from one of the distributions to set up the system you created the ZFS pool from, it has the sysctl in /boot/loader.conf. The default for all (recent?) bsdinstalls is that 4k is used, and the sysctl gets written to /boot/loader.conf at install time, so from then on all pools you create shall also be 4k. You have to change a default during the system install to change this to 512.

--
Rod Grimes
rgri...@freebsd.org
Re: ZFS: alignment/boundary for partition type freebsd-zfs
On Tue, 26 Dec 2017 10:13:09 -0700, Alan Somers wrote:
> [...]
>
> It's only necessary if your drive stupidly fails to report its physical sector size correctly, and no other FreeBSD developer has already written a quirk for that drive. Do "zdb -l /dev/adaXXXpY" for any one of the partitions in the ZFS raid group in question. It should print either "ashift: 12" or "ashift: 9".
>
> -Alan

I checked as suggested and all partitions report ashift: 12. So I guess I'm safe and sound and do not need to rebuild the pools ...?
Re: ZFS: alignment/boundary for partition type freebsd-zfs
On Tue, 26 Dec 2017 11:44:29 -0500, Allan Jude wrote:
> [...]
>
> However, the replacement for the GNOP hack is separate. In addition to aligning the partitions to 4k, you have to tell ZFS that the drive is 4k:
>
> sysctl vfs.zfs.min_auto_ashift=12
>
> (2^12 = 4096)
>
> Before you create the pool, or add additional vdevs.

I just checked with "zdb" what ashift is reported for my pool(s), and the result claims to be "ashift: 12".
Re: ZFS: alignment/boundary for partition type freebsd-zfs
On Tue, Dec 26, 2017 at 10:04 AM, O. Hartmann wrote:
> [...]
>
> I didn't do the sysctl vfs.zfs.min_auto_ashift=12 :-(( when I created the vdev. What is the consequence of that for the pool? I lived under the impression that this is necessary for "native 4k" drives.
>
> How can I check what ashift is in effect for a specific vdev?

It's only necessary if your drive stupidly fails to report its physical sector size correctly, and no other FreeBSD developer has already written a quirk for that drive. Do "zdb -l /dev/adaXXXpY" for any one of the partitions in the ZFS raid group in question. It should print either "ashift: 12" or "ashift: 9".

-Alan
Re: ZFS: alignment/boundary for partition type freebsd-zfs
On Tue, 26 Dec 2017 11:44:29 -0500, Allan Jude wrote:
> [...]
>
> The 1mb alignment is not required. It is just what I do to leave room for the other partition types before the ZFS partition.
>
> However, the replacement for the GNOP hack is separate. In addition to aligning the partitions to 4k, you have to tell ZFS that the drive is 4k:
>
> sysctl vfs.zfs.min_auto_ashift=12
>
> (2^12 = 4096)
>
> Before you create the pool, or add additional vdevs.

I didn't do the sysctl vfs.zfs.min_auto_ashift=12 :-(( when I created the vdev. What is the consequence of that for the pool? I lived under the impression that this is necessary for "native 4k" drives.

How can I check what ashift is in effect for a specific vdev?
[SOLVED] Re: LLD_IS_LD: error compiling world after switch: incompatible with /usr/lib/crt1.o ...
On Tue, 26 Dec 2017 11:36:27 +0100, "O. Hartmann" wrote:
> On two boxes running almost the same CURRENT (the non-working one is at FreeBSD 12.0-CURRENT #241 r327182: Mon Dec 25 22:45:06 CET 2017 amd64; the working one is at r327183) I dared, again, to flip the switch LLD_IS_LD=YES.
>
> [...]
>
> Building /usr/obj/usr/src/amd64.amd64/obj-lib32/lib/libmagic/mkmagic
> --- mkmagic ---
> /usr/bin/ld: error: /usr/obj/usr/src/amd64.amd64/obj-lib32/lib/libz/libz.so is incompatible with /usr/lib/crt1.o
> cc: error: linker command failed with exit code 1 (use -v to see invocation)
> *** [mkmagic] Error code 1

Deleting "/usr/obj/usr/src/amd64.amd64/obj-lib32/lib/libz/libz.so" solved the problem.
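The same kind of failure could in principle hit other stale lib32 shared objects left over from the previous linker, not just libz.so. A hypothetical broader cleanup, assuming the standard object tree path shown in the log above and that relinking all lib32 shared libraries on the next buildworld is acceptable:

```shell
# Remove previously linked 32-bit shared libraries from the object tree
# so the next buildworld relinks them with the new linker.
# Only touches the object tree, never the installed system.
find /usr/obj/usr/src/amd64.amd64/obj-lib32 -type f -name '*.so*' -delete
```

A full `rm -rf` of the object tree would also work, at the cost of the 90-minute rebuild the poster wanted to avoid.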
Re: ZFS: alignment/boundary for partition type freebsd-zfs
On 2017-12-26 11:24, O. Hartmann wrote:
> Running recent CURRENT on most of our lab's boxes, I was in need to replace and restore a ZFS RAIDZ pool. [...] I've created the only and sole partition on each 4 TB drive via the command sequence
>
> gpart create -s GPT adaX
> gpart add -t freebsd-zfs -a 4k -l nameXX adaX
>
> [...] many blogs recommend performing
>
> gpart add -t freebsd-zfs -b 1m -a 4k -l nameXX adaX
>
> to put the partition boundary at the 1 Megabyte boundary. I didn't do that. My partitions all start now at block 40.
>
> My question is: will this have severe performance consequences or is that negligible?

The 1mb alignment is not required. It is just what I do to leave room for the other partition types before the ZFS partition.

However, the replacement for the GNOP hack is separate. In addition to aligning the partitions to 4k, you have to tell ZFS that the drive is 4k:

sysctl vfs.zfs.min_auto_ashift=12

(2^12 = 4096)

Before you create the pool, or add additional vdevs.

--
Allan Jude
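The ashift value is simply the base-2 logarithm of the sector size ZFS will use, so the two values seen in this thread work out as:

```shell
# ashift is log2(sector size): 12 -> 4096-byte sectors, 9 -> 512-byte.
echo $(( 1 << 12 ))
echo $(( 1 << 9 ))
```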
ZFS: alignment/boundary for partition type freebsd-zfs
Running recent CURRENT on most of our lab's boxes, I was in need to replace and restore a ZFS RAIDZ pool. Doing so, I needed to partition the disks I was about to replace. Well, the drives in question are 4k block size drives with 512b emulation - as most of them are today. I've created the only and sole partition on each 4 TB drive via the command sequence

gpart create -s GPT adaX
gpart add -t freebsd-zfs -a 4k -l nameXX adaX

After doing this on all the drives I was about to replace, something drove me to check on the net, and I found a lot of websites giving "advice" on how to prepare large, modern drives for ZFS. I think the GNOP trick is not necessary any more, but many blogs recommend performing

gpart add -t freebsd-zfs -b 1m -a 4k -l nameXX adaX

to put the partition boundary at the 1 Megabyte boundary. I didn't do that. My partitions all start now at block 40.

My question is: will this have severe performance consequences or is that negligible?

Since most of the websites I found via "zfs freebsd alignement" are from years ago, I'm a bit confused now, and the thought of performing this days-taking resilvering process all over again would make me lose some more hair beyond the usual "fallout" ...

Thanks in advance,

Oliver
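For what it's worth, whether a start at block 40 is 4k-aligned is plain arithmetic: GPT LBAs on these drives are 512-byte sectors, so a partition is 4k-aligned whenever start * 512 is a multiple of 4096. A quick sketch with the values from this post:

```shell
# Is a partition starting at LBA 40 (512-byte sectors) 4096-byte aligned?
start_lba=40
offset_bytes=$(( start_lba * 512 ))
if [ $(( offset_bytes % 4096 )) -eq 0 ]; then
    echo "start LBA $start_lba is 4k-aligned"
else
    echo "start LBA $start_lba is NOT 4k-aligned"
fi
```

Block 40 gives an offset of 20480 bytes, which is 5 * 4096, so the partitions in question are aligned even without the -b 1m option.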
LLD_IS_LD: error compiling world after switch: incompatible with /usr/lib/crt1.o ...
On two boxes running almost the same CURRENT (the non-working one is at FreeBSD 12.0-CURRENT #241 r327182: Mon Dec 25 22:45:06 CET 2017 amd64; the working one is at r327183) I dared, again, to flip the switch LLD_IS_LD=YES.

On the box in question, building world magically stops, sometimes dumping an error; see below. The other one is performing well and builds world without problems. Due to resource limitations, I build kernel and world on both systems with META_MODE.

When I switched over to LLD again, I didn't clear the whole obj tree - I do not want to, since my resources are, as said, limited, and build times of 90 minutes are a bit of a burden at the moment. So, is there perhaps a workaround for this?

Thanks in advance,
Oliver

[...]
Building /usr/obj/usr/src/amd64.amd64/obj-lib32/lib/libmagic/mkmagic
--- mkmagic ---
/usr/bin/ld: error: /usr/obj/usr/src/amd64.amd64/obj-lib32/lib/libz/libz.so is incompatible with /usr/lib/crt1.o
cc: error: linker command failed with exit code 1 (use -v to see invocation)
*** [mkmagic] Error code 1

make[2]: stopped in /usr/src/lib/libmagic
.ERROR_TARGET='mkmagic'
.ERROR_META_FILE='/usr/obj/usr/src/amd64.amd64/obj-lib32/lib/libmagic/mkmagic.meta'