Re: I get odd time reports from poudriere on armv7 system, under a (non-debug) main [so: 14] FreeBSD.

2021-09-28 Thread Bryan Drewery
On 9/26/2021 11:05 PM, Mark Millard wrote:
> On 2021-Sep-26, at 10:02, Ian Lepore  wrote:
> 
>> On Sun, 2021-09-26 at 02:27 -0700, Mark Millard via freebsd-current
>> wrote:
>>> On 2021-Sep-25, at 23:25, Mark Millard  wrote:
>>>
>>>
>>> [...]
>>> if (argc == 3 && strcmp(argv[2], "-nsec") == 0)
>>> printf("%ld.%ld\n", ts.tv_sec, ts.tv_nsec);
>>
>> There are two problems with this, both the seconds and nanos are
>> printed incorrectly.  The correct incantation would be
>>
>>  printf("%jd.%09ld\n", (intmax_t)ts.tv_sec, ts.tv_nsec);
>>
> 
> Thanks Ian for looking into more than I did last night.
> 
> Based on the following (up to possible e-mail white space issues),
> poudriere-devel seems to be working for reporting times:
> 
> # more /usr/ports/ports-mgmt/poudriere-devel/files/patch-clock 
> --- src/libexec/poudriere/clock/clock.c.orig 2021-09-26 22:24:54.735485000 -0700
> +++ src/libexec/poudriere/clock/clock.c 2021-09-26 11:46:12.076362000 -0700
> @@ -24,6 +24,7 @@
>   * THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
>   */
>  
> +#include 
>  #include 
>  #include 
>  #include 
> @@ -71,8 +72,8 @@
> } else
> usage();
> if (argc == 3 && strcmp(argv[2], "-nsec") == 0)
> -   printf("%ld.%ld\n", ts.tv_sec, ts.tv_nsec);
> +   printf("%jd.%09ld\n", (intmax_t)ts.tv_sec, ts.tv_nsec);
> else
> -   printf("%ld\n", ts.tv_sec);
> +   printf("%jd\n", (intmax_t)ts.tv_sec);
> return (EXIT_SUCCESS);
>  }
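For anyone skimming the patch: the fix has two halves. Casting tv_sec to
intmax_t matters because on armv7 time_t is wider than long, so the old
"%ld" pulled the wrong bits off the argument list and garbled the reported
seconds; zero-padding the nanoseconds with "%09ld" matters because otherwise
small nsec values read as large fractions. A rough illustration of the
padding half, using printf(1) only because it is shorter than a C program:

  printf '%d.%d\n'   5 7    # prints 5.7         - reads as 5.7 seconds
  printf '%d.%09d\n' 5 7    # prints 5.000000007 - the intended timestamp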


Thanks, I've committed it in my local git. Will push out later.


-- 
Bryan Drewery





Re: [HEADSUP] making /bin/sh the default shell for root

2021-09-28 Thread Mateusz Piotrowski

On 23/09/2021 10:55, Hans Ottevanger wrote:
> As you mention elsewhere in this thread, usage in scripts is not affected by these changes. And
> for interactive use it could be a POLA violation, but the astonishment would be a positive one.


Unfortunately, the switch from csh to sh is going to affect scripts. Take a look at this (examples 
below assume that toor uses /bin/sh):


# su root -c 'echo $1' abc def
abc
# su toor -c 'echo $1' abc def
def

Another example:

# ssh -p  root@xxx -- '$@' echo 1 2 3
Illegal variable name.
# ssh -p  toor@xxx -- '$@' echo 1 2 3
1 2 3
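For anyone wondering where the extra words go, here is a minimal sketch that
invokes the shells directly instead of through su (assuming the stock /bin/sh
and /bin/csh):

  /bin/sh  -c 'echo $1' abc def   # prints "def": sh assigns abc to $0 and def to $1
  /bin/csh -c 'echo $1' abc def   # prints "abc": csh places abc and def into argv, so $1 is abc

The ssh example fails under csh for a related reason: $@ is not a valid csh
variable name at all, hence the "Illegal variable name." error.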

I've been bitten by this a couple of times when working with some production scripts. I'm afraid 
changing the default shell to sh may cause some hard-to-debug problems out there in the wild.


Otherwise, I'd be happy with having sh(1) as the default shell for root. From my perspective, sh(1) 
was far more forgiving to my colleagues when they started with FreeBSD.


Best,

Mateusz Piotrowski





Re: Building ZFS disk images

2021-09-28 Thread Mina Galić



---snip---

*puts on cloud-init contributor hat*


> > No, because you might create a VM image once, then instantiate it
> > dozens or thousands of times. The firstboot solution is great because
> > it lets you reuse the same image file.
>
> I would continue to argue that the place to fix this is in the
> "instantiate tool". ESXI vmfs deals with this all the time
> when you clone a disk. And again the "fix at boot" does not
> deal with the problem in that if I "instantiate" 10 copies of
> a zpool for VM's and then try to mount 2 of them at once on
> the host this problem rears its head. Fix the problem as close
> to point of creation as possible for minimal issues in all
> operations for everyone.

A lot of folks use cloud-init for provisioning different Unices onto
different virtualisation (cloud) platforms.

We could fix it there.
We already extend partitions, filesystems and ZFS pools in cloud-init.
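On FreeBSD, "extend partitions, filesystems and ZFS pools" boils down to
roughly the following commands (illustrative only; the device, partition
index and pool name are assumptions, and cloud-init's actual code paths may
differ):

  gpart recover da0             # refresh the GPT after the backing disk grew
  gpart resize -i 3 da0         # grow partition 3 into the new space
  zpool online -e zroot da0p3   # let the pool expand onto the larger vdev
  # for a UFS root the last step would instead be: growfs -y /dev/da0p3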

Now, again, one could argue that's the wrong place to do any of that,
and we should just be using firstboot.
But the problem seems to be that a lot of folks out there got into the
habit of creating and publishing (FreeBSD) images, have forgotten or
never knew about firstboot, and don't set it.


Mina Galić

Web: https://igalic.co/
PkgBase: https://alpha.pkgbase.live/



Re: Building ZFS disk images

2021-09-28 Thread Alan Somers
On Tue, Sep 28, 2021 at 10:15 AM Rodney W. Grimes
 wrote:
>
> > On Tue, Sep 28, 2021 at 9:48 AM Rodney W. Grimes
> >  wrote:
> > >
> > > > On Mon, Sep 27, 2021 at 1:54 PM Mark Johnston  wrote:
> > > > >
> > > > > On Thu, Aug 05, 2021 at 10:54:19AM -0500, Alan Somers wrote:
> > > > > > There's this:
> > > > > > https://openzfs.github.io/openzfs-docs/man/8/zpool-reguid.8.html .  
> > > > > > I
> > > > > > haven't used it myself.
> > > > >
> > > > > Would it be useful to have an rc.d script that can run this, probably
> > > > > just on the root pool?  It could be configured to run only upon the
> > > > > first boot, like growfs already does.
> > > >
> > > > Absolutely!
> > >
> > > Eww!  :-)
> > >
> > > > >
> > > > > > On Thu, Aug 5, 2021, 9:29 AM David Chisnall  
> > > > > > wrote:
> > > > > >
> > > > > > > On 05/08/2021 13:53, Alan Somers wrote:
> > > > > > > > I don't know of any way to do it using the official release 
> > > > > > > > scripts
> > > > > > > > either. One problem is that every ZFS pool and file system is 
> > > > > > > > supposed
> > > > > > > > to have a unique GUID.  So any kind of ZFS release builder 
> > > > > > > > would need to
> > >  ^^^
> > > > > > > > re-guid the pool on first boot.
> > >
> > > Isn't the proper place to solve this lack of Unique UUID creation
> > > in the tool(s) that are creating the zfs pool in the first place?
> > >
> > > Fixing it "post boot" seems to be a far too late hack and doesn't
> > > fix any of the situations where one might import these pools
> > > between creation and first boot.
> >
> > No, because you might create a VM image once, then instantiate it
> > dozens or thousands of times.  The firstboot solution is great because
> > it lets you reuse the same image file.
>
> I would continue to argue that the place to fix this is in the
> "instantiate tool".  ESXI vmfs deals with this all the time
> when you clone a disk.  And again the "fix at boot" does not
> deal with the problem in that if I "instantiate" 10 copies of
> a zpool for VM's and then try to mount 2 of them at once on
> the host this problem rears its head.  Fix the problem as close
> to point of creation as possible for minimal issues in all
> operations for everyone.

But that requires ESXI, or whatever VM system you're using, to know
about ZFS and GPT, and to know to look for a zpool on the 3rd
partition, right?  That seems like a lot to ask, especially since the
logic would have to be duplicated for ESXI, vm-bhyve, OpenNebula, etc
etc.

>
> >
> > >
> > > > > > >
> > > > > > > Is there a tool / command to do this?  I've hit this problem in 
> > > > > > > the
> > > > > > > past: I have multiple FreeBSD VMs that are all created from the 
> > > > > > > same
> > > > > > > template and if one dies I can't import its zpool into another 
> > > > > > > because
> > > > > > > they have the same UUID.
> > > > > > >
> > > > > > > It doesn't matter for modern deployments where the VM is 
> > > > > > > stateless and
> > > > > > > reimaged periodically but it's annoying for classic deployments 
> > > > > > > where I
> > > > > > > have things I care about on the VM.
> > > > > > >
> > > > > > > David
> > > >
> > > >
> > >
> > > --
> > > Rod Grimes 
> > > rgri...@freebsd.org
> >
>
> --
> Rod Grimes rgri...@freebsd.org



Re: Building ZFS disk images

2021-09-28 Thread Rodney W. Grimes
> On Tue, Sep 28, 2021 at 9:48 AM Rodney W. Grimes
>  wrote:
> >
> > > On Mon, Sep 27, 2021 at 1:54 PM Mark Johnston  wrote:
> > > >
> > > > On Thu, Aug 05, 2021 at 10:54:19AM -0500, Alan Somers wrote:
> > > > > There's this:
> > > > > https://openzfs.github.io/openzfs-docs/man/8/zpool-reguid.8.html .  I
> > > > > haven't used it myself.
> > > >
> > > > Would it be useful to have an rc.d script that can run this, probably
> > > > just on the root pool?  It could be configured to run only upon the
> > > > first boot, like growfs already does.
> > >
> > > Absolutely!
> >
> > Eww!  :-)
> >
> > > >
> > > > > On Thu, Aug 5, 2021, 9:29 AM David Chisnall  
> > > > > wrote:
> > > > >
> > > > > > On 05/08/2021 13:53, Alan Somers wrote:
> > > > > > > I don't know of any way to do it using the official release 
> > > > > > > scripts
> > > > > > > either. One problem is that every ZFS pool and file system is 
> > > > > > > supposed
> > > > > > > to have a unique GUID.  So any kind of ZFS release builder would 
> > > > > > > need to
> >  ^^^
> > > > > > > re-guid the pool on first boot.
> >
> > Isn't the proper place to solve this lack of Unique UUID creation
> > in the tool(s) that are creating the zfs pool in the first place?
> >
> > Fixing it "post boot" seems to be a far too late hack and doesn't
> > fix any of the situations where one might import these pools
> > between creation and first boot.
> 
> No, because you might create a VM image once, then instantiate it
> dozens or thousands of times.  The firstboot solution is great because
> it lets you reuse the same image file.

I would continue to argue that the place to fix this is in the
"instantiate tool".  ESXI vmfs deals with this all the time
when you clone a disk.  And again the "fix at boot" does not
deal with the problem in that if I "instantiate" 10 copies of
a zpool for VM's and then try to mount 2 of them at once on
the host this problem rears its head.  Fix the problem as close
to point of creation as possible for minimal issues in all
operations for everyone.
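As a rough sketch of that "fix it at instantiate time" idea on a FreeBSD
host, before the clone ever boots (the image name, the zpool sitting on
partition 3, the pool name, and a reasonably recent OpenZFS zpool(8) are all
assumptions here):

  md=$(mdconfig -a -t vnode -f clone.img)
  # Import under a temporary name (-t) so it cannot clash with a pool of the
  # same name already imported on the host, hand it a new GUID, then export;
  # with -t the pool keeps its original name inside the image.
  zpool import -f -N -R /mnt -d /dev/${md}p3 -t zroot tmp-reguid
  zpool reguid tmp-reguid
  zpool export tmp-reguid
  mdconfig -d -u "${md}"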

> 
> >
> > > > > >
> > > > > > Is there a tool / command to do this?  I've hit this problem in the
> > > > > > past: I have multiple FreeBSD VMs that are all created from the same
> > > > > > template and if one dies I can't import its zpool into another 
> > > > > > because
> > > > > > they have the same UUID.
> > > > > >
> > > > > > It doesn't matter for modern deployments where the VM is stateless 
> > > > > > and
> > > > > > reimaged periodically but it's annoying for classic deployments 
> > > > > > where I
> > > > > > have things I care about on the VM.
> > > > > >
> > > > > > David
> > >
> > >
> >
> > --
> > Rod Grimes 
> > rgri...@freebsd.org
> 

-- 
Rod Grimes rgri...@freebsd.org



Re: Building ZFS disk images

2021-09-28 Thread Alan Somers
On Tue, Sep 28, 2021 at 9:48 AM Rodney W. Grimes
 wrote:
>
> > On Mon, Sep 27, 2021 at 1:54 PM Mark Johnston  wrote:
> > >
> > > On Thu, Aug 05, 2021 at 10:54:19AM -0500, Alan Somers wrote:
> > > > There's this:
> > > > https://openzfs.github.io/openzfs-docs/man/8/zpool-reguid.8.html .  I
> > > > haven't used it myself.
> > >
> > > Would it be useful to have an rc.d script that can run this, probably
> > > just on the root pool?  It could be configured to run only upon the
> > > first boot, like growfs already does.
> >
> > Absolutely!
>
> Eww!  :-)
>
> > >
> > > > On Thu, Aug 5, 2021, 9:29 AM David Chisnall  
> > > > wrote:
> > > >
> > > > > On 05/08/2021 13:53, Alan Somers wrote:
> > > > > > I don't know of any way to do it using the official release scripts
> > > > > > either. One problem is that every ZFS pool and file system is 
> > > > > > supposed
> > > > > > to have a unique GUID.  So any kind of ZFS release builder would 
> > > > > > need to
>  ^^^
> > > > > > re-guid the pool on first boot.
>
> Isn't the proper place to solve this lack of Unique UUID creation
> in the tool(s) that are creating the zfs pool in the first place?
>
> Fixing it "post boot" seems to be a far too late hack and doesn't
> fix any of the situations where one might import these pools
> between creation and first boot.

No, because you might create a VM image once, then instantiate it
dozens or thousands of times.  The firstboot solution is great because
it lets you reuse the same image file.
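To make the rc.d idea quoted above a bit more concrete, a hypothetical sketch
(nothing like this is committed; the script name, rcvar, and the way the pool
is picked are all assumptions) could key off the same firstboot mechanism
growfs uses:

  #!/bin/sh

  # PROVIDE: zpoolreguid
  # REQUIRE: zfs
  # KEYWORD: firstboot

  # Hypothetical sketch only.  KEYWORD: firstboot makes rc(8) run this exactly
  # once, when the /firstboot sentinel file exists, just like growfs.

  . /etc/rc.subr

  name="zpoolreguid"
  rcvar="zpoolreguid_enable"
  start_cmd="zpoolreguid_start"
  stop_cmd=":"

  zpoolreguid_start()
  {
          # Assumption: the only pool imported this early is the root pool
          # that was cloned along with the image.
          local pool
          pool=$(zpool list -H -o name | head -n 1)
          [ -n "${pool}" ] || return 0
          echo "Assigning a new GUID to pool ${pool}."
          zpool reguid "${pool}"
  }

  load_rc_config $name
  run_rc_command "$1"

The published image would then ship with zpoolreguid_enable="YES" in its
rc.conf and the /firstboot file in place.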

>
> > > > >
> > > > > Is there a tool / command to do this?  I've hit this problem in the
> > > > > past: I have multiple FreeBSD VMs that are all created from the same
> > > > > template and if one dies I can't import its zpool into another because
> > > > > they have the same UUID.
> > > > >
> > > > > It doesn't matter for modern deployments where the VM is stateless and
> > > > > reimaged periodically but it's annoying for classic deployments where 
> > > > > I
> > > > > have things I care about on the VM.
> > > > >
> > > > > David
> >
> >
>
> --
> Rod Grimes rgri...@freebsd.org



Re: Building ZFS disk images

2021-09-28 Thread Rodney W. Grimes
> On Mon, Sep 27, 2021 at 1:54 PM Mark Johnston  wrote:
> >
> > On Thu, Aug 05, 2021 at 10:54:19AM -0500, Alan Somers wrote:
> > > There's this:
> > > https://openzfs.github.io/openzfs-docs/man/8/zpool-reguid.8.html .  I
> > > haven't used it myself.
> >
> > Would it be useful to have an rc.d script that can run this, probably
> > just on the root pool?  It could be configured to run only upon the
> > first boot, like growfs already does.
> 
> Absolutely!

Eww!  :-)

> >
> > > On Thu, Aug 5, 2021, 9:29 AM David Chisnall  wrote:
> > >
> > > > On 05/08/2021 13:53, Alan Somers wrote:
> > > > > I don't know of any way to do it using the official release scripts
> > > > > either. One problem is that every ZFS pool and file system is supposed
> > > > > to have a unique GUID.  So any kind of ZFS release builder would need 
> > > > > to
 ^^^
> > > > > re-guid the pool on first boot.

Isn't the proper place to solve this lack of Unique UUID creation
in the tool(s) that are creating the zfs pool in the first place?

Fixing it "post boot" seems to be a far too late hack and doesn't
fix any of the situations where one might import these pools
between creation and first boot.

> > > >
> > > > Is there a tool / command to do this?  I've hit this problem in the
> > > > past: I have multiple FreeBSD VMs that are all created from the same
> > > > template and if one dies I can't import its zpool into another because
> > > > they have the same UUID.
> > > >
> > > > It doesn't matter for modern deployments where the VM is stateless and
> > > > reimaged periodically but it's annoying for classic deployments where I
> > > > have things I care about on the VM.
> > > >
> > > > David
> 
> 

-- 
Rod Grimes rgri...@freebsd.org



Re: latest current fails to boot.

2021-09-28 Thread Tomoaki AOKI
On Sun, 26 Sep 2021 12:16:55 -0400
Alexander Motin  wrote:

> Thank you for the notification.  08063e9f98a should fix the hang.

Thanks for the fix!
Rebuilt; several reboots with kern.sched.steal_thresh set to 0, set to 1,
and with the whole line commented out, all went fine.
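For reference, the workaround and its removal amount to nothing more than
this (the values are simply the ones discussed in this thread):

  # at runtime
  sysctl kern.sched.steal_thresh        # show the current value
  sysctl kern.sched.steal_thresh=1      # apply the workaround immediately

  # persistently, in /etc/sysctl.conf; commenting the line out (or deleting
  # it) restores the default at the next boot
  kern.sched.steal_thresh=1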


> I just want to add that lowering kern.sched.steal_thresh below 2 should
> not be a proper fix for any problem, but only a workaround.  I guess
> either some CPU can't wake up from sleep for too long, or the wakeup
> interrupt is not properly sent to it when the load is assigned.  In such a
> case stealing makes another CPU do the work instead.  It would be good
> to find and fix the real problem.

For me (with a Core i7-8750H), lowering kern.sched.steal_thresh didn't
make a significant improvement, but I had no reason to go back to the default.

 *As sched_ule has now been modified, I'm trying the default
  kern.sched.steal_thresh value now.

On the other hand, at least some Ryzen users seem to have a much more
severe problem than me, and the workaround makes a significant improvement.


> On 25.09.2021 21:47, Konstantin Belousov wrote:
> > On Sun, Sep 26, 2021 at 10:23:47AM +0900, Tomoaki AOKI wrote:
> >> On Sat, 25 Sep 2021 23:46:48 +0300
> >> Andriy Gapon  wrote:
> >>
> >>> On 25/09/2021 19:10, Johan Hendriks wrote:
>  For me I had kern.sched.steal_thresh=1 in my sysctl.conf as I use this
>  machine mainly for tests and so on.
>  By removing this sysctl the system boots again. I already used the
>  latest snapshot and that booted fine.
> >>>
> >>> Might have something to do with
> >>> https://cgit.FreeBSD.org/src/commit/?id=bd84094a51c4648a7c97ececdaccfb30bc832096
> >>>
> >>> -- 
> >>> Andriy Gapon
> >>
> >> Commenting out kern.sched.steal_thresh=0 line in /etc/sysctl.conf let
> >> me boot fine. No other setting of kern.sched.* affected.
> >> I've introduced the setting by reading posts [1] and [2] on
> >> freebsd-current ML. Thanks for the hint, Jan!
> >>
> >> Andriy, I took the time to bisect and determined that the commit that triggered
> >> this issue was e745d729be60. [3]
> >> Worked OK even with kern.sched.steal_thresh=0 at a342ecd326ee. [4]
> >>
> >> Tested commits are as below (tested order, not using git bisect):
> >>  0b79a76f8487: [Known to be OK]
> >>  8db1669959ce: [Problematic rev I first encountered]
> >>  0f6829488ef3: OK
> >>  df8dd6025af8: OK
> >>  4f917847c903: OK
> >>  e745d729be60: NG!
> >>  bd84094a51c4: OK
> >>  a342ecd326ee: OK
> >>
> >> Konstantin, there has been no further chance to get into ddb on hang-up since my previous
> >> post. ^T never worked in the hang-up situation. Sorry. But does the info above
> >> help?
> > Let the author of the commit look.
> > 
> >>
> >>
> >> [1]
> >> https://lists.freebsd.org/pipermail/freebsd-current/2021-March/079237.html
> >>
> >> [2]
> >> https://lists.freebsd.org/pipermail/freebsd-current/2021-March/079240.html
> >>
> >> [3]
> >> https://lists.freebsd.org/pipermail/dev-commits-src-main/2021-September/007513.html
> >>
> >> [4]
> >> https://lists.freebsd.org/pipermail/dev-commits-src-main/2021-September/007512.html
> >>
> >> -- 
> >> Tomoaki AOKI
> 
> -- 
> Alexander Motin
> 


-- 
Tomoaki AOKI