Re: [FIXED] Re: main-n254654-d4e8207317c results in "no pools available to import"

2022-04-26 Thread Dennis Clarke

On 4/26/22 16:17, Thomas Laus wrote:

On 4/11/22 11:17, Dennis Clarke wrote:


Did the usual git pull origin main and buildworld/buildkernel, but 
after installkernel the machine will not boot.


The rev seems to be main-n254654-d4e8207317c.

I can boot into single-user mode and get a command prompt, but nothing
past that. Is there something borked in ZFS in CURRENT?
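
For reference, the "usual" sequence described above is roughly the
following (a sketch of the standard source-update procedure; the -j
value and the single-user reboot step are assumptions, not from this
thread):

  cd /usr/src
  git pull origin main
  make -j4 buildworld buildkernel
  make installkernel
  shutdown -r now    # reboot, ideally into single-user mode
  etcupdate -p       # merge /etc changes needed before installworld
  make installworld
  etcupdate          # merge the remaining /etc changes
  shutdown -r now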


Group:

Everything is running back to normal for me today after my weekly build 
of MAIN.  My combination of EFI, GELI and ZFS is working for me after:


FreeBSD 14.0-CURRENT #1 main-n255058-fa8a6585c75: Tue Apr 26 12:13:10 
EDT 2022


It looks like another update to something else fixed the "no pools 
available to import" issue.




Same here:

FreeBSD phobos 14.0-CURRENT FreeBSD 14.0-CURRENT #7 
main-n255054-67fc95025cc: Tue Apr 26 06:58:45 UTC 2022 
root@phobos:/usr/obj/usr/src/amd64.amd64/sys/GENERIC amd64 amd64 1400057 
1400057


Something "magic" has changed, and whatever it was, it fixed the EFI 
boot issue.




--
Dennis Clarke
RISC-V/SPARC/PPC/ARM/CISC
UNIX and Linux spoken
GreyBeard and suspenders optional



[FIXED] Re: main-n254654-d4e8207317c results in "no pools available to import"

2022-04-26 Thread Thomas Laus

On 4/11/22 11:17, Dennis Clarke wrote:


Did the usual git pull origin main and buildworld/buildkernel, but after 
installkernel the machine will not boot.


The rev seems to be main-n254654-d4e8207317c.

I can boot into single-user mode and get a command prompt, but nothing
past that. Is there something borked in ZFS in CURRENT?


Group:

Everything is running back to normal for me today after my weekly build 
of MAIN.  My combination of EFI, GELI and ZFS is working for me after:


FreeBSD 14.0-CURRENT #1 main-n255058-fa8a6585c75: Tue Apr 26 12:13:10 
EDT 2022


It looks like another update to something else fixed the "no pools 
available to import" issue.


Tom

--
Public Keys:
PGP KeyID = 0x5F22FDC1
GnuPG KeyID = 0x620836CF



Re: Cross-compile worked, cross-install not so much ...

2022-04-26 Thread Patrick M. Hausen
Hi all,

I just threw a bit of hardware at the problem for now.

In addition to the seven-node TuringPi, which I would like to run with
FreeBSD because I cannot make Raspbian run stably even when completely
idle, I have another actively cooled single-CM3+ system (the "pi8" you
saw in my first post).

So I connected a USB-powered SSD and now compile on the Pi itself. The
system is CPU-bound, so there is no bottleneck because of USB:


CPU: 95.4% user,  0.0% nice,  4.5% system,  0.1% interrupt,  0.0% idle
Mem: 395M Active, 95M Inact, 384K Laundry, 224M Wired, 97M Buf, 187M Free
Swap: 4096M Total, 27M Used, 4069M Free

  PID USERNAME    THR PRI NICE   SIZE    RES STATE    C   TIME    WCPU COMMAND
39861 root          1 100    0   248M   162M CPU0     0   0:55  96.39% c++
39867 root          1  86    0   124M    54M CPU1     1   0:06  96.34% c++
39865 root          1  87    0   124M    56M RUN      2   0:07  91.54% c++
39863 root          1  99    0   224M   138M RUN      3   0:45  83.17% c++


I'd rather run 13.1 and use freebsd-update, but 13 does not support the CM3(+).
That was in fact the first not-so-pleasant surprise; I thought ARM64 was
tier 1 now?
Anyway, if -CURRENT works for now, there will be a 14.0-RELEASE eventually.
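
For comparison, the binary-update path on a RELEASE branch that is being
wished for here is just the following (a sketch; it only applies once a
release actually supports the board):

  # track patches on the installed release:
  freebsd-update fetch
  freebsd-update install

  # or move to a newer release, e.g. 13.1-RELEASE:
  freebsd-update -r 13.1-RELEASE upgrade
  freebsd-update install    # repeated, with reboots, as the tool directs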

Thanks for your help, folks.
Patrick


Re: Cross-compile worked, cross-install not so much ...

2022-04-26 Thread Patrick M. Hausen
Hi all,

> On 25.04.2022 at 21:54, Warner Losh wrote:
> Cross installing is supported. Native installing of a cross-built world isn't.
>
> When I've had to do this in the past, I've just mounted the embedded
> system on my beefy server under /mumble (allowing root on the embedded
> system for NFS) and did a sudo make installworld ... DESTDIR=/mumble.
> Though I've forgotten the gymnastics to do etcupdate/mergemaster this way.
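
A rough sketch of that procedure, assuming the Pi ("pi8") exports its
root filesystem over NFS and that TARGET/TARGET_ARCH match the earlier
cross build (the export line, the server name "beefy", the mount point
and the arm64 flags here are illustrative, not taken from the thread):

  # on pi8, /etc/exports, so the server's root is not mapped away:
  #   / -maproot=root -alldirs beefy

  # on the build server:
  mount -t nfs pi8:/ /mumble
  cd /usr/src
  sudo make installworld TARGET=arm64 TARGET_ARCH=aarch64 DESTDIR=/mumble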

I'll try that tonight, thanks. How do you get chflags to work over NFS?

Kind regards,
Patrick


Re: nullfs and ZFS issues

2022-04-26 Thread Alexander Leidinger
Quoting Eirik Øverby (from Mon, 25 Apr 2022 18:44:19 +0200):



On Mon, 2022-04-25 at 15:27 +0200, Alexander Leidinger wrote:

Quoting Alexander Leidinger (from Sun, 24 Apr 2022 19:58:17 +0200):

> Quoting Alexander Leidinger (from Fri, 22 Apr 2022 09:04:39 +0200):
>
> > Quoting Doug Ambrisko (from Thu, 21 Apr 2022 09:38:35 -0700):
>
> > > I've attached mount.patch; with it, mount -v should show the vnode
> > > usage per filesystem.  Note that the problem I was running into was
> > > that, after some operations, arc_prune and arc_evict would consume
> > > 100% of 2 cores and make ZFS really slow.  If you are not running
> > > into that issue then nocache etc. shouldn't be needed.
> >
> > I don't run into this issue, but I see a huge performance difference
> > when using nocache in the nightly periodic runs: 4h instead of 12-24h
> > (22 jails on this system).
> >
> > > On my laptop I set ARC to 1G since I don't use swap, and in the past
> > > ARC would consume too much memory and things would die.  When the
> > > nullfs holds a bunch of vnodes then ZFS can't release them.
> > >
> > > FYI, on my laptop with nocache and limited vnodes I haven't run
> > > into this problem.  I haven't tried the patch to let ZFS free
> > > its and nullfs's vnodes on my laptop.  I have only tried it via
> >
> > I have this patch and your mount patch installed now, without
> > nocache and with reduced arc reclaim settings (100, 1). I will check
> > the runtime for the next 2 days.
>
> 9-10h runtime with the above settings (compared to 4h with nocache
> and 12-24h without any patch and without nocache).
> I changed the sysctls back to the defaults and will see in the next
> run (in 7h) what the result is with just the patches.

And again 9-10h runtime (I've seen a lot of the find processes in the
periodic daily run of those 22 jails in the state "*vnode"). It seems
nocache gives the best performance for me in this case.
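
Those "*vnode" states can also be spotted without top; a small sketch
using standard ps(1) keywords:

  # list processes with the wait channel / lock name they sleep on;
  # processes stuck on vnode locks show "vnode" in the MWCHAN column
  ps -axo pid,state,mwchan,command | grep vnode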


Sorry for jumping in here - I've got a couple of questions:
- Will this also apply to nullfs read-only mounts? Or is it only in
case of writing "through" a nullfs mount that these problems are seen?
- Is it a problem also in 13, or is this "new" in -CURRENT?

We're having weird and unexplained CPU spikes on several systems, even
after tuning geli to not use gazillions of threads. So far our
suspicion has been ZFS snapshot cleanups but this is an interesting
contender - unless the whole "read only" part makes it moot.


For me this started after creating one more jail on this system, and I  
don't see CPU spikes (the system is running permanently at 100% and the  
distribution of CPU load looks as I would expect). Doug's experience is  
a little different: he sees a high amount of CPU usage "for nothing", or  
even a deadlock-like situation. So I would say we see different things  
based on similar triggers.


The nocache option for nullfs affects the number of vnodes in use on  
the system regardless of whether the mount is read-only or read-write,  
so you can give it a try. Note that, depending on the usage pattern,  
nocache may increase lock contention, so it may have either a positive  
or a negative performance impact.
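
A minimal example of such a mount, if you want to test it (the paths
are placeholders):

  # one-off test mount:
  mount -t nullfs -o nocache /usr/local /jails/j1/usr/local

  # or the equivalent /etc/fstab entry:
  # /usr/local  /jails/j1/usr/local  nullfs  rw,nocache  0  0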


Bye,
Alexander.

--
http://www.Leidinger.net  alexan...@leidinger.net : PGP 0x8F31830F9F2772BF
http://www.FreeBSD.org    netch...@freebsd.org    : PGP 0x8F31830F9F2772BF

