buildkernel: Cleaning for nvidia-driver-470-470.161.03: [all] Stopped -- signal 22

2023-09-15 Thread Graham Perrin

The tail of a log:

…
Building /usr/obj/usr/src/amd64.amd64/sys/GENERIC/kernel.full
--- kernel.full ---
linking kernel.full
ctfmerge -L VERSION -g -o kernel.full ...
  text  data    bss    dec hex   filename
  20670038   1677321   19288064   41635423   0x27b4e5f kernel.full
Building /usr/obj/usr/src/amd64.amd64/sys/GENERIC/kernel.debug
Building /usr/obj/usr/src/amd64.amd64/sys/GENERIC/kernel
--- all ---
===> Ports module x11/nvidia-driver-470 (all)
cd /usr/ports/x11/nvidia-driver-470; env  -u CC  -u CXX  -u CPP -u 
MAKESYSPATH  -u MK_AUTO_OBJ  -u MAKEOBJDIR  MAKEFLAGS="-j 32 -J 15,16 -j 
32 -J 15,16 -D NO_MODULES_OBJ DISABLE_VULNERABILITIES=yes KERNEL=kernel 
TARGET=amd64 TARGET_ARCH=amd64" SYSDIR=/usr/src/sys 
PATH=/usr/obj/usr/src/amd64.amd64/tmp/bin:/usr/obj/usr/src/amd64.amd64/tmp/usr/sbin:/usr/obj/usr/src/amd64.amd64/tmp/usr/bin:/usr/obj/usr/src/amd64.amd64/tmp/legacy/usr/sbin:/usr/obj/usr/src/amd64.amd64/tmp/legacy/usr/bin:/usr/obj/usr/src/amd64.amd64/tmp/legacy/bin:/usr/obj/usr/src/amd64.amd64/tmp/legacy/usr/libexec::/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin:/usr/local/sbin 
SRC_BASE=/usr/src  OSVERSION=150 
WRKDIRPREFIX=/usr/obj/usr/src/amd64.amd64/sys/GENERIC make -B clean build

===>  Cleaning for nvidia-driver-470-470.161.03
*** [all] Stopped -- signal 22



Why might a stop occur, with that signal?

My src.conf included:

PORTS_MODULES+=x11/nvidia-driver-470
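
For reference: on FreeBSD, signal 22 is SIGTTOU, the stop signal sent to a
background process that attempts terminal output. A minimal way to map the
number reported by make(1) to a name, assuming the kill builtin of sh(1):

% kill -l 22
TTOU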




vfs.zfs.bclone_enabled (was: FreeBSD 14.0-BETA2 Now Available)

2023-09-15 Thread Graham Perrin

On 16/09/2023 01:28, Glen Barber wrote:

o A fix for the ZFS block_cloning feature has been implemented.


Thanks

I see the fix, with the corresponding commit in stable/14.


As vfs.zfs.bclone_enabled is still 0 (at least, with 15.0-CURRENT 
n265350-72d97e1dd9cc): should we assume that additional fixes, not 
necessarily in time for 14.0-RELEASE, will be required before 
vfs.zfs.bclone_enabled can default to 1?
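
For anyone checking a particular build, the current value can be queried
with sysctl(8); a minimal sketch (the output shown is simply the default
reported above, and the tunable can be set to 1 for testing):

# sysctl vfs.zfs.bclone_enabled
vfs.zfs.bclone_enabled: 0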





time instability

2023-09-15 Thread Sulev-Madis Silber
there's something between (working):

47d997021fbc

and (not working):

03bfee175269

that causes this to happen:


# date +%s
1694821998
# date +%s
1694822034
# date +%s
1694822036
# date +%s
1694822003


this happens on an armv7 allwinner h3 board. i have no clue where to
look except to brute force it, maybe tomorrow. what was committed in the
past two weeks that might cause this?

the running ntpd and powerd were killed, it didn't help a bit

it was real fun too, as a time machine seems to have been invented, but, wtf!
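
since a known-working revision (47d997021fbc) and a known-broken one
(03bfee175269) are already identified, a bisect of the src tree is one way
to brute force it; a rough sketch (build/install/boot steps abbreviated):

---snip---
# cd /usr/src
# git bisect start 03bfee175269 47d997021fbc
(build, install and boot the candidate revision, then watch `date +%s`)
# git bisect bad      (or: git bisect good, as observed; repeat)
# git bisect reset
---snip---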



FreeBSD 14.0-BETA2 Now Available

2023-09-15 Thread Glen Barber

The second BETA build of the 14.0-RELEASE release cycle is now
available.

Installation images are available for:

o 14.0-BETA2 amd64 GENERIC
o 14.0-BETA2 i386 GENERIC
o 14.0-BETA2 powerpc GENERIC
o 14.0-BETA2 powerpc64 GENERIC64
o 14.0-BETA2 powerpc64le GENERIC64LE
o 14.0-BETA2 powerpcspe MPC85XXSPE
o 14.0-BETA2 armv7 GENERICSD
o 14.0-BETA2 aarch64 GENERIC
o 14.0-BETA2 aarch64 RPI
o 14.0-BETA2 aarch64 PINE64
o 14.0-BETA2 aarch64 PINE64-LTS
o 14.0-BETA2 aarch64 PINEBOOK
o 14.0-BETA2 aarch64 ROCK64
o 14.0-BETA2 aarch64 ROCKPRO64
o 14.0-BETA2 riscv64 GENERIC
o 14.0-BETA2 riscv64 GENERICSD

Note regarding arm SD card images: For convenience for those without
console access to the system, a freebsd user with a password of
freebsd is available by default for ssh(1) access.  Additionally,
the root user password is set to root.  It is strongly recommended
to change the password for both users after gaining access to the
system.
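
For example, after connecting over ssh(1) as the freebsd user, the
passwords can be changed along these lines (a sketch; the address is a
placeholder, and su(1) assumes the freebsd user can become root):

% ssh freebsd@192.0.2.10
% passwd
% su -
# passwd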

Installer images and memory stick images are available here:

https://download.freebsd.org/releases/ISO-IMAGES/14.0/

The image checksums follow at the end of this e-mail.

If you notice problems you can report them through the Bugzilla PR
system or on the -stable mailing list.

If you would like to use Git to do a source based update of an existing
system, use the "releng/14.0" branch.
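
For example (a sketch; /usr/src is the conventional location and may
differ locally):

# git clone -b releng/14.0 https://git.freebsd.org/src.git /usr/src

or, for an existing checkout:

# git -C /usr/src fetch origin
# git -C /usr/src switch releng/14.0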

A summary of changes since 14.0-BETA1 includes:

o The Areca RAID driver has been updated to version 1.50.00.06.

o The libarchive library has been updated.

o A fix for the ZFS block_cloning feature has been implemented.

o Several linux(4) updates.

o Several manual page updates.

A list of changes since 13.x is available in the releng/14.0
release notes:

https://www.freebsd.org/releases/14.0R/relnotes/

Please note, the release notes page is not yet complete, and will be
updated on an ongoing basis as the 14.0-RELEASE cycle progresses.

=== Virtual Machine Disk Images ===

VM disk images are available for the amd64, i386, and aarch64
architectures.  Disk images may be downloaded from the following URL
(or any of the FreeBSD download mirrors):

https://download.freebsd.org/releases/VM-IMAGES/14.0-BETA2/

BASIC-CI images can be found at:

https://download.freebsd.org/releases/CI-IMAGES/14.0-BETA2/

The partition layout is:

~ 16 kB - freebsd-boot GPT partition type (bootfs GPT label)
~ 1 GB  - freebsd-swap GPT partition type (swapfs GPT label)
~ 20 GB - freebsd-ufs GPT partition type (rootfs GPT label)
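
Once an image is booted, the same layout can be confirmed with gpart(8);
for example (the device name depends on the hypervisor, e.g. vtbd0 for
virtio or ada0):

# gpart show vtbd0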

The disk images are available in QCOW2, VHD, VMDK, and raw disk image
formats.  The image download size is approximately 135 MB (amd64) and
165 MB (i386), decompressing to a 21 GB sparse image.

Note regarding arm64/aarch64 virtual machine images: a modified QEMU EFI
loader file is needed for qemu-system-aarch64 to be able to boot the
virtual machine images.  See this page for more information:

https://wiki.freebsd.org/arm64/QEMU

To boot the VM image, run:

% qemu-system-aarch64 -m 4096M -cpu cortex-a57 -M virt  \
-bios QEMU_EFI.fd -serial telnet::,server -nographic \
-drive if=none,file=VMDISK,id=hd0 \
-device virtio-blk-device,drive=hd0 \
-device virtio-net-device,netdev=net0 \
-netdev user,id=net0

Be sure to replace "VMDISK" with the path to the virtual machine image.

=== Amazon EC2 AMI Images ===

FreeBSD/amd64 EC2 AMI IDs can be retrieved from the Systems Manager
Parameter Store in each region using the keys:

/aws/service/freebsd/amd64/base/ufs/14.0/BETA2
/aws/service/freebsd/amd64/base/zfs/14.0/BETA2
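
For example, with the AWS CLI (a sketch; the region is a placeholder and
either key works the same way):

% aws ssm get-parameter --region us-east-1 \
--name /aws/service/freebsd/amd64/base/ufs/14.0/BETA2 \
--query Parameter.Value --output text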

FreeBSD/arm64 EC2 AMIs are not available for this BETA build.

=== Vagrant Images ===

FreeBSD/amd64 images are not available for this BETA build.

=== Upgrading ===

IMPORTANT IMPORTANT IMPORTANT IMPORTANT IMPORTANT IMPORTANT IMPORTANT

Due to an issue where an existing file had been replaced by a directory
with the same name, binary upgrades from 13.2 and earlier using the 
freebsd-update(8) utility will not work.  The issue is being
investigated.

IMPORTANT IMPORTANT IMPORTANT IMPORTANT IMPORTANT IMPORTANT IMPORTANT

The freebsd-update(8) utility supports binary upgrades of amd64, i386,
and aarch64 systems running earlier FreeBSD releases.  Systems running
earlier FreeBSD releases can upgrade as follows:

# freebsd-update upgrade -r 14.0-BETA2

During this process, freebsd-update(8) may ask the user to help by
merging some configuration files or by confirming that the automatically
performed merging was done correctly.

# freebsd-update install

The system must be rebooted with the newly installed kernel before
continuing.

# shutdown -r now

After rebooting, freebsd-update needs to be run again to install the new
userland components:

# freebsd-update install

It is recommended to rebuild and install all applications if possible,
especially if upgrading from an earlier FreeBSD release, for example,
FreeBSD 

Re: Speed improvements in ZFS

2023-09-15 Thread Alexander Leidinger

Am 2023-09-15 13:40, schrieb George Michaelson:

Not wanting to hijack threads: I am interested in whether any of this
can translate back up the tree and make Linux ZFS faster.


And whether there is simple sysctl tuning worth trying on large-memory
(TB) pre-14 FreeBSD systems with slow ZFS. Older FreeBSD, alas.


The current part of the discussion is not really about ZFS (I use a lot 
of nullfs on top of ZFS). So no to the first part.


The tuning I did (maxvnodes) doesn't really depend on the FreeBSD 
version, but on the number of files touched/contained in the FS. The 
only other change I made is updating the OS itself, so this part doesn't 
apply to pre 14 systems.
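
For anyone who wants to look at this on their own system, the relevant
knobs can be inspected and changed at runtime; a minimal sketch (the value
is only a placeholder, not a recommendation):

# sysctl kern.maxvnodes vfs.numvnodes vfs.freevnodes
# sysctl kern.maxvnodes=4000000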


If you think your ZFS (with a large ARC) is slow, you need to review 
your primary cache settings per dataset, check the arcstats, and maybe 
think about a second-level ARC (L2ARC) on fast storage (a cache device 
on NVMe or SSD). If you have a read-once workload, none of this will 
help. So it all depends on your workload.
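
A minimal sketch of those checks (pool, dataset, and device names are
placeholders):

# zfs get primarycache,secondarycache tank/dataset
# sysctl kstat.zfs.misc.arcstats.size kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.misses
# zpool add tank cache nvd0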


Bye,
Alexander.

--
http://www.Leidinger.net alexan...@leidinger.net: PGP 0x8F31830F9F2772BF
http://www.FreeBSD.org netch...@freebsd.org  : PGP 0x8F31830F9F2772BF



Re: Speed improvements in ZFS

2023-09-15 Thread George Michaelson
Not wanting to hijack threads: I am interested in whether any of this
can translate back up the tree and make Linux ZFS faster.

And whether there is simple sysctl tuning worth trying on large-memory
(TB) pre-14 FreeBSD systems with slow ZFS. Older FreeBSD, alas.


Re: Speed improvements in ZFS

2023-09-15 Thread Alexander Leidinger

Am 2023-09-04 14:26, schrieb Mateusz Guzik:

On 9/4/23, Alexander Leidinger  wrote:

Am 2023-08-28 22:33, schrieb Alexander Leidinger:

Am 2023-08-22 18:59, schrieb Mateusz Guzik:

On 8/22/23, Alexander Leidinger  wrote:

Am 2023-08-21 10:53, schrieb Konstantin Belousov:
On Mon, Aug 21, 2023 at 08:19:28AM +0200, Alexander Leidinger wrote:

Am 2023-08-20 23:17, schrieb Konstantin Belousov:
> On Sun, Aug 20, 2023 at 11:07:08PM +0200, Mateusz Guzik wrote:
> > On 8/20/23, Alexander Leidinger  wrote:
> > > Am 2023-08-20 22:02, schrieb Mateusz Guzik:
> > >> On 8/20/23, Alexander Leidinger  wrote:
> > >>> Am 2023-08-20 19:10, schrieb Mateusz Guzik:
> >  On 8/18/23, Alexander Leidinger  wrote:
> > >>>
> > > I have a 51MB text file, compressed to about 1MB. Are you
> > > interested to get it?
> > >
> > 
> >  Your problem is not the vnode limit, but nullfs.
> > 
> >  https://people.freebsd.org/~mjg/netchild-periodic-find.svg
> > >>>
> > >>> 122 nullfs mounts on this system. And every jail I setup has
> > >>> several null mounts. One basesystem mounted into every jail, and
> > >>> then shared ports (packages/distfiles/ccache) across all of them.
> > >>>
> >  First, some of the contention is notorious VI_LOCK in order to do
> >  anything.
> > 
> >  But more importantly the mind-boggling off-cpu time comes from
> >  exclusive locking which should not be there to begin with -- as in
> >  that xlock in stat should be a slock.
> > 
> >  Maybe I'm going to look into it later.
> > >>>
> > >>> That would be fantastic.
> > >>>
> > >>
> > >> I did a quick test, things are shared locked as expected.
> > >>
> > >> However, I found the following:
> > >> if ((xmp->nullm_flags & NULLM_CACHE) != 0) {
> > >> mp->mnt_kern_flag |=
> > >> lowerrootvp->v_mount->mnt_kern_flag &
> > >> (MNTK_SHARED_WRITES | MNTK_LOOKUP_SHARED |
> > >> MNTK_EXTENDED_SHARED);
> > >> }
> > >>
> > >> are you using the "nocache" option? it has a side effect of
> > >> xlocking
> > >
> > > I use noatime, noexec, nosuid, nfsv4acls. I do NOT use nocache.
> > >
> >
> > If you don't have "nocache" on null mounts, then I don't see how
> > this could happen.
>
> There is also MNTK_NULL_NOCACHE on lower fs, which is currently set
> for fuse and nfs at least.

11 of those 122 nullfs mounts are ZFS datasets which are also NFS
exported. 6 of those nullfs mounts are also exported via Samba. The NFS
exports shouldn't be needed anymore, I will remove them.

By nfs I meant nfs client, not nfs exports.


No NFS client mounts anywhere on this system. So where is this
exclusive lock coming from then...
This is a ZFS system. 2 pools: one for the root, one for anything I
need space for. Both pools reside on the same disks. The root pool is a
3-way mirror, the "space-pool" is a 5-disk raidz2. All jails are on the
space-pool. The jails are all basejail-style jails.



While I don't see why xlocking happens, you should be able to dtrace
or printf your way into finding out.


dtrace looks to me like a faster approach to get to the root than
printf... my first naive try is to detect exclusive locks. I'm not 100%
sure I got it right, but at least dtrace doesn't complain about it:
---snip---
#pragma D option dynvarsize=32m

fbt:nullfs:null_lock:entry
/(args[0]->a_flags & 0x08) != 0/
{
    stack();
}
---snip---

In which direction should I look with dtrace if this works in tonight's
run of periodic? I don't have enough knowledge about VFS to come up
with some immediate ideas.
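
For what it's worth, one possible refinement of the script above is to
aggregate the call stacks instead of printing each one, and to test the
exclusive-lock request bit explicitly; a sketch (0x080000 is LK_EXCLUSIVE
as I read sys/sys/lockmgr.h, so verify it on the system in question):

# dtrace -n 'fbt::null_lock:entry /(args[0]->a_flags & 0x080000) != 0/ { @[stack()] = count(); }'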


After your sysctl fix for maxvnodes I increased the amount of vnodes 10
times compared to the initial report. This has increased the speed of
the operation; the find runs in all those jails finished today after ~5h
(@~8am) instead of in the afternoon as before. Could this suggest that
in parallel some null_reclaim() is running which does the exclusive
locks and slows down the entire operation?



That may be a slowdown to some extent, but the primary problem is
exclusive vnode locking for stat lookup, which should not be
happening.


With -current as of 2023-09-03 (and right now 2023-09-11), the periodic 
daily runs are down to less than an hour... and this didn't happen 
directly after switching to 2023-09-13. First it went down to 4h, then 
down to 1h without any update of the OS. The only thing I did was modify 
the number of maxfiles: first to some huge amount after your commit 
affecting the sysctl, then, after noticing way more freevnodes than 
configured, down to 5.


Bye,
Alexander.

--
http://www.Leidinger.net alexan...@leidinger.net: PGP 0x8F31830F9F2772BF
http://www.FreeBSD.org netch...@freebsd.org  : PGP 0x8F31830F9F2772BF

