Re: releng/13 release/13.0.0 : odd/incorrect diff result over nfs (in a zfs file systems context)

2021-05-23 Thread Mark Millard via freebsd-stable
On 2021-May-23, at 01:27, Mark Millard wrote: > On 2021-May-23, at 00:44, Mark Millard wrote: > >> On 2021-May-21, at 17:56, Rick Macklem wrote: >> >>> Mark Millard wrote: >>> [stuff snipped] Well, why is it that ls -R, find, and diff -r all get file name problems via genet0 but

Re: releng/13 release/13.0.0 : odd/incorrect diff result over nfs (in a zfs file systems context)

2021-05-23 Thread Mark Millard via freebsd-stable
On 2021-May-23, at 00:44, Mark Millard wrote: > On 2021-May-21, at 17:56, Rick Macklem wrote: > >> Mark Millard wrote: >> [stuff snipped] >>> Well, why is it that ls -R, find, and diff -r all get file >>> name problems via genet0 but diff -r gets no problems >>> comparing the content of files

Re: releng/13 release/13.0.0 : odd/incorrect diff result over nfs (in a zfs file systems context)

2021-05-23 Thread Mark Millard via freebsd-stable
On 2021-May-21, at 17:56, Rick Macklem wrote: > Mark Millard wrote: > [stuff snipped] >> Well, why is it that ls -R, find, and diff -r all get file >> name problems via genet0 but diff -r gets no problems >> comparing the content of files that it does match up (the >> vast majority)? Any clue

Re: releng/13 release/13.0.0 : odd/incorrect diff result over nfs (in a zfs file systems context)

2021-05-21 Thread Rick Macklem
Mark Millard wrote: [stuff snipped] >Well, why is it that ls -R, find, and diff -r all get file >name problems via genet0 but diff -r gets no problems >comparing the content of files that it does match up (the >vast majority)? Any clue how could the problems possibly >be unique to the handling of

Re: releng/13 release/13.0.0 : odd/incorrect diff result over nfs (in a zfs file systems context)

2021-05-21 Thread Mark Millard via freebsd-stable
On 2021-May-21, at 09:00, Rick Macklem wrote: > Mark Millard wrote: >> On 2021-May-20, at 22:19, Rick Macklem wrote: > [stuff snipped] >>> ps: I do not think that r367492 could cause this, but it would be >>>nice if you try a kernel with the r367492 patch reverted. >>>It is currently

Re: releng/13 release/13.0.0 : odd/incorrect diff result over nfs (in a zfs file systems context)

2021-05-21 Thread Rick Macklem
Mark Millard wrote: >On 2021-May-20, at 22:19, Rick Macklem wrote: [stuff snipped] >> ps: I do not think that r367492 could cause this, but it would be >> nice if you try a kernel with the r367492 patch reverted. >> It is currently in all of releng13, stable13 and main, although >>

Re: releng/13 release/13.0.0 : odd/incorrect diff result over nfs (in a zfs file systems context) [RPi4B genet0 involved in problem]

2021-05-21 Thread Mark Millard via freebsd-stable
[Looks like the RPi4B genet0 handling is involved.] On 2021-May-20, at 22:56, Mark Millard wrote: > > On 2021-May-20, at 22:19, Rick Macklem wrote: > >> Ok, so it isn't related to "soft". >> I am wondering if it is something specific to what >> "diff -r" does? >> >> Could you try: >> # cd

Re: releng/13 release/13.0.0 : odd/incorrect diff result over nfs (in a zfs file systems context)

2021-05-20 Thread Mark Millard via freebsd-stable
On 2021-May-20, at 22:19, Rick Macklem wrote: > Ok, so it isn't related to "soft". > I am wondering if it is something specific to what > "diff -r" does? > > Could you try: > # cd /usr/ports > # ls -R > /tmp/x > # cd /mnt > # ls -R > /tmp/y > # cd /tmp > # diff -u -p x y > --> To see if "ls
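
A minimal sketch of the comparison being suggested here, assuming /usr/ports is the local ZFS tree and /mnt is the same tree over NFS (paths are illustrative):

  # cd /usr/ports && ls -R > /tmp/x   # listing taken directly from the ZFS file system
  # cd /mnt && ls -R > /tmp/y         # the same tree read back over the NFS mount
  # cd /tmp && diff -u -p x y         # any hunks point at names the NFS path is mangling

If the two listings differ, the name corruption is on the directory-reading path rather than in file-data transfers, which is what the thread is trying to isolate.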

Re: releng/13 release/13.0.0 : odd/incorrect diff result over nfs (in a zfs file systems context)

2021-05-20 Thread Rick Macklem
__ From: Mark Millard Sent: Friday, May 21, 2021 12:40 AM To: Rick Macklem Cc: FreeBSD-STABLE Mailing List Subject: Re: releng/13 release/13.0.0 : odd/incorrect diff result over nfs (in a zfs file systems context) CAUTION: This email originated from outside of the University of Guelph.

Re: releng/13 release/13.0.0 : odd/incorrect diff result over nfs (in a zfs file systems context)

2021-05-20 Thread Mark Millard via freebsd-stable
a rather small EtherNet network. >>> >>>> rick >>>> >>>> >>>> From: owner-freebsd-sta...@freebsd.org >>>> on behalf of Rick Macklem >>>> Sent: Thursday, May 20, 2021 8:55 PM

Re: releng/13 release/13.0.0 : odd/incorrect diff result over nfs (in a zfs file systems context)

2021-05-20 Thread Mark Millard via freebsd-stable
d network. It is not >> outward facing. It is a rather small EtherNet network. >> >>> rick >>> >>> >>> From: owner-freebsd-sta...@freebsd.org >>> on behalf of Rick Macklem >>> Sent: Th

Re: releng/13 release/13.0.0 : odd/incorrect diff result over nfs (in a zfs file systems context)

2021-05-20 Thread Mark Millard via freebsd-stable
f of Rick Macklem >> Sent: Thursday, May 20, 2021 8:55 PM >> To: FreeBSD-STABLE Mailing List; Mark Millard >> Subject: Re: releng/13 release/13.0.0 : odd/incorrect diff result over nfs >> (in a zfs file systems context) >> >> Mark Millard wrote: >>>

Re: releng/13 release/13.0.0 : odd/incorrect diff result over nfs (in a zfs file systems context)

2021-05-20 Thread Mark Millard via freebsd-stable
ehalf of Rick Macklem > Sent: Thursday, May 20, 2021 8:55 PM > To: FreeBSD-STABLE Mailing List; Mark Millard > Subject: Re: releng/13 release/13.0.0 : odd/incorrect diff result over nfs > (in a zfs file systems context) > > Mark Millard wrote: >> [I warn that I'm a fairly

Re: releng/13 release/13.0.0 : odd/incorrect diff result over nfs (in a zfs file systems context)

2021-05-20 Thread Rick Macklem
reebsd-sta...@freebsd.org on behalf of Rick Macklem Sent: Thursday, May 20, 2021 8:55 PM To: FreeBSD-STABLE Mailing List; Mark Millard Subject: Re: releng/13 release/13.0.0 : odd/incorrect diff result over nfs (in a zfs file systems context) Mark Millard wrote: >[I warn that I'm a fairly minimal user o

Re: releng/13 release/13.0.0 : odd/incorrect diff result over nfs (in a zfs file systems context)

2021-05-20 Thread Rick Macklem
-first-parent --count for merge-base) > ># uname -apKU >FreeBSD CA72_4c8G_ZFS 13.0-RELEASE FreeBSD 13.0-RELEASE #0 >releng/13.0-n244733-ea31abc261ff-dirty: Thu Apr 29 >21:53:20 PDT 2021 >root@CA72_4c8G_ZFS:/usr/obj/BUILDs/13_0R-CA72-nodbg-clang/usr/13_0R->src/arm64.aarc

releng/13 release/13.0.0 : odd/incorrect diff result over nfs (in a zfs file systems context)

2021-05-20 Thread Mark Millard via freebsd-stable
: release/13.0.0, freebsd/releng/13.0) 13.0: update to RELEASE n244733 (--first-parent --count for merge-base) From zfs list commands (one machine per line shown): zopt0/usr/ports 2.13G 236G 2.13G /usr/ports zroot/usr/ports 2.13G 113G

Re: Does FreeBSD 13 disable the VDEV cache in ZFS ?

2021-05-14 Thread Pete French
As predicted, all zero, apart from bshift which is 16 and max which is 16384 > https://openzfs.github.io/openzfs-docs/Performance%20and%20Tuning/Module%20Parameters.html#zfs-vdev-cache-size > > Quote: > > Note: with the current ZFS code, the vdev cache is not helpful and
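
A quick way to confirm the same thing on a running system (the exact sysctl names are an assumption, hence the broad grep):

  # sysctl -a | grep -i 'vdev.*cache'   # size/max of 0 means the vdev cache is effectively disabled
  # zfs-stats -D                        # the sysutils/zfs-stats summary quoted in this thread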

Re: Does FreeBSD 13 disable the VDEV cache in ZFS ?

2021-05-14 Thread Stefan Esser
On 14.05.21 at 10:34, Pete French wrote: > > Am just upgrading my machines, and have noticed an oddity. > This is on a machine running 12.2 > > # zfs-stats -D > > -------- > ZFS Subsystem Report

Does FreeBSD 13 disable the VDEV cache in ZFS ?

2021-05-14 Thread Pete French
Am just upgrading my machines, and have noticed an oddity. This is on a machine running 12.2 # zfs-stats -D ZFS Subsystem Report  Fri May 14 08:30:50 2021

Re: Install of 13.0-RELEASE i386 with ZFS root hangs up

2021-05-08 Thread Konstantin Belousov
On Sat, May 08, 2021 at 06:33:02PM +0700, Eugene Grosbein wrote: > 08.05.2021 2:52, Konstantin Belousov wrote: > > > i386 kernel uses memory up to 24G since 13.0. > > > > PAE only means that devices that can access full 64bit address are allowed > > to avoid dma bouncing. > > Maybe you could

Re: Loading zfs module results in hangup on i386

2021-05-08 Thread Yasuhiro Kimura
From: Yasuhiro Kimura Subject: Re: Loading zfs module results in hangup on i386 Date: Sat, 08 May 2021 07:44:15 +0900 (JST) >> Now I think I know what the source of the problem is. After all, on a >> 13.0-RELEASE i386 system simply loading the zfs module results in a system >> hang up

Re: Install of 13.0-RELEASE i386 with ZFS root hangs up

2021-05-08 Thread Eugene Grosbein
08.05.2021 2:52, Konstantin Belousov wrote: > i386 kernel uses memory up to 24G since 13.0. > > PAE only means that devices that can access full 64bit address are allowed > to avoid dma bouncing. Maybe you could tell something on similar topic? There is FreeBSD 12.2-STABLE r369567 Base12 amd64

Re: Loading zfs module results in hangup on i386

2021-05-07 Thread Yasuhiro Kimura
From: Yasuhiro Kimura Subject: Loading zfs module results in hangup on i386 (Re: Install of 13.0-RELEASE i386 with ZFS root hangs up) Date: Sat, 08 May 2021 07:31:47 +0900 (JST) > Now I think I know what the source of the problem is. After all, on a > 13.0-RELEASE i386 system simply loadi

Loading zfs module results in hangup on i386 (Re: Install of 13.0-RELEASE i386 with ZFS root hangs up)

2021-05-07 Thread Yasuhiro Kimura
From: Yasuhiro Kimura Subject: Install of 13.0-RELEASE i386 with ZFS root hangs up Date: Fri, 07 May 2021 21:47:59 +0900 (JST) > Hello, > > Has anyone succeeded in installing 13.0-RELEASE i386 with ZFS root? > > I tried this with VirtualBox and VMware Player on Windows with

Re: Install of 13.0-RELEASE i386 with ZFS root hangs up

2021-05-07 Thread Konstantin Belousov
On Fri, May 07, 2021 at 09:48:07AM -0700, Freddie Cash wrote: > On Fri, May 7, 2021 at 5:49 AM Yasuhiro Kimura wrote: > > > Has anyone succeeded in installing 13.0-RELEASE i386 with ZFS root? > > > > I tried this with VirtualBox and VMware Player on Windows with

Re: Install of 13.0-RELEASE i386 with ZFS root hangs up

2021-05-07 Thread Freddie Cash
On Fri, May 7, 2021 at 5:49 AM Yasuhiro Kimura wrote: > Has anyone succeeded in installing 13.0-RELEASE i386 with ZFS root? > > I tried this with VirtualBox and VMware Player on Windows with > the following VM configuration. > > * 4 CPUs > * 8GB memory > * 100GB disk > * Brid

Re: Install of 13.0-RELEASE i386 with ZFS root hangs up

2021-05-07 Thread Yasuhiro Kimura
From: 8zwk...@oldach.net (Helge Oldach) Subject: Re: Install of 13.0-RELEASE i386 with ZFS root hangs up Date: Fri, 7 May 2021 15:41:45 +0200 (CEST) > Yasuhiro Kimura wrote on Fri, 07 May 2021 14:47:59 +0200 (CEST): >> Has anyone succeeded in installing 13.0-RELEASE i386 with

Install of 13.0-RELEASE i386 with ZFS root hangs up

2021-05-07 Thread Yasuhiro Kimura
Hello, Has anyone succeeded in installing 13.0-RELEASE i386 with ZFS root? I tried this with VirtualBox and VMware Player on Windows with the following VM configuration. * 4 CPUs * 8GB memory * 100GB disk * Bridge mode NIC But in both cases, the VM gets a high CPU load and hangs up after I moved to 'YES

Re: zpool list -p 's FREE vs. zfs list -p's AVAIL ? FREE-AVAIL == 6_675_374_080 (199G zroot pool)

2021-05-05 Thread Mark Millard via freebsd-stable
t is on zfs0 above. >> >> # zpool list -p >> NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG >> CAP DEDUP HEALTH ALTROOT >> zroot 213674622976 71075655680 142598967296 - - 28 >> 33 1.00 ONLINE -

Re: zpool list -p 's FREE vs. zfs list -p's AVAIL ? FREE-AVAIL == 6_675_374_080 (199G zroot pool)

2021-05-05 Thread Yuri Pankov
OINT EXPANDSZ FRAG > CAP DEDUP HEALTH ALTROOT > zroot 213674622976 71075655680 142598967296 - - 28 > 33 1.00 ONLINE - > > So FREE: 142_598_967_296 > (using _ to make it more readable) > > # zfs list -p zroot > NAME

zpool list -p 's FREE vs. zfs list -p's AVAIL ? FREE-AVAIL == 6_675_374_080 (199G zroot pool)

2021-05-05 Thread Mark Millard via freebsd-stable
296 - - 28 33 1.00 ONLINE - So FREE: 142_598_967_296 (using _ to make it more readable) # zfs list -p zroot NAME USED AVAIL REFER MOUNTPOINT zroot 71073697792 135923593216 98304 /zroot So AVAIL: 135_923_593_216 FREE-AVAIL == 6_675_374_080 The questi
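
The quoted numbers can be reproduced directly from the two commands (a sketch; pool name zroot as above):

  # zpool list -p -o free zroot     # 142598967296
  # zfs list -p -o avail zroot      # 135923593216
  # echo $((142598967296 - 135923593216))
  6675374080

The gap is roughly 1/32 of the 213674622976-byte pool, which is consistent with the pool-wide slop-space reservation rather than with anything being lost.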

Re: ZFS rename with associated snapshot present: odd error message

2021-05-05 Thread Mark Millard via freebsd-stable
On 2021-May-5, at 05:28, Mark Millard wrote: > On 2021-May-5, at 02:47, Andriy Gapon wrote: > >> On 05/05/2021 01:59, Mark Millard via freebsd-current wrote: >>> I had a: >>> # zfs list -tall >>> NAME USED AVAIL

Re: ZFS rename with associated snapshot present: odd error message

2021-05-05 Thread Mark Millard via freebsd-stable
On 2021-May-5, at 02:47, Andriy Gapon wrote: > On 05/05/2021 01:59, Mark Millard via freebsd-current wrote: >> I had a: >> # zfs list -tall >> NAME USED AVAIL REFER MOUNTPOINT >> . . . >> zroot/DESTDIRs/13_0R-CA72

Re: ZFS rename with associated snapshot present: odd error message

2021-05-05 Thread Andriy Gapon
On 05/05/2021 01:59, Mark Millard via freebsd-current wrote: I had a: # zfs list -tall NAME USED AVAIL REFER MOUNTPOINT . . . zroot/DESTDIRs/13_0R-CA72-instwrld-norm 1.44G 117G 96K /usr/obj/DESTDIRs/13_0R-CA72-instwrld-norm

ZFS rename with associated snapshot present: odd error message

2021-05-04 Thread Mark Millard via freebsd-stable
I had a: # zfs list -tall NAME USED AVAIL REFER MOUNTPOINT . . . zroot/DESTDIRs/13_0R-CA72-instwrld-norm 1.44G 117G 96K /usr/obj/DESTDIRs/13_0R-CA72-instwrld-norm zroot/DESTDIRs/13_0R-CA72-instwrld-norm@dirty-style 1.44G
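
For context, renaming a dataset normally carries its snapshots along with it, so no separate snapshot rename is needed (a sketch with illustrative names):

  # zfs rename zroot/DESTDIRs/13_0R-CA72-instwrld-norm zroot/DESTDIRs/13_0R-CA72-instwrld-new
  # zfs list -t snapshot -r zroot/DESTDIRs/13_0R-CA72-instwrld-new   # @dirty-style should follow the new name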

Re: zfs native encryption best practices on RELENG13

2021-04-26 Thread Alan Somers
> > < > https://forums.freebsd.org/threads/freebsd-13-openzfs-encrypted-thumb-drive.80008/ > > > > > > > > > > Thanks, good points to consider! I wonder too if there are any > performance differences > > ---Mike > Yes there are. Firstl

Re: zfs native encryption best practices on RELENG13

2021-04-26 Thread mike tancsa
On 4/23/2021 11:47 PM, Peter Libassi wrote: > Yes, I’ve come to the same conclusion. This should be used on a > data-zpool and not on the system-pool (zroot). Encryption is per > dataset. Also, I found that if the encrypted dataset is not mounted for > some reason you will be writing to the parent

Re: zfs native encryption best practices on RELENG13

2021-04-26 Thread mike tancsa
On 4/23/2021 5:23 PM, Xin Li wrote: > On 4/23/21 13:53, mike tancsa wrote: >> Starting to play around with RELENG_13 and wanted to explore ZFS' built-in >> encryption.  Is there a best-practices doc on how to do full disk >> encryption anywhere that's not GELI based?  There

(D29934) Reorder commented steps in UPDATING following sequential order. (was: etcupdate -p vs. root on zfs (and bectl use and such): no /usr/src/etc/master.passwd (for example))

2021-04-25 Thread Graham Perrin
On 23/04/2021 08:39, Mark Millard via freebsd-current wrote: [3] With regard to mounting ZFS file systems in single user mode What's currently footnote 3 will probably become footnote 4, please see: <https://reviews.freebsd.org/D29934#inline-186

Re: (D29934) Reorder commented steps in UPDATING following sequential order. (was: etcupdate -p vs. root on zfs (and bectl use and such): no /usr/src/etc/master.passwd (for example))

2021-04-25 Thread Mark Millard via freebsd-stable
On 2021-Apr-25, at 08:14, Graham Perrin wrote: > On 23/04/2021 08:39, Mark Millard via freebsd-current wrote: > >> [3] > > > With regard to mounting ZFS file systems in single user mode > > What's currently footnote 3 will probabl

Re: zfs native encryption best practices on RELENG13

2021-04-24 Thread Andrea Venturoli
On 4/23/21 11:23 PM, Xin Li via freebsd-stable wrote: I think the loader does not support the native OpenZFS encryption yet. However, you can encrypt non-essential datasets on a boot pool (that is, if com.datto:encryption is "active" AND the bootfs dataset is not encrypted, you can still boot from
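
A quick way to check the two conditions named here before relying on them (a sketch; the dataset names are the usual bsdinstall ones and may differ):

  # zpool get feature@encryption zroot      # "enabled" or "active"
  # zpool get bootfs zroot                  # which dataset the loader boots from
  # zfs get -r encryption zroot/ROOT        # the bootfs itself should report "off"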

Re: zfs native encryption best practices on RELENG13

2021-04-23 Thread Peter Libassi
> On 23 Apr 2021, at 23:23, Xin Li via freebsd-stable wrote: > > On 4/23/21 13:53, mike tancsa wrote: >> Starting to play around with RELENG_13 and wanted to explore ZFS' built-in >> encryption. Is there a best-practices doc on how to do full disk >> encryption

Re: zfs native encryption best practices on RELENG13

2021-04-23 Thread Xin Li via freebsd-stable
On 4/23/21 13:53, mike tancsa wrote: > Starting to play around with RELENG_13 and wanted to explore ZFS' built-in > encryption.  Is there a best-practices doc on how to do full disk > encryption anywhere that's not GELI based?  There are lots for > GELI, > but nothing I could

zfs native encryption best practices on RELENG13

2021-04-23 Thread mike tancsa
Starting to play around with RELENG_13 and wanted to explore ZFS' built-in encryption.  Is there a best-practices doc on how to do full disk encryption anywhere that's not GELI based?  There are lots for GELI, but nothing I could find for native OpenZFS encryption on FreeBSD, i.e. box gets rebooted
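
For the per-dataset (non-GELI) approach being asked about, the OpenZFS side looks roughly like this; a sketch only, with an illustrative dataset name and a passphrase prompt rather than a keyfile:

  # zfs create -o encryption=on -o keyformat=passphrase -o keylocation=prompt zroot/secure
  # zfs get encryption,keystatus,keylocation zroot/secure
  (after a reboot the key has to be loaded before the dataset can be mounted:)
  # zfs load-key zroot/secure
  # zfs mount zroot/secure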

etcupdate -p vs. root on zfs (and bectl use and such): no /usr/src/etc/master.passwd (for example)

2021-04-23 Thread Mark Millard via freebsd-stable
FYI: The default bsdinstall result for auto ZFS that I tried has a separate zroot/usr/src dataset, which zfs mounts at /usr/src . UPDATING and such places indicate sequences like: (think etcupdate where it lists mergemaster and ignore -F and -Fi) make buildworld make
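
The sequence being referred to is roughly the standard UPDATING one, with etcupdate standing in for mergemaster (a sketch; see UPDATING itself for the exact flags and footnotes):

  # make buildworld
  # make buildkernel KERNCONF=GENERIC
  # make installkernel KERNCONF=GENERIC
  <reboot to single-user mode>
  # etcupdate -p           # pre-installworld pass, the mergemaster -p equivalent
  # make installworld
  # etcupdate              # merge the remaining /etc changes
  # make delete-old
  <reboot>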

Re: Frequent disk I/O stalls while building (poudriere), processes in "zfs tear" state

2021-04-16 Thread Felix Palmen
blem is solved for me! > > > > I'd still be curious about what might be the cause, and, what this state > > "zfs tear" actually means. But that's kind of an "academic interest" > > now. > > Most likely your other processes are pre-empting your build, which is

Re: Frequent disk I/O stalls while building (poudriere), processes in "zfs tear" state

2021-04-15 Thread Dewayne Geraghty
ing a test with idprio 0 instead, which still seems > to have the desired effect, and so far, I didn't have any of these > stalls. If this persists, the problem is solved for me! > > I'd still be curious about what might be the cause, and, what this state > "zfs tear"
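
The workaround amounts to launching the build at idle priority so it cannot starve everything else (a sketch; the jail name and list file are illustrative):

  # idprio 0 poudriere bulk -j 13amd64 -f /usr/local/etc/poudriere.d/pkglist.txt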

Re: Frequent disk I/O stalls while building (poudriere), processes in "zfs tear" state

2021-04-15 Thread Felix Palmen
ave any of these stalls. If this persists, the problem is solved for me! I'd still be curious about what might be the cause, and, what this state "zfs tear" actually means. But that's kind of an "academic interest" now. -- Dipl.-Inform. Felix Palmen ,.//.. {we

Frequent disk I/O stalls while building (poudriere), processes in "zfs tear" state

2021-04-12 Thread Felix Palmen
Hello all, since following the releng/13.0 branch, I experience stalled disk I/O quite often (ca. once per minute) while building packages with poudriere. What I can see in this case is the CPU going almost idle, and several processes shown in `top` in state "zfs te" (and procstat

Re: ZFS (in virtual machine): $HOME being a dataset causes xauth to timeout- access delays?

2021-04-02 Thread John Kennedy
On Thu, Apr 01, 2021 at 07:18:56PM -1000, parv/freebsd wrote: > On Thu, Apr 1, 2021 at 3:38 PM parv/freebsd wrote: > > I am wondering if $SRC_BASE, $MAKEOBJDIRPREFIX, & $WRKDIRPREFIX being > ZFS datasets now would increase compile time. I will find that out in > a few weeks (in

ZFS (in virtual machine): $HOME being a dataset causes xauth to timeout- access delays?

2021-04-02 Thread parv
Hi there, I have FreeBSD 12-STABLE in VirtualBox 5.2.44 on Windows 10. All the "disks" are file-backed virtual disks. I noticed that after making $HOME a ZFS dataset, there were delays ... - generally in start of Xorg; - exhibited by xauth (after using "startx") error me

Re: ZFS (in virtual machine): $HOME being a dataset causes xauth to timeout- access delays?

2021-04-01 Thread parv/freebsd
On Thu, Apr 1, 2021 at 3:38 PM parv/freebsd wrote: I am wondering if $SRC_BASE, $MAKEOBJDIRPREFIX, & $WRKDIRPREFIX being ZFS datasets now would increase compile time. I will find that out in a few weeks (in the case of buildworld & kernel) as earlier I had both $SRC_BASE & $MAKEOBJDIRPRE

ZFS (in virtual machine): $HOME being a dataset causes xauth to timeout- access delays?

2021-04-01 Thread parv/freebsd
Hi there, I have FreeBSD 12-STABLE in VirtualBox 5.2.44 on Windows 10. All the "disks" are file-backed virtual disks. I noticed that after making $HOME a ZFS dataset, there were delays ... - generally in start of Xorg; - exhibited by xauth (after using "startx") error me

Re: kldload zfs spins the system after upgrading from 12.2 to 13-BETA

2021-03-20 Thread Andriy Gapon
09, Yoshihiro Ota wrote: >>>>> Hi all, >>>>> >>>>> I'm upgrading from 12.2-RELEASE to 13-BETA/RC one by one. >>>>> >>>>> After upgrading one in VMWare, 'zfs mount -a' hangs the system. >>>>> I don't have boottime

Request: Mount zfs encrypted datasets at boot? Re: FreeBSD 13.0-RC2 Now Available

2021-03-20 Thread Ruben van Staveren via freebsd-stable
Hi, > On 13 Mar 2021, at 1:11, Glen Barber wrote: > > The second RC build of the 13.0-RELEASE release cycle is now available. Might it be interesting to change the zfs mount -a / zfs unmount -a in /etc/rc.d/zfs to zfs mount -al / zfs unmount -au so that filesystems using th
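
The proposed change boils down to adding -l/-u so keys are loaded at mount time and unloaded again on unmount; the manual equivalent is (a sketch):

  # zfs mount -al      # load keys (prompting if keylocation=prompt), then mount all datasets
  # zfs unmount -au    # unmount all datasets and unload their keys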

Re: kldload zfs spins the system after upgrading from 12.2 to 13-BETA

2021-03-19 Thread Yoshihiro Ota
>>> I'm upgrading from 12.2-RELEASE to 13-BETA/RC one by one. > >>> > >>> After upgrading one in VMWare, 'zfs mount -a' hangs the system. > >>> I don't have boottime zfs mount on nor do I have zfsroot. > >>> I just simply ran install world/k

Re: Updating to 13-stable and existing ZFS pools: any gotchas?

2021-03-17 Thread Dean E. Weimer via freebsd-stable
On 2021-03-17 9:59 am, tech-lists wrote: On Sun, Mar 14, 2021 at 09:59:21AM +0100, Stefan Bethke wrote: I'm planning to upgrade three production machines with existing ZFS pools to 13-stable. Is there anything I need to pay attention to wrt OpenZFS? Or should it be fully transparent, apart

Re: Updating to 13-stable and existing ZFS pools: any gotchas?

2021-03-17 Thread tech-lists
On Sun, Mar 14, 2021 at 09:59:21AM +0100, Stefan Bethke wrote: I'm planning to upgrade three production machines with existing ZFS pools to 13-stable. Is there anything I need to pay attention to wrt OpenZFS? Or should it be fully transparent, apart from updating loader? My (limited) testing

Re: Updating to 13-stable and existing ZFS pools: any gotchas?

2021-03-14 Thread Mathias Picker
three production machines with existing ZFS pools to 13-stable. Is there anything I need to pay attention to wrt OpenZFS? Or should it be fully transparent, apart from updating loader? My (limited) testing with VMs went without a hitch, but I want to make sure I don't paint myself

Updating to 13-stable and existing ZFS pools: any gotchas?

2021-03-14 Thread Stefan Bethke
I'm planning to upgrade three production machines with existing ZFS pools to 13-stable. Is there anything I need to pay attention to wrt OpenZFS? Or should it be fully transparent, apart from updating loader? My (limited) testing with VMs went without a hitch, but I want to make sure I don't
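
One gotcha worth spelling out: existing pools import and run unchanged, but enabling the new feature flags before refreshing the boot code can leave the machine unbootable. A hedged sketch (disk and partition index are assumptions; UEFI systems update loader.efi on the ESP instead):

  # zpool status                                   # pools work as-is, no on-disk change yet
  # gpart bootcode -p /boot/gptzfsboot -i 1 ada0   # refresh the ZFS boot blocks first
  # zpool upgrade zroot                            # only then opt in to the new OpenZFS features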

Re: kldload zfs spins the system after upgrading from 12.2 to 13-BETA

2021-03-12 Thread Yoshihiro Ota
>>> I'm upgrading from 12.2-RELEASE to 13-BETA/RC one by one. > >>> > >>> After upgrading one in VMWare, 'zfs mount -a' hangs the system. > >>> I don't have boottime zfs mount on nor do I have zfsroot. > >>> I just simply ran install world/k

Re: kldload zfs spins the system after upgrading from 12.2 to 13-BETA

2021-03-09 Thread Andriy Gapon
On 08/03/2021 05:24, Yoshihiro Ota wrote: > On Sun, 7 Mar 2021 00:09:33 +0200 > Andriy Gapon wrote: > >> On 06/03/2021 20:09, Yoshihiro Ota wrote: >>> Hi all, >>> >>> I'm upgrading from 12.2-RELEASE to 13-BETA/RC one by one. >>> >>> Af

Re: kldload zfs spins the system after upgrading from 12.2 to 13-BETA

2021-03-07 Thread Yoshihiro Ota
On Sun, 7 Mar 2021 00:09:33 +0200 Andriy Gapon wrote: > On 06/03/2021 20:09, Yoshihiro Ota wrote: > > Hi all, > > > > I'm upgrading from 12.2-RELEASE to 13-BETA/RC one by one. > > > > After upgrading one in VMWare, 'zfs mount -a' hangs the system. > >

Re: zfs mount -a spins the system after upgrading from 12.2 to 13-BETA

2021-03-06 Thread Andriy Gapon
On 06/03/2021 20:09, Yoshihiro Ota wrote: > Hi all, > > I'm upgrading from 12.2-RELEASE to 13-BETA/RC one by one. > > After upgrading one in VMWare, 'zfs mount -a' hangs the system. > I don't have boottime zfs mount on nor do I have zfsroot. > I just simply ran

zfs mount -a spins the system after upgrading from 12.2 to 13-BETA

2021-03-06 Thread Yoshihiro Ota
Hi all, I'm upgrading from 12.2-RELEASE to 13-BETA/RC one by one. After upgrading one in VMWare, 'zfs mount -a' hangs the system. I don't have boottime zfs mount on nor do I have zfsroot. I just simply ran install world/kernel and mergemaster. Are there any special steps we need to take before

Re: lots of "no such file or directory" errors in zfs filesystem

2021-02-23 Thread Peter Jeremy via freebsd-stable
On 2021-Feb-23 11:30:58 -0600, Chris Anderson wrote: >nope, it led a pretty boring life. that zfs filesystem was created on that >server and has been on the same two mirrored disks for its lifetime. Does the server have ECC RAM? Possibly it's a bitflip somewhere before the data got t

Re: lots of "no such file or directory" errors in zfs filesystem

2021-02-23 Thread Chris Anderson
LE contiguous unique double size=800L/800P > > birth=46916371L/46916371P fill=908537 > > cksum=11fdd21d1d:13cb24c87a6e:da0c9bf1b5df3:715ab2ec45b7b09 > > > > > > Object lvl iblk dblk dsize dnsize lsize %full type > > > > 382681 128K

Re: lots of "no such file or directory" errors in zfs filesystem

2021-02-23 Thread Andriy Gapon
f3:715ab2ec45b7b09 > > >     Object  lvl   iblk   dblk  dsize  dnsize  lsize   %full  type > >      38268    1   128K     1K      0    512     1K  100.00  ZFS directory > >                                                264   bonus  ZFS znode > >         dnode flag

Re: lots of "no such file or directory" errors in zfs filesystem

2021-02-22 Thread Chris Anderson
537 objects, rootbp DVA[0]=<0:13210311000:1000> DVA[1]=<0:18b9a02c000:1000> [L0 DMU objset] fletcher4 uncompressed LE contiguous unique double size=800L/800P birth=46916371L/46916371P fill=908537 cksum=11fdd21d1d:13cb24c87a6e:da0c9bf1b5df3:715ab2ec45b7b09 Object lvl iblk dblk ds
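
The dump above looks like zdb output; a sketch of the kind of invocation that produces it, for anyone reproducing this inspection (pool/dataset name and object number are placeholders):

  # zdb -dddd tank/dataset          # dump object metadata for the whole dataset
  # zdb -dddd tank/dataset 38268    # or just the directory object quoted in this thread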

Re: lots of "no such file or directory" errors in zfs filesystem

2021-02-22 Thread Andriy Gapon
On 22/02/2021 16:20, Chris Anderson wrote: > On Mon, Feb 22, 2021 at 1:36 AM Andriy Gapon > wrote: > > On 22/02/2021 09:31, Chris Anderson wrote: > > None of these files are especially important to me, however I was > wondering > > if there would be any

Re: lots of "no such file or directory" errors in zfs filesystem

2021-02-22 Thread Chris Anderson
On Mon, Feb 22, 2021 at 1:36 AM Andriy Gapon wrote: > On 22/02/2021 09:31, Chris Anderson wrote: > > None of these files are especially important to me, however I was > wondering > > if there would be any benefit to the community from trying to debug this > > issue further to understand what

Re: lots of "no such file or directory" errors in zfs filesystem

2021-02-21 Thread Andriy Gapon
On 22/02/2021 09:31, Chris Anderson wrote: > None of these files are especially important to me, however I was wondering > if there would be any benefit to the community from trying to debug this > issue further to understand what might be going wrong. Yes. -- Andriy Gapon

lots of "no such file or directory" errors in zfs filesystem

2021-02-21 Thread Chris Anderson
I'm in the process of decommissioning an old zfs-based file server and I noticed around a dozen files whose directory entries fail with "No such file or directory" when trying to read them. I can't remember what the original version of freebsd installed was, but it's been in

Re: stable/13 and zfs <> openzfs

2021-02-10 Thread tech-lists
On Tue, Feb 09, 2021 at 03:40:43PM -0700, Alan Somers wrote: The new ZFS is backwards compatible with the old one. So your 12.2-p3 system will be able to zfs send, and the stable/13 will be able to zfs recv. You can go the other direction too, if you're careful to create the new pool using
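
A sketch of the direction described as safe (host, pool and snapshot names are illustrative):

  # zfs snapshot -r tank@migrate
  # zfs send -R tank@migrate | ssh newhost zfs recv -Fdu newtank

Going the other way only works if the stable/13 pool avoids features the 12.2 kernel cannot read, e.g. by creating it with zpool create -d and enabling features selectively.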

Re: stable/13 and zfs <> openzfs

2021-02-09 Thread Alan Somers
table/13 > 3. destroy the original zpool [1] > 4. build new zpool and restore data > > stable/13 seems to have openzfs as the default zfs. Is there anything > more I need to do? [2] The system is handling remote zfs receive from a > 12.2-p3 system that uses the earlier default Free

stable/13 and zfs <> openzfs

2021-02-09 Thread tech-lists
stable/13 seems to have openzfs as the default zfs. Is there anything more I need to do? [2] The system is handling remote zfs receive from a 12.2-p3 system that uses the earlier default FreeBSD zfs. [3] [1] the original zpool was going to be destroyed anyway due to the wrong ashift for these new disks

Re: 12.2-RELEASE buildworld fail at cddl/zfs / libuutil.so

2021-01-27 Thread Tomasz CEDRO
On 27.01.2021 19:31, parv/freebsd wrote: # git log --pretty=fuller --grep libuutil release/12.2.0..stable/12 commit 30ec3f368d Author:     Eugene Grosbein AuthorDate: Dec.2020.1206-1622 + Commit:     Eugene Grosbein CommitDate: Dec.2020.1206-1622 +     MFC r364027

Re: 12.2-RELEASE buildworld fail at cddl/zfs / libuutil.so

2021-01-27 Thread parv/freebsd
On Wed, Jan 27, 2021 at 6:10 AM Tomasz CEDRO wrote: Hi Tomasz, On 27.01.2021 12:52, parv/freebsd wrote: > ... > > # /usr/bin/time -h make buildkernel buildworld NO_CLEAN=1 > > . ^ ^ ^ ^ ^ ^ > > Perhaps try with cleaning out first buildworld,
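
The suggestion amounts to retrying without the incremental-build shortcut (a sketch):

  # cd /usr/src
  # make cleanworld                # or clear /usr/obj by hand
  # make buildworld buildkernel    # note: no NO_CLEAN=1 this time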

Re: 12.2-RELEASE buildworld fail at cddl/zfs / libuutil.so

2021-01-27 Thread Tomasz CEDRO
Hello PARV :-) On 27.01.2021 12:52, parv/freebsd wrote: > What is the git commit hash (for others)? For the 12.2-RELEASE: # git branch main * releng/12.2 stable/12 # git log --pretty=oneline | head 97eb1441ca7b602510a81fd512d830563998b3e7 PS/2 Synaptics touchpad identification fix.

Re: 12.2-RELEASE buildworld fail at cddl/zfs / libuutil.so

2021-01-27 Thread parv/freebsd
race (all) > ===> cddl/lib/libnvpair (all) > ===> cddl/lib/libumem (all) > ===> cddl/lib/libuutil (all) > ===> cddl/lib/libzfs_core (all) > ===> cddl/lib/libzfs (all) > ===> cddl/lib/libzpool (all) > ===> cddl/lib/tests (all) > ===> cddl/sbin (all)

12.2-RELEASE buildworld fail at cddl/zfs / libuutil.so

2021-01-26 Thread Tomasz CEDRO
===> cddl/lib/libzfs_core (all) ===> cddl/lib/libzfs (all) ===> cddl/lib/libzpool (all) ===> cddl/lib/tests (all) ===> cddl/sbin (all) ===> cddl/sbin/zfs (all) cc -target x86_64-unknown-freebsd12.2 --sysroot=/usr/obj/usr/src/amd64.amd64/tmp -B/usr/obj/usr/src/amd64.amd64/tmp/usr/bi

Re: zfs panic RELENG_12

2020-12-22 Thread mike tancsa
g limited to ~30G of the 64G of RAM on the box.  I removed that limit a few weeks ago after upgrading the box to RELENG_12 to pull in the OpenSSL changes.  The panic seems to happen under disk load. I have 3 zfs pools that are pretty busy receiving snapshots. One day a week, we write a full set to a 4th z

Re: zfs panic RELENG_12

2020-12-22 Thread mike tancsa
On 12/22/2020 10:07 AM, Mark Johnston wrote: > > Could you go to frame 11 and print zone->uz_name and > bucket->ub_bucket[18]? I'm wondering if the item pointer was mangled > somehow. Thank you for looking! (kgdb) frame 11 #11 0x80ca47d4 in bucket_drain (zone=0xf800037da000,

Re: zfs panic RELENG_12

2020-12-22 Thread Mark Johnston
On Tue, Dec 22, 2020 at 09:05:01AM -0500, mike tancsa wrote: > Hmmm, another one. Not sure if this is hardware as it seems different ? > > > > Fatal trap 12: page fault while in kernel mode > cpuid = 11; apic id = 0b > fault virtual address   = 0x0 > fault code  = supervisor write

Re: zfs panic RELENG_12

2020-12-22 Thread mike tancsa
   td = 0xf80004964740     p = 0xf800049f8530     dtd = #22 No locals. (kgdb) On 12/15/2020 4:39 PM, mike tancsa wrote: > Was doing a backup via zfs send | zfs recv when the box panic'd.  Its a > not so old RELENG_12 box from last week. Any ideas if this is a hardware > issue or a

zfs panic RELENG_12

2020-12-15 Thread mike tancsa
Was doing a backup via zfs send | zfs recv when the box panic'd.  It's a not-so-old RELENG_12 box from last week. Any ideas if this is a hardware issue or a bug? It's r368493 from last Wednesday. I don't see any ECC errors logged, so I don't think it's hardware. Reading symbols from /boot/kernel/kernel

12.2-RC2 crashing on leftover ZFS cache/log on SSD

2020-10-14 Thread Willem Jan Withagen
and installed that on the 2 rusty spinners. But now the SSDs have leftover ZFS stuff on them that panics the 12.2-RC2. The simple solution is to remove the SSDs from their trays, boot, and reinsert the SSDs. That works. I'll be able to clean the SSDs and redo the cache and log stuff. There are 2 things
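
Once the machine is up with the SSDs reinserted, the stale vdev labels can be wiped so they stop confusing the boot (a sketch; the device name is an assumption and labelclear is destructive):

  # zpool labelclear -f /dev/ada2   # repeat for each SSD still carrying old cache/log labels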

Re: spa_namespace_lock and concurrent zfs commands

2020-09-09 Thread Eugene Grosbein
09.09.2020 19:29, Eugene M. Zheganin wrote: > I'm using a sort of FreeBSD ZFS appliance with a custom API, and I'm suffering > from huge timeouts when large numbers (dozens, actually) of concurrent zfs/zpool > commands are issued (get/create/destroy/snapshot/clone mostly). > > Are the

Re: spa_namespace_lock and concurrent zfs commands

2020-09-09 Thread Eugene M. Zheganin
On 09.09.2020 17:29, Eugene M. Zheganin wrote: Hello, I'm using a sort of FreeBSD ZFS appliance with a custom API, and I'm suffering from huge timeouts when large numbers (dozens, actually) of concurrent zfs/zpool commands are issued (get/create/destroy/snapshot/clone mostly). Are there any tunables

spa_namespace_lock and concurrent zfs commands

2020-09-09 Thread Eugene M. Zheganin
Hello, I'm using a sort of FreeBSD ZFS appliance with a custom API, and I'm suffering from huge timeouts when large numbers (dozens, actually) of concurrent zfs/zpool commands are issued (get/create/destroy/snapshot/clone mostly). Are there any tunables that could help mitigate this? Once I took part

Re: zfs meta data slowness

2020-07-31 Thread mike tancsa
> > Also, make sure you invoke top while the "zfs" command is running. > Also, procstat -kk for the pid of the "zfs" command would be useful (but may take > pretty long). > > I suppose it blocks waiting for some kernel lock and procstat would show > details. .txt file i
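
Spelled out, the diagnostic steps suggested here look like this (a sketch):

  # zfs list -t snapshot > /dev/null &   # the slow command, backgrounded
  # top -b                               # is it on CPU, or sitting in a wait state?
  # procstat -kk $(pgrep -n zfs)         # kernel stacks: shows which lock or I/O it is blocked on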

Re: zfs meta data slowness

2020-07-22 Thread Eugene Grosbein
22.07.2020 20:02, mike tancsa wrote: > > On 7/22/2020 1:29 AM, Eugene Grosbein wrote: >> 22.07.2020 2:37, mike tancsa wrote: >> >>>> Something else special about the setup. >>>> output of "top -b" >>>> >>> ports are right

Re: zfs meta data slowness

2020-07-22 Thread mike tancsa
On 7/22/2020 1:29 AM, Eugene Grosbein wrote: > 22.07.2020 2:37, mike tancsa wrote: > >>> Something else special about the setup. >>> output of "top -b" >>> >> ports are right now being built in a VM, but the problem (zrepl hanging) >> and

Re: zfs meta data slowness

2020-07-22 Thread mike tancsa
ut that may >>> not match your business case. >> As it's a backup server, it's sort of the point to have all those snapshots. > I'm the last guy who should be commenting on ZFS, since I never use it. > However, it is my understanding that ZFS "pseudo automounts" each > snaps

Re: zfs meta data slowness

2020-07-22 Thread Ronald Klop
From: mike tancsa Date: Tuesday, 21 July 2020 21:37 To: Ronald Klop, FreeBSD-STABLE Mailing List Subject: Re: zfs meta data slowness Hi, Thanks for the response. Reply inline. On 7/20/2020 9:04 AM, Ronald Klop wrote: > Hi, > > My first suggestion would be to rem

Re: zfs meta data slowness

2020-07-21 Thread Eugene Grosbein
22.07.2020 2:37, mike tancsa wrote: >> Something else special about the setup. >> output of "top -b" >> > > ports are right now being built in a VM, but the problem (zrepl hanging) > and zfs list -t snapshots taking forever happens regardless > >

Re: zfs meta data slowness

2020-07-21 Thread Rick Macklem
r, it's sort of the point to have all those snapshots. I'm the last guy who should be commenting on ZFS, since I never use it. However, it is my understanding that ZFS "pseudo automounts" each snapshot when you go there, so I think that might be what is taking so long (i.e. not really me

Re: zfs meta data slowness

2020-07-21 Thread mike tancsa
> Maybe you can provide more information about your setup: > Amount of RAM, CPU? 64G, Xeon(R) CPU E3-1240 v6 @ 3.70GHz > output of "zpool status" # zpool status -x all pools are healthy > output of "zfs list" if possible to share it's a big list # zfs list | wc 824

Re: zfs meta data slowness

2020-07-20 Thread Eugene Grosbein
19.07.2020 21:17, mike tancsa wrote: > Are there any tweaks that can be done to speed up or improve zfs > metadata performance ? I have a backup server with a lot of snapshots > (40,000) and just doing a listing can take a great deal of time. Best > case scenario is about 24 seconds
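
Independent of any tuning, asking zfs list for fewer properties usually helps a lot with tens of thousands of snapshots, since gathering the default space columns per snapshot is the expensive part (a hedged suggestion; the dataset name is illustrative):

  # time zfs list -t snapshot -o name -s name > /dev/null   # name-only listing
  # time zfs list -t snapshot -r zroot/somedataset           # or limit the walk to one dataset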

Re: zfs meta data slowness

2020-07-20 Thread Ronald Klop
Hi, My first suggestion would be to remove a lot of snapshots. But that may not match your business case. Maybe you can provide more information about your setup: Amount of RAM, CPU? output of "zpool status" output of "zfs list" if possible to share Type of disks/ss
