FreeBSD CI Weekly Report 2019-09-29

2019-10-04 Thread Li-Wen Hsu
(Please send the followup to freebsd-testing@ and note Reply-To is set.)

FreeBSD CI Weekly Report 2019-09-29
===

Here is a summary of the FreeBSD Continuous Integration results for the period
from 2019-09-23 to 2019-09-29.

During this period, we have:

* 2159 builds (98.6% (-0.4) passed, 1.4% (+0.4) failed) of buildworld and
  buildkernel (GENERIC and LINT) executed on the aarch64, amd64, armv6,
  armv7, i386, mips, mips64, powerpc, powerpc64, powerpcspe, riscv64, and
  sparc64 architectures for the head, stable/12, and stable/11 branches.
* 340 test runs (71.5% (-5.7) passed, 20% (-2.8) unstable, 8.5% (+8.5)
  exception) executed on the amd64, i386, and riscv64 architectures for
  the head, stable/12, and stable/11 branches.
* 22 doc builds (100% passed)

Test case status (on 2019-09-29 23:59):
| Branch/Architecture | Total  | Pass   | Fail   | Skipped  |
| --- | --- | --- | --- | --- |
| head/amd64  | 7588 (+21) | 7525 (+21) | 0 (0)  | 63 (0)   |
| head/i386   | 7586 (+21) | 7514 (+21) | 0 (0)  | 72 (0)   |
| 12-STABLE/amd64 | 7474 (0)   | 7430 (0)   | 0 (0)  | 44 (0)   |
| 12-STABLE/i386  | 7472 (0)   | 7421 (0)   | 0 (0)  | 51 (+3)  |
| 11-STABLE/amd64 | 6849 (0)   | 6805 (0)   | 0 (0)  | 44 (0)   |
| 11-STABLE/i386  | 6847 (0)   | 6767 (-3)  | 34 (0) | 46 (+3)  |

(The statistics from experimental jobs are omitted)

If any of the issues found by CI are in your area of interest or expertise
please investigate the PRs listed below.

The latest web version of this report is available at
https://hackmd.io/@FreeBSD-CI/report-20190929 and the archive is available
at https://hackmd.io/@FreeBSD-CI/. Any help is welcome.

## News

* [FCP 20190401-ci_policy: CI policy](https://github.com/freebsd/fcp/blob/master/fcp-20190401-ci_policy.md)
  is in the "feedback" state; please review it and provide comments on the
  -fcp@ and -hackers@ mailing lists.

## Fixed Tests

* lib.libc.sys.mmap_test.mmap_truncate_signal
  * https://svnweb.freebsd.org/changeset/base/352807
  * https://svnweb.freebsd.org/changeset/base/352869

## Failing Tests

* https://ci.freebsd.org/job/FreeBSD-stable-11-i386-test/
  * local.kyua.* (31 cases)
  * local.lutok.* (3 cases)

## Failing and Flaky Tests (from experimental jobs)

* https://ci.freebsd.org/job/FreeBSD-head-amd64-dtrace_test/
  * cddl.usr.sbin.dtrace.common.misc.t_dtrace_contrib.tst_dynopt_d
    * https://bugs.freebsd.org/237641
  * cddl.usr.sbin.dtrace.amd64.arrays.t_dtrace_contrib.tst_uregsarray_d
    * https://bugs.freebsd.org/240358
    * Fixed in head: https://svnweb.freebsd.org/changeset/base/353107

* https://ci.freebsd.org/job/FreeBSD-head-amd64-test_zfs/
  * There are ~60 failing cases, including flaky ones; see
    https://ci.freebsd.org/job/FreeBSD-head-amd64-test_zfs/lastCompletedBuild/testReport/
    for more details.

## Disabled Tests

* sys.fs.tmpfs.mount_test.large
  https://bugs.freebsd.org/212862
* sys.fs.tmpfs.link_test.kqueue
  https://bugs.freebsd.org/213662
* sys.kqueue.libkqueue.kqueue_test.main
  https://bugs.freebsd.org/233586
* sys.kern.ptrace_test.ptrace__PT_KILL_competing_stop
  https://bugs.freebsd.org/220841
* lib.libc.regex.exhaust_test.regcomp_too_big (i386 only)
  https://bugs.freebsd.org/237450
* sys.netinet.socket_afinet.socket_afinet_bind_zero (new)
  https://bugs.freebsd.org/238781
* sys.netpfil.pf.names.names
* sys.netpfil.pf.synproxy.synproxy
  https://bugs.freebsd.org/238870
* sys.kern.ptrace_test.ptrace__follow_fork_child_detached_unrelated_debugger 
  https://bugs.freebsd.org/239292
* sys.netpfil.pf.forward.v4 (i386 only)
* sys.netpfil.pf.forward.v6 (i386 only)
* sys.netpfil.pf.set_tos.v4 (i386 only)
  https://bugs.freebsd.org/239380
* sys.kern.ptrace_test.ptrace__follow_fork_both_attached_unrelated_debugger 
  https://bugs.freebsd.org/239397
* sys.kern.ptrace_test.ptrace__parent_sees_exit_after_child_debugger
  https://bugs.freebsd.org/239399
* sys.kern.ptrace_test.ptrace__follow_fork_parent_detached_unrelated_debugger
  https://bugs.freebsd.org/239425
* lib.libc.gen.getmntinfo_test.getmntinfo_test
  https://bugs.freebsd.org/240049
* sys.sys.qmath_test.qdivq_s64q
  https://bugs.freebsd.org/240219
* sys.kern.ptrace_test.ptrace__getppid
  https://bugs.freebsd.org/240510
* lib.libc.sys.stat_test.stat_socket
  https://bugs.freebsd.org/240621
* sys.netpfil.common.tos.pf_tos (i386 only)
  https://bugs.freebsd.org/240086
* lib.libarchive.functional_test.test_write_filter_zstd
  https://bugs.freebsd.org/240683

## Issues

### Build failures
* https://bugs.freebsd.org/233735
  Possible build race: genoffset.o /usr/src/sys/sys/types.h: error:
  machine/endian.h: No such file or directory
* https://bugs.freebsd.org/233769
  Possible build race: ld: error: unable to find library -lgcc_s

### Kernel panics
* https://bugs.freebsd.org/238870
  sys.netpfil.pf.names.names and sys.netpfil.pf.synproxy.synproxy cause a
  panic.  A patch exists.

FreeBSD 12.1-BETA3 Now Available

2019-10-04 Thread Glen Barber

The third BETA build of the 12.1-RELEASE release cycle is now available.

Installation images are available for:

o 12.1-BETA3 amd64 GENERIC
o 12.1-BETA3 i386 GENERIC
o 12.1-BETA3 powerpc GENERIC
o 12.1-BETA3 powerpc64 GENERIC64
o 12.1-BETA3 powerpcspe MPC85XXSPE
o 12.1-BETA3 sparc64 GENERIC
o 12.1-BETA3 armv6 RPI-B
o 12.1-BETA3 armv7 BANANAPI
o 12.1-BETA3 armv7 BEAGLEBONE
o 12.1-BETA3 armv7 CUBIEBOARD
o 12.1-BETA3 armv7 CUBIEBOARD2
o 12.1-BETA3 armv7 CUBOX-HUMMINGBOARD
o 12.1-BETA3 armv7 RPI2
o 12.1-BETA3 armv7 PANDABOARD
o 12.1-BETA3 armv7 WANDBOARD
o 12.1-BETA3 armv7 GENERICSD
o 12.1-BETA3 aarch64 GENERIC
o 12.1-BETA3 aarch64 RPI3
o 12.1-BETA3 aarch64 PINE64
o 12.1-BETA3 aarch64 PINE64-LTS

Note regarding arm SD card images: For convenience for those without
console access to the system, a freebsd user with a password of
freebsd is available by default for ssh(1) access.  Additionally,
the root user password is set to root.  It is strongly recommended
to change the password for both users after gaining access to the
system.
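
As a quick example of changing both passwords after the first login (a
minimal sketch; adapt to your setup):

    # logged in over ssh as the "freebsd" user:
    passwd          # change the freebsd user's password
    su -            # become root (password "root")
    passwd          # change root's password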

Installer images and memory stick images are available here:

https://download.freebsd.org/ftp/releases/ISO-IMAGES/12.1/

The image checksums follow at the end of this e-mail.

If you notice problems you can report them through the Bugzilla PR
system or on the -stable mailing list.

If you would like to use SVN to do a source based update of an existing
system, use the "releng/12.1" branch.
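
For example, a fresh source checkout and rebuild could look like this (a
sketch using the base system's svnlite(1); the target directory is the
conventional /usr/src):

    svnlite checkout https://svn.freebsd.org/base/releng/12.1 /usr/src
    cd /usr/src
    make buildworld buildkernel
    # followed by the usual installkernel, reboot, and installworld steps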

A summary of changes since 12.1-BETA2 includes:

o An issue with imx6-based arm boards has been fixed.

o An issue with 64-bit long double types leading to link failures has
  been fixed.

o An overflow logic error has been fixed in fsck_msdosfs(8).

o An issue in the destruction of robust mutexes has been fixed.

o Support for the '-vnP' flags to the zfs send subcommand has been
  added for bookmarks.

o The ixgbe(4) driver has been updated to prevent a potential system
  crash with certain 10Gb Intel NICs.

o A regression with the zfs send subcommand when using the '-n', '-P',
  and '-i' flags has been fixed.

o The freebsd-update(8) utility has been updated to include two new
  subcommands, updatesready and showconfig (see the example after this
  list).

o Support for 'ps -H' has been added to kvm(3).

o An issue when compiling certain ports targeting Intel Atom CPUs has
  been fixed.

o A use-after-free in SCTP has been fixed.

o A regression that could lead to a system crash when using vmxnet3 has
  been fixed.
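
To illustrate the new freebsd-update(8) subcommands mentioned above (a
minimal sketch; the exact output wording may differ):

    freebsd-update updatesready   # report whether fetched updates are
                                  # staged and waiting for "install"
    freebsd-update showconfig     # print the configuration options
                                  # freebsd-update is currently using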

A list of changes since 12.0-RELEASE is available in the releng/12.1
release notes:

https://www.freebsd.org/releases/12.1R/relnotes.html

Please note, the release notes page is not yet complete, and will be
updated on an ongoing basis as the 12.1-RELEASE cycle progresses.

=== Virtual Machine Disk Images ===

VM disk images are available for the amd64, i386, and aarch64
architectures.  Disk images may be downloaded from the following URL
(or any of the FreeBSD download mirrors):

https://download.freebsd.org/ftp/releases/VM-IMAGES/12.1-BETA3/

The partition layout is:

~ 16 kB - freebsd-boot GPT partition type (bootfs GPT label)
~ 1 GB  - freebsd-swap GPT partition type (swapfs GPT label)
~ 20 GB - freebsd-ufs GPT partition type (rootfs GPT label)
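
For anyone recreating a similar layout by hand, a hypothetical sketch with
gpart(8) on a scratch memory disk might look like this (not the official
image build procedure; sizes mirror the table above):

    truncate -s 21G disk.img                  # sparse backing file
    md=$(mdconfig -a -t vnode -f disk.img)    # attach as a memory disk
    gpart create -s gpt ${md}
    gpart add -t freebsd-boot -l bootfs -s 16k ${md}
    gpart add -t freebsd-swap -l swapfs -s 1g ${md}
    gpart add -t freebsd-ufs -l rootfs ${md}  # remaining ~20 GB
    gpart show ${md}
    mdconfig -d -u ${md}                      # detach when done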

The disk images are available in QCOW2, VHD, VMDK, and raw disk image
formats.  The image download size is approximately 135 MB (amd64) and
165 MB (i386), decompressing to a 21 GB sparse image.

Note regarding arm64/aarch64 virtual machine images: a modified QEMU EFI
loader file is needed for qemu-system-aarch64 to be able to boot the
virtual machine images.  See this page for more information:

https://wiki.freebsd.org/arm64/QEMU

To boot the VM image, run:

% qemu-system-aarch64 -m 4096M -cpu cortex-a57 -M virt  \
-bios QEMU_EFI.fd -serial telnet::,server -nographic \
-drive if=none,file=VMDISK,id=hd0 \
-device virtio-blk-device,drive=hd0 \
-device virtio-net-device,netdev=net0 \
-netdev user,id=net0

Be sure to replace "VMDISK" with the path to the virtual machine image.

=== Amazon EC2 AMI Images ===

FreeBSD/amd64 EC2 AMIs are available in the following regions:

  eu-north-1 region: ami-07085de4e26071c9e
  ap-south-1 region: ami-095bd806d8acfffb1
  eu-west-3 region: ami-0314542b8d7579bdd
  eu-west-2 region: ami-06ec921eb87ef4d7b
  eu-west-1 region: ami-0f0051c800be4091e
  ap-northeast-2 region: ami-0f109258a463177bb
  ap-northeast-1 region: ami-0224a1cb8e19333b8
  sa-east-1 region: ami-0536a86bff5f33356
  ca-central-1 region: ami-06709921360dccfa3
  ap-east-1 region: ami-0142af9336f6e529c
  ap-southeast-1 region: ami-0c439e0bc0c567dd3
  ap-southeast-2 region: ami-0fa770b7f07583b48
  eu-central-1 region: ami-0dfca49cf2ba89c43
  us-east-1 region: ami-06884b4e2e511590f
  us-east-2 region: ami-06c687665309d8b17
  us-west-1 region: ami-0dce597e8b07a6c6d
  us-west-2 region: ami-0e1f5ccdd2221b1d6

FreeBSD/aarch64 EC2 AMIs are available in the 

ZVOLs volmode/sync performance influence – affecting windows guests via FC RDM.vmdk

2019-10-04 Thread Harry Schmalzbauer

Hello,

I noticed a significant guest write performance drop with volmode=dev
during my 12.1 fibre channel tests.
I remember having heard such reports from some people occasionally over
the last years, so I decided to see how far I could track it down.
Unfortunately, I found no way to demonstrate the effect with in-box
tools, nor even with fio(1) (from ports/benchmarks/fio).


Since I don't know how zvols/ctl work under the hood, I'd need help from
the experts to understand how/why volmode seems to affect the sync
property/behaviour.


The numbers I see suggest that setting volmode=geom causes the same
ZFS _zvol_ behaviour as setting the sync property to "disabled".
Why? Shortest summary: performance of Windows guests writing files onto
an NTFS filesystem drops by a factor of ~8 with volmode=dev, but:
· After setting sync=disabled on volmode=dev ZVOLs, I see the same
  write rate as I get with volmode=geom.
· Also, disabling write cache flushing in Windows has exactly the same
  effect, while leaving sync=standard.
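
For reference, this is how I toggle the properties in question on the test
zvol (a small sketch, using the dataset created in the steps below):

    zfs set sync=disabled MyPool/testvol    # mimics the fast volmode=geom rate
    zfs get volmode,sync MyPool/testvol     # verify the current settings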


Here's a little more background information.

The Windows guest uses the zvol-backed FC target as a mapped raw device
on a virtual SCSI controller
(ZVOL->ctl(4)->isp(4)->qlnativefc(ESXi-Initiator)->RDM.vmdk->paravirt-SCSI->\\.\PhysicalDrive1->GPT...NTFS).
The initiator is ESXi 6.7, but I'm quite sure I saw the same effect with
iSCSI (Windows software iSCSI initiator) instead of FC some time ago,
though I haven't re-verified that run.



Here's what I've done trying to reproduce the issue, leaving
Windows/ESXi out of the picture:


I'm creating a ZVOL block backend for ctl(4):

    zfs create -V 10G -o compression=off -o volmode=geom -o sync=standard MyPool/testvol

    ctladm create -b block -d guest-zvol -o file=/dev/zvol/MyPool/testvol

The first line creates the ZVOL with default values.  If the pool or
parent dataset hasn't set local values for the compression, volmode or
sync properties, the three "-o" options can be omitted.

    ctladm port -p `ctladm port -l | grep "camsim.*naa" | cut -w -f 1` -o on


Now I have a "FREEBSD CTLDISK 0001", available as geom "daN".

To get closer to the real setup, I'm using the second isp(4) port as the
initiator (to be precise, I use two ports in simultaneous target/initiator
roles, so I have the ZVOL-backed block device available both with and
without a real FC link in the path).
Utilizing dd(1) on the 'da' device connected to the FC initiator, I get
_exactly_ the same numbers as in my Windows guest across all the
different block sizes!

E.g., for the 1k test, I'm running
    dd if=/dev/zero bs=1k of=/dev/da11 count=100k status=progress (~8MB/s)
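
To sweep the other block sizes, a simple loop over the same dd(1) test
works (a sketch; adjust the device name and counts to your setup):

    for bs in 512 1k 4k 32k 128k 1m; do
        echo "bs=${bs}:"
        dd if=/dev/zero of=/dev/da11 bs=${bs} count=10k status=progress
    done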

For those wanting to follow the experiment – removing the "volmode=geom" zvol:

    ctladm port -p `ctladm port -l | grep "camsim.*naa" | cut -w -f 1` -o off
    ctladm remove -b block -l 0    (only if you don't otherwise have LUN 0 in use)

    zfs destroy MyPool/testvol

"volmode" property can be altered at runtime, but won't have any 
effect!  Either you would have to reboot or re-import the pool.
For my test I can simply create a new, identical ZVOL, this time with 
volmode=dev (instead of geom like before).
    zfs create -V 10G -o compression=off -o volmode=dev -o 
sync=standard MyPool/testvol

    ctladm create -b block -d guest-zvol -o file=/dev/zvol/MyPool/testvol
    ctladm port -p `ctladm port -l | grep "camsim.*naa" | cut -w -f 1` -o on


Now the same Windows filesystem write test drops in throughput by a factor
of ~8 for 0.5-32k block sizes, and still by about a factor of 3 for larger
block sizes.


(at this point you'll most likely have noticed a panic with 12.1-BETA3; 
see https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=240917 )


Unfortunately, I can't see any performance drop with the dd line from above.
Since fio(1) has a parameter to issue fsync(2) after every N written blocks,
I also tried to reproduce it with:

    echo "[noop]" | fio --ioengine=sync --filename=/dev/da11 --bs=1k --rw=write --io_size=80m --fsync=1 -

To my surprise, I still do _not_ see any performance drop, while I
reproducibly see the big factor-of-8 penalty on the Windows guest.


Can anybody tell me which part I'm missing to simulate the real-world
issue?
As mentioned, either disabling the disk's write cache flushing in Windows,
or alternatively setting sync=disabled, restores the Windows write
throughput to the same numbers as with volmode=geom.


fio(1) also has an "sg" ioengine, which isn't usable here and which I know
nothing about.  Maybe somebody has a hint in that direction?


Thanks

-harry




ZFS with 32-bit, non-x86 kernel

2019-10-04 Thread Andriy Gapon


Does anyone use ZFS with a 32-bit kernel that is also not i386?
If you do, could you please let me know, along with your uname -rmp output?
Thank you!

-- 
Andriy Gapon