panic: rw_enter: netlock locking against myself (NFS related?)
I see this reproducibly when, e.g., doing cvs ops with the Feb 5 snap. I found a thread from a couple of weeks ago, but AFAICT the diff in that thread is already in.

panic: rw_enter: netlock locking against myself
Stopped at      Debugger+0x9:   leave
    TID      PID    UID   PRFLAGS   PFLAGS  CPU  COMMAND
*331497    65559   5000      0x13        0    0  ssh
Debugger() at Debugger+0x9
panic() at panic+0xfe
rw_enter() at rw_enter+0x1c1
sosend() at sosend+0x114
nfs_send() at nfs_send+0x60
nfs_request() at nfs_request+0x408
nfs_removerpc() at nfs_removerpc+0x12e
nfs_inactive() at nfs_inactive+0x88
VOP_INACTIVE() at VOP_INACTIVE+0x35
vrele() at vrele+0x5c
unp_detach() at unp_detach+0x59
uipc_usrreq() at uipc_usrreq+0x2cd
soclose() at soclose+0x1a3
soo_close() at soo_close+0x1c
end trace frame: 0x800021397dd0, count: 0

Copyright (c) 1982, 1986, 1989, 1991, 1993 The Regents of the University of California. All rights reserved. Copyright (c) 1995-2017 OpenBSD. All rights reserved. https://www.OpenBSD.org OpenBSD 6.0-current (GENERIC) #162: Sun Feb 5 13:49:23 MST 2017 dera...@amd64.openbsd.org:/usr/src/sys/arch/amd64/compile/GENERIC real mem = 2130575360 (2031MB) avail mem = 2061467648 (1965MB) mpath0 at root scsibus0 at mpath0: 256 targets mainbus0 at root bios0 at mainbus0: SMBIOS rev.
2.8 @ 0xf6480 (9 entries) bios0: vendor SeaBIOS version "Ubuntu-1.8.2-1ubuntu1" date 04/01/2014 bios0: QEMU Standard PC (i440FX + PIIX, 1996) acpi0 at bios0: rev 0 acpi0: sleep states S3 S4 S5 acpi0: tables DSDT FACP SSDT APIC HPET acpi0: wakeup devices acpitimer0 at acpi0: 3579545 Hz, 24 bits acpimadt0 at acpi0 addr 0xfee0: PC-AT compat cpu0 at mainbus0: apid 0 (boot processor) cpu0: QEMU Virtual CPU version 2.4.0, 2400.54 MHz cpu0: FPU,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,MMX,FXSR,SSE,SSE2,SSE3,VMX,CX16,x2APIC,POPCNT,HV,NXE,LONG,LAHF cpu0: 64KB 64b/line 2-way I-cache, 64KB 64b/line 2-way D-cache, 512KB 64b/line 16-way L2 cache cpu0: ITLB 255 4KB entries direct-mapped, 255 4MB entries direct-mapped cpu0: DTLB 255 4KB entries direct-mapped, 255 4MB entries direct-mapped cpu0: smt 0, core 0, package 0 mtrr: Pentium Pro MTRR support, 8 var ranges, 88 fixed ranges cpu0: apic clock running at 1000MHz ioapic0 at mainbus0: apid 0 pa 0xfec0, version 11, 24 pins acpihpet0 at acpi0: 1 Hz acpiprt0 at acpi0: bus 0 (PCI0) acpicpu0 at acpi0: C1(@1 halt!) 
"ACPI0006" at acpi0 not configured "PNP0303" at acpi0 not configured "PNP0F13" at acpi0 not configured "PNP0700" at acpi0 not configured "PNP0501" at acpi0 not configured "PNP0A06" at acpi0 not configured "PNP0A06" at acpi0 not configured "PNP0A06" at acpi0 not configured pvbus0 at mainbus0: KVM pci0 at mainbus0 bus 0 pchb0 at pci0 dev 0 function 0 "Intel 82441FX" rev 0x02 pcib0 at pci0 dev 1 function 0 "Intel 82371SB ISA" rev 0x00 pciide0 at pci0 dev 1 function 1 "Intel 82371SB IDE" rev 0x00: DMA, channel 0 wired to compatibility, channel 1 wired to compatibility pciide0: channel 0 disabled (no drives) atapiscsi0 at pciide0 channel 1 drive 0 scsibus1 at atapiscsi0: 2 targets cd0 at scsibus1 targ 0 lun 0:ATAPI 5/cdrom removable cd0(pciide0:1:0): using PIO mode 4, DMA mode 2 piixpm0 at pci0 dev 1 function 3 "Intel 82371AB Power" rev 0x03: apic 0 int 9 iic0 at piixpm0 vga1 at pci0 dev 2 function 0 "Cirrus Logic CL-GD5446" rev 0x00 wsdisplay0 at vga1 mux 1: console (80x25, vt100 emulation) wsdisplay0: screen 1-5 added (80x25, vt100 emulation) virtio0 at pci0 dev 3 function 0 "Qumranet Virtio RNG" rev 0x00 viornd0 at virtio0 virtio0: apic 0 int 11 virtio1 at pci0 dev 4 function 0 "Qumranet Virtio Network" rev 0x00 vio0 at virtio1: address 52:54:00:f6:02:ea virtio1: msix shared virtio2 at pci0 dev 5 function 0 "Qumranet Virtio Storage" rev 0x00 vioblk0 at virtio2 scsibus2 at vioblk0: 2 targets sd0 at scsibus2 targ 0 lun 0: SCSI3 0/direct fixed sd0: 16384MB, 512 bytes/sector, 33554432 sectors virtio2: msix shared virtio3 at pci0 dev 6 function 0 "Qumranet Virtio Memory" rev 0x00 viomb0 at virtio3 virtio3: apic 0 int 10 virtio4 at pci0 dev 7 function 0 "Qumranet Virtio Storage" rev 0x00 vioblk1 at virtio4 scsibus3 at vioblk1: 2 targets sd1 at scsibus3 targ 0 lun 0: SCSI3 0/direct fixed sd1: 16384MB, 512 bytes/sector, 33554432 sectors virtio4: msix shared virtio5 at pci0 dev 8 function 0 "Qumranet Virtio SCSI" rev 0x00 vioscsi0 at virtio5: qsize 128 scsibus4 at vioscsi0: 
255 targets virtio5: msix shared isa0 at pcib0 isadma0 at isa0 fdc0 at isa0 port 0x3f0/6 irq 6 drq 2 fd0 at fdc0 drive 1: density unknown com0 at isa0 port 0x3f8/8 irq 4: ns16550a, 16 byte fifo com0: console pckbc0 at isa0 port 0x60/5 irq 1 irq 12 pckbd0 at pckbc0 (kbd slot) wskbd0 at pckbd0: console keyboard, using wsdisplay0 pms0 at pckbc0 (aux slot) wsmouse0 at pms0 mux 0 pcppi0 at isa0 port 0x61 spkr0 at pcppi0 vmm0 at mainbus0: VMX/EPT vscsi0 at root scsibus5 at vscsi0: 256
Re: SSD read performance benchmark OpenBSD 6.0 vs. Linux 4.7: OpenBSD would benefit from multiqueueing and also from a speedup for sequential reads, and Linux's mmap() is extremely slow for random reads.
2017-02-09 Mikael:
> Dear misc@,
>
> *## Intro, environment*
> Find below a comparative benchmark of OpenBSD 6.0 vs Linux 4.7 read speeds
> on a 3.3 GHz Xeon E3 server with a Samsung 850 Pro 256GB SATA SSD, which is
> one of the very fastest SSDs in the sub-1000 USD/TB price range. dmesg
> below.
> [..]

Someone reminded me to benchmark rsd0c vs. sd0c on OpenBSD, and there are some great surprises in there: multithreaded random reads actually go up to 198 MB/sec at 4 KB and 578 MB/sec at 16 KB, and multithreaded sequential reads follow the same curve.

So OpenBSD's current I/O subsystem gives ~31,000 IOPS for 16 KB random and sequential multithreaded reads. This is excellent and shows that a custom database program on current OpenBSD can indeed use the full bandwidth of a SATA SSD! The 4 KB multithreaded random and sequential reads perform at 2/3 of Linux's speed, which is also not bad.

The natural next step in understanding SSD performance would be to benchmark two SATA SSDs (on separate SATA ports) on OpenBSD rsd0c, OpenBSD sd0c, and Linux sd0.

*Benchmark details*

Unlike sd0c, rsd0c cannot be mmap()ed. Also, rsd0c appears to require reads to be aligned, which is fine for any database use case, as those are based on internal pages anyhow.

Sequential singlethreaded: 4 KB is 121 MB/sec, 16 KB is 225 MB/sec, and 64 KB+ is 342 MB/sec.

Sequential 10-threaded: 4 KB is 19 MB/sec per process, so 190 MB/sec total (vs. sd0c's 48 MB/sec total); 16 KB is 50 MB/sec per process, so 500 MB/sec total (vs. sd0c's 150 MB/sec total); 64 KB is 53 MB/sec, so 530 MB/sec total (vs. sd0c's 288 MB/sec total).

Random singlethreaded: 4 KB is 45 MB/sec (same as sd0c), 16 KB is 132 MB/sec (vs. sd0c's 52 MB/sec), 32 KB is 198 MB/sec (vs. sd0c's 67 MB/sec), 64 KB is 264 MB/sec (vs. sd0c's 85 MB/sec).

Random 10-threaded: 4 KB is 19.6 MB/sec per process, so 196 MB/sec total (that's 4x sd0c!); 16 KB is 48 MB/sec per process, so 480 MB/sec total; 32 KB is 51 MB/sec per process, so 510 MB/sec; 64 KB is 53 MB/sec per process, so 530 MB/sec.

Random 20-threaded: 4 KB is 9.9 MB/sec per process, so 198 MB/sec; 16 KB is 24.8 MB/sec per process, so 496 MB/sec total; 32 KB is 25.9 MB/sec per process, so 518 MB/sec.

Random 40-threaded: 4 KB is 4.95 MB/sec per process, so 198 MB/sec total; 16 KB is 14.45 MB/sec per process, so 578 MB/sec total; 32 KB is 12.99 MB/sec per process, so 519 MB/sec.
Per-device multiqueueing would be fantastic. Are there any plans? Would donations help?
Hi misc@,

The SSD read benchmark in the previous email shows that per-device multiqueueing would boost multithreaded random read performance very much, e.g. by ~7X+: the current 50 MB/sec would increase to ~350 MB/sec+. (I didn't benchmark it yet, but I suspect the current 50 MB/sec is system-wide, whereas with multiqueueing the 350 MB/sec+ would be per drive.)

Multiuser databases, and any parallel file-reading activity, would see a proportional speedup with multiqueueing.

Do you have plans to implement this? Was anything done to this end already, and any idea when multiqueueing could happen? Would donations help here, and if so, roughly what size and to whom? Someone suggested that implementing it would take a year of work. Any clarification of what's going on, what's possible, and how would be much appreciated.

Thanks, Mikael
SSD read performance benchmark OpenBSD 6.0 vs. Linux 4.7: OpenBSD would benefit from multiqueueing and also from a speedup for sequential reads, and Linux's mmap() is extremely slow for random reads.
Dear misc@,

*## Intro, environment*

Find below a comparative benchmark of OpenBSD 6.0 vs Linux 4.7 read speeds on a 3.3 GHz Xeon E3 server with a Samsung 850 Pro 256GB SATA SSD, which is one of the very fastest SSDs in the sub-1000 USD/TB price range. dmesg below.

No dual-disk case was tested. (To be meaningful, that would need to be done on separate SATA ports, as the 600 MB/sec bandwidth is per SATA motherboard/controller port.) I would guess OpenBSD's throughput in a dual-disk case would equal the results in this test, whereas Linux would have double the throughput due to its per-device multiqueueing.

*## General characteristics of SSDs, some aha moments*

First, maybe I should say that this benchmark gave me some aha moments regarding how SSDs work in general. SSD spec sheets are generally adorned with figures between 500 MB/sec for SATA and 1500 MB/sec for NVMe. In actuality, both SATA and NVMe SSDs do 4 KB single-threaded random reads at no more than approx 45 MB/sec (!). But if you pump their queues, i.e. ask the SSD to do more reads concurrently, then you're suddenly getting into the ~400 MB/sec range for 4 KB reads on SATA, and (I didn't test it, but correlating with other benchmarks I would expect) ~900 MB/sec on NVMe. SATA does 100,000 IOPS and NVMe 400,000 IOPS.

Multithreaded random 4 KB reading at 400 MB/sec equals ~100,000 IOPS (400 * 1024 * 1024 divided by 4096 is ~100,000), so we see that my SSD actually saturates the SATA bus when pumped with multithreaded reads. The NVMe benchmark I got hold of showed ~900 MB/sec at 4 KB in the same use case, meaning that the disk performs at ~55% of the bus speed.
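The IOPS arithmetic above can be written out as a quick sanity check (illustrative helper, not from the original benchmark): throughput divided by block size gives operations per second, and 400 MB/sec of 4 KB reads indeed lands at roughly 100,000 IOPS, i.e. a saturated SATA link.

```c
/* I/O operations per second for a given throughput and block size. */
static long iops(double mb_per_sec, long block_bytes) {
    return (long)(mb_per_sec * 1024 * 1024 / block_bytes);
}
/* e.g. iops(400, 4096) is 102400 (~100,000), iops(45, 4096) is 11520 */
```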
This means that while NVMe drives have better "multi-processing power" than SATA disks, NVMe still has the same disk access latency as SATA (if defined as a seek+read operation). This should mean that paying a quadruple price tag for a current NVMe drive is not worth it for a multiuser database use case: for multithreaded random read performance, an NVMe drive would be worth at most +125% (for its 900 MB/sec performance vs. 400 MB/sec). But in that case, why not simply buy two SATA disks and enjoy the higher performance while getting double the storage volume.

Finally, for sequential reads, a SATA SSD will do something like 500 MB/sec and an NVMe SSD something like 1500 MB/sec, and that's of course the figure they like to show in advertisements. (There are SSDs with a higher performance profile in the ~5000 USD/TB or so price range; I didn't study those.)

The NVMe benchmark referenced here was http://ssd.userbenchmark.com/Compare/Samsung-950-NVMe-PCIe-M2-256GB-vs-Samsung-850-Pro-256GB/m38570vs2385 .

Now on to the benchmark:

*## Benchmark specs*

Measures were taken against bias from the buffer cache, scheduler, and filesystem specifics, by disabling/confusing the buffer cache, running at raised priority, and benchmarking only direct block device reading. The benchmark was performed using the disk_io_benchmark_c.c program, inlined below. Both read() and mmap() modes were tested.

*## Observations*

To sum up, the observations are:

- In multithreaded random reads, Linux is way faster, e.g. by ~7X (OpenBSD runs at ~50 MB/sec and Linux at ~350 MB/sec at 4 KB). The difference should be due only to Linux having, and OpenBSD not having, multiqueueing.

- In singlethreaded random reads, Linux and OpenBSD perform similarly (both ~50 MB/sec at 4 KB).

- For singlethreaded sequential reads, Linux is ~5x faster than OpenBSD (OpenBSD ~120 MB/sec and Linux ~500 MB/sec).

- Re mmap() vs read() performance: on OpenBSD, same performance in both sequential and random reads.
On Linux, in sequential read mode, performance is the same, except for <128 B reads, where read() performance drops off faster than mmap() reads. On Linux, in random read mode, mmap() performs disastrously compared to read(): mmap() is 2.5x slower singlethreaded, and 20x slower multithreaded, than read() on Linux!

*## My comments*

My comments:

- I have a multithreaded random reading use case, e.g. a multiuser database, and would very humbly call for performance improvements here! This will happen through implementing per-device multiqueues. I'm following up on this in a subsequent email.

- While I don't have a particular need for it, I find it strange that sequential read speed is so much slower on OpenBSD than on Linux. I mean, I guess sequential reads are executed sequentially on Linux (as they are on OpenBSD), and I guess lower-level mechanisms like SATA controller drivers and DMA logic should behave similarly between Linux and OpenBSD, so how come the steep difference? (OpenBSD's buffer cache can deliver up to 1700 MB/sec in mmap() and ~500 MB/sec in read(), so OpenBSD's sequential reading speed constraint (~120 MB/sec vs. Linux's ~500 MB/sec) seems to lie in the logic that does the actual disk work.)

- Would any particular sysctls
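The two access modes being compared above can be sketched side by side: a read()-style pread() per block vs. a memcpy() out of an mmap()ed region. Names and sizes here are illustrative, not the original benchmark code; one plausible contributor to the slow random-read mmap numbers reported above is that each untouched page in the mapping costs a page fault, whereas pread() stays on one well-worn syscall path.

```c
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

/* Fetch one block at `off` using positional read; returns bytes read or -1. */
static ssize_t block_via_pread(int fd, void *dst, size_t bsz, off_t off) {
    return pread(fd, dst, bsz, off);
}

/* Fetch one block at `off` out of an existing mapping of `len` bytes;
 * the copy faults in any pages not yet resident. */
static ssize_t block_via_mmap(const char *map, size_t len, void *dst,
                              size_t bsz, off_t off) {
    if ((size_t)off + bsz > len)
        return -1;                 /* refuse reads past the mapping */
    memcpy(dst, map + off, bsz);
    return (ssize_t)bsz;
}
```

Timing a loop of random offsets through each helper on the same file reproduces the read()-vs-mmap() comparison in miniature.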
Re: relayd send/expect syntax
On Tue, Feb 07, 2017 at 05:04:18PM -0500, Michael W. Lucas wrote:
> host 104.236.197.233, check send expect (9020ms,tcp read timeout), state
> unknown -> down, availability 0.00%

The send/expect code loses its error because of its async nature. It goes like:

1. "we got data, let's verify it"
2. "expect test failed, but maybe we didn't read enough, let's try again"
3. "no more data, timeout"

When we reach 3), the code also has to check if there is anything in the input buffer from 1) and verify it again. The following diff fixes it to show "send/expect failed" instead of "tcp read timeout".

Reyk

Index: usr.sbin/relayd/check_tcp.c
===================================================================
RCS file: /cvs/src/usr.sbin/relayd/check_tcp.c,v
retrieving revision 1.51
diff -u -p -u -p -r1.51 check_tcp.c
--- usr.sbin/relayd/check_tcp.c	11 Jan 2016 21:31:42 -0000	1.51
+++ usr.sbin/relayd/check_tcp.c	8 Feb 2017 23:16:14 -0000
@@ -233,8 +233,12 @@ tcp_read_buf(int s, short event, void *a
 	struct ctl_tcp_event	*cte = arg;
 
 	if (event == EV_TIMEOUT) {
-		tcp_close(cte, HOST_DOWN);
-		hce_notify_done(cte->host, HCE_TCP_READ_TIMEOUT);
+		if (ibuf_size(cte->buf))
+			(void)cte->validate_close(cte);
+		else
+			cte->host->he = HCE_TCP_READ_TIMEOUT;
+		tcp_close(cte, cte->host->up == HOST_UP ? 0 : HOST_DOWN);
+		hce_notify_done(cte->host, cte->host->he);
 		return;
 	}
Re: Funding for Skylake support
On 01/07, Jordon wrote:
> > On Jan 7, 2017, at 2:19 PM, Peter Membrey wrote:
> >
> > Hi all,
> >
> > I've gotten OpenBSD up and running on a new Intel NUC, but unfortunately
> > Skylake isn't supported. I was able to get X working in software accelerated
> > mode, but it would be great to see true support for the chipset. Unfortunately
> > I don't have the necessary skills to work on this myself, but I am willing to
> > put my money where my mouth is.
> >
> > I realise that for a lot of people, the issue is time and not money, but
> > that aside, would anybody be interested in focusing on adding support for
> > Skylake? The deliverable would be getting Skylake support merged.
> >
> > Happy to discuss what sort of funding would be needed.
> >
> > Thanks in advance!
> >
> > Kind Regards,
> >
> > Peter Membrey
>
> I second this. OpenBSD runs really well on my TP x260 with the UEFI frame
> buffer, but full Skylake support could turn it into my 'main system'.
> When Skylake support hits the tree, count me in for a donation as well.

Donated now. Thanks OpenBSD, gabe.
collecting relayd check scripts?
Hi,

I'm collecting relayd check scripts for the httpd/relayd book. If you have a check script that you don't mind sharing, please send it to me.

Regards,
==ml
--
Michael W. Lucas        Twitter @mwlauthor
nonfiction: https://www.michaelwlucas.com/
fiction: https://www.michaelwarrenlucas.com/
blog: http://blather.michaelwlucas.com/
Re: splassert: yield message on 5 Feb snapshot (amd64)
On 8.2.2017. 17:51, Scott Vanderbilt wrote:
> Updated a machine to latest (5 Feb.) snapshot of amd64. I'm now seeing
> the following message after booting that I've not recalled seeing before:
>
> splassert: yield: want 0 have 1

add sysctl kern.splassert=2 ...
splassert: yield message on 5 Feb snapshot (amd64)
Updated a machine to the latest (5 Feb.) snapshot of amd64. I'm now seeing the following message after booting that I don't recall seeing before:

splassert: yield: want 0 have 1

Looking in the list archives, I see a thread from Sept. 2016 where the following response from Theo Buehler is given to a similar message (splassert: sorwakeup: want 64 have 0) observed by someone else:

> These should all be fixed now. If you still get them with the next
> snapshot, set sysctl kern.splassert=2 to get a backtrace which you
> can report.

Does this advice still hold, or is this unrelated? Thank you.

OpenBSD 6.0-current (GENERIC.MP) #163: Sun Feb 5 13:55:12 MST 2017 dera...@amd64.openbsd.org:/usr/src/sys/arch/amd64/compile/GENERIC.MP real mem = 1020133376 (972MB) avail mem = 984612864 (939MB) mpath0 at root scsibus0 at mpath0: 256 targets mainbus0 at root bios0 at mainbus0: SMBIOS rev. 2.4 @ 0xf0100 (38 entries) bios0: vendor Award Software International, Inc. version "F3" date 04/09/2009 bios0: Gigabyte Technology Co., Ltd.
G41M-ES2L acpi0 at bios0: rev 0 acpi0: TAMG checksum error acpi0: sleep states S0 S3 S4 S5 acpi0: tables DSDT FACP HPET MCFG TAMG APIC SSDT acpi0: wakeup devices PEX0(S5) PEX1(S5) PEX2(S5) PEX3(S5) PEX4(S5) PEX5(S5) HUB0(S5) UAR1(S3) UAR2(S3) USB0(S3) USB1(S3) USB2(S3) USB3(S3) USBE(S3) AZAL(S5) PCI0(S5) acpitimer0 at acpi0: 3579545 Hz, 24 bits acpihpet0 at acpi0: 14318179 Hz acpimcfg0 at acpi0 addr 0xc000, bus 0-255 acpimadt0 at acpi0 addr 0xfee0: PC-AT compat cpu0 at mainbus0: apid 0 (boot processor) cpu0: Intel(R) Celeron(R) CPU E3200 @ 2.40GHz, 1700.17 MHz cpu0: FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,DS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,PBE,SSE3,DTES64,MWAIT,DS-CPL,VMX,EST,TM2,SSSE3,CX16,xTPR,PDCM,XSAVE,NXE,LONG,LAHF,PERF,SENSOR cpu0: 1MB 64b/line 4-way L2 cache cpu0: smt 0, core 0, package 0 mtrr: Pentium Pro MTRR support, 8 var ranges, 88 fixed ranges cpu0: apic clock running at 199MHz cpu0: mwait min=64, max=64, C-substates=0.2.2.2.2, IBE cpu1 at mainbus0: apid 1 (application processor) cpu1: Intel(R) Celeron(R) CPU E3200 @ 2.40GHz, 1699.96 MHz cpu1: FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,DS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,PBE,SSE3,DTES64,MWAIT,DS-CPL,VMX,EST,TM2,SSSE3,CX16,xTPR,PDCM,XSAVE,NXE,LONG,LAHF,PERF,SENSOR cpu1: 1MB 64b/line 4-way L2 cache cpu1: smt 0, core 1, package 0 ioapic0 at mainbus0: apid 2 pa 0xfec0, version 20, 24 pins acpiprt0 at acpi0: bus 0 (PCI0) acpiprt1 at acpi0: bus 1 (PEX0) acpiprt2 at acpi0: bus 2 (PEX1) acpiprt3 at acpi0: bus -1 (PEX2) acpiprt4 at acpi0: bus -1 (PEX3) acpiprt5 at acpi0: bus -1 (PEX4) acpiprt6 at acpi0: bus -1 (PEX5) acpiprt7 at acpi0: bus 3 (HUB0) acpicpu0 at acpi0: C1(@1 halt!), FVS, 1600, 1200 MHz acpicpu1 at acpi0: C1(@1 halt!), FVS, 1600, 1200 MHz acpibtn0 at acpi0: PWRB "PNP0700" at acpi0 not configured "PNP0501" at acpi0 not configured "PNP0501" at acpi0 not configured "PNP0400" at acpi0 not configured "PNP0F13" at acpi0 not 
configured "PNP0303" at acpi0 not configured pci0 at mainbus0 bus 0 pchb0 at pci0 dev 0 function 0 "Intel G41 Host" rev 0x03 inteldrm0 at pci0 dev 2 function 0 "Intel G41 Video" rev 0x03 drm0 at inteldrm0 intagp0 at inteldrm0 agp0 at intagp0: aperture at 0xd000, size 0x1000 inteldrm0: msi inteldrm0: 1280x1024, 32bpp wsdisplay0 at inteldrm0 mux 1: console (std, vt100 emulation) wsdisplay0: screen 1-5 added (std, vt100 emulation) azalia0 at pci0 dev 27 function 0 "Intel 82801GB HD Audio" rev 0x01: msi azalia0: codecs: Realtek/0x0887 audio0 at azalia0 ppb0 at pci0 dev 28 function 0 "Intel 82801GB PCIE" rev 0x01: msi pci1 at ppb0 bus 1 ppb1 at pci0 dev 28 function 1 "Intel 82801GB PCIE" rev 0x01: msi pci2 at ppb1 bus 2 re0 at pci2 dev 0 function 0 "Realtek 8168" rev 0x02: RTL8168C/8111C (0x3c00), msi, address 00:24:1d:86:28:95 rgephy0 at re0 phy 7: RTL8169S/8110S/8211 PHY, rev. 2 uhci0 at pci0 dev 29 function 0 "Intel 82801GB USB" rev 0x01: apic 2 int 23 uhci1 at pci0 dev 29 function 1 "Intel 82801GB USB" rev 0x01: apic 2 int 19 uhci2 at pci0 dev 29 function 2 "Intel 82801GB USB" rev 0x01: apic 2 int 18 uhci3 at pci0 dev 29 function 3 "Intel 82801GB USB" rev 0x01: apic 2 int 16 ehci0 at pci0 dev 29 function 7 "Intel 82801GB USB" rev 0x01: apic 2 int 23 usb0 at ehci0: USB revision 2.0 uhub0 at usb0 configuration 1 interface 0 "Intel EHCI root hub" rev 2.00/1.00 addr 1 ppb2 at pci0 dev 30 function 0 "Intel 82801BA Hub-to-PCI" rev 0xe1 pci3 at ppb2 bus 3 pcib0 at pci0 dev 31 function 0 "Intel 82801GB LPC" rev 0x01 pciide0 at pci0 dev 31 function 2 "Intel 82801GB SATA" rev 0x01: DMA, channel 0 wired to compatibility, channel 1 wired to compatibility wd0 at pciide0 channel 0 drive 0: wd0: 16-sector PIO, LBA48, 476938MB, 976771055 sectors wd0(pciide0:0:0): using PIO mode 4, Ultra-DMA mode 6 atapiscsi0 at pciide0
Re: relayd send/expect syntax
> Running the most recent amd64 snapshot on ESXi.
>
> OpenBSD r1.mwlucas.org 6.0 GENERIC#162 amd64
>
> I'm trying to use relayd's check send/expect support to verify a
> daemon's banner comes up. After problems I've stripped this down to
> the simplest possible config, a single known good mail server. The server
> keeps showing up as down, with a TCP timeout. Packet sniffer shows
> that the connection opens and that the SMTP banner is returned in less
> than a second.
>
> Am I doing something obviously stupid here?
>
> Here's the config and the debugging output.
>
> relayd.conf:
> ---
> ext_ip="203.0.113.213"
>
> log updates
> timeout 9000
>
> table { 104.236.197.233 }
>
> redirect smtp {
>     listen on $ext_ip port 587 interface em0
>     forward to check send nothing expect "200 *"
> }
> --
>
> Why have the "timeout 9000"? Well, because of the error I get:
>
> relayd -d
> pfe: filter init done
> startup
> socket_rlimit: max open files 1024
> socket_rlimit: max open files 1024
> socket_rlimit: max open files 1024
> socket_rlimit: max open files 1024
> relayd_tls_ticket_rekey: rekeying tickets
> init_tables: created 1 tables
> hce_notify_done: 104.236.197.233 (tcp read timeout)
> host 104.236.197.233, check send expect (9020ms,tcp read timeout), state
> unknown -> down, availability 0.00%
> pfe_dispatch_hce: state -1 for host 1 104.236.197.233
> ^Chce exiting, pid 12145
> kill_tables: deleted 1 tables
> flush_rulesets: flushed rules
> pfe exiting, pid 67580
> relay exiting, pid 72564
> ca exiting, pid 19097
> relay exiting, pid 72558
> relay exiting, pid 72790
> ca exiting, pid 1431
> ca exiting, pid 889
> parent terminating, pid 81783
>
> Any suggestions, folks?

Does the daemon actually return "200"?

--
$ nc -vv localhost 25
Connection to localhost 25 port [tcp/smtp] succeeded!
220 elanoir.my.domain ESMTP OpenSMTPD
^C
--
Dale
Re: sendsyslog: dropped 4 messages, error 55
Hi,

On 30.01.2017 at 18:17, Peter Fraser wrote:
> My /var/log/messages is filling up with messages like the following:
>
> Jan 30 10:28:06 gateway sendsyslog: dropped 4 messages, error 55
> Jan 30 10:28:06 gateway sendsyslog: dropped 2 messages, error 55
> Jan 30 10:28:06 gateway sendsyslog: dropped 2 messages, error 55
> Jan 30 10:28:06 gateway sendsyslog: dropped 1 message, error 55
> Jan 30 10:28:06 gateway sendsyslog: dropped 2 messages, error 55
> Jan 30 10:28:06 gateway last message repeated 2 times
> Jan 30 10:28:06 gateway sendsyslog: dropped 4 messages, error 55
> Jan 30 10:28:06 gateway sendsyslog: dropped 2 messages, error 55
> Jan 30 10:28:06 gateway last message repeated 2 times
> Jan 30 10:28:06 gateway sendsyslog: dropped 1 message, error 55
> Jan 30 10:28:06 gateway sendsyslog: dropped 1 message, error 55
>
> The messages occur in bursts with several hundred messages per burst,
> and there may be several seconds or hours between the bursts.
>
> I am quite willing to believe that I have done something stupid, but I have no
> idea what.
> Any hints to find out what is generating these messages.

We observe the same problem. Our system is logging blocked packets to a remote system using logger and syslog, as documented in the FAQ (http://www.openbsd.org/faq/pf/logging.html). We have gotten these messages since the upgrade to 5.9 (amd64) stable. After the upgrade to 6.0 the problem remains. I ran some tests on a VM running 6.0 stable amd64.
I could reproduce it with a pcap which produces around 1000 lines when piped through tcpdump:

# tcpdump -n -e -s 160 -ttt -r /var/log/pflog2syslog | logger -t pf -p local0.info

Feb 8 11:55:02 ares sendsyslog: dropped 8 messages, error 55
Feb 8 11:55:02 ares sendsyslog: dropped 4 messages, error 55
Feb 8 11:55:02 ares sendsyslog: dropped 3 messages, error 55
Feb 8 11:55:02 ares sendsyslog: dropped 8 messages, error 55
Feb 8 11:55:02 ares sendsyslog: dropped 8 messages, error 55
Feb 8 11:55:02 ares sendsyslog: dropped 9 messages, error 55
Feb 8 11:55:02 ares last message repeated 4 times
Feb 8 11:55:02 ares sendsyslog: dropped 8 messages, error 55
Feb 8 11:55:02 ares last message repeated 2 times
Feb 8 11:55:02 ares sendsyslog: dropped 5 messages, error 55
Feb 8 11:55:02 ares sendsyslog: dropped 1 message, error 55
Feb 8 11:55:02 ares sendsyslog: dropped 8 messages, error 55
Feb 8 11:55:02 ares sendsyslog: dropped 9 messages, error 55
Feb 8 11:55:02 ares last message repeated 5 times
Feb 8 11:55:02 ares sendsyslog: dropped 8 messages, error 55
Feb 8 11:55:02 ares last message repeated 2 times
Feb 8 11:55:02 ares sendsyslog: dropped 4 messages, error 55
Feb 8 11:55:02 ares sendsyslog: dropped 2 messages, error 55
Feb 8 11:55:02 ares sendsyslog: dropped 8 messages, error 55
Feb 8 11:55:02 ares sendsyslog: dropped 9 messages, error 55
Feb 8 11:55:02 ares last message repeated 5 times
Feb 8 11:55:02 ares sendsyslog: dropped 8 messages, error 55

dmesg:

OpenBSD 6.0 (GENERIC.MP) #2: Mon Oct 17 10:22:47 CEST 2016 r...@stable-60-amd64.mtier.org:/binpatchng/work-binpatch60-amd64/src/sys/arch/amd64/compile/GENERIC.MP real mem = 4265054208 (4067MB) avail mem = 4131319808 (3939MB) mpath0 at root scsibus0 at mpath0: 256 targets mainbus0 at root bios0 at mainbus0: SMBIOS rev. 2.6 @ 0xbf49c000 (84 entries) bios0: vendor Dell Inc. version "3.0.0" date 01/31/2011 bios0: Dell Inc.
PowerEdge R710 acpi0 at bios0: rev 2 acpi0: sleep states S0 S4 S5 acpi0: tables DSDT FACP APIC SPCR HPET DM__ MCFG WD__ SLIC ERST HEST BERT EINJ SRAT TCPA SSDT acpi0: wakeup devices PCI0(S5) acpitimer0 at acpi0: 3579545 Hz, 24 bits acpimadt0 at acpi0 addr 0xfee0: PC-AT compat cpu0 at mainbus0: apid 32 (boot processor) cpu0: Intel(R) Xeon(R) CPU X5647 @ 2.93GHz, 2926.41 MHz cpu0: FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUS H,DS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,PBE,SSE3,PCLMUL,DTES64,MWAIT,DS-CPL,VMX ,SMX,EST,TM2,SSSE3,CX16,xTPR,PDCM,PCID,DCA,SSE4.1,SSE4.2,POPCNT,AES,NXE,PAGE1 GB,LONG,LAHF,PERF,ITSC,SENSOR,ARAT cpu0: 256KB 64b/line 8-way L2 cache cpu0: smt 0, core 0, package 1 mtrr: Pentium Pro MTRR support, 10 var ranges, 88 fixed ranges cpu0: apic clock running at 132MHz cpu0: mwait min=64, max=64, C-substates=0.2.1.1, IBE cpu1 at mainbus0: apid 34 (application processor) cpu1: Intel(R) Xeon(R) CPU X5647 @ 2.93GHz, 2926.00 MHz cpu1: FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUS H,DS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,PBE,SSE3,PCLMUL,DTES64,MWAIT,DS-CPL,VMX ,SMX,EST,TM2,SSSE3,CX16,xTPR,PDCM,PCID,DCA,SSE4.1,SSE4.2,POPCNT,AES,NXE,PAGE1 GB,LONG,LAHF,PERF,ITSC,SENSOR,ARAT cpu1: 256KB 64b/line 8-way L2 cache cpu1: smt 0, core 1, package 1 cpu2 at mainbus0: apid 50 (application processor) cpu2: Intel(R) Xeon(R) CPU X5647 @ 2.93GHz, 2926.00 MHz cpu2: FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUS H,DS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,PBE,SSE3,PCLMUL,DTES64,MWAIT,DS-CPL,VMX
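For context on the error number in the thread above: on OpenBSD, errno 55 is ENOBUFS, i.e. the producer (here, the logger pipeline) is writing log messages faster than syslogd can drain them. A generic, hypothetical mitigation sketch (not OpenBSD's actual sendsyslog path): wrap the send call and retry with a short backoff while it fails with ENOBUFS, shedding fewer messages during bursts.

```c
#include <errno.h>
#include <time.h>

/* Call fn(arg) up to max_tries times, sleeping backoff_ns between tries
 * while it fails with ENOBUFS; returns fn's first non-ENOBUFS result,
 * or the last failing result if all tries were exhausted. */
static int send_with_backoff(int (*fn)(void *), void *arg,
                             int max_tries, long backoff_ns) {
    int rc = -1;
    for (int i = 0; i < max_tries; i++) {
        errno = 0;
        rc = fn(arg);
        if (rc >= 0 || errno != ENOBUFS)
            return rc;             /* success, or an unrelated error */
        struct timespec ts = { 0, backoff_ns };
        nanosleep(&ts, NULL);      /* give syslogd a chance to drain */
    }
    return rc;
}

/* Demo stand-in for a send call: fails with ENOBUFS a few times first. */
static int demo_fails_left = 2;
static int demo_send(void *arg) {
    (void)arg;
    if (demo_fails_left-- > 0) {
        errno = ENOBUFS;
        return -1;
    }
    return 0;
}
```

In the tcpdump|logger reproduction above, a throttle of this kind between the pipe reader and the syslog call would trade latency for completeness; whether that trade is acceptable depends on the logging policy.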
jme0: watchdog timeout
Hi,

I use OpenBSD 6.0 amd64 (stable) on a Shuttle XS35v2. I've installed "ushare", but the problem is the same with "minidlna", and I don't think the problem comes from these apps...

When I try to read a big file (e.g. a 1 GB video) from my DLNA player, nothing starts playing and the jme driver on the Shuttle reports these warnings in dmesg:

jme0: watchdog timeout
jme0: stopping transmitter timeout!
jme0: stopping transmitter timeout!
jme0: stopping transmitter timeout!
jme0: watchdog timeout
jme0: stopping transmitter timeout!
jme0: stopping transmitter timeout!
jme0: watchdog timeout
jme0: stopping transmitter timeout!
jme0: stopping transmitter timeout!
jme0: stopping transmitter timeout!
jme0: watchdog timeout
jme0: stopping transmitter timeout!
jme0: stopping transmitter timeout!
jme0: watchdog timeout
jme0: watchdog timeout

and the NIC sometimes stops working until I reboot the machine. I saw an identical report on the mailing list some time ago, but I didn't manage to find it. I attach my dmesg in case it helps. Thanks for your help.

Morgan

OpenBSD 6.0 (GENERIC.MP) #2: Mon Oct 17 10:22:47 CEST 2016 r...@stable-60-amd64.mtier.org:/binpatchng/work-binpatch60-amd64/src/sys/arch/amd64/compile/GENERIC.MP real mem = 2120941568 (2022MB) avail mem = 2052247552 (1957MB) mpath0 at root scsibus0 at mpath0: 256 targets mainbus0 at root bios0 at mainbus0: SMBIOS rev. 2.6 @ 0xfc8b0 (23 entries) bios0: vendor American Megatrends Inc. version "2.01" date 11/14/2012 bios0: Shuttle Inc.
XS35 acpi0 at bios0: rev 2 acpi0: sleep states S0 S3 S4 S5 acpi0: tables DSDT FACP APIC MCFG SLIC OEMB HPET GSCI acpi0: wakeup devices P0P1(S4) AZAL(S3) P0P4(S4) P0P5(S4) JLAN(S3) P0P6(S4) RLAN(S3) P0P7(S4) P0P8(S4) P0P9(S4) USB0(S3) USB1(S3) USB2(S3) USB3(S3) EUSB(S3) acpitimer0 at acpi0: 3579545 Hz, 24 bits acpimadt0 at acpi0 addr 0xfee0: PC-AT compat cpu0 at mainbus0: apid 0 (boot processor) cpu0: Intel(R) Atom(TM) CPU D525 @ 1.80GHz, 2154.87 MHz cpu0: FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,DS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,PBE,SSE3,DTES64,MWAIT,DS-CPL,TM2,SSSE3,CX16,xTPR,PDCM,MOVBE,NXE,LONG,LAHF,PERF,SENSOR cpu0: 512KB 64b/line 8-way L2 cache cpu0: smt 0, core 0, package 0 mtrr: Pentium Pro MTRR support, 8 var ranges, 88 fixed ranges cpu0: apic clock running at 199MHz cpu0: mwait min=64, max=64, C-substates=0.1, IBE cpu1 at mainbus0: apid 2 (application processor) cpu1: Intel(R) Atom(TM) CPU D525 @ 1.80GHz, 1795.50 MHz cpu1: FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,DS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,PBE,SSE3,DTES64,MWAIT,DS-CPL,TM2,SSSE3,CX16,xTPR,PDCM,MOVBE,NXE,LONG,LAHF,PERF,SENSOR cpu1: 512KB 64b/line 8-way L2 cache cpu1: smt 0, core 1, package 0 cpu2 at mainbus0: apid 1 (application processor) cpu2: Intel(R) Atom(TM) CPU D525 @ 1.80GHz, 1795.50 MHz cpu2: FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,DS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,PBE,SSE3,DTES64,MWAIT,DS-CPL,TM2,SSSE3,CX16,xTPR,PDCM,MOVBE,NXE,LONG,LAHF,PERF,SENSOR cpu2: 512KB 64b/line 8-way L2 cache cpu2: smt 1, core 0, package 0 cpu3 at mainbus0: apid 3 (application processor) cpu3: Intel(R) Atom(TM) CPU D525 @ 1.80GHz, 1795.50 MHz cpu3: FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,DS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,PBE,SSE3,DTES64,MWAIT,DS-CPL,TM2,SSSE3,CX16,xTPR,PDCM,MOVBE,NXE,LONG,LAHF,PERF,SENSOR cpu3: 512KB 64b/line 8-way L2 cache cpu3: smt 1, core 
1, package 0 ioapic0 at mainbus0: apid 4 pa 0xfec0, version 20, 24 pins acpimcfg0 at acpi0 addr 0xe000, bus 0-255 acpihpet0 at acpi0: 14318179 Hz acpiprt0 at acpi0: bus 0 (PCI0) acpiprt1 at acpi0: bus 4 (P0P1) acpiprt2 at acpi0: bus 1 (P0P4) acpiprt3 at acpi0: bus 2 (P0P5) acpiprt4 at acpi0: bus -1 (P0P6) acpiprt5 at acpi0: bus 3 (P0P7) acpiprt6 at acpi0: bus -1 (P0P8) acpiprt7 at acpi0: bus -1 (P0P9) acpiec0 at acpi0 acpicpu0 at acpi0: C1(@1 halt!) acpicpu1 at acpi0: C1(@1 halt!) acpicpu2 at acpi0: C1(@1 halt!) acpicpu3 at acpi0: C1(@1 halt!) acpitz0 at acpi0: critical temperature is 104 degC "PNP0303" at acpi0 not configured "PNP0F03" at acpi0 not configured acpibtn0 at acpi0: SLPB acpibtn1 at acpi0: PWRB "PNP0C14" at acpi0 not configured acpivideo0 at acpi0: GFX0 acpivout0 at acpivideo0: LCD_ pci0 at mainbus0 bus 0 pchb0 at pci0 dev 0 function 0 "Intel Pineview DMI" rev 0x02 inteldrm0 at pci0 dev 2 function 0 "Intel Pineview Video" rev 0x02 drm0 at inteldrm0 intagp0 at inteldrm0 agp0 at intagp0: aperture at 0xd000, size 0x1000 inteldrm0: msi inteldrm0: 1024x768 wsdisplay0 at inteldrm0 mux 1: console (std, vt100 emulation) wsdisplay0: screen 1-5 added (std, vt100 emulation) "Intel Pineview Video" rev 0x02 at pci0 dev 2 function 1 not configured azalia0 at pci0 dev 27 function 0 "Intel 82801GB HD Audio" rev 0x02: msi azalia0: codecs: IDT 92HD81B1X audio0 at azalia0 ppb0 at pci0 dev 28 function 0 "Intel 82801GB PCIE" rev 0x02: msi pci1 at ppb0 bus 1 ppb1