Did the vm previously have an fdc?  I doubt it.  I am surprised fdcprobe()
returns success.

Klemens Nanni <[email protected]> wrote:

> Just upgraded a standard test install in vmm(4) to the latest snap and
> noticed new and garbled output:
> 
>       fdc0 at isa0 port 0x3f0/6 irq 6 drq 2
>       intr_establish: pic pic0 pin 6: can't share type 3 with 2
>       com0 at isa0 port 0x3f8/8 irq 4: ns8250, no fifo
>       ...
>       reordering libraries:fdcresult: overrun
>        done.
>       ...
> 
> No idea what this means; the VM works and I don't use fdc(4).
> 
> For completeness, the vmm host is the snapshot booting
>         OpenBSD 7.0-current (GENERIC.MP) #52: Mon Oct 25 10:15:58 MDT 2021
> and has vmm-firmware-1.14.0p0 installed.
> 
> I have been using vmm for years; this is the first time this has happened.
> 
> Full bsd.sp dmesg:
> 
> Using drive 0, partition 3.
> Loading......
> probing: pc0 com0 mem[638K 510M a20=on] 
> disk: hd0+
> >> OpenBSD/amd64 BOOT 3.53
> \
> com0: 115200 baud
> switching console to com0
> >> OpenBSD/amd64 BOOT 3.53
> boot> 
> booting hd0a:/bsd: 14697752+3376136+347200+0+1163264 
> [1061452+128+1161000+874382]=0x15a3588
> entry point at 0xffffffff81001000
> [ using 3097992 bytes of bsd ELF symbol table ]
> Copyright (c) 1982, 1986, 1989, 1991, 1993
>         The Regents of the University of California.  All rights reserved.
> Copyright (c) 1995-2021 OpenBSD. All rights reserved.  https://www.OpenBSD.org
> 
> OpenBSD 7.0-current (GENERIC) #92: Fri Nov 12 18:23:33 MST 2021
>     [email protected]:/usr/src/sys/arch/amd64/compile/GENERIC
> real mem = 520081408 (495MB)
> avail mem = 488517632 (465MB)
> random: good seed from bootblocks
> mpath0 at root
> scsibus0 at mpath0: 256 targets
> mainbus0 at root
> bios0 at mainbus0: SMBIOS rev. 2.4 @ 0xf36e0 (10 entries)
> bios0: vendor SeaBIOS version "1.14.0p0-OpenBSD-vmm" date 01/01/2011
> bios0: OpenBSD VMM
> acpi at bios0 not configured
> cpu0 at mainbus0: (uniprocessor)
> cpu0: Intel(R) Core(TM) i5-3320M CPU @ 2.60GHz, 2595.33 MHz, 06-3a-09
> cpu0: 
> FPU,VME,DE,PSE,TSC,MSR,PAE,CX8,SEP,PGE,CMOV,PAT,PSE36,CFLUSH,MMX,FXSR,SSE,SSE2,SSE3,PCLMUL,SSSE3,CX16,SSE4.1,SSE4.2,POPCNT,AES,XSAVE,AVX,F16C,RDRAND,HV,NXE,LONG,LAHF,ITSC,FSGSBASE,SMEP,ERMS,MD_CLEAR,MELTDOWN
> cpu0: 256KB 64b/line 8-way L2 cache
> cpu0: smt 0, core 0, package 0
> cpu0: using VERW MDS workaround
> pvbus0 at mainbus0: OpenBSD
> pvclock0 at pvbus0
> pci0 at mainbus0 bus 0
> pchb0 at pci0 dev 0 function 0 "OpenBSD VMM Host" rev 0x00
> virtio0 at pci0 dev 1 function 0 "Qumranet Virtio RNG" rev 0x00
> viornd0 at virtio0
> virtio0: irq 3
> virtio1 at pci0 dev 2 function 0 "Qumranet Virtio Network" rev 0x00
> vio0 at virtio1: address fe:e1:bb:d1:41:41
> virtio1: irq 5
> virtio2 at pci0 dev 3 function 0 "Qumranet Virtio Storage" rev 0x00
> vioblk0 at virtio2
> scsibus1 at vioblk0: 1 targets
> sd0 at scsibus1 targ 0 lun 0: <VirtIO, Block Device, >
> sd0: 2048MB, 512 bytes/sector, 4194304 sectors
> virtio2: irq 6
> virtio3 at pci0 dev 4 function 0 "OpenBSD VMM Control" rev 0x00
> vmmci0 at virtio3
> virtio3: irq 7
> isa0 at mainbus0
> isadma0 at isa0
> fdc0 at isa0 port 0x3f0/6 irq 6 drq 2
> intr_establish: pic pic0 pin 6: can't share type 3 with 2
> com0 at isa0 port 0x3f8/8 irq 4: ns8250, no fifo
> com0: console
> dt: 445 probes
> vscsi0 at root
> scsibus2 at vscsi0: 256 targets
> softraid0 at root
> scsibus3 at softraid0: 256 targets
> root on sd0a (5f9e458ed30b39ab.a) swap on sd0b dump on sd0b
> Automatic boot in progress: starting file system checks.
> /dev/sd0a (5f9e458ed30b39ab.a): file system is clean; not checking
> pf enabled
> starting network
> reordering libraries:fdcresult: overrun
>  done.
> starting early daemons: syslogd pflogd ntpd.
> starting RPC daemons:.
> savecore: no core dump
> checking quotas: done.
> clearing /tmp
> kern.securelevel: 0 -> 1
> creating runtime link editor directory cache.
> preserving editor files.
> starting network daemons: sshd smtpd sndiod.
> starting local daemons: cron.
> Sat Nov 13 16:13:43 UTC 2021
> 
> OpenBSD/amd64 (test.my.domain) (tty00)
> 
> login:
> 