vmd writes corrupt qcow2 images

2022-04-18 Thread Thomas L.
Hi,

I recently tried to use qemu-img on the qcow2 images of my VMs, and
qemu-img finds them corrupted. I can reproduce the issue in the
following way (on -current, but it is the same on -stable; I tried
different hosts to rule out hardware errors):

marsden# vmctl create -s 300G test.qcow2
vmctl: qcow2 imagefile created
marsden# qemu-img check test.qcow2
No errors were found on the image.
Image end offset: 262144
marsden# vmctl start -cL -B net -b /bsd.rd -d test.qcow2 test
Connected to /dev/ttyp4 (speed 115200)
Copyright (c) 1982, 1986, 1989, 1991, 1993
The Regents of the University of California.  All rights reserved.
Copyright (c) 1995-2022 OpenBSD. All rights reserved.  https://www.OpenBSD.org

OpenBSD 7.1-current (RAMDISK_CD) #444: Sat Apr 16 11:11:27 MDT 2022
dera...@amd64.openbsd.org:/usr/src/sys/arch/amd64/compile/RAMDISK_CD
real mem = 520093696 (496MB)
avail mem = 500371456 (477MB)
random: boothowto does not indicate good seed
mainbus0 at root
bios0 at mainbus0
acpi at bios0 not configured
cpu0 at mainbus0: (uniprocessor)
cpu0: Intel(R) Core(TM) i3-6006U CPU @ 2.00GHz, 1992.01 MHz, 06-4e-03
cpu0: FPU,VME,DE,PSE,TSC,MSR,PAE,CX8,SEP,PGE,CMOV,PAT,PSE36,CFLUSH,MMX,FXSR,SSE,SSE2,SSE3,PCLMUL,SSSE3,FMA3,CX16,SSE4.1,SSE4.2,MOVBE,POPCNT,AES,XSAVE,AVX,F16C,RDRAND,HV,NXE,PAGE1GB,LONG,LAHF,ABM,3DNOWP,ITSC,FSGSBASE,BMI1,AVX2,SMEP,BMI2,ERMS,RDSEED,ADX,SMAP,CLFLUSHOPT,MD_CLEAR,MELTDOWN
cpu0: 256KB 64b/line 8-way L2 cache
cpu0: using VERW MDS workaround
pvbus0 at mainbus0: OpenBSD
pci0 at mainbus0 bus 0
pchb0 at pci0 dev 0 function 0 "OpenBSD VMM Host" rev 0x00
virtio0 at pci0 dev 1 function 0 "Qumranet Virtio RNG" rev 0x00
viornd0 at virtio0
virtio0: irq 3
virtio1 at pci0 dev 2 function 0 "Qumranet Virtio Network" rev 0x00
vio0 at virtio1: address fe:e1:bb:d1:8a:69
virtio1: irq 5
virtio2 at pci0 dev 3 function 0 "Qumranet Virtio Storage" rev 0x00
vioblk0 at virtio2
scsibus0 at vioblk0: 1 targets
sd0 at scsibus0 targ 0 lun 0: 
sd0: 307200MB, 512 bytes/sector, 629145600 sectors
virtio2: irq 6
virtio3 at pci0 dev 4 function 0 "OpenBSD VMM Control" rev 0x00
vmmci0 at virtio3
virtio3: irq 7
isa0 at mainbus0
com0 at isa0 port 0x3f8/8 irq 4: ns8250, no fifo
com0: console
softraid0 at root
scsibus1 at softraid0: 256 targets
PXE boot MAC address fe:e1:bb:d1:8a:69, interface vio0
root on rd0a swap on rd0b dump on rd0b
erase ^?, werase ^W, kill ^U, intr ^C, status ^T

Welcome to the OpenBSD/amd64 7.1 installation program.
Starting non-interactive mode in 5 seconds...
(I)nstall, (U)pgrade, (A)utoinstall or (S)hell? I
At any prompt except password prompts you can escape to a shell by
typing '!'. Default answers are shown in []'s and are selected by
pressing RETURN.  You can exit this program at any time by pressing
Control-C, but this can leave your system in an inconsistent state.

Terminal type? [vt220]
System hostname? (short form, e.g. 'foo') test

Available network interfaces are: vio0 vlan0.
Which network interface do you wish to configure? (or 'done') [vio0]
IPv4 address for vio0? (or 'autoconf' or 'none') [autoconf]
IPv6 address for vio0? (or 'autoconf' or 'none') [none]
Available network interfaces are: vio0 vlan0.
Which network interface do you wish to configure? (or 'done') [done]
Using DNS domainname my.domain
Using DNS nameservers at 100.64.1.2

Password for root account? (will not echo)
Password for root account? (again)
Start sshd(8) by default? [yes]
Change the default console to com0? [yes]
Available speeds are: 9600 19200 38400 57600 115200.
Which speed should com0 use? (or 'done') [115200]
Setup a user? (enter a lower-case loginname, or 'no') [no]
Since no user was setup, root logins via sshd(8) might be useful.
WARNING: root is targeted by password guessing attacks, pubkeys are safer.
Allow root ssh login? (yes, no, prohibit-password) [no]

Available disks are: sd0.
Which disk is the root disk? ('?' for details) [sd0]
No valid MBR or GPT.
Use (W)hole disk MBR, whole disk (G)PT or (E)dit? [whole]
Setting OpenBSD MBR partition to whole sd0...done.
The auto-allocated layout for sd0 is:
#                size           offset  fstype [fsize bsize   cpg]
  a:          1024.0M               64  4.2BSD   2048 16384     1 # /
  b:           752.0M          2097216    swap
  c:        307200.0M                0  unused
  d:          4096.0M          3637312  4.2BSD   2048 16384     1 # /tmp
  e:          5088.0M         12025920  4.2BSD   2048 16384     1 # /var
  f:          6144.0M         22446144  4.2BSD   2048 16384     1 # /usr
  g:          1024.0M         35029056  4.2BSD   2048 16384     1 # /usr/X11R6
  h:         20480.0M         37126208  4.2BSD   2048 16384     1 # /usr/local
  i:          3072.0M         79069248  4.2BSD   2048 16384     1 # /usr/src
  j:          6144.0M         85360704  4.2BSD   2048 16384     1 # /usr/obj
  k:        259376.0M         97943616  4.2BSD   4096 32768     1 # /home
Use (A)uto layout, (E)dit auto layout, or create (C)ustom layout? [a]
/dev/rsd0a: 1024.0MB in 
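One way to narrow down where vmd diverges from the spec is to dump the
fixed qcow2 header fields of the image before and after the install and
compare them with what qemu-img check complains about. An untested
sketch, assuming the standard qcow2 layout (all header fields
big-endian; refcount_order exists at offset 96 only in version-3
headers, and 16-bit refcounts, i.e. a value of 4, are the qemu-img
default):

marsden# dd if=test.qcow2 bs=1 count=8 2>/dev/null | od -A d -t x1
marsden# dd if=test.qcow2 bs=1 skip=96 count=4 2>/dev/null | od -A n -t x1

The first command should show the magic "QFI\xfb" followed by the
32-bit version; in the second, anything other than 00 00 00 04 would
line up with the "unsupported refcount size" errors vmd logs elsewhere
in this archive.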

Re: vmd: spurious VM restarts

2021-06-26 Thread Thomas L.
On Wed, 7 Apr 2021 17:00:00 -0700
Mike Larkin wrote:
> Depends on the exact content that got swapped out (as we didn't handle
> TLB flushes correctly), so a crash was certainly a possibility.
> That's why I wanted to see the VMM_DEBUG output.
>
> In any case, Thomas should try -current and see if this problem is
> even reproducible.
>
> -ml

I've been running -current with VMM_DEBUG since Apr 14 and the problem
has not reproduced; instead I now see spurious stops. The output in
/var/log/messages on these occasions is:

Jun 19 03:31:16 golem vmd[95337]: vcpu_run_loop: vm 8 / vcpu 0 run ioctl failed: Invalid argument
Jun 19 03:31:16 golem /bsd: vcpu_run_vmx: can't read procbased ctls on exit
Jun 19 03:31:17 golem /bsd: vmm_free_vpid: freed VPID/ASID 8

There are also a lot of probably unrelated messages from all the VMs:

Jun 19 01:31:10 golem vmd[66318]: vionet_enq_rx: descriptor too small for packet data

I realize that this is an old version, so this might be an
already-fixed bug. I can upgrade to a newer snapshot, but the bug shows
up only about once per month, so by the time it shows again the
snapshot will be an old version once more.
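In the meantime, a cheap way to pin down the exact moment of a spurious
stop is a cron-driven watchdog; an untested sketch (the vmwatch script
name and log paths are made up):

#!/bin/sh
# record a timestamped vmctl status snapshot whenever the
# set of running VMs changes
new=$(vmctl status)
old=$(cat /var/log/vmwatch.last 2>/dev/null)
if [ "$new" != "$old" ]; then
    printf '%s\n' "$new" > /var/log/vmwatch.last
    { date; printf '%s\n' "$new"; } >> /var/log/vmwatch.log
fi

Run every minute from root's crontab (* * * * *
/usr/local/sbin/vmwatch), it gives a timestamp to correlate against
/var/log/messages.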

Kind regards,

Thomas



Re: vmd: spurious VM restarts

2021-04-07 Thread Thomas L.
> > Thomas: I looked at your host dmesg and your provided vm.conf. It
> > looks like 11 vm's with the default 512M memory and one (minecraft)
> > with 8G. Your host seems to have only 16GB of memory, some of which
> > is probably unavailable as it's used by the integrated gpu. I'm
> > wondering if you are effectively oversubscribing your memory here.
> >
> > I know we currently don't support swapping guest memory out, but not
> > sure what happens if we don't have the physical memory to fault a
> > page in and wire it.
> >
>
> Something else gets swapped out.

Wire == can't swap out?
top shows 15G of real memory available. That should be enough (8G +
11 * 0.5G = 13.5G), or is this inherently risky on 6.8?
I can try -current as suggested in the other mail. Is this a likely
cause, or should I run with VMM_DEBUG for further investigation? Is
"somewhat slower" with VMM_DEBUG still usable? I don't need full
performance, but ~a month of downtime until the problem shows again
would be too much.
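For the record, the configured guest memory can be totalled directly
from vmctl; an untested sketch, assuming MAXMEM is the fourth column of
vmctl status output and carries an M or G suffix:

golem# vmctl status | awk 'NR > 1 {
    m = $4; u = substr(m, length(m)); v = substr(m, 1, length(m) - 1)
    sum += (u == "G") ? v * 1024 : v
} END { printf "%.0fM configured\n", sum }'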

> > Even without a custom kernel with VMM_DEBUG, if it's a uvm_fault
> > issue you should see a message in the kernel buffer. Something like:
> >
> >   vmx_fault_page: uvm_fault returns N, GPA=0x, rip=0x
> >
> > mlarkin: thoughts on my hypothesis? Am I wildly off course?
> >
> > -dv
> >
>
> Yeah I was trying to catch the big dump when a VM resets. That would
> tell us if the vm caused the reset or if vmd(8) crashed for some
> reason.

But if vmd crashed, it wouldn't restart automatically, would it?
All VMs going down from a vmd crash would have been noticed.
That kernel message would have shown up in the dmesg too, wouldn't it?

Kind regards,

Thomas



Re: vmd: spurious VM restarts

2021-04-06 Thread Thomas L.
On Tue, 6 Apr 2021 14:28:09 -0700
Mike Larkin wrote:

> On Tue, Apr 06, 2021 at 09:15:10PM +0200, Thomas L. wrote:
> > On Tue, 6 Apr 2021 11:11:01 -0700
> > Mike Larkin wrote:
> > > Anything in the host's dmesg?
> >
>
> *host* dmesg. I think you misread what I was after...

The dmesg of the host was already attached to the first mail, below the
vm.conf (I mistakenly called the host a hypervisor, which I realize now
is not accurate). Since it was already attached, I figured you must
mean the VM's dmesg, compounding the confusion ...

Kind regards,

Thomas



Re: vmd: spurious VM restarts

2021-04-06 Thread Thomas L.
On Tue, 6 Apr 2021 11:11:01 -0700
Mike Larkin wrote:
> Anything in the host's dmesg?

Below are the dmesg and the latest syslog from one of the VMs.

OpenBSD 6.8 (GENERIC) #1: Tue Nov  3 09:04:47 MST 2020
r...@syspatch-68-amd64.openbsd.org:/usr/src/sys/arch/amd64/compile/GENERIC
real mem = 520085504 (495MB)
avail mem = 489435136 (466MB)
random: good seed from bootblocks
mpath0 at root
scsibus0 at mpath0: 256 targets
mainbus0 at root
bios0 at mainbus0: SMBIOS rev. 2.4 @ 0xf3f40 (10 entries)
bios0: vendor SeaBIOS version "1.11.0p3-OpenBSD-vmm" date 01/01/2011
bios0: OpenBSD VMM
acpi at bios0 not configured
cpu0 at mainbus0: (uniprocessor)
cpu0: Intel(R) Core(TM) i7-3770 CPU @ 3.40GHz, 3403.18 MHz, 06-3a-09
cpu0: FPU,VME,DE,PSE,TSC,MSR,PAE,CX8,SEP,PGE,CMOV,PAT,PSE36,CFLUSH,MMX,FXSR,SSE,SSE2,SSE3,PCLMUL,SSSE3,CX16,SSE4.1,SSE4.2,POPCNT,AES,XSAVE,AVX,F16C,RDRAND,HV,NXE,LONG,LAHF,ITSC,FSGSBASE,SMEP,ERMS,MD_CLEAR,MELTDOWN
cpu0: 256KB 64b/line 8-way L2 cache
cpu0: smt 0, core 0, package 0
cpu0: using VERW MDS workaround
pvbus0 at mainbus0: OpenBSD
pvclock0 at pvbus0
pci0 at mainbus0 bus 0
pchb0 at pci0 dev 0 function 0 "OpenBSD VMM Host" rev 0x00
virtio0 at pci0 dev 1 function 0 "Qumranet Virtio RNG" rev 0x00
viornd0 at virtio0
virtio0: irq 3
virtio1 at pci0 dev 2 function 0 "Qumranet Virtio Network" rev 0x00
vio0 at virtio1: address fe:e1:ba:d0:00:04
virtio1: irq 5
virtio2 at pci0 dev 3 function 0 "Qumranet Virtio Storage" rev 0x00
vioblk0 at virtio2
scsibus1 at vioblk0: 1 targets
sd0 at scsibus1 targ 0 lun 0: 
sd0: 307200MB, 512 bytes/sector, 629145600 sectors
virtio2: irq 6
virtio3 at pci0 dev 4 function 0 "OpenBSD VMM Control" rev 0x00
vmmci0 at virtio3
virtio3: irq 7
isa0 at mainbus0
isadma0 at isa0
com0 at isa0 port 0x3f8/8 irq 4: ns8250, no fifo
com0: console
vscsi0 at root
scsibus2 at vscsi0: 256 targets
softraid0 at root
scsibus3 at softraid0: 256 targets
root on sd0a (c14ce37920a910f7.a) swap on sd0b dump on sd0b
WARNING: / was not properly unmounted

Apr  6 14:39:33 schleuder /bsd: OpenBSD 6.8 (GENERIC) #1: Tue Nov  3 09:04:47 MST 2020
Apr  6 14:39:33 schleuder /bsd: r...@syspatch-68-amd64.openbsd.org:/usr/src/sys/arch/amd64/compile/GENERIC
Apr  6 14:39:33 schleuder /bsd: real mem = 520085504 (495MB)
Apr  6 14:39:33 schleuder /bsd: avail mem = 489435136 (466MB)
Apr  6 14:39:33 schleuder /bsd: random: good seed from bootblocks
Apr  6 14:39:33 schleuder /bsd: mpath0 at root
Apr  6 14:39:33 schleuder /bsd: scsibus0 at mpath0: 256 targets
Apr  6 14:39:33 schleuder /bsd: mainbus0 at root
Apr  6 14:39:33 schleuder /bsd: bios0 at mainbus0: SMBIOS rev. 2.4 @ 0xf3f40 (10 entries)
Apr  6 14:39:33 schleuder /bsd: bios0: vendor SeaBIOS version "1.11.0p3-OpenBSD-vmm" date 01/01/2011
Apr  6 14:39:33 schleuder /bsd: bios0: OpenBSD VMM
Apr  6 14:39:33 schleuder /bsd: acpi at bios0 not configured
Apr  6 14:39:33 schleuder /bsd: cpu0 at mainbus0: (uniprocessor)
Apr  6 14:39:33 schleuder /bsd: cpu0: Intel(R) Core(TM) i7-3770 CPU @ 3.40GHz, 3403.18 MHz, 06-3a-09
Apr  6 14:39:33 schleuder /bsd: cpu0: FPU,VME,DE,PSE,TSC,MSR,PAE,CX8,SEP,PGE,CMOV,PAT,PSE36,CFLUSH,MMX,FXSR,SSE,SSE2,SSE3,PCLMUL,SSSE3,CX16,SSE4.1,SSE4.2,POPCNT,AES,XSAVE,AVX,F16C,RDRAND,HV,NXE,LONG,LAHF,ITSC,FSGSBASE,SMEP,ERMS,MD_CLEAR,MELTDOWN
Apr  6 14:39:33 schleuder /bsd: cpu0: 256KB 64b/line 8-way L2 cache
Apr  6 14:39:33 schleuder /bsd: cpu0: smt 0, core 0, package 0
Apr  6 14:39:33 schleuder /bsd: cpu0: using VERW MDS workaround
Apr  6 14:39:33 schleuder /bsd: pvbus0 at mainbus0: OpenBSD
Apr  6 14:39:33 schleuder /bsd: pvclock0 at pvbus0
Apr  6 14:39:33 schleuder /bsd: pci0 at mainbus0 bus 0
Apr  6 14:39:33 schleuder /bsd: pchb0 at pci0 dev 0 function 0 "OpenBSD VMM Host" rev 0x00
Apr  6 14:39:33 schleuder /bsd: virtio0 at pci0 dev 1 function 0 "Qumranet Virtio RNG" rev 0x00
Apr  6 14:39:33 schleuder /bsd: viornd0 at virtio0
Apr  6 14:39:33 schleuder /bsd: virtio0: irq 3
Apr  6 14:39:33 schleuder /bsd: virtio1 at pci0 dev 2 function 0 "Qumranet Virtio Network" rev 0x00
Apr  6 14:39:33 schleuder /bsd: vio0 at virtio1: address fe:e1:ba:d0:00:04
Apr  6 14:39:33 schleuder /bsd: virtio1: irq 5
Apr  6 14:39:33 schleuder /bsd: virtio2 at pci0 dev 3 function 0 "Qumranet Virtio Storage" rev 0x00
Apr  6 14:39:33 schleuder /bsd: vioblk0 at virtio2
Apr  6 14:39:33 schleuder /bsd: scsibus1 at vioblk0: 1 targets
Apr  6 14:39:33 schleuder /bsd: sd0 at scsibus1 targ 0 lun 0: 
Apr  6 14:39:33 schleuder /bsd: sd0: 307200MB, 512 bytes/sector, 629145600 sectors
Apr  6 14:39:33 schleuder /bsd: virtio2: irq 6
Apr  6 14:39:33 schleuder /bsd: virtio3 at pci0 dev 4 function 0 "OpenBSD VMM Control" rev 0x00
Apr  6 14:39:33 schleuder /bsd: vmmci0 at virtio3
Apr  6 14:39:33 schleuder /bsd: virtio3: irq 7
Apr  6 14:39:33 schleuder /bsd: isa0 at mainbus0
Apr  6 14:39:33 schleuder /bsd: isadma0 at isa0
Apr  6 14:39:33 schleuder /bsd: com0 at isa0 port 0x3f8/8 irq 4: ns8250, no fifo
Apr  6 14:39:33 schleuder /bsd: com0: console
Apr  6 14:39:33 schleuder /bsd: vscsi0 at 

vmd: spurious VM restarts

2021-04-06 Thread Thomas L.
Hi,

I'm running OpenBSD 6.8 as a hypervisor with multiple OpenBSD VMs.
Regularly, all VMs get restarted, not at the same time but clustered.
The indications that this has happened are reduced uptime on the VMs,
some services failing to come back up, and the following logs:

# grep vmd /var/log/daemon
Apr  1 18:10:35 golem vmd[31367]: wiki: started vm 12 successfully, tty /dev/ttyp0
Apr  6 13:24:52 golem vmd[31367]: matrix: started vm 13 successfully, tty /dev/ttypb
Apr  6 13:25:55 golem vmd[31367]: matrix: started vm 13 successfully, tty /dev/ttypb
Apr  6 13:26:45 golem vmd[18933]: vmd: LSR UART write 0x8203d260 unsupported
Apr  6 13:26:45 golem vmd[31367]: ticketfrei: started vm 5 successfully, tty /dev/ttyp5
Apr  6 14:22:34 golem vmd[31367]: www: started vm 4 successfully, tty /dev/ttyp4
Apr  6 14:33:54 golem vmd[31367]: kibicara: started vm 8 successfully, tty /dev/ttyp8
Apr  6 14:35:02 golem vmd[31367]: vpn: started vm 3 successfully, tty /dev/ttyp3
Apr  6 14:36:38 golem vmd[31367]: relay: started vm 1 successfully, tty /dev/ttyp1
Apr  6 14:37:51 golem vmd[31367]: schleuder: started vm 2 successfully, tty /dev/ttyp2
Apr  6 14:40:34 golem vmd[31367]: mumble: started vm 6 successfully, tty /dev/ttyp6
Apr  6 14:41:58 golem vmd[31367]: minecraft: started vm 9 successfully, tty /dev/ttyp9

The restarts seem to be non-graceful, since the matrix VM needed a
manual fsck on /var. Going back over the logs, this seems to happen
about every month (not all restarts are this phenomenon, but Mar 8/10
and Feb 17/20/22 look like it):

# zgrep vmd /var/log/daemon.0.gz
Mar  8 19:43:07 golem vmd[31367]: wiki: started vm 12 successfully, tty /dev/ttyp0
Mar  8 19:43:37 golem vmd[31367]: ticketfrei: started vm 5 successfully, tty /dev/ttyp5
Mar 10 09:21:20 golem vmd[31367]: www: started vm 4 successfully, tty /dev/ttyp4
Mar 10 09:24:13 golem vmd[31367]: kibicara: started vm 8 successfully, tty /dev/ttyp8
Mar 10 09:26:13 golem vmd[31367]: vpn: started vm 3 successfully, tty /dev/ttyp3
Mar 10 09:28:40 golem vmd[31367]: gitea: started vm 7 successfully, tty /dev/ttyp7
Mar 10 09:29:01 golem vmd[31367]: relay: started vm 1 successfully, tty /dev/ttyp1
Mar 10 09:31:29 golem vmd[31367]: schleuder: started vm 2 successfully, tty /dev/ttyp2
Mar 10 09:34:02 golem vmd[31367]: mumble: started vm 6 successfully, tty /dev/ttyp6
Mar 10 09:35:44 golem vmd[31367]: minecraft: started vm 9 successfully, tty /dev/ttyp9
Mar 13 01:46:37 golem vmd[31367]: gitea: started vm 7 successfully, tty /dev/ttyp7
golem# zgrep vmd /var/log/daemon.1.gz
Feb 17 21:18:45 golem vmd[31367]: matrix: started vm 13 successfully, tty /dev/ttypc
Feb 20 08:32:28 golem vmd[31367]: wiki: started vm 12 successfully, tty /dev/ttyp0
Feb 20 08:33:14 golem vmd[31367]: ticketfrei: started vm 5 successfully, tty /dev/ttyp5
Feb 20 08:35:20 golem vmd[31367]: www: started vm 4 successfully, tty /dev/ttyp4
Feb 20 11:09:01 golem vmd[31367]: kibicara: started vm 8 successfully, tty /dev/ttyp8
Feb 20 11:10:18 golem vmd[31367]: vpn: started vm 3 successfully, tty /dev/ttyp3
Feb 20 11:11:52 golem vmd[31367]: gitea: started vm 7 successfully, tty /dev/ttyp7
Feb 22 00:51:03 golem vmd[31367]: relay: started vm 1 successfully, tty /dev/ttyp1
Feb 22 00:52:44 golem vmd[31367]: schleuder: started vm 2 successfully, tty /dev/ttyp2
Feb 22 00:53:59 golem vmd[31367]: mumble: started vm 6 successfully, tty /dev/ttyp6
Feb 22 00:54:45 golem vmd[31367]: minecraft: started vm 9 successfully, tty /dev/ttyp9
Feb 24 23:01:50 golem vmd[31367]: vmd_sighdlr: reload requested with SIGHUP
Feb 24 23:01:51 golem vmd[31367]: test: started vm 10 successfully, tty /dev/ttypa
Feb 24 23:01:51 golem vmd[52735]: test: unsupported refcount size
Feb 24 23:06:27 golem vmd[31367]: vmd_sighdlr: reload requested with SIGHUP
Feb 24 23:06:27 golem vmd[1230]: test: unsupported refcount size
Feb 24 23:06:27 golem vmd[31367]: matrix: started vm 13 successfully, tty /dev/ttypb
Feb 24 23:06:27 golem vmd[31367]: test: started vm 10 successfully, tty /dev/ttypc
Feb 24 23:10:20 golem vmd[31367]: matrix: started vm 13 successfully, tty /dev/ttypb

vm.conf and dmesg of the hypervisor are below. How would I go
about debugging this?
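As a first step, an untested sketch: stop the rc-managed daemon and run
vmd in the foreground with extra verbosity (-d keeps it attached to the
terminal and logging to stderr; -v can be repeated to raise the log
level). The vcpu-level messages discussed later in this thread
additionally require a host kernel built with option VMM_DEBUG.

golem# rcctl stop vmd
golem# vmd -dvv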

Kind regards,

Thomas


switch internal {
    interface bridge0
    locked lladdr
    group internal
}

vm relay {
    disk /data/vmd/relay.qcow2
    interface {
        switch internal
        lladdr fe:e1:ba:d0:00:03
    }
}

vm schleuder {
    disk /data/vmd/schleuder.qcow2
    interface {
        switch internal
        lladdr fe:e1:ba:d0:00:04
    }
}

vm vpn {
    disk /data/vmd/vpn.qcow2
    interface {
        switch internal
        lladdr fe:e1:ba:d0:00:05
    }
}

vm www {
    disk /data/vmd/www.qcow2
    interface {
        switch internal
        lladdr fe:e1:ba:d0:00:06
    }
}

vm ticketfrei {
    disk /data/vmd/ticketfrei.qcow2
   

snapshot upgrade hangs with "iwm0: DAD detected duplicate"

2021-03-02 Thread Thomas L.
Hi,

The current snapshot hangs during sysupgrade, after the root filesystem is mounted, with:

iwm0: DAD detected duplicate IPv6 address fe80:1::164f:8aff:fe25:dbef: NS in/out=1/1, NA in=0
iwm0: DAD complete for fe80:1::164f:8aff:fe25:dbef - duplicate found
iwm0: manual intervention required

The problem persists across multiple attempts, including a power cycle.
The full dmesg from bsd.upgrade is below.
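If nothing else on the network legitimately owns that link-local
address, one possible workaround, as an untested sketch, is to escape
to a shell from the upgrade ramdisk and disable duplicate address
detection (or IPv6 on the interface entirely) before the network is
brought up:

# sysctl net.inet6.ip6.dad_count=0
# ifconfig iwm0 -inet6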

Kind regards,

Thomas

OpenBSD 6.9-beta (RAMDISK_CD) #356: Tue Mar  2 10:50:43 MST 2021
dera...@amd64.openbsd.org:/usr/src/sys/arch/amd64/compile/RAMDISK_CD
real mem = 8465113088 (8072MB)
avail mem = 8204537856 (7824MB)
random: good seed from bootblocks
mainbus0 at root
bios0 at mainbus0: SMBIOS rev. 3.0 @ 0xe5f50 (43 entries)
bios0: vendor FUJITSU // Insyde Software Corp. version "Version 2.02" date 12/28/2017
bios0: FUJITSU LIFEBOOK A357
acpi0 at bios0: ACPI 5.0
acpi0: tables DSDT FACP UEFI UEFI SSDT SSDT TPM2 SSDT FOAT ASF! ASPT BOOT DBGP HPET APIC MCFG LPIT WSMT SSDT SSDT SSDT SSDT DBGP DBG2 SSDT SSDT DMAR FPDT
acpihpet0 at acpi0: 2399 Hz
acpimadt0 at acpi0 addr 0xfee0: PC-AT compat
cpu0 at mainbus0: apid 0 (boot processor)
cpu0: Intel(R) Core(TM) i3-6006U CPU @ 2.00GHz, 1996.00 MHz, 06-4e-03
cpu0: FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,DS,ACPI,MMX,FXSR,SSE,SSE2,SS,HTT,TM,PBE,SSE3,PCLMUL,DTES64,MWAIT,DS-CPL,VMX,EST,TM2,SSSE3,SDBG,FMA3,CX16,xTPR,PDCM,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,DEADLINE,AES,XSAVE,AVX,F16C,RDRAND,NXE,PAGE1GB,RDTSCP,LONG,LAHF,ABM,3DNOWP,PERF,ITSC,FSGSBASE,TSC_ADJUST,SGX,BMI1,AVX2,SMEP,BMI2,ERMS,INVPCID,MPX,RDSEED,ADX,SMAP,CLFLUSHOPT,PT,IBRS,IBPB,STIBP,SENSOR,ARAT,XSAVEOPT,XSAVEC,XGETBV1,XSAVES,MELTDOWN
cpu0: 256KB 64b/line 8-way L2 cache
cpu0: apic clock running at 24MHz
cpu0: mwait min=64, max=64, C-substates=0.2.1.2.4.1.1.1, IBE
cpu at mainbus0: not configured
cpu at mainbus0: not configured
cpu at mainbus0: not configured
ioapic0 at mainbus0: apid 2 pa 0xfec0, version 20, 120 pins
acpiprt0 at acpi0: bus 0 (PCI0)
acpiprt1 at acpi0: bus 1 (RP01)
acpiprt2 at acpi0: bus -1 (RP02)
acpiprt3 at acpi0: bus -1 (RP03)
acpiprt4 at acpi0: bus -1 (RP04)
acpiprt5 at acpi0: bus 2 (RP05)
acpiprt6 at acpi0: bus 3 (RP06)
acpiprt7 at acpi0: bus -1 (RP07)
acpiprt8 at acpi0: bus -1 (RP08)
acpiprt9 at acpi0: bus 4 (RP09)
acpiprt10 at acpi0: bus -1 (RP10)
acpiprt11 at acpi0: bus -1 (RP11)
acpiprt12 at acpi0: bus -1 (RP12)
acpiprt13 at acpi0: bus -1 (RP13)
acpiprt14 at acpi0: bus -1 (RP14)
acpiprt15 at acpi0: bus -1 (RP15)
acpiprt16 at acpi0: bus -1 (RP16)
acpiprt17 at acpi0: bus -1 (RP17)
acpiprt18 at acpi0: bus -1 (RP18)
acpiprt19 at acpi0: bus -1 (RP19)
acpiprt20 at acpi0: bus -1 (RP20)
acpiprt21 at acpi0: bus -1 (RP21)
acpiprt22 at acpi0: bus -1 (RP22)
acpiprt23 at acpi0: bus -1 (RP23)
acpiprt24 at acpi0: bus -1 (RP24)
acpiec0 at acpi0
"FUJ02E3" at acpi0 not configured
"FUJ0420" at acpi0 not configured
acpipci0 at acpi0 PCI0: 0x 0x0011 0x0001
acpicmos0 at acpi0
"PNP0C0C" at acpi0 not configured
"PNP0C0D" at acpi0 not configured
"ACPI0003" at acpi0 not configured
"PNP0C0A" at acpi0 not configured
"MSFT0101" at acpi0 not configured
acpicpu at acpi0 not configured
acpitz at acpi0 not configured
cpu0: using Skylake AVX MDS workaround
pci0 at mainbus0 bus 0
pchb0 at pci0 dev 0 function 0 "Intel Core 6G Host" rev 0x08
vga1 at pci0 dev 2 function 0 "Intel HD Graphics 520" rev 0x07
wsdisplay1 at vga1 mux 1: console (80x25, vt100 emulation)
"Intel Core GMM" rev 0x00 at pci0 dev 8 function 0 not configured
xhci0 at pci0 dev 20 function 0 "Intel 100 Series xHCI" rev 0x21: msi, xHCI 1.0
usb0 at xhci0: USB revision 3.0
uhub0 at usb0 configuration 1 interface 0 "Intel xHCI root hub" rev 3.00/1.00 addr 1
"Intel 100 Series MEI" rev 0x21 at pci0 dev 22 function 0 not configured
ahci0 at pci0 dev 23 function 0 "Intel 100 Series AHCI" rev 0x21: msi, AHCI 1.3.1
ahci0: port 0: 6.0Gb/s
ahci0: PHY offline on port 1
scsibus0 at ahci0: 32 targets
sd0 at scsibus0 targ 0 lun 0:  naa.500a07511f000e2a
sd0: 244198MB, 512 bytes/sector, 500118192 sectors, thin
ppb0 at pci0 dev 28 function 0 "Intel 100 Series PCIE" rev 0xf1: msi
pci1 at ppb0 bus 1
ppb1 at pci0 dev 28 function 4 "Intel 100 Series PCIE" rev 0xf1: msi
pci2 at ppb1 bus 2
iwm0 at pci2 dev 0 function 0 "Intel Dual Band Wireless AC 7265" rev 0x61, msi
ppb2 at pci0 dev 28 function 5 "Intel 100 Series PCIE" rev 0xf1: msi
pci3 at ppb2 bus 3
re0 at pci3 dev 0 function 0 "Realtek 8168" rev 0x15: RTL8168H/8111H (0x5400), msi, address a0:66:10:81:50:0a
rgephy0 at re0 phy 7: RTL8251 PHY, rev. 0
ppb3 at pci0 dev 29 function 0 "Intel 100 Series PCIE" rev 0xf1: msi
pci4 at ppb3 bus 4
vendor "Realtek", unknown product 0x524a (class undefined unknown subclass 
0x00, rev 0x01) at pci4 dev 0 function 0 not configured
"Intel 100 Series LPC" rev 0x21 at pci0 dev 31 function 0 not configured
"Intel 100 Series PMC" rev 0x21 at pci0 dev 31 function 2 not configured
"Intel 100 Series HD Audio" rev 0x21 at pci0 dev 31 function 3 not configured

Re: login_passwd: reject challenge service

2019-12-11 Thread Thomas L.
On Thu, 5 Dec 2019 13:35:40 +
"Lindner, Thomas 1. (Nokia - DE/Nuremberg)"
 wrote:
> The (untested) patch below makes login_passwd behave as described in
> the manpage.

I've now been able to test the patch and login/su/doas/ssh still work
as expected. All the other login_* styles in base are well behaved. Is
there a reason that this should not be done?

Kind regards,

Thomas

diff --git libexec/login_passwd/login.c libexec/login_passwd/login.c
index 09e683a7366..486d8bfcb8a 100644
--- libexec/login_passwd/login.c
+++ libexec/login_passwd/login.c
@@ -137,7 +137,7 @@ main(int argc, char **argv)
 		password = readpassphrase("Password:", pbuf,
 		    sizeof(pbuf), RPP_ECHO_OFF);
 		break;
 	case MODE_CHALLENGE:
-		fprintf(back, BI_AUTH "\n");
+		fprintf(back, BI_SILENT "\n");
 		exit(0);
 		break;
 	default:
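For anyone wanting to reproduce the test, the auth program can be
driven by hand; an untested sketch, assuming the usual BSD auth back
channel on file descriptor 3:

$ /usr/libexec/auth/login_passwd -s challenge "$USER" 3>&1

Before the patch this should emit "authorize" (BI_AUTH) on the back
channel; with the patch, "silent" (BI_SILENT).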



Re: vmd console freeze and locked (?) qcow2 image

2019-01-06 Thread Thomas L.
On Sat, 5 Jan 2019 17:56:01 -0800
Mike Larkin wrote:
> Did you kill all the old vmd processes?
> 
> -ml
> 

I tested again and it works now. There were restarts in between.
I will try killing vmd processes if this happens again, thanks.

Kind regards,

Thomas



vmd console freeze and locked (?) qcow2 image

2019-01-04 Thread Thomas L.
Hi,

I am running -current and installed Arch Linux on vmd.
Unfortunately, after a while the vmd console freezes.
I tried stopping the VM with vmctl stop, but it kept hanging (maybe
related to the console hanging?), so I terminated it with vmctl stop -f.
Now the VM won't start, complaining that it can't open the disk (log
snippet below).
Is the qcow2 image somehow locked, and can I unlock it? Or is it
corrupted? If so, is it recommended to use qcow2 images, or are raw
images more robust?

Kind regards,

Thomas


Jan  4 16:00:52 hilbert vmd[76370]: startup
Jan  4 16:00:52 hilbert vmd[76370]: archlinux: started vm 1 successfully, tty /dev/ttyp5
Jan  4 16:01:03 hilbert vmd[40912]: vcpu_process_com_data: guest reading com1 when not ready
Jan  4 16:01:03 hilbert last message repeated 2 times
Jan  4 16:01:04 hilbert vmd[40912]: vioblk_notifyq: unsupported command 0x8
Jan  4 16:01:04 hilbert vmd[40912]: vioblk_notifyq: unsupported command 0x8
Jan  4 16:04:53 hilbert vmd[40912]: vioblk_notifyq: unsupported command 0x8
Jan  4 16:04:53 hilbert vmd[40912]: vioblk_notifyq: unsupported command 0x8
Jan  4 16:04:55 hilbert vmd[40912]: vcpu_process_com_data: guest reading com1 when not ready
Jan  4 16:04:55 hilbert last message repeated 2 times
Jan  4 16:45:44 hilbert vmd[76370]: parent terminating
Jan  4 16:45:44 hilbert vmd[55480]: vmm exiting, pid 55480
Jan  4 16:45:44 hilbert vmd[76051]: control exiting, pid 76051
Jan  4 16:45:44 hilbert vmd[31650]: priv exiting, pid 31650
Jan  4 16:45:44 hilbert vmd[72119]: startup
Jan  4 16:45:44 hilbert vmd[72119]: can't open disk /var/vmd/archlinux.qcow2: Resource temporarily unavailable
Jan  4 16:45:44 hilbert vmd[72119]: failed to start vm archlinux
Jan  4 16:45:44 hilbert vmd[72119]: parent: configuration failed
Jan  4 16:45:44 hilbert vmd[12825]: vmm exiting, pid 12825
Jan  4 16:45:44 hilbert vmd[55512]: control exiting, pid 55512
Jan  4 16:45:44 hilbert vmd[1619]: priv exiting, pid 1619
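"Resource temporarily unavailable" (EAGAIN) is what a failed
non-blocking lock on the disk image would produce, which fits the
stale-process theory from the follow-up above. An untested sketch for
confirming and clearing it:

hilbert# fstat /var/vmd/archlinux.qcow2
hilbert# pkill -f vmd
hilbert# rcctl start vmd

fstat(1) lists any process that still has the image open; once the
leftover vmd processes are gone, the disk should open normally again.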