if_vio: 'Alignment fault' on qemu arm(v7), current

2024-02-06 Thread Krystian Lewandowski
Hi,
I'm running armv7 OpenBSD (current) on qemu (armv7), and when the
-netdev tap,id=net0 -device virtio-net-device,netdev=net0
device is used, OpenBSD panics with:

# sh /etc/netstart vio0
Fatal kernel mode data abort: 'Alignment fault'
trapframe: 0xcdf67d58
DFSR=0001, DFAR=c527b832, spsr=8013
r0 =0015, r1 =c5148000, r2 =c527b81a, r3 =0060
r4 =c4fc1988, r5 =00fe, r6 =, r7 =cdf67eb0
r8 =c4fc1988, r9 =, r10=c09a2748, r11=cdf67db8
r12=cdf67ea0, ssp=cdf67da8, slr=c06cf80c, pc =c06d01a4

Stopped at  ipv6_check+0x88 [/usr/src/sys/netinet6/ip6_input.c:290]:
ldr  r3, [r2, #0x018]

ddb> trace
ipv6_check+0x88 [/usr/src/sys/netinet6/ip6_input.c:290]
rlv=0xc06cf80c rfp=0xcdf67ea0
ip6_input_if+0x6c [/usr/src/sys/netinet6/ip6_input.c:376]
rlv=0xc06cf754 rfp=0xcdf67ec0
ipv6_input+0x3c [/usr/src/sys/netinet6/ip6_input.c:0]
rlv=0xc0373a40 rfp=0xcdf67f30
ether_input+0x5a0 [/usr/src/sys/net/if_ethersubr.c:572]
rlv=0xc062e198 rfp=0xcdf67f48
if_input_process+0x98 [/usr/src/sys/net/if.c:1001]
rlv=0xc07b78cc rfp=0xcdf67f70
ifiq_process+0xc0 [/usr/src/sys/net/ifq.c:848]
rlv=0xc0627494 rfp=0xcdf67fa8
taskq_thread+0xa4 [/usr/src/sys/kern/kern_task.c:453]
rlv=0xc06acbf0 rfp=0xc0e77ee0
Bad frame pointer: 0xc0e77ee0

$ arm-none-eabi-objdump -D -S ./bsd.gdb
[…]
IN6_IS_ADDR_UNSPECIFIED(&ip6->ip6_dst)) {
c06d01a4:   e5923018    ldr r3, [r2, #24]
[…]

It's OK for another device, re0:
-netdev tap,id=net0 -device rtl8139,netdev=net0

I’m able to reproduce it even for miniroot.img (autoconfiguring vio0).
If anyone would like to look at it and would need some help setting up
qemu, feel free to contact me.
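For what it's worth, the fault address fits the classic 2-byte misalignment: r2 + 0x18 = c527b81a + 0x18 = c527b832, the DFAR. With a 14-byte Ethernet header and no 2-byte (ETHER_ALIGN-style) padding before the payload, ip6_dst ends up only 2-byte aligned. A quick arithmetic sketch (the offsets below are the standard header layouts, not read from the dump):

```shell
# Illustrative arithmetic only; offsets are the standard header layouts.
ether_hlen=14        # Ethernet header length
ip6_dst_off=24       # ip6_dst offset within struct ip6_hdr (8 fixed + 16-byte src)
off=$((ether_hlen + ip6_dst_off))
echo "ip6_dst begins $off bytes into the frame (mod 4 = $((off % 4)))"
```

A 4-byte ldr at an address that is 2 mod 4 is exactly what raises the armv7 alignment fault.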

BR,
-- 
Krystian Lewandowski


OpenBSD 7.4/sparc64 does not boot from disk after installation under QEMU

2024-01-13 Thread Bruno Haible
Hi,

OpenBSD 7.4 and 7.3 for sparc64 do not boot from the disk on which the
OpenBSD installer has installed the OS. This is under QEMU, not hardware SPARC.
It worked with OpenBSD 7.2; therefore it must be a regression.

How to reproduce:
- Use GNU/Linux as the host. (I use a distro on x86_64.)
- Compile qemu-8.2.0 from source and install it. (The QEMU version does not
  matter. The results are identical with older QEMU versions.)
  Update PATH to find these binaries.
- $ wget -O OpenBSD-7.4-sparc64-install.iso 
https://cdn.openbsd.org/pub/OpenBSD/7.4/sparc64/install74.iso
- $ qemu-img create -f qcow2 openbsd74.qcow2 8G
- $ common_args="-m 512 -drive file=openbsd74.qcow2,format=qcow2,index=0 
-netdev type=user,id=net0 -device ne2k_isa,netdev=net0,mac=52:54:00:12:34:56 
-nographic"
- $ qemu-system-sparc64 $common_args -cdrom OpenBSD-7.4-sparc64-install.iso 
-boot d
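The download and launch steps above, collected into one sketch. It only prints the commands instead of running them, since they need a sparc64-capable QEMU build and network access; file names are the ones used above:

```shell
# Prints the reproduction commands rather than executing them.
iso=OpenBSD-7.4-sparc64-install.iso
common_args="-m 512 -drive file=openbsd74.qcow2,format=qcow2,index=0 -netdev type=user,id=net0 -device ne2k_isa,netdev=net0,mac=52:54:00:12:34:56 -nographic"
echo "wget -O $iso https://cdn.openbsd.org/pub/OpenBSD/7.4/sparc64/install74.iso"
echo "qemu-img create -f qcow2 openbsd74.qcow2 8G"
echo "qemu-system-sparc64 $common_args -cdrom $iso -boot d"   # run the installer
echo "qemu-system-sparc64 $common_args"                       # boot from disk (fails)
```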

Perform the steps of the installer. I'm only noting non-default interactive 
input:

(I)nstall.
Terminal type: xterm
Host name: sparc64-openbsd74
DNS domain: MYDOMAINNAME
Root password: 
Setup a user?
  login: MY_USER_NAME
  full name: MY_FULL_NAME
  password: 
Allow root ssh login: yes
Partitioning: (E)dit auto layout
  d e
  d d
  d b
  c a
  14680064 [= 7 GB]
  a b
  enter enter swap
  p
  q
Sets:
  -game74.tgz
  -x*
Continue without verification: yes
Time zone: Europe/Berlin

Halt.

- $ qemu-system-sparc64 $common_args
=>

OpenBIOS for Sparc64
Configuration device id QEMU version 1 machine id 0
kernel cmdline
CPUs: 1 x SUNW,UltraSPARC-IIi
UUID: ----
Welcome to OpenBIOS v1.1 built on Mar 7 2023 22:22
  Type 'help' for detailed information
Trying disk:a...
Not a bootable ELF image
Not a bootable a.out image

Loading FCode image...
Loaded 6888 bytes
entry point is 0x4000
Evaluating FCode...
OpenBSD IEEE 1275 Bootblock 2.1
..>> OpenBSD BOOT 1.25
Trying bsd...
Booting /pci@1fe,0/pci@1,1/ide@3/ide@0/disk@0:a/bsd
10068240@0x100+7920@0x199a110+150292@0x1c0+4044012@0x1c24b14
symbols @ 0xfedca400 488378+165+666912+462908 start=0x100
[ using 1619400 bytes of bsd ELF symbol table ]
Unimplemented service set-symbol-lookup ([2] -- [0])
Unhandled Exception 0x0030
PC = 0x01139d7c NPC = 0x01139d80
Stopping execution






Re: Fatal protection fault when installing under qemu/kvm

2023-07-26 Thread Henryk Paluch

Hello!

> I have tried to install openbsd under qemu/kvm, but when it is installing
> the sets it triggers a protection fault.

It is a known bug in the OpenBSD 7.3/amd64 kernel when it uses the i8254
timer. As a workaround you need to enable the HPET timer in QEMU/KVM. If you
are using libvirt (or virt-manager) on top of KVM (very common), you need to
edit the domain XML (for example with "virsh edit VM_NAME") and enable HPET.
Here is the relevant snippet:

  <clock offset='utc'>
    <timer name='hpet' present='yes'/>
  </clock>

Note that, by default, the 'present' attribute is unfortunately set to 'no',
which triggers this kernel bug.
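For illustration, the fix itself is a one-attribute change; a sed sketch over the relevant fragment (in practice you make the same edit inside "virsh edit"; the attribute names are libvirt's):

```shell
# Flip the problematic default on a single timer line (illustrative only).
frag="<timer name='hpet' present='no'/>"   # the default that triggers the bug
fixed=$(printf '%s\n' "$frag" | sed "s/present='no'/present='yes'/")
echo "$fixed"   # <timer name='hpet' present='yes'/>
```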


Here is my post with more details on the HPET timer:
- https://marc.info/?l=openbsd-bugs&m=168537353225613&w=2

And here is my in-depth analysis:
- https://marc.info/?l=openbsd-bugs&m=168564039922088&w=2

Original text:
On 7/26/23 15:19, Tom Lawlor wrote:

Hello

I have tried to install openbsd under qemu/kvm, but when it is installing
the sets it triggers a protection fault.

[image: image.png]

The vm has:
1 vCPU
1Gib RAM

It is running under a zen3 processor.

Steps to reproduce

1. Press enter using automatic options for every option
2. Protection fault occurs





Fatal protection fault when installing under qemu/kvm

2023-07-26 Thread Tom Lawlor
Hello

I have tried to install openbsd under qemu/kvm, but when it is installing
the sets it triggers a protection fault.

[image: image.png]

The vm has:
1 vCPU
1Gib RAM

It is running under a zen3 processor.

Steps to reproduce

1. Press enter using automatic options for every option
2. Protection fault occurs


Re: OpenBSD in QEMU KVM: High QEMU CPU usage when OpenBSD is 100% idle

2023-05-29 Thread bruma
>> > OP: what is your sysctl kern.timecounter ?
>> >
>>
>> kern.timecounter.tick=1
>> kern.timecounter.timestepwarnings=0
>> kern.timecounter.hardware=pvclock0
>> kern.timecounter.choice=i8254(0) pvclock0(1500) acpihpet0(1000) 
>> acpitimer0(1000)
>>
>>
> 
> You might try changing this and seeing if the load changes.

I tried setting kern.timecounter.hardware to the alternatives shown in
kern.timecounter.choice: they all make the situation much worse; at -smp 8 it
goes from 4-5% with pvclock0 to 8-15% with the others.



Re: OpenBSD in QEMU KVM: High QEMU CPU usage when OpenBSD is 100% idle

2023-05-27 Thread Mike Larkin
On Sat, May 27, 2023 at 04:34:27PM +, bruma wrote:
> > On Sat, May 27, 2023 at 10:29:37AM +0200, Claudio Jeker wrote:
> >> On Sat, May 27, 2023 at 09:16:23AM +0100, Stuart Henderson wrote:
> >> > On 2023/05/27 06:36, br...@mailbox.org wrote:
> >> > >
> >> > >
> >> > > On Sat, 27 May 2023, Mike Larkin wrote:
> >> > >
> >> > > > probably IPI traffic then. not sure what else to say. If a few % 
> >> > > > host overhead
> >> > > > is too much for you with a 16 vCPU VM, I'd suggest reducing that.
> >> > > >
> >> > > > What is your workload for a 16 vcpu openbsd VM anyway?
> >> > >
> >> > > I would like to use the OpenBSD VM as my main workstation. I also need 
> >> > > to
> >> > > use Linux for some graphic intensive stuff, so the ideal OpenBSD on 
> >> > > host
> >> > > with vmm for Linux is not an option unfortunately. I guess I could 
> >> > > accept
> >> > > that CPU usage price, but of course not having to pay it would be 
> >> > > better.
> >> >
> >> > OpenBSD doesn't do brilliantly with that many CPUs yet. Things are
> >> > getting better but I think you're likely to find many workloads are a
> >> > bit less laggy with half that.
> >>
> >> Also a few % CPU on the host can be caused by the interrupts caused by the
> >> clocks on every CPU. It may be that we do not select a cheap clock source
> >> like TSC and the result is much more overhead on the host.
> >>
> >> --
> >> :wq Claudio
> >>
> >
> > yes I did not think of that; thanks Claudio!
> >
> > OP: what is your sysctl kern.timecounter ?
> >
>
> kern.timecounter.tick=1
> kern.timecounter.timestepwarnings=0
> kern.timecounter.hardware=pvclock0
> kern.timecounter.choice=i8254(0) pvclock0(1500) acpihpet0(1000) 
> acpitimer0(1000)
>
>

You might try changing this and seeing if the load changes.
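Switching is a single sysctl on the guest; a sketch that just prints the candidate commands (timecounter names taken from the kern.timecounter.choice output above — run the chosen one as root):

```shell
# Print one sysctl command per available timecounter.
for tc in i8254 pvclock0 acpihpet0 acpitimer0; do
  echo "sysctl kern.timecounter.hardware=$tc"
done
```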



Re: OpenBSD in QEMU KVM: High QEMU CPU usage when OpenBSD is 100% idle

2023-05-27 Thread Stuart Henderson
On 2023/05/27 15:35, Mike Larkin wrote:
> On Sat, May 27, 2023 at 10:29:37AM +0200, Claudio Jeker wrote:
> > On Sat, May 27, 2023 at 09:16:23AM +0100, Stuart Henderson wrote:
> > > On 2023/05/27 06:36, br...@mailbox.org wrote:
> > > >
> > > >
> > > > On Sat, 27 May 2023, Mike Larkin wrote:
> > > >
> > > > > probably IPI traffic then. not sure what else to say. If a few % host 
> > > > > overhead
> > > > > is too much for you with a 16 vCPU VM, I'd suggest reducing that.
> > > > >
> > > > > What is your workload for a 16 vcpu openbsd VM anyway?
> > > >
> > > > I would like to use the OpenBSD VM as my main workstation. I also need 
> > > > to
> > > > use Linux for some graphic intensive stuff, so the ideal OpenBSD on host
> > > > with vmm for Linux is not an option unfortunately. I guess I could 
> > > > accept
> > > > that CPU usage price, but of course not having to pay it would be 
> > > > better.
> > >
> > > OpenBSD doesn't do brilliantly with that many CPUs yet. Things are
> > > getting better but I think you're likely to find many workloads are a
> > > bit less laggy with half that.
> >
> > Also a few % CPU on the host can be caused by the interrupts caused by the
> > clocks on every CPU. It may be that we do not select a cheap clock source
> > like TSC and the result is much more overhead on the host.
> >
> > --
> > :wq Claudio
> >
> 
> yes I did not think of that; thanks Claudio!
> 
> OP: what is your sysctl kern.timecounter ?

on kvm I would expect pvclock to be preferred if the driver thinks it's
stable, otherwise probably acpihpet. fwiw mine looks like this.

$ sysctl kern.timecounter
kern.timecounter.tick=1
kern.timecounter.timestepwarnings=0
kern.timecounter.hardware=acpihpet0
kern.timecounter.choice=i8254(0) pvclock0(500) acpihpet0(1000) acpitimer0(1000)



Re: OpenBSD in QEMU KVM: High QEMU CPU usage when OpenBSD is 100% idle

2023-05-27 Thread Mike Larkin
On Sat, May 27, 2023 at 10:29:37AM +0200, Claudio Jeker wrote:
> On Sat, May 27, 2023 at 09:16:23AM +0100, Stuart Henderson wrote:
> > On 2023/05/27 06:36, br...@mailbox.org wrote:
> > >
> > >
> > > On Sat, 27 May 2023, Mike Larkin wrote:
> > >
> > > > probably IPI traffic then. not sure what else to say. If a few % host 
> > > > overhead
> > > > is too much for you with a 16 vCPU VM, I'd suggest reducing that.
> > > >
> > > > What is your workload for a 16 vcpu openbsd VM anyway?
> > >
> > > I would like to use the OpenBSD VM as my main workstation. I also need to
> > > use Linux for some graphic intensive stuff, so the ideal OpenBSD on host
> > > with vmm for Linux is not an option unfortunately. I guess I could accept
> > > that CPU usage price, but of course not having to pay it would be better.
> >
> > OpenBSD doesn't do brilliantly with that many CPUs yet. Things are
> > getting better but I think you're likely to find many workloads are a
> > bit less laggy with half that.
>
> Also a few % CPU on the host can be caused by the interrupts caused by the
> clocks on every CPU. It may be that we do not select a cheap clock source
> like TSC and the result is much more overhead on the host.
>
> --
> :wq Claudio
>

yes I did not think of that; thanks Claudio!

OP: what is your sysctl kern.timecounter ?



Re: OpenBSD in QEMU KVM: High QEMU CPU usage when OpenBSD is 100% idle

2023-05-27 Thread Claudio Jeker
On Sat, May 27, 2023 at 09:16:23AM +0100, Stuart Henderson wrote:
> On 2023/05/27 06:36, br...@mailbox.org wrote:
> > 
> > 
> > On Sat, 27 May 2023, Mike Larkin wrote:
> > 
> > > probably IPI traffic then. not sure what else to say. If a few % host 
> > > overhead
> > > is too much for you with a 16 vCPU VM, I'd suggest reducing that.
> > > 
> > > What is your workload for a 16 vcpu openbsd VM anyway?
> > 
> > I would like to use the OpenBSD VM as my main workstation. I also need to
> > use Linux for some graphic intensive stuff, so the ideal OpenBSD on host
> > with vmm for Linux is not an option unfortunately. I guess I could accept
> > that CPU usage price, but of course not having to pay it would be better.
> 
> OpenBSD doesn't do brilliantly with that many CPUs yet. Things are
> getting better but I think you're likely to find many workloads are a
> bit less laggy with half that.

Also a few % CPU on the host can be caused by the interrupts caused by the
clocks on every CPU. It may be that we do not select a cheap clock source
like TSC and the result is much more overhead on the host. 

-- 
:wq Claudio



Re: OpenBSD in QEMU KVM: High QEMU CPU usage when OpenBSD is 100% idle

2023-05-27 Thread Stuart Henderson
On 2023/05/27 06:36, br...@mailbox.org wrote:
> 
> 
> On Sat, 27 May 2023, Mike Larkin wrote:
> 
> > probably IPI traffic then. not sure what else to say. If a few % host 
> > overhead
> > is too much for you with a 16 vCPU VM, I'd suggest reducing that.
> > 
> > What is your workload for a 16 vcpu openbsd VM anyway?
> 
> I would like to use the OpenBSD VM as my main workstation. I also need to
> use Linux for some graphic intensive stuff, so the ideal OpenBSD on host
> with vmm for Linux is not an option unfortunately. I guess I could accept
> that CPU usage price, but of course not having to pay it would be better.

OpenBSD doesn't do brilliantly with that many CPUs yet. Things are
getting better but I think you're likely to find many workloads are a
bit less laggy with half that.



Re: OpenBSD in QEMU KVM: High QEMU CPU usage when OpenBSD is 100% idle

2023-05-27 Thread bruma




On Sat, 27 May 2023, Mike Larkin wrote:


probably IPI traffic then. not sure what else to say. If a few % host overhead
is too much for you with a 16 vCPU VM, I'd suggest reducing that.

What is your workload for a 16 vcpu openbsd VM anyway?


I would like to use the OpenBSD VM as my main workstation. I also need to 
use Linux for some graphic intensive stuff, so the ideal OpenBSD on host 
with vmm for Linux is not an option unfortunately. I guess I could accept 
that CPU usage price, but of course not having to pay it would be better.




Re: OpenBSD in QEMU KVM: High QEMU CPU usage when OpenBSD is 100% idle

2023-05-26 Thread Mike Larkin
On Sat, May 27, 2023 at 05:07:34AM +, br...@mailbox.org wrote:
>
> On Sat, 27 May 2023, Mike Larkin wrote:
>
> > On Fri, May 26, 2023 at 08:14:23PM +0200, br...@mailbox.org wrote:
> > > On 05/26/2023 8:08 PM CEST Mike Larkin  wrote:
> > >
> > >
> > > > On Fri, May 26, 2023 at 07:16:09PM +0200, br...@mailbox.org wrote:
> > > > > > On 05/26/2023 6:06 PM CEST Mike Larkin  wrote:
> > > > > >
> > > > > > perf top on the linux side to see where qemu is spending its time?
> > > > >
> > > > > Sure, I ran `perf top -p $PID` with $PID being the PID of the QEMU 
> > > > > process and copied the screen after a few seconds. Let me know if you 
> > > > > intended something different:
> > > > >
> > > > >  PerfTop: 133 irqs/sec  kernel:72.9%  exact:  0.0% lost: 0/0 
> > > > > drop: 0/0 [4000Hz cycles],  (target_pid: 9939)
> > > > > --
> > > > >
> > > > > 25.35%  [kvm_amd]  [k] svm_vcpu_run
> > > > >  4.18%  [kernel]   [k] native_write_msr
> > > > >  4.01%  [kernel]   [k] native_read_msr
> > > > >  3.73%  [kernel]   [k] read_tsc
> > > > >  3.64%  [kvm]  [k] kvm_arch_vcpu_ioctl_run
> > > > >  2.21%  [kvm_amd]  [k] svm_vcpu_load
> > > > >  1.98%  [kernel]   [k] ktime_get
> > > > >  1.47%  [kvm]  [k] kvm_apic_has_interrupt
> > > > >  1.40%  [kernel]   [k] restore_fpregs_from_fpstate
> > > > >  1.29%  [kvm]  [k] apic_has_interrupt_for_ppr
> > > > >  1.18%  [kernel]   [k] check_preemption_disabled
> > > > >  1.10%  [kernel]   [k] x86_pmu_disable_all
> > > > >  1.07%  [kernel]   [k] __srcu_read_lock
> > > > >  1.07%  [kernel]   [k] newidle_balance
> > > > >  1.03%  [kvm]  [k] kvm_pmu_trigger_event
> > > > >  0.98%  [kernel]   [k] amd_pmu_addr_offset
> > > > >
> > > > > I tried this also on the FreeBSD VM and the irqs/sec were between 2 
> > > > > and 4.
> > > > >
> > > >
> > > > you might just be bombarded with ipis. how many vcpus?
> > >
> > > It should be 16, I use `-smp 16 -cpu host`
> > >
> >
> > try with less and see if that works.
>
> With -smp 4 it's better although still worse than FreeBSD/Linux. In htop
> OpenBSD is in the 1.3-2.6% range while the other OSes are at 0-0.7%. As I
> said, the FreeBSD/Linux VMs continue to stay in the 0% even at -smp 16,
> while I've seen OpenBSD idle at up to 7-8% with that.

probably IPI traffic then. not sure what else to say. If a few % host overhead
is too much for you with a 16 vCPU VM, I'd suggest reducing that.

What is your workload for a 16 vcpu openbsd VM anyway?



Re: OpenBSD in QEMU KVM: High QEMU CPU usage when OpenBSD is 100% idle

2023-05-26 Thread bruma



On Sat, 27 May 2023, Mike Larkin wrote:


On Fri, May 26, 2023 at 08:14:23PM +0200, br...@mailbox.org wrote:

On 05/26/2023 8:08 PM CEST Mike Larkin  wrote:



On Fri, May 26, 2023 at 07:16:09PM +0200, br...@mailbox.org wrote:

On 05/26/2023 6:06 PM CEST Mike Larkin  wrote:

perf top on the linux side to see where qemu is spending its time?


Sure, I ran `perf top -p $PID` with $PID being the PID of the QEMU process and 
copied the screen after a few seconds. Let me know if you intended something 
different:

 PerfTop: 133 irqs/sec  kernel:72.9%  exact:  0.0% lost: 0/0 drop: 0/0 
[4000Hz cycles],  (target_pid: 9939)
--

25.35%  [kvm_amd]  [k] svm_vcpu_run
 4.18%  [kernel]   [k] native_write_msr
 4.01%  [kernel]   [k] native_read_msr
 3.73%  [kernel]   [k] read_tsc
 3.64%  [kvm]  [k] kvm_arch_vcpu_ioctl_run
 2.21%  [kvm_amd]  [k] svm_vcpu_load
 1.98%  [kernel]   [k] ktime_get
 1.47%  [kvm]  [k] kvm_apic_has_interrupt
 1.40%  [kernel]   [k] restore_fpregs_from_fpstate
 1.29%  [kvm]  [k] apic_has_interrupt_for_ppr
 1.18%  [kernel]   [k] check_preemption_disabled
 1.10%  [kernel]   [k] x86_pmu_disable_all
 1.07%  [kernel]   [k] __srcu_read_lock
 1.07%  [kernel]   [k] newidle_balance
 1.03%  [kvm]  [k] kvm_pmu_trigger_event
 0.98%  [kernel]   [k] amd_pmu_addr_offset

I tried this also on the FreeBSD VM and the irqs/sec were between 2 and 4.



you might just be bombarded with ipis. how many vcpus?


It should be 16, I use `-smp 16 -cpu host`



try with less and see if that works.


With -smp 4 it's better although still worse than FreeBSD/Linux. In htop 
OpenBSD is in the 1.3-2.6% range while the other OSes are at 0-0.7%. As I 
said, the FreeBSD/Linux VMs continue to stay in the 0% even at -smp 16, 
while I've seen OpenBSD idle at up to 7-8% with that.




Re: OpenBSD in QEMU KVM: High QEMU CPU usage when OpenBSD is 100% idle

2023-05-26 Thread Mike Larkin
On Fri, May 26, 2023 at 08:14:23PM +0200, br...@mailbox.org wrote:
> On 05/26/2023 8:08 PM CEST Mike Larkin  wrote:
>
>
> > On Fri, May 26, 2023 at 07:16:09PM +0200, br...@mailbox.org wrote:
> > > > On 05/26/2023 6:06 PM CEST Mike Larkin  wrote:
> > > >
> > > > perf top on the linux side to see where qemu is spending its time?
> > >
> > > Sure, I ran `perf top -p $PID` with $PID being the PID of the QEMU 
> > > process and copied the screen after a few seconds. Let me know if you 
> > > intended something different:
> > >
> > >  PerfTop: 133 irqs/sec  kernel:72.9%  exact:  0.0% lost: 0/0 drop: 
> > > 0/0 [4000Hz cycles],  (target_pid: 9939)
> > > --
> > >
> > > 25.35%  [kvm_amd]  [k] svm_vcpu_run
> > >  4.18%  [kernel]   [k] native_write_msr
> > >  4.01%  [kernel]   [k] native_read_msr
> > >  3.73%  [kernel]   [k] read_tsc
> > >  3.64%  [kvm]  [k] kvm_arch_vcpu_ioctl_run
> > >  2.21%  [kvm_amd]  [k] svm_vcpu_load
> > >  1.98%  [kernel]   [k] ktime_get
> > >  1.47%  [kvm]  [k] kvm_apic_has_interrupt
> > >  1.40%  [kernel]   [k] restore_fpregs_from_fpstate
> > >  1.29%  [kvm]  [k] apic_has_interrupt_for_ppr
> > >  1.18%  [kernel]   [k] check_preemption_disabled
> > >  1.10%  [kernel]   [k] x86_pmu_disable_all
> > >  1.07%  [kernel]   [k] __srcu_read_lock
> > >  1.07%  [kernel]   [k] newidle_balance
> > >  1.03%  [kvm]  [k] kvm_pmu_trigger_event
> > >  0.98%  [kernel]   [k] amd_pmu_addr_offset
> > >
> > > I tried this also on the FreeBSD VM and the irqs/sec were between 2 and 4.
> > >
> >
> > you might just be bombarded with ipis. how many vcpus?
>
> It should be 16, I use `-smp 16 -cpu host`
>

try with less and see if that works.



Re: OpenBSD in QEMU KVM: High QEMU CPU usage when OpenBSD is 100% idle

2023-05-26 Thread bruma
On 05/26/2023 8:08 PM CEST Mike Larkin  wrote:
 
  
> On Fri, May 26, 2023 at 07:16:09PM +0200, br...@mailbox.org wrote:
> > > On 05/26/2023 6:06 PM CEST Mike Larkin  wrote:
> > >
> > > perf top on the linux side to see where qemu is spending its time?
> >
> > Sure, I ran `perf top -p $PID` with $PID being the PID of the QEMU process 
> > and copied the screen after a few seconds. Let me know if you intended 
> > something different:
> >
> >  PerfTop: 133 irqs/sec  kernel:72.9%  exact:  0.0% lost: 0/0 drop: 0/0 
> > [4000Hz cycles],  (target_pid: 9939)
> > --
> >
> > 25.35%  [kvm_amd]  [k] svm_vcpu_run
> >  4.18%  [kernel]   [k] native_write_msr
> >  4.01%  [kernel]   [k] native_read_msr
> >  3.73%  [kernel]   [k] read_tsc
> >  3.64%  [kvm]  [k] kvm_arch_vcpu_ioctl_run
> >  2.21%  [kvm_amd]  [k] svm_vcpu_load
> >  1.98%  [kernel]   [k] ktime_get
> >  1.47%  [kvm]  [k] kvm_apic_has_interrupt
> >  1.40%  [kernel]   [k] restore_fpregs_from_fpstate
> >  1.29%  [kvm]  [k] apic_has_interrupt_for_ppr
> >  1.18%  [kernel]   [k] check_preemption_disabled
> >  1.10%  [kernel]   [k] x86_pmu_disable_all
> >  1.07%  [kernel]   [k] __srcu_read_lock
> >  1.07%  [kernel]   [k] newidle_balance
> >  1.03%  [kvm]  [k] kvm_pmu_trigger_event
> >  0.98%  [kernel]   [k] amd_pmu_addr_offset
> >
> > I tried this also on the FreeBSD VM and the irqs/sec were between 2 and 4.
> >
> 
> you might just be bombarded with ipis. how many vcpus?

It should be 16, I use `-smp 16 -cpu host`



Re: OpenBSD in QEMU KVM: High QEMU CPU usage when OpenBSD is 100% idle

2023-05-26 Thread Mike Larkin
On Fri, May 26, 2023 at 07:16:09PM +0200, br...@mailbox.org wrote:
> > On 05/26/2023 6:06 PM CEST Mike Larkin  wrote:
> >
> > perf top on the linux side to see where qemu is spending its time?
>
> Sure, I ran `perf top -p $PID` with $PID being the PID of the QEMU process 
> and copied the screen after a few seconds. Let me know if you intended 
> something different:
>
>  PerfTop: 133 irqs/sec  kernel:72.9%  exact:  0.0% lost: 0/0 drop: 0/0 
> [4000Hz cycles],  (target_pid: 9939)
> --
>
> 25.35%  [kvm_amd]  [k] svm_vcpu_run
>  4.18%  [kernel]   [k] native_write_msr
>  4.01%  [kernel]   [k] native_read_msr
>  3.73%  [kernel]   [k] read_tsc
>  3.64%  [kvm]  [k] kvm_arch_vcpu_ioctl_run
>  2.21%  [kvm_amd]  [k] svm_vcpu_load
>  1.98%  [kernel]   [k] ktime_get
>  1.47%  [kvm]  [k] kvm_apic_has_interrupt
>  1.40%  [kernel]   [k] restore_fpregs_from_fpstate
>  1.29%  [kvm]  [k] apic_has_interrupt_for_ppr
>  1.18%  [kernel]   [k] check_preemption_disabled
>  1.10%  [kernel]   [k] x86_pmu_disable_all
>  1.07%  [kernel]   [k] __srcu_read_lock
>  1.07%  [kernel]   [k] newidle_balance
>  1.03%  [kvm]  [k] kvm_pmu_trigger_event
>  0.98%  [kernel]   [k] amd_pmu_addr_offset
>
> I tried this also on the FreeBSD VM and the irqs/sec were between 2 and 4.
>

you might just be bombarded with ipis. how many vcpus?



Re: OpenBSD in QEMU KVM: High QEMU CPU usage when OpenBSD is 100% idle

2023-05-26 Thread bruma
> On 05/26/2023 6:06 PM CEST Mike Larkin  wrote:
> 
> perf top on the linux side to see where qemu is spending its time?

Sure, I ran `perf top -p $PID` with $PID being the PID of the QEMU process and 
copied the screen after a few seconds. Let me know if you intended something 
different:

 PerfTop: 133 irqs/sec  kernel:72.9%  exact:  0.0% lost: 0/0 drop: 0/0 
[4000Hz cycles],  (target_pid: 9939)
--

25.35%  [kvm_amd]  [k] svm_vcpu_run
 4.18%  [kernel]   [k] native_write_msr
 4.01%  [kernel]   [k] native_read_msr
 3.73%  [kernel]   [k] read_tsc
 3.64%  [kvm]  [k] kvm_arch_vcpu_ioctl_run
 2.21%  [kvm_amd]  [k] svm_vcpu_load
 1.98%  [kernel]   [k] ktime_get
 1.47%  [kvm]  [k] kvm_apic_has_interrupt
 1.40%  [kernel]   [k] restore_fpregs_from_fpstate
 1.29%  [kvm]  [k] apic_has_interrupt_for_ppr
 1.18%  [kernel]   [k] check_preemption_disabled
 1.10%  [kernel]   [k] x86_pmu_disable_all
 1.07%  [kernel]   [k] __srcu_read_lock
 1.07%  [kernel]   [k] newidle_balance
 1.03%  [kvm]  [k] kvm_pmu_trigger_event
 0.98%  [kernel]   [k] amd_pmu_addr_offset

I tried this also on the FreeBSD VM and the irqs/sec were between 2 and 4.
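A possible next step for attributing those cycles (e.g. to the suspected IPI exits) would be perf's kvm subcommand, which summarizes VM exits by reason. Printed rather than executed here, since it needs a live KVM guest; the PID is the one from the perf top run above:

```shell
# Print the follow-up commands; they must be run on the Linux host.
qemu_pid=9939   # target QEMU process from the session above
echo "perf kvm stat record -p $qemu_pid -- sleep 10"
echo "perf kvm stat report"
```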



Re: OpenBSD in QEMU KVM: High QEMU CPU usage when OpenBSD is 100% idle

2023-05-26 Thread Mike Larkin
On Fri, May 26, 2023 at 09:11:57AM +0200, br...@mailbox.org wrote:
> >Synopsis:OpenBSD in QEMU KVM: High QEMU CPU usage when OpenBSD is 100% 
> >idle
> >Category:kernel
> >Environment:
>   System  : OpenBSD 7.3
>   Details : OpenBSD 7.3 (GENERIC.MP) #1125: Sat Mar 25 10:36:29 MDT 
> 2023
>
> dera...@amd64.openbsd.org:/usr/src/sys/arch/amd64/compile/GENERIC.MP
>
>   Architecture: OpenBSD.amd64
>   Machine : amd64
> >Description:
>   I installed OpenBSD 7.3 inside a QEMU QCOW2 disk with QEMU
>   running on Linux. I ran `top -S` in the VM and idle is at 100%.
>   So I expect CPU usage of the host QEMU process to be 0%, instead
>   it ranges from 4% to 6%. I tried other VMs with the same QEMU
>   settings but running Linux and FreeBSD, in both cases the QEMU
> process stays in fact at 0% usage. I attach `vmstat -i` as was
>   suggested on IRC. Not sure if relevant but on FreeBSD the Total
>   rate given by the same command is around 50.
>
> >How-To-Repeat:
>   These are the QEMU args used to run the VM:
>
>   qemu-system-x86_64 -enable-kvm -cpu host -smp 16 -m 2048
>   -nodefaults \ -drive if=virtio,format=qcow2,file=openbsd.qcow2
>   -nic user,model=virtio-net-pci,id=net0 -display curses -vga std
>
>
> vmstat -i:
> interrupt                 total     rate
> irq114/virtio0            10245        0
> irq98/virtio1             29136        2
> irq100/fdc0                   1        0
> irq144/pckbc0                14        0
> irq0/clock             33566752     3177
> irq0/ipi                 164241       15
> Total                  33770389     3196

perf top on the linux side to see where qemu is spending its time?



OpenBSD in QEMU KVM: High QEMU CPU usage when OpenBSD is 100% idle

2023-05-26 Thread bruma
>Synopsis:  OpenBSD in QEMU KVM: High QEMU CPU usage when OpenBSD is 100% 
>idle
>Category:  kernel
>Environment:
System  : OpenBSD 7.3
Details : OpenBSD 7.3 (GENERIC.MP) #1125: Sat Mar 25 10:36:29 MDT 
2023
 
dera...@amd64.openbsd.org:/usr/src/sys/arch/amd64/compile/GENERIC.MP

Architecture: OpenBSD.amd64
Machine : amd64
>Description:
I installed OpenBSD 7.3 inside a QEMU QCOW2 disk with QEMU 
running on Linux. I ran `top -S` in the VM and idle is at 100%. 
So I expect CPU usage of the host QEMU process to be 0%, instead 
it ranges from 4% to 6%. I tried other VMs with the same QEMU 
settings but running Linux and FreeBSD, in both cases the QEMU 
process stays in fact at 0% usage. I attach `vmstat -i` as was 
suggested on IRC. Not sure if relevant but on FreeBSD the Total 
rate given by the same command is around 50.

>How-To-Repeat:
    These are the QEMU args used to run the VM:

qemu-system-x86_64 -enable-kvm -cpu host -smp 16 -m 2048 
-nodefaults \ -drive if=virtio,format=qcow2,file=openbsd.qcow2 
-nic user,model=virtio-net-pci,id=net0 -display curses -vga std


vmstat -i:
interrupt                 total     rate
irq114/virtio0            10245        0
irq98/virtio1             29136        2
irq100/fdc0                   1        0
irq144/pckbc0                14        0
irq0/clock             33566752     3177
irq0/ipi                 164241       15
Total                  33770389     3196
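A back-of-envelope check on that clock rate: assuming OpenBSD 7.3's default hz=100 plus a ~100 Hz statclock firing on each of the 16 vCPUs (both frequencies are assumptions, not taken from the report), the aggregate is close to the observed irq0/clock rate:

```shell
# Rough estimate of per-guest clock interrupt load (assumed frequencies).
vcpus=16
hz=100          # assumed hardclock frequency
stathz=100      # assumed statclock frequency
expected=$((vcpus * (hz + stathz)))
echo "expected aggregate clock interrupts: ~$expected/s (observed rate: 3177)"
```

That steady interrupt stream is what keeps the host QEMU process from idling at 0%.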

dmesg:
OpenBSD 7.3 (GENERIC.MP) #1125: Sat Mar 25 10:36:29 MDT 2023
dera...@amd64.openbsd.org:/usr/src/sys/arch/amd64/compile/GENERIC.MP
real mem = 2130563072 (2031MB)
avail mem = 2046664704 (1951MB)
random: good seed from bootblocks
mpath0 at root
scsibus0 at mpath0: 256 targets
mainbus0 at root
bios0 at mainbus0: SMBIOS rev. 2.8 @ 0xf59f0 (9 entries)
bios0: vendor SeaBIOS version "rel-1.16.2-0-gea1b7a073390-prebuilt.qemu.org" 
date 04/01/2014
bios0: QEMU Standard PC (i440FX + PIIX, 1996)
acpi0 at bios0: ACPI 1.0
acpi0: sleep states S3 S4 S5
acpi0: tables DSDT FACP APIC HPET WAET
acpi0: wakeup devices
acpitimer0 at acpi0: 3579545 Hz, 24 bits
acpimadt0 at acpi0 addr 0xfee0: PC-AT compat
cpu0 at mainbus0: apid 0 (boot processor)
cpu0: AMD Ryzen 7 5700G with Radeon Graphics, 3800.81 MHz, 19-50-00
cpu0: 
FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,MMX,FXSR,SSE,SSE2,HTT,SSE3,PCLMUL,SSSE3,FMA3,CX16,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,DEADLINE,AES,XSAVE,AVX,F16C,RDRAND,HV,NXE,MMXX,FFXSR,PAGE1GB,RDTSCP,LONG,LAHF,CMPLEG,SVM,AMCR8,ABM,SSE4A,MASSE,3DNOWP,OSVW,CPCTR,FSGSBASE,TSC_ADJUST,BMI1,AVX2,SMEP,BMI2,ERMS,INVPCID,RDSEED,ADX,SMAP,CLFLUSHOPT,CLWB,SHA,UMIP,PKU,WAITPKG,IBRS,IBPB,STIBP,SSBD,IBPB,IBRS,STIBP,SSBD,VIRTSSBD,XSAVEOPT,XSAVEC,XGETBV1,XSAVES
cpu0: 64KB 64b/line 2-way D-cache, 64KB 64b/line 2-way I-cache
cpu0: 512KB 64b/line 16-way L2 cache
cpu0: smt 0, core 0, package 0
mtrr: Pentium Pro MTRR support, 8 var ranges, 88 fixed ranges
cpu0: apic clock running at 1000MHz
cpu1 at mainbus0: apid 1 (application processor)
cpu1: AMD Ryzen 7 5700G with Radeon Graphics, 3800.79 MHz, 19-50-00
cpu1: 
FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,MMX,FXSR,SSE,SSE2,HTT,SSE3,PCLMUL,SSSE3,FMA3,CX16,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,DEADLINE,AES,XSAVE,AVX,F16C,RDRAND,HV,NXE,MMXX,FFXSR,PAGE1GB,RDTSCP,LONG,LAHF,CMPLEG,SVM,AMCR8,ABM,SSE4A,MASSE,3DNOWP,OSVW,CPCTR,FSGSBASE,TSC_ADJUST,BMI1,AVX2,SMEP,BMI2,ERMS,INVPCID,RDSEED,ADX,SMAP,CLFLUSHOPT,CLWB,SHA,UMIP,PKU,IBRS,IBPB,STIBP,SSBD,IBPB,IBRS,STIBP,SSBD,VIRTSSBD,XSAVEOPT,XSAVEC,XGETBV1,XSAVES
cpu1: 64KB 64b/line 2-way D-cache, 64KB 64b/line 2-way I-cache
cpu1: 512KB 64b/line 16-way L2 cache
cpu1: smt 0, core 1, package 0
cpu2 at mainbus0: apid 2 (application processor)
cpu2: AMD Ryzen 7 5700G with Radeon Graphics, 3800.88 MHz, 19-50-00
cpu2: 
FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,MMX,FXSR,SSE,SSE2,HTT,SSE3,PCLMUL,SSSE3,FMA3,CX16,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,DEADLINE,AES,XSAVE,AVX,F16C,RDRAND,HV,NXE,MMXX,FFXSR,PAGE1GB,RDTSCP,LONG,LAHF,CMPLEG,SVM,AMCR8,ABM,SSE4A,MASSE,3DNOWP,OSVW,CPCTR,FSGSBASE,TSC_ADJUST,BMI1,AVX2,SMEP,BMI2,ERMS,INVPCID,RDSEED,ADX,SMAP,CLFLUSHOPT,CLWB,SHA,UMIP,PKU,IBRS,IBPB,STIBP,SSBD,IBPB,IBRS,STIBP,SSBD,VIRTSSBD,XSAVEOPT,XSAVEC,XGETBV1,XSAVES
cpu2: 64KB 64b/line 2-way D-cache, 64KB 64b/line 2-way I-cache
cpu2: 512KB 64b/line 16-way L2 cache
cpu2: smt 0, core 2, package 0
cpu3 at mainbus0: apid 3 (application processor)
cpu3: AMD Ryzen 7 5700G with Radeon Graphics, 3800.79 MHz, 19-50-00
cpu3: 
FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,MMX,FXSR,SSE,SSE2,HTT,SSE3,PCLMUL,SSSE3,FMA3,CX16,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,DEADLINE,AES,XSAVE,AVX,F16C,RDRAND,HV,NXE,MMXX,FFXSR,PAGE1GB

Re: VirtIO SCSI qemu kernel panic OpenBSD 7.3 amd64

2023-04-17 Thread Antun Matanović
Adding the trace output from an existing install in the same VM when
install73.img is attached via VirtIO SCSI.

virtio0: apic 0 int 22
virtio1 at pci0 dev 3 function 0 "Qumranet Virtio SCSI" rev 0x00
vioscsi0 at virtio1: qsize 256
scsibus1 at vioscsi0: 255 targets
uvm_fault(0x8247a4b8, 0x8, 0, 1) -> e
kernel: page fault trap, code=0
Stopped at  vioscsi_req_done+0x26:  movl0x8(%r15),%ebx
TIDPIDUID PRFLAGS PFLAGS  CPU  COMMAND
* 0  0  0 0x1  0x2000K swapper
vioscsi_req_done(e,80024800,fd80029ec738,e,80024800,800f6e28) at vioscsi_req_done+0x26
vioscsi_vq_done(800f6e28,800f6e28,7bc2fbf2d450897e,0,80024800,2) at vioscsi_vq_done+0x8c
virtio_check_vqs(80024800,80024800,f7ef68482b820e15,0,80024800,fd80029ec738) at virtio_check_vqs+0xfe
virtio_pci_poll_intr(80024800,80024800,ca34a4abe2d36ed6,80024800,fd827f873e58,8243dfc8) at virtio_pci_poll_intr+0x3f
vioscsi_scsi_cmd(fd827f873e58,fd827f873e58,82927538,fd827f873e58,0,0) at vioscsi_scsi_cmd+0x2c0
scsi_xs_sync(fd827f873e58,fd827f873e58,7e040e87a558a697,5,fd827f873e58,e0009) at scsi_xs_sync+0xaf
scsi_test_unit_ready(800aeb00,5,1c3,800aeb00,92ae6eef0a5d1cfa,0) at scsi_test_unit_ready+0x4a
scsi_probe_link(8009d680,0,0,0,b436f1dad696f18f,0) at scsi_probe_link+0x250
scsi_get_target_luns(8009d680,0,829276b0,8009d680,4fcba00e302b1126,800f6c00) at scsi_get_target_luns+0x2d
scsi_probe_bus(8009d680,8009d680,b55317171b57e776,800f6c00,82927880,8246c2d8) at scsi_probe_bus+0x6e
config_attach(800f6c00,82443a78,82927880,81d88fa0,6092121cb3c883bd,800f6c00) at config_attach+0x1f4
vioscsi_attach(80024800,800f6c00,80024800,80024800,4946787a72718913,80024800) at vioscsi_attach+0x29a
config_attach(80024800,824441e8,80024800,0,6092121cb32e03d2,0) at config_attach+0x1f4
virtio_pci_attach(800ae500,80024800,82927a80,800ae500,761be1ef4cf2084,800ae500) at virtio_pci_attach+0x185
end trace frame: 0x82927a70, count: 0

On Tue, 11 Apr 2023 at 13:39, Antun Matanović  wrote:
>
> When trying to boot install73.img using virtio-scsi in qemu the kernel
> panics. Disabling the vioscsi driver lets the installer boot but then
> there is no disk to install to. This happens as long as any disks are
> attached using virtio-scsi but I am specifically trying to boot the
> install image itself because it's my only option for trying to install
> OpenBSD on Oracle Cloud Always Free tier.
>
> I was able to reproduce this problem locally using the following qemu
> parameters:
> -device virtio-scsi-pci,id=scsi -device scsi-hd,drive=hd -drive
> if=none,id=hd,file=install73.img (or disk.qcow2)
>
> Using a virtio-blk device works just fine, using:
> -drive if=virtio,file=install73.img
>
> Refer to my post on the misc mailing list for the Oracle Cloud output:
> https://marc.info/?l=openbsd-misc&m=16807786300&w=2
>
> Here is the full output of my local attempt to reproduce the issue,
> the kernel panic is identical on Oracle Cloud:
> C:\Users\matan\qemu\OpenBSD>qemu-system-x86_64 -accel
> whpx,kernel-irqchip=off -machine q35 -cpu EPYC-Rome,-monitor -m 8g
> -smp 6,sockets=1,cores=6 -nic
> user,model=virtio-net-pci,hostfwd=tcp::10022-:22 -vga virtio -drive
> if=virtio,file=disk.qcow2 -nographic -bios ..\OVMF_CODE.fd -device
> virtio-scsi-pci,id=scsi -device scsi-hd,drive=hd -drive
> if=none,id=hd,file=install73.img -boot menu=on
> WARNING: Image format was not specified for 'install73.img' and
> probing guessed raw.
>  Automatically detecting the format is dangerous for raw
> images, write operations on block 0 will be restricted.
>  Specify the 'raw' format explicitly to remove the restrictions.
> Windows Hypervisor Platform accelerator is operational
> BdsDxe: loading Boot0001 "UEFI QEMU QEMU HARDDISK " from
> PciRoot(0x0)/Pci(0x3,0x0)/Scsi(0x0,0x0)
> BdsDxe: starting Boot0001 "UEFI QEMU QEMU HARDDISK " from
> PciRoot(0x0)/Pci(0x3,0x0)/Scsi(0x0,0x0)
> probing: pc0 com0 mem[640K 7M 16K 2031M 3M 6144M]
> disk: hd0 hd1*
> >> OpenBSD/amd64 BOOTX64 3.63
> boot> set tty com0
> switching console to com0
> >> OpenBSD/amd64 BOOTX64 3.63
> boot> boot
> cannot open hd0a:/etc/random.seed: No such file or directory
> booting hd0a:/7.3/amd64/bsd.rd: 3924676+1647616+3886216+0+704512
> [109+440424+293778]=0xa667f0
> entry point at 0x1001000
> Copyright (c) 1982, 1986, 1989, 1991, 1993
The Regents of the University of California.  All rights reserved.

VirtIO SCSI qemu kernel panic OpenBSD 7.3 amd64

2023-04-11 Thread Antun Matanović
When trying to boot install73.img using virtio-scsi in qemu the kernel
panics. Disabling the vioscsi driver lets the installer boot but then
there is no disk to install to. This happens as long as any disks are
attached using virtio-scsi but I am specifically trying to boot the
install image itself because it's my only option for trying to install
OpenBSD on Oracle Cloud Always Free tier.

I was able to reproduce this problem locally using the following qemu
parameters:
-device virtio-scsi-pci,id=scsi -device scsi-hd,drive=hd -drive
if=none,id=hd,file=install73.img (or disk.qcow2)

Using a virtio-blk device works just fine, using:
-drive if=virtio,file=install73.img

Refer to my post on the misc mailing list for the Oracle Cloud output:
https://marc.info/?l=openbsd-misc&m=16807786300&w=2

Here is the full output of my local attempt to reproduce the issue,
the kernel panic is identical on Oracle Cloud:
C:\Users\matan\qemu\OpenBSD>qemu-system-x86_64 -accel
whpx,kernel-irqchip=off -machine q35 -cpu EPYC-Rome,-monitor -m 8g
-smp 6,sockets=1,cores=6 -nic
user,model=virtio-net-pci,hostfwd=tcp::10022-:22 -vga virtio -drive
if=virtio,file=disk.qcow2 -nographic -bios ..\OVMF_CODE.fd -device
virtio-scsi-pci,id=scsi -device scsi-hd,drive=hd -drive
if=none,id=hd,file=install73.img -boot menu=on
WARNING: Image format was not specified for 'install73.img' and
probing guessed raw.
 Automatically detecting the format is dangerous for raw
images, write operations on block 0 will be restricted.
 Specify the 'raw' format explicitly to remove the restrictions.
Windows Hypervisor Platform accelerator is operational
BdsDxe: loading Boot0001 "UEFI QEMU QEMU HARDDISK " from
PciRoot(0x0)/Pci(0x3,0x0)/Scsi(0x0,0x0)
BdsDxe: starting Boot0001 "UEFI QEMU QEMU HARDDISK " from
PciRoot(0x0)/Pci(0x3,0x0)/Scsi(0x0,0x0)
probing: pc0 com0 mem[640K 7M 16K 2031M 3M 6144M]
disk: hd0 hd1*
>> OpenBSD/amd64 BOOTX64 3.63
boot> set tty com0
switching console to com0
>> OpenBSD/amd64 BOOTX64 3.63
boot> boot
cannot open hd0a:/etc/random.seed: No such file or directory
booting hd0a:/7.3/amd64/bsd.rd: 3924676+1647616+3886216+0+704512
[109+440424+293778]=0xa667f0
entry point at 0x1001000
Copyright (c) 1982, 1986, 1989, 1991, 1993
The Regents of the University of California.  All rights reserved.
Copyright (c) 1995-2023 OpenBSD. All rights reserved.  https://www.OpenBSD.org

OpenBSD 7.3 (RAMDISK_CD) #1063: Sat Mar 25 10:41:49 MDT 2023
dera...@amd64.openbsd.org:/usr/src/sys/arch/amd64/compile/RAMDISK_CD
real mem = 8565739520 (8168MB)
avail mem = 8302120960 (7917MB)
random: good seed from bootblocks
mainbus0 at root
bios0 at mainbus0: SMBIOS rev. 2.8 @ 0x7f8f8000 (10 entries)
bios0:
bios0: QEMU Standard PC (Q35 + ICH9, 2009)
acpi0 at bios0: ACPI 3.0
acpi0: tables DSDT FACP APIC HPET MCFG WAET BGRT
acpimadt0 at acpi0 addr 0xfee0: PC-AT compat
cpu0 at mainbus0: apid 0 (boot processor)
cpu0: AMD Ryzen 5 3600 6-Core Processor, 3601.29 MHz, 17-31-00
cpu0: 
FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,MMX,FXSR,SSE,SSE2,HTT,SSE3,PCLMUL,SSSE3,FMA3,CX16,SSE4.1,SSE4.2,MOVBE,POPCNT,AES,XSAVE,AVX,F16C,RDRAND,HV,NXE,MMXX,FFXSR,PAGE1GB,RDTSCP,LONG,LAHF,CMPLEG,SVM,AMCR8,ABM,SSE4A,MASSE,3DNOWP,TOPEXT,CPCTR,ITSC,FSGSBASE,BMI1,AVX2,SMEP,BMI2,RDSEED,ADX,SMAP,CLFLUSHOPT,CLWB,SHA,UMIP,XSAVEOPT,XSAVEC,XGETBV1,XSAVES
cpu0: 32KB 64b/line 8-way D-cache, 32KB 64b/line 8-way I-cache, 512KB
64b/line 8-way L2 cache, 16MB 64b/line 16-way L3 cache
cpu0: apic clock running at 1000MHz
cpu at mainbus0: not configured
cpu at mainbus0: not configured
cpu at mainbus0: not configured
cpu at mainbus0: not configured
cpu at mainbus0: not configured
ioapic0 at mainbus0: apid 0 pa 0xfec0, version 20, 24 pins
acpihpet0 at acpi0: 1 Hz
acpiprt0 at acpi0: bus 0 (PCI0)
"ACPI0006" at acpi0 not configured
acpipci0 at acpi0 PCI0: 0x0010 0x0011 0x
"PNP0A06" at acpi0 not configured
"PNP0A06" at acpi0 not configured
"PNP0A06" at acpi0 not configured
"QEMU0002" at acpi0 not configured
com0 at acpi0 COM1 addr 0x3f8/0x8 irq 4: ns16550a, 16 byte fifo
com0: console
acpicmos0 at acpi0
"ACPI0010" at acpi0 not configured
acpicpu at acpi0 not configured
pci0 at mainbus0 bus 0
0:1:0: mem address conflict 0x7000/0x4000
0:2:0: mem address conflict 0x70004000/0x4000
0:2:0: rom address conflict 0xfffc/0x4
0:3:0: mem address conflict 0x70008000/0x4000
0:4:0: mem address conflict 0x7000c000/0x4000
pchb0 at pci0 dev 0 function 0 "Intel 82G33 Host" rev 0x00
virtio0 at pci0 dev 1 function 0 "Qumranet Virtio 1.x GPU" rev 0x01
virtio0: no matching child driver; not configured
virtio1 at pci0 dev 2 function 0 "Qumranet Virtio Network" rev 0x00
vio0 at virtio1: address 52:54:00:12:34:56
virtio1: apic 0 int 22
virtio2 at pci0 dev 3 function 0 "Qumranet Virtio SCSI"

bug on VPS server QEMU

2020-12-02 Thread Luca Di Gregorio
Hi, I see this on the console of my VPS Server

Regards


panic on VPS server - QEMU

2020-12-02 Thread Luca Di Gregorio
Hi, I see this on the console of my VPS server.

Regards


Re: OpenBSD Qemu

2020-05-24 Thread abed
maybe, but I'm trying to hook something to kernel somehow to remote
debugging the OpenBSD kernel, maybe we need some trap or something. at
least we need a core dump.
I'm sort of surprised by your quick response thanks.

On 5/25/20 4:14 AM, Stuart Longland wrote:
> On 25/5/20 8:34 am, abed wrote:
>> Sorry what kind of details you guess we need?
>>
>> Host OS: FreeBSD 12.1
>>
>> VMM: Qemu 5.0.0 (compiled from source)
>>
>> Guest OS: OpenBSD 6.7
>>
>> qemu-system-x86_64 -m 2048 \
>>   -cdrom cd67.iso \
>>   -drive if=virtio,file=disk.qcow2,format=qcow2 \
>>   -enable-kvm \
>>   -netdev user,id=mynet0,hostfwd=tcp:127.0.0.1:7922-:22 \
>>   -device virtio-net,netdev=mynet0 \
>>   -smp 2
> Maybe direct a virtual serial port to a telnet port in QEMU, use
> `telnet` within `script` to log everything seen there and tell OpenBSD
> to use serial console on aforementioned serial port?
>
> That might give us a `dmesg` dump to work with at least.  If it really
> is interaction with the video console that causes it, maybe pressing
> some keys on said console will trigger it with the log messages going to
> the virtual serial port?
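
The serial-over-telnet setup suggested above can be sketched concretely as
follows; the port number and log file name are arbitrary choices, not from
the thread:

```shell
# Host side: expose the guest's first serial port on a local telnet port.
qemu-system-x86_64 -m 2048 -enable-kvm \
    -drive if=virtio,file=disk.qcow2,format=qcow2 \
    -serial telnet:127.0.0.1:4555,server,nowait \
    -display none

# Host side: log everything seen on that console (BSD script(1) takes the
# command directly; on Linux use `script -c 'telnet ...' console.log`).
script console.log telnet 127.0.0.1 4555

# Guest side (OpenBSD): route console output to com0, either once at the
# bootloader prompt or persistently:
#   boot> set tty com0
#   echo 'set tty com0' >> /etc/boot.conf
```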


Re: OpenBSD Qemu

2020-05-24 Thread abed
Yeah, that's right; it's surely Qemu, but I'm curious to find the
reason, or any lead. OK, for now I will test on an earlier version.

On 5/25/20 3:48 AM, Aaron Mason wrote:
> On Mon, May 25, 2020 at 8:40 AM abed  wrote:
>> Sorry what kind of details you guess we need?
>>
>> Host OS: FreeBSD 12.1
>>
>> VMM: Qemu 5.0.0 (compiled from source)
>>
>> Guest OS: OpenBSD 6.7
>>
>> qemu-system-x86_64 -m 2048 \
>>   -cdrom cd67.iso \
>>   -drive if=virtio,file=disk.qcow2,format=qcow2 \
>>   -enable-kvm \
>>   -netdev user,id=mynet0,hostfwd=tcp:127.0.0.1:7922-:22 \
>>   -device virtio-net,netdev=mynet0 \
>>   -smp 2
>>
>> On 5/24/20 10:19 PM, Solene Rapenne wrote:
>>> On Sun, 24 May 2020 21:19:16 +,
>>> abed  wrote:
>>>
>>>> OpenBSD 6.7 version crashed on Qemu5.0.0. any idea?
>>> I think you forgot to attach some information like the crash details.
>>>
>>>
> Maybe a description of what happens leading up to the crash? Logs? 
> Screenshots?
>
> All being said, don't be too surprised if people aren't in a hurry to
> offer support.  The general consensus here is that if the OS works on
> real hardware but not an emulator, the problem is with the emulator,
> not the OS.
>
> I'd suggest trying an earlier version of QEMU, see if it breaks there as well.
>


Re: OpenBSD Qemu

2020-05-24 Thread abed
Sorry what kind of details you guess we need?

Host OS: FreeBSD 12.1

VMM: Qemu 5.0.0 (compiled from source)

Guest OS: OpenBSD 6.7

qemu-system-x86_64 -m 2048 \
  -cdrom cd67.iso \
  -drive if=virtio,file=disk.qcow2,format=qcow2 \
  -enable-kvm \
  -netdev user,id=mynet0,hostfwd=tcp:127.0.0.1:7922-:22 \
  -device virtio-net,netdev=mynet0 \
  -smp 2

On 5/24/20 10:19 PM, Solene Rapenne wrote:
> On Sun, 24 May 2020 21:19:16 +,
> abed  wrote:
>
>> OpenBSD 6.7 version crashed on Qemu5.0.0. any idea?
> I think you forgot to attach some information like the crash details.
>
>


Re: OpenBSD Qemu

2020-05-24 Thread Solene Rapenne
On Sun, 24 May 2020 21:19:16 +,
abed  wrote:

> OpenBSD 6.7 version crashed on Qemu5.0.0. any idea?

I think you forgot to attach some information like the crash details.



Re: OpenBSD Qemu

2020-05-24 Thread abed
Even better. No, I was able to press some keys, but it crashes
randomly.

On 5/24/20 9:27 PM, Francois Pussault wrote:
> I failed to install it with Qemu because the keyboard was "mad" 
> ;) 
>
>
>
>> 
>> From: abed 
>> Sent: Sun May 24 23:19:16 CEST 2020
>> To: 
>> Cc: 
>> Subject: OpenBSD Qemu
>>
>>
>> OpenBSD 6.7 version crashed on Qemu5.0.0. any idea?
>>
>>
>
> Regards
> Francois Pussault
> 10 chemin de négo saoumos
> apt 202 - bat 2
> 31300 Toulouse
> +33 6 17 230 820 
> fpussa...@contactoffice.fr
>


OpenBSD Qemu

2020-05-24 Thread abed
OpenBSD 6.7 version crashed on Qemu5.0.0. any idea?




Re: cannot clean-install KVM/QEMU VM that don't support MSR_TSX_CTRL

2020-05-22 Thread Theo de Raadt
Reading that diff, I get a sense they pass on underlying-hardware
cpu flags as-is, and then only write the support code when they feel
like it.

If so, that is ridiculous.  They should immediately mask against a list
of KNOWN and CURRENTLY SUPPORTED bits, and not pass on unknown stuff.

SASANO Takayoshi  wrote:

> > 1) Broken emulator
> > 2) Old broken emulator
> > 
> > A real cpu behaves that way.  The capability bits say the feature
> > exists, and when it exists, it MUST work.
> 
> Yes, I think KVM/QEMU may be malfunctioning, and I found a patch for
> QEMU that supports MSR_TSX_CTRL.
> 
> https://www.mail-archive.com/qemu-devel@nongnu.org/msg660546.html
> 
> > Have you filed a bug with the authors of the emulator?
> 
> No, I didn't write a bug report to them.
> 
> > If newer code emulators have it fixed, then again, how is this our
> > fault for using the feature as advertised in all real hardware?
> 
> It may be no problem when running KVM/QEMU locally (I can simply install a new
> emulator myself), but I found this MSR_TSX_CTRL problem on a VPS service.
> 
> I understand that I have to consult the support desk of the service provider
> and ask them to update QEMU.
> 
> Regards,
> -- 
> SASANO Takayoshi (JG1UAA) 



Re: cannot clean-install KVM/QEMU VM that don't support MSR_TSX_CTRL

2020-05-22 Thread SASANO Takayoshi
> 1) Broken emulator
> 2) Old broken emulator
> 
> A real cpu behaves that way.  The capability bits say the feature
> exists, and when it exists, it MUST work.

Yes, I think KVM/QEMU may be malfunctioning, and I found a patch for
QEMU that supports MSR_TSX_CTRL.

https://www.mail-archive.com/qemu-devel@nongnu.org/msg660546.html

> Have you filed a bug with the authors of the emulator?

No, I didn't write a bug report to them.

> If newer code emulators have it fixed, then again, how is this our
> fault for using the feature as advertised in all real hardware?

It may be no problem when running KVM/QEMU locally (I can simply install a new
emulator myself), but I found this MSR_TSX_CTRL problem on a VPS service.

I understand that I have to consult the support desk of the service provider
and ask them to update QEMU.

Regards,
-- 
SASANO Takayoshi (JG1UAA) 



Re: cannot clean-install KVM/QEMU VM that don't support MSR_TSX_CTRL

2020-05-21 Thread Theo de Raadt
u...@cvs.openbsd.org wrote:

> >Synopsis:cannot clean-install KVM/QEMU VM that don't support MSR_TSX_CTRL
> >Category:kernel
> >Environment:
>   System  : OpenBSD 6.7
>   Details : OpenBSD 6.7 (GENERIC.MP) #2: Thu May 21 18:28:46 JST 2020
>
> u...@ik1-342-31132.vs.sakura.ne.jp:/usr/src/sys/arch/amd64/compile/GENERIC.MP
> 
>   Architecture: OpenBSD.amd64
>   Machine : amd64
> >Description:
>   cpu_tsx_disable() in sys/arch/amd64/amd64/cpu.c tries to set
>   MSR_TSX_CTRL register, there is no problem with "real" CPU.
>   But under KVM/QEMU, OpenBSD-6.7 will crash if they don't handle
>   that MSR register.
>   There is no way to by-pass cpu_tsx_disable(), we cannot run
>   official binary on old KVM/QEMU host.

1) Broken emulator
2) Old broken emulator

A real cpu behaves that way.  The capability bits say the feature
exists, and when it exists, it MUST work.

If any emulator is passing bits on from a real cpu, and then not
handling them, that emulator is *completely broken*.  It is failing
to emulate what it claims to emulate.

Have you filed a bug with the authors of the emulator?

If newer code emulators have it fixed, then again, how is this our
fault for using the feature as advertised in all real hardware?
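
For completeness, one host-side mitigation (untested here, and dependent on
the QEMU version understanding this feature name) is to stop advertising the
bit that triggers the MSR write:

```shell
# Mask the IA32_ARCH_CAPABILITIES CPUID bit in the guest's CPU model so
# the OpenBSD kernel never reaches the MSR_TSX_CTRL write path.
qemu-system-x86_64 -enable-kvm \
    -cpu host,-arch-capabilities \
    -m 1024 \
    -drive if=virtio,file=openbsd67.qcow2,format=qcow2
```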




cannot clean-install KVM/QEMU VM that don't support MSR_TSX_CTRL

2020-05-21 Thread uaa
>Synopsis:  cannot clean-install KVM/QEMU VM that don't support MSR_TSX_CTRL
>Category:  kernel
>Environment:
System  : OpenBSD 6.7
Details : OpenBSD 6.7 (GENERIC.MP) #2: Thu May 21 18:28:46 JST 2020
 
u...@ik1-342-31132.vs.sakura.ne.jp:/usr/src/sys/arch/amd64/compile/GENERIC.MP

Architecture: OpenBSD.amd64
Machine : amd64
>Description:
cpu_tsx_disable() in sys/arch/amd64/amd64/cpu.c tries to set the
MSR_TSX_CTRL register; there is no problem with a "real" CPU.
But under KVM/QEMU, OpenBSD-6.7 will crash if the host doesn't
handle that MSR register.
There is no way to bypass cpu_tsx_disable(), so we cannot run the
official binary on an old KVM/QEMU host.
>How-To-Repeat:
simply try to boot OpenBSD-6.7/amd64's bsd.rd, bsd.sp and bsd.mp 
>Fix:
update KVM/QEMU; otherwise, upgrade from OpenBSD-6.6 using the following
workaround procedure.

1) install OpenBSD-6.6
2) fetch OpenBSD-6.7 kernel source code
3) modify sys/arch/amd64/amd64/cpu.c to disable cpu_tsx_disable()
4) build OpenBSD-6.7 GENERIC.MP kernel (as bsd.mp.tmp)
5) build OpenBSD-6.7 RAMDISK_CD kernel (as bsd.rd.tmp)
6) add OpenBSD-6.7 installer from bsd.rd to bsd.rd.tmp by rdsetroot
7) boot bsd.rd.tmp and upgrade
8) boot bsd.mp.tmp
9) rebuild OpenBSD-6.7 GENERIC.MP kernel and install
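
The rebuild steps above might look roughly like this in practice (paths and
temporary file names are illustrative, not from the report):

```shell
# On the OpenBSD 6.6 system, with the 6.7 sys sources in /usr/src
# and cpu_tsx_disable() already patched out:
cd /usr/src/sys/arch/amd64/compile/GENERIC.MP
make obj && make config && make
cp obj/bsd /bsd.mp.tmp

cd /usr/src/sys/arch/amd64/compile/RAMDISK_CD
make obj && make config && make
cp obj/bsd /bsd.rd.tmp

# Move the stock 6.7 installer ramdisk into the freshly built RAMDISK_CD
# kernel with rdsetroot(8): -x extracts the filesystem image, a second
# run without -x inserts it.
rdsetroot -x /bsd.rd /tmp/initrd.fs
rdsetroot /bsd.rd.tmp /tmp/initrd.fs
```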

attached dmesg is modified kernel, by this patch

--- cpu.c~  Thu May 21 20:27:36 2020
+++ cpu.c   Thu May 21 18:12:58 2020
@@ -1175,9 +1175,10 @@
(sefflags_edx & SEFF0EDX_ARCH_CAP)) {
msr = rdmsr(MSR_ARCH_CAPABILITIES);
if (msr & ARCH_CAPABILITIES_TSX_CTRL) {
-   msr = rdmsr(MSR_TSX_CTRL);
-   msr |= TSX_CTRL_RTM_DISABLE | TSX_CTRL_TSX_CPUID_CLEAR;
-   wrmsr(MSR_TSX_CTRL, msr);
+   printf("%s: modifying MSR_TSX_CTRL bypassed\n", ci->ci_dev->dv_xname);
+// msr = rdmsr(MSR_TSX_CTRL);
+// msr |= TSX_CTRL_RTM_DISABLE | TSX_CTRL_TSX_CPUID_CLEAR;
+// wrmsr(MSR_TSX_CTRL, msr);
}
}
 }


dmesg:
OpenBSD 6.7 (GENERIC.MP) #2: Thu May 21 18:28:46 JST 2020

u...@ik1-342-31132.vs.sakura.ne.jp:/usr/src/sys/arch/amd64/compile/GENERIC.MP
real mem = 1056940032 (1007MB)
avail mem = 1012346880 (965MB)
mpath0 at root
scsibus0 at mpath0: 256 targets
mainbus0 at root
bios0 at mainbus0: SMBIOS rev. 2.4 @ 0xf6200 (11 entries)
bios0: vendor Seabios version "0.5.1" date 01/01/2011
bios0: Red Hat KVM
acpi0 at bios0: ACPI 1.0
acpi0: sleep states S5
acpi0: tables DSDT FACP SSDT APIC
acpi0: wakeup devices
acpitimer0 at acpi0: 3579545 Hz, 24 bits
acpimadt0 at acpi0 addr 0xfee0: PC-AT compat
cpu0 at mainbus0: apid 0 (boot processor)
cpu0: Intel(R) Xeon(R) Gold 6212U CPU @ 2.40GHz, 344.08 MHz, 06-55-07
cpu0: 
FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,MMX,FXSR,SSE,SSE2,SS,SSE3,PCLMUL,SSSE3,FMA3,CX16,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,DEADLINE,AES,XSAVE,AVX,F16C,RDRAND,HV,NXE,PAGE1GB,RDTSCP,LONG,LAHF,ABM,3DNOWP,PERF,FSGSBASE,TSC_ADJUST,BMI1,HLE,AVX2,SMEP,BMI2,ERMS,INVPCID,RTM,MPX,AVX512F,AVX512DQ,RDSEED,ADX,SMAP,CLFLUSHOPT,CLWB,AVX512CD,AVX512BW,AVX512VL,PKU,MD_CLEAR,IBRS,IBPB,STIBP,SSBD,XSAVEOPT,XSAVEC,XGETBV1
cpu0: 64KB 64b/line 2-way I-cache, 64KB 64b/line 2-way D-cache, 512KB 64b/line 
16-way L2 cache
cpu0: ITLB 255 4KB entries direct-mapped, 255 4MB entries direct-mapped
cpu0: DTLB 255 4KB entries direct-mapped, 255 4MB entries direct-mapped
cpu0: smt 0, core 0, package 0
mtrr: Pentium Pro MTRR support, 8 var ranges, 88 fixed ranges
cpu0: apic clock running at 1000MHz
cpu1 at mainbus0: apid 1 (application processor)
cpu1: modifying MSR_TSX_CTRL bypassed
cpu1: Intel(R) Xeon(R) Gold 6212U CPU @ 2.40GHz, 673.84 MHz, 06-55-07
cpu1: 
FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,MMX,FXSR,SSE,SSE2,SS,SSE3,PCLMUL,SSSE3,FMA3,CX16,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,DEADLINE,AES,XSAVE,AVX,F16C,RDRAND,HV,NXE,PAGE1GB,RDTSCP,LONG,LAHF,ABM,3DNOWP,PERF,FSGSBASE,TSC_ADJUST,BMI1,HLE,AVX2,SMEP,BMI2,ERMS,INVPCID,RTM,MPX,AVX512F,AVX512DQ,RDSEED,ADX,SMAP,CLFLUSHOPT,CLWB,AVX512CD,AVX512BW,AVX512VL,PKU,MD_CLEAR,IBRS,IBPB,STIBP,SSBD,XSAVEOPT,XSAVEC,XGETBV1
cpu1: 64KB 64b/line 2-way I-cache, 64KB 64b/line 2-way D-cache, 512KB 64b/line 
16-way L2 cache
cpu1: ITLB 255 4KB entries direct-mapped, 255 4MB entries direct-mapped
cpu1: DTLB 255 4KB entries direct-mapped, 255 4MB entries direct-mapped
cpu1: smt 0, core 0, package 1
ioapic0 at mainbus0: apid 0 pa 0xfec0, version 11, 24 pins
acpiprt0 at acpi0: bus 0 (PCI0)
acpicpu0 at acpi0: C1(@1 halt!)
acpicpu1 at acpi0: C1(@1 halt!)
"ACPI0006" at acpi0 not configured
acpipci0 at acpi0 PCI0: _OSC failed

Re: reN networking hangs in KVM/QEMU

2019-02-09 Thread Sijmen J. Mulder
Juan Francisco Cantero Hurtado wrote:
> > Why rtl8139? It's the worst option available to KVM/QEMU. You would be much
> > better off with virtio or e1000 for the NIC.
>
> Change also the disk to 'virtio' instead of 'ide'.

I went with the defaults out of caution but using virtio makes sense
and switching fixed the issue. Thanks for the tips.

Supposedly that does leave either OpenBSD's rtl8139 driver or QEMU's
emulation broken, but I'm glad there's an all-round better option.

Sijmen



Re: reN networking hangs in KVM/QEMU

2019-02-09 Thread Juan Francisco Cantero Hurtado
On Fri, Feb 08, 2019 at 08:23:55PM -0500, Brad Smith wrote:
> On 2/8/2019 8:07 PM, Sijmen J. Mulder wrote:
> 
> > Hi,
> > 
> > I run OpenBSD (snapshots) in a KVM+QEMU VM on Debian 9 and use it over
> > SSH. Those SSH sessions stall almost without fail when there's a lot of
> > output.
> > 
> > Repro (or at least my set up):
> > 
> >   1. Create a KVM+QEMU VM on Linux with virt-manager. I assigned two
> >  cores and otherwise used defaults.
> >   2. Install an OpenBSD snapshot.
> >   3. In an SSH session, execute 'find /' until hang. I suspect other
> >  large network transfers may result in the same.
> > 
> > Symptoms:
> > 
> >   - SSH session stalls with no output, no echo, and no response to ^C.
> >   - New communication (SSH, ping) cannot be established: "no route to
> > host"
> >   - The serial console outputs "re0: watchdog timeout"
> >   - From the serial console, pinging the host yields "ping: sendmsg: No
> > buffer space available"
> > 
> > Workaround:
> > 
> >   'ifconfig re0 down; ifconfig re0 up' and a little patience fixes it.
> > 
> > Below, the VM's dmesg and its libvirt XML dump. After seeing the NFS
> > messages I unmounted the shares but the behaviour was the same.
> > 
> > Sijmen
> 
> Why rtl8139? It's the worst option available to KVM/QEMU. You would be much
> better off with virtio or e1000 for the NIC.

Change also the disk to 'virtio' instead of 'ide'.


-- 
Juan Francisco Cantero Hurtado http://juanfra.info



Re: reN networking hangs in KVM/QEMU

2019-02-08 Thread Brad Smith

On 2/8/2019 8:07 PM, Sijmen J. Mulder wrote:


Hi,

I run OpenBSD (snapshots) in a KVM+QEMU VM on Debian 9 and use it over
SSH. Those SSH sessions stall almost without fail when there's a lot of
output.

Repro (or at least my set up):

  1. Create a KVM+QEMU VM on Linux with virt-manager. I assigned two
 cores and otherwise used defaults.
  2. Install an OpenBSD snapshot.
  3. In an SSH session, execute 'find /' until hang. I suspect other
 large network transfers may result in the same.

Symptoms:

  - SSH session stalls with no output, no echo, and no response to ^C.
  - New communication (SSH, ping) cannot be established: "no route to
host"
  - The serial console outputs "re0: watchdog timeout"
  - From the serial console, pinging the host yields "ping: sendmsg: No
buffer space available"

Workaround:

  'ifconfig re0 down; ifconfig re0 up' and a little patience fixes it.

Below, the VM's dmesg and its libvirt XML dump. After seeing the NFS
messages I unmounted the shares but the behaviour was the same.

Sijmen


Why rtl8139? It's the worst option available to KVM/QEMU. You would be much
better off with virtio or e1000 for the NIC.
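
Translated into concrete settings, that advice might look like this (device
and file names here are illustrative; under libvirt/virt-manager the same
change is made in the domain XML):

```shell
# Plain QEMU: virtio NIC and virtio disk instead of rtl8139 + IDE.
# OpenBSD will then attach vio0 and sd0 rather than re0 and wd0.
qemu-system-x86_64 -m 2048 -enable-kvm \
    -drive if=virtio,file=openbsd.qcow2,format=qcow2 \
    -netdev user,id=net0 \
    -device virtio-net-pci,netdev=net0

# libvirt domain XML equivalents:
#   <interface type='network'> ... <model type='virtio'/> </interface>
#   <disk type='file' device='disk'> ... <target dev='vda' bus='virtio'/> </disk>
```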



reN networking hangs in KVM/QEMU

2019-02-08 Thread Sijmen J. Mulder
Hi,

I run OpenBSD (snapshots) in a KVM+QEMU VM on Debian 9 and use it over
SSH. Those SSH sessions stall almost without fail when there's a lot of
output.

Repro (or at least my set up):

 1. Create a KVM+QEMU VM on Linux with virt-manager. I assigned two
cores and otherwise used defaults.
 2. Install an OpenBSD snapshot.
 3. In an SSH session, execute 'find /' until hang. I suspect other
large network transfers may result in the same.

Symptoms:

 - SSH session stalls with no output, no echo, and no response to ^C.
 - New communication (SSH, ping) cannot be established: "no route to
   host"
 - The serial console outputs "re0: watchdog timeout"
 - From the serial console, pinging the host yields "ping: sendmsg: No 
   buffer space available"

Workaround:

 'ifconfig re0 down; ifconfig re0 up' and a little patience fixes it.
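
Until the underlying cause is found, that workaround can be automated with a
crude link watchdog (a sketch only; the interface name is from this report,
and the gateway detection is assumed to work on this OpenBSD setup):

```shell
#!/bin/sh
# Bounce re0 whenever the default gateway stops answering, mimicking
# the manual 'ifconfig re0 down; ifconfig re0 up' recovery.
IF=re0
GW=$(route -n get default | awk '/gateway:/ {print $2}')

while sleep 30; do
    if ! ping -c 1 -w 2 "$GW" >/dev/null 2>&1; then
        ifconfig "$IF" down
        sleep 2
        ifconfig "$IF" up
    fi
done
```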

Below, the VM's dmesg and its libvirt XML dump. After seeing the NFS
messages I unmounted the shares but the behaviour was the same.

Sijmen


OpenBSD 6.4-current (GENERIC.MP) #689: Fri Feb  8 00:40:27 MST 2019
dera...@amd64.openbsd.org:/usr/src/sys/arch/amd64/compile/GENERIC.MP
real mem = 2130575360 (2031MB)
avail mem = 2056372224 (1961MB)
mpath0 at root
scsibus0 at mpath0: 256 targets
mainbus0 at root
bios0 at mainbus0: SMBIOS rev. 2.8 @ 0xf6880 (10 entries)
bios0: vendor SeaBIOS version "1.10.2-1" date 04/01/2014
bios0: QEMU Standard PC (i440FX + PIIX, 1996)
acpi0 at bios0: rev 0
acpi0: sleep states S5
acpi0: tables DSDT FACP APIC
acpi0: wakeup devices
acpitimer0 at acpi0: 3579545 Hz, 24 bits
acpimadt0 at acpi0 addr 0xfee0: PC-AT compat
cpu0 at mainbus0: apid 0 (boot processor)
cpu0: Intel Core Processor (Skylake), 3312.53 MHz, 06-5e-03
cpu0: 
FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,MMX,FXSR,SSE,SSE2,SSE3,PCLMUL,SSSE3,FMA3,CX16,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,DEADLINE,AES,XSAVE,AVX,F16C,RDRAND,HV,NXE,RDTSCP,LONG,LAHF,ABM,3DNOWP,FSGSBASE,BMI1,HLE,AVX2,SMEP,BMI2,ERMS,INVPCID,RTM,MPX,RDSEED,ADX,SMAP,ARAT,XSAVEOPT,XSAVEC,XGETBV1,MELTDOWN
cpu0: 64KB 64b/line 2-way I-cache, 64KB 64b/line 2-way D-cache, 512KB 64b/line 
16-way L2 cache
cpu0: ITLB 255 4KB entries direct-mapped, 255 4MB entries direct-mapped
cpu0: DTLB 255 4KB entries direct-mapped, 255 4MB entries direct-mapped
cpu0: smt 0, core 0, package 0
mtrr: Pentium Pro MTRR support, 8 var ranges, 88 fixed ranges
cpu0: apic clock running at 1000MHz
cpu1 at mainbus0: apid 1 (application processor)
cpu1: Intel Core Processor (Skylake), 3312.09 MHz, 06-5e-03
cpu1: 
FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,MMX,FXSR,SSE,SSE2,SSE3,PCLMUL,SSSE3,FMA3,CX16,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,DEADLINE,AES,XSAVE,AVX,F16C,RDRAND,HV,NXE,RDTSCP,LONG,LAHF,ABM,3DNOWP,FSGSBASE,BMI1,HLE,AVX2,SMEP,BMI2,ERMS,INVPCID,RTM,MPX,RDSEED,ADX,SMAP,ARAT,XSAVEOPT,XSAVEC,XGETBV1,MELTDOWN
cpu1: 64KB 64b/line 2-way I-cache, 64KB 64b/line 2-way D-cache, 512KB 64b/line 
16-way L2 cache
cpu1: ITLB 255 4KB entries direct-mapped, 255 4MB entries direct-mapped
cpu1: DTLB 255 4KB entries direct-mapped, 255 4MB entries direct-mapped
cpu1: smt 0, core 0, package 1
ioapic0 at mainbus0: apid 0 pa 0xfec0, version 11, 24 pins
acpiprt0 at acpi0: bus 0 (PCI0)
acpicpu0 at acpi0: C1(@1 halt!)
acpicpu1 at acpi0: C1(@1 halt!)
"ACPI0006" at acpi0 not configured
acpipci0 at acpi0 PCI0: _OSC failed
acpicmos0 at acpi0
"PNP0A06" at acpi0 not configured
"PNP0A06" at acpi0 not configured
"PNP0A06" at acpi0 not configured
"QEMU0002" at acpi0 not configured
"ACPI0010" at acpi0 not configured
pvbus0 at mainbus0: KVM
pvclock0 at pvbus0
pci0 at mainbus0 bus 0
pchb0 at pci0 dev 0 function 0 "Intel 82441FX" rev 0x02
pcib0 at pci0 dev 1 function 0 "Intel 82371SB ISA" rev 0x00
pciide0 at pci0 dev 1 function 1 "Intel 82371SB IDE" rev 0x00: DMA, channel 0 
wired to compatibility, channel 1 wired to compatibility
wd0 at pciide0 channel 0 drive 0: 
wd0: 16-sector PIO, LBA48, 20480MB, 41943040 sectors
atapiscsi0 at pciide0 channel 0 drive 1
scsibus1 at atapiscsi0: 2 targets
cd0 at scsibus1 targ 0 lun 0:  ATAPI 5/cdrom removable
wd0(pciide0:0:0): using PIO mode 4, DMA mode 2
cd0(pciide0:0:1): using PIO mode 4, DMA mode 2
pciide0: channel 1 disabled (no drives)
piixpm0 at pci0 dev 1 function 3 "Intel 82371AB Power" rev 0x03: apic 0 int 9
iic0 at piixpm0
vga1 at pci0 dev 2 function 0 "Red Hat QXL Video" rev 0x04
wsdisplay0 at vga1 mux 1: console (80x25, vt100 emulation)
wsdisplay0: screen 1-5 added (80x25, vt100 emulation)
re0 at pci0 dev 3 function 0 "Realtek 8139" rev 0x20: RTL8139C+ (0x7480), apic 
0 int 11, address 52:54:00:35:78:47
rlphy0 at re0 phy 0: RTL internal PHY
azalia0 at pci0 dev 4 function 0 "Intel 82801FB HD Audio" rev 0x01: apic 0 int 
11
azalia0: No codecs found
uhci0 at pci0 dev 5 function 0 "Intel 82801I USB"

Re: QEMU + snapshots - pvclock0: unstable result on stable clock

2018-12-03 Thread RD Thrush
On 12/3/18 5:00 AM, Reyk Floeter wrote:
> Hi,
> 
> thanks for the report.
> 
> We’re going to disable pvclock until I find a solution. It seems that old 
> KVMs or KVM on old CPUs report stable support incorrectly.
> 
> Do you have a dmesg?

I've attached the serial console output from an older proxmox MP with an i386 
6.4-release dmesg, as well as the install and failure of the recent snapshot.  One 
additional point I noticed was that the console hung after entering 'machine ddbcpu 
1', requiring a (simulated) hard reset.

syncing disks... done
rebooting...
>> OpenBSD/i386 BOOT 3.34
DiskBIOS#   TypeCylsHeads   SecsFlags   Checksum
hd0 0x80label   1023255 63  0x2 0xdce59776
Region 0: type 1 at 0x0 for 639KB
Region 1: type 2 at 0x9fc00 for 1KB
Region 2: type 2 at 0xf for 64KB
Region 3: type 1 at 0x10 for 2096000KB
Region 4: type 2 at 0x7ffe for 128KB
Region 5: type 2 at 0xfeffc000 for 16KB
Region 6: type 2 at 0xfffc for 256KB
Low ram: 639KB  High ram: 2096000KB
Total free memory: 2096639KB
boot>
booting hd0a:/bsd: 9210635+2221060+196628+0+1105920 
[711378+98+521184+541602]=0xdd84d0
entry point at 0x2000d4

[ using 1774800 bytes of bsd ELF symbol table ]
Copyright (c) 1982, 1986, 1989, 1991, 1993
The Regents of the University of California.  All rights reserved.
Copyright (c) 1995-2018 OpenBSD. All rights reserved.  https://www.OpenBSD.org

OpenBSD 6.4 (GENERIC.MP) #943: Thu Oct 11 13:51:32 MDT 2018
dera...@i386.openbsd.org:/usr/src/sys/arch/i386/compile/GENERIC.MP
real mem  = 2146844672 (2047MB)
avail mem = 2092564480 (1995MB)
mpath0 at root
scsibus0 at mpath0: 256 targets
mainbus0 at root
bios0 at mainbus0: date 06/23/99, BIOS32 rev. 0 @ 0xfd4be, SMBIOS rev. 2.8 @ 
0xf0cd0 (9 entries)
bios0: vendor SeaBIOS version 
"rel-1.7.5.1-0-g8936dbb-20141113_115728-nilsson.home.kraxel.org" date 04/01/2014
bios0: QEMU Standard PC (i440FX + PIIX, 1996)
acpi0 at bios0: rev 0
acpi0: sleep states S3 S4 S5
acpi0: tables DSDT FACP SSDT APIC HPET
acpi0: wakeup devices
acpitimer0 at acpi0: 3579545 Hz, 24 bits
acpimadt0 at acpi0 addr 0xfee0: PC-AT compat
cpu0 at mainbus0: apid 0 (boot processor)
cpu0: Common 32-bit KVM processor ("GenuineIntel" 686-class) 3.41 GHz, 0f-06-01
cpu0: 
FPU,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,MMX,FXSR,SSE,SSE2,HTT,SSE3,x2APIC,HV,MELTDOWN
mtrr: Pentium Pro MTRR support, 8 var ranges, 88 fixed ranges
cpu0: apic clock running at 999MHz
cpu1 at mainbus0: apid 1 (application processor)
cpu1: Common 32-bit KVM processor ("GenuineIntel" 686-class) 3.41 GHz, 0f-06-01
cpu1: 
FPU,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,MMX,FXSR,SSE,SSE2,HTT,SSE3,x2APIC,HV,MELTDOWN
ioapic0 at mainbus0: apid 0 pa 0xfec0, version 11, 24 pins
acpihpet0 at acpi0: 1 Hz
acpiprt0 at acpi0: bus 0 (PCI0)
acpicpu0 at acpi0: C1(@1 halt!)
acpicpu1 at acpi0: C1(@1 halt!)
"ACPI0006" at acpi0 not configured
acpicmos0 at acpi0
"PNP0A06" at acpi0 not configured
"ACPI0007" at acpi0 not configured
"ACPI0007" at acpi0 not configured
bios0: ROM list: 0xc/0x9200 0xc9800/0xa00 0xca800/0x2400 0xed000/0x3000!
pvbus0 at mainbus0: KVM
pci0 at mainbus0 bus 0: configuration mode 1 (bios)
pchb0 at pci0 dev 0 function 0 "Intel 82441FX" rev 0x02
pcib0 at pci0 dev 1 function 0 "Intel 82371SB ISA" rev 0x00
pciide0 at pci0 dev 1 function 1 "Intel 82371SB IDE" rev 0x00: DMA, channel 0 wired to compatibility, channel 1 wired to compatibility
pciide0: channel 0 disabled (no drives)
atapiscsi0 at pciide0 channel 1 drive 0
scsibus1 at atapiscsi0: 2 targets
cd0 at scsibus1 targ 0 lun 0:  ATAPI 5/cdrom removable
cd0(pciide0:1:0): using PIO mode 4, DMA mode 2
uhci0 at pci0 dev 1 function 2 "Intel 82371SB USB" rev 0x01: apic 0 int 11
piixpm0 at pci0 dev 1 function 3 "Intel 82371AB Power" rev 0x03: apic 0 int 9
iic0 at piixpm0
vga1 at pci0 dev 2 function 0 "Cirrus Logic CL-GD5446" rev 0x00
wsdisplay0 at vga1 mux 1: console (80x25, vt100 emulation)
wsdisplay0: screen 1-5 added (80x25, vt100 emulation)
virtio0 at pci0 dev 3 function 0 "Qumranet Virtio Memory" rev 0x00
viomb0 at virtio0
virtio0: apic 0 int 11
virtio1 at pci0 dev 10 function 0 "Qumranet Virtio Storage" rev 0x00
vioblk0 at virtio1
scsibus2 at vioblk0: 2 targets
sd0 at scsibus2 targ 0 lun 0:  SCSI3 0/direct fixed
sd0: 32768MB, 512 bytes/sector, 67108864 sectors
virtio1: apic 0 int 10
virtio2 at pci0 dev 18 function 0 "Qumranet Virtio Network" rev 0x00
vio0 at virtio2: address 36:31:4d:56:db:75
virtio2: apic 0 int 10
isa0 at pcib0
isadma0 at isa0
fdc0 at isa0 port 0x3f0/6 irq 6 drq 2
com0 at isa0 port 0x3f8/8 irq 4: ns16550a, 16 byte fifo
com0: console
pckbc0 at isa0 port 0x60/5 irq 1 irq 12
pckbd0 at pckbc0 (kbd slot)
wskbd0 at pckbd0: console keyboard, using wsdisplay0
pms0 at

Re: Snapshots in QEMU

2018-08-07 Thread Elias M. Mariani
My bad then; I just saw this commit and thought it was a sort of workaround:
http://cvsweb.openbsd.org/cgi-bin/cvsweb/src/sys/arch/amd64/amd64/identcpu.c?rev=1.104&content-type=text/x-cvsweb-markup


Cheers.
Elias.

2018-08-07 16:59 GMT-03:00 Theo de Raadt :
> Elias M. Mariani  wrote:
>
>> You are right, is KVM's bug.
>> But this was fixed on -snapshots right?
>
> No.
>
>> If so, OpenBSD 6.4 would work well, bypassing KVM's bug.
>
> No.
>



Re: Snapshots in QEMU

2018-08-07 Thread Theo de Raadt
Elias M. Mariani  wrote:

> You are right, is KVM's bug.
> But this was fixed on -snapshots right?

No.

> If so, OpenBSD 6.4 would work well, bypassing KVM's bug.

No.



Re: Snapshots in QEMU

2018-08-07 Thread Elias M. Mariani
You are right, it is KVM's bug.
But this was fixed in -snapshots, right?
If so, OpenBSD 6.4 would work well, bypassing KVM's bug.

Cheers.
Elias.

2018-08-07 16:27 GMT-03:00 Theo de Raadt :
> We don't agree.
>
> We consider it a bug in KVM.
>
> Doesn't KVM suggest they are emulating a real machine?  AMD has
> documented that MSR.  So KVM should emulate the MSR.  In some
> way, even perhaps act like it is a NOP.
>
> Or they shouldn't claim to be that CPU.
>
> Feel free to report to those other upstreams.
>
> Elias M. Mariani  wrote:
>> Sorry to keep pinging this one.
>> If you install OPENBSD_6_3 in KVM/QEMU and run syspatch and apply the
>> LFENCE patch the machine redeems unable to boot.
>> I think that this should be another patch to fix the problem, anyone
>> using a OpenBSD virtual server with that config will ruin the boot
>> process with the patches.
>>
>> Cheers.
>> Elias.
>>
>> 2018-08-04 16:47 GMT-03:00 Elias M. Mariani :
>> > Hi Mike,
>> > I saw the changes in source about this from Bryan.
>> > This also hit OPENBSD_63 from a patch.
>> > Shouldn't be a another patch or something ?
>> >
>> > Cheers.
>> > Elias.
>> >
>> > 2018-08-01 13:23 GMT-03:00 Mike Larkin :
>> >> On Wed, Aug 01, 2018 at 12:04:55AM -0300, Elias M. Mariani wrote:
>> >>> (sorry, I forgot to add the list)
>> >>> Here you go.
>> >>> Cheers.
>> >>>
>> >>> 2018-07-31 20:47 GMT-03:00 Mike Larkin :
>> >>> > On Tue, Jul 31, 2018 at 04:43:22PM -0700, Mike Larkin wrote:
>> >>> >> On Tue, Jul 31, 2018 at 04:59:54PM -0300, Elias M. Mariani wrote:
>> >>> >> > And segmentation fault after applying the patches on OPENBSD_63.
>> >>> >> > Maybe is just a coincidence, according to the data:
>> >>> >> > HOST: ubuntu
>> >>> >> > QEMU: Running OPENBSD_63 + patches
>> >>> >> > (was working OK until patched).
>> >>> >> >
>> >>> >> > Cheers.
>> >>> >> > Elias.
>> >>> >> >
>> >>> >> > 2018-07-31 16:36 GMT-03:00 Elias M. Mariani 
>> >>> >> > :
>> >>> >> > > Trying to boot in QEMU from snapshots/amd64/cd63.iso or from
>> >>> >> > > snapshots/amd64/bsd.rd from the current disk:
>> >>> >> > > The booting starts but the machine gets rebooted almost 
>> >>> >> > > immediately.
>> >>> >> > >
>> >>> >> > > I'm reporting this pretty bad. But is because I'm using 6.3 in 
>> >>> >> > > that
>> >>> >> > > cloud server and just wanted to test if the snapshot was booting
>> >>> >> > > correctly. QEMU is running in godsknowswhat.
>> >>> >> > > I have never used QEMU myself, could someone try to reproduce ?
>> >>> >> > > using cd63.iso from OPENBSD_63 gives no problems.
>> >>> >> > >
>> >>> >> > > Cheers.
>> >>> >> > > Elias.
>> >>> >>
>> >>> >>
>> >>> >> This may be related to the recent speculation/lfence fix that went in
>> >>> >> a week or so ago. It reads an MSR that should be present on that CPU 
>> >>> >> (and
>> >>> >> if it isn't, we won't read it).
>> >>> >>
>> >>> >> I have one of those same Opterons here, I'll update it to -current 
>> >>> >> and see
>> >>> >> if I can repro this on real hardware. My guess is that kvm is failing 
>> >>> >> the
>> >>> >> RDMSR because the knowledge of it post-dates the time that kvm was 
>> >>> >> built.
>> >>> >>
>> >>> >> -ml
>> >>> >>
>> >>> >
>> >>> > PS, "show registers" at ddb> prompt would confirm that is indeed this 
>> >>> > fix,
>> >>> > if you could do that and report the output it would be appreciated.
>> >>> >
>> >>
>> >> Yes, this is related to the change I thought. I still haven't had a 
>> >> chance to
>> >> check on real hardware yet, though.
>>



Re: Snapshots in QEMU

2018-08-07 Thread Theo de Raadt
We don't agree.

We consider it a bug in KVM.

Doesn't KVM suggest they are emulating a real machine?  AMD has
documented that MSR.  So KVM should emulate the MSR.  In some
way, even perhaps act like it is a NOP.

Or they shouldn't claim to be that CPU.

Feel free to report to those other upstreams.

Elias M. Mariani  wrote:
> Sorry to keep pinging this one.
> If you install OPENBSD_6_3 in KVM/QEMU and run syspatch and apply the
> LFENCE patch the machine redeems unable to boot.
> I think that this should be another patch to fix the problem, anyone
> using a OpenBSD virtual server with that config will ruin the boot
> process with the patches.
> 
> Cheers.
> Elias.
> 
> 2018-08-04 16:47 GMT-03:00 Elias M. Mariani :
> > Hi Mike,
> > I saw the changes in source about this from Bryan.
> > This also hit OPENBSD_63 from a patch.
> > Shouldn't be a another patch or something ?
> >
> > Cheers.
> > Elias.
> >
> > 2018-08-01 13:23 GMT-03:00 Mike Larkin :
> >> On Wed, Aug 01, 2018 at 12:04:55AM -0300, Elias M. Mariani wrote:
> >>> (sorry, I forgot to add the list)
> >>> Here you go.
> >>> Cheers.
> >>>
> >>> 2018-07-31 20:47 GMT-03:00 Mike Larkin :
> >>> > On Tue, Jul 31, 2018 at 04:43:22PM -0700, Mike Larkin wrote:
> >>> >> On Tue, Jul 31, 2018 at 04:59:54PM -0300, Elias M. Mariani wrote:
> >>> >> > And segmentation fault after applying the patches on OPENBSD_63.
> >>> >> > Maybe is just a coincidence, according to the data:
> >>> >> > HOST: ubuntu
> >>> >> > QEMU: Running OPENBSD_63 + patches
> >>> >> > (was working OK until patched).
> >>> >> >
> >>> >> > Cheers.
> >>> >> > Elias.
> >>> >> >
> >>> >> > 2018-07-31 16:36 GMT-03:00 Elias M. Mariani :
> >>> >> > > Trying to boot in QEMU from snapshots/amd64/cd63.iso or from
> >>> >> > > snapshots/amd64/bsd.rd from the current disk:
> >>> >> > > The booting starts but the machine gets rebooted almost 
> >>> >> > > immediately.
> >>> >> > >
> >>> >> > > I'm reporting this pretty bad. But is because I'm using 6.3 in that
> >>> >> > > cloud server and just wanted to test if the snapshot was booting
> >>> >> > > correctly. QEMU is running in godsknowswhat.
> >>> >> > > I have never used QEMU myself, could someone try to reproduce ?
> >>> >> > > using cd63.iso from OPENBSD_63 gives no problems.
> >>> >> > >
> >>> >> > > Cheers.
> >>> >> > > Elias.
> >>> >>
> >>> >>
> >>> >> This may be related to the recent speculation/lfence fix that went in
> >>> >> a week or so ago. It reads an MSR that should be present on that CPU 
> >>> >> (and
> >>> >> if it isn't, we won't read it).
> >>> >>
> >>> >> I have one of those same Opterons here, I'll update it to -current and 
> >>> >> see
> >>> >> if I can repro this on real hardware. My guess is that kvm is failing 
> >>> >> the
> >>> >> RDMSR because the knowledge of it post-dates the time that kvm was 
> >>> >> built.
> >>> >>
> >>> >> -ml
> >>> >>
> >>> >
> >>> > PS, "show registers" at ddb> prompt would confirm that is indeed this 
> >>> > fix,
> >>> > if you could do that and report the output it would be appreciated.
> >>> >
> >>
> >> Yes, this is related to the change I thought. I still haven't had a chance 
> >> to
> >> check on real hardware yet, though.
> 



Re: Snapshots in QEMU

2018-08-07 Thread Elias M. Mariani
Sorry to keep pinging this one.
If you install OPENBSD_6_3 in KVM/QEMU, run syspatch, and apply the
LFENCE patch, the machine is rendered unable to boot.
I think there should be another patch to fix the problem; anyone
using an OpenBSD virtual server with that config will break the boot
process with the patches.

Cheers.
Elias.

2018-08-04 16:47 GMT-03:00 Elias M. Mariani :
> Hi Mike,
> I saw the changes in source about this from Bryan.
> This also hit OPENBSD_63 from a patch.
> Shouldn't be a another patch or something ?
>
> Cheers.
> Elias.
>
> 2018-08-01 13:23 GMT-03:00 Mike Larkin :
>> On Wed, Aug 01, 2018 at 12:04:55AM -0300, Elias M. Mariani wrote:
>>> (sorry, I forgot to add the list)
>>> Here you go.
>>> Cheers.
>>>
>>> 2018-07-31 20:47 GMT-03:00 Mike Larkin :
>>> > On Tue, Jul 31, 2018 at 04:43:22PM -0700, Mike Larkin wrote:
>>> >> On Tue, Jul 31, 2018 at 04:59:54PM -0300, Elias M. Mariani wrote:
>>> >> > And segmentation fault after applying the patches on OPENBSD_63.
>>> >> > Maybe is just a coincidence, according to the data:
>>> >> > HOST: ubuntu
>>> >> > QEMU: Running OPENBSD_63 + patches
>>> >> > (was working OK until patched).
>>> >> >
>>> >> > Cheers.
>>> >> > Elias.
>>> >> >
>>> >> > 2018-07-31 16:36 GMT-03:00 Elias M. Mariani :
>>> >> > > Trying to boot in QEMU from snapshots/amd64/cd63.iso or from
>>> >> > > snapshots/amd64/bsd.rd from the current disk:
>>> >> > > The booting starts but the machine gets rebooted almost immediately.
>>> >> > >
>>> >> > > I'm reporting this pretty bad. But is because I'm using 6.3 in that
>>> >> > > cloud server and just wanted to test if the snapshot was booting
>>> >> > > correctly. QEMU is running in godsknowswhat.
>>> >> > > I have never used QEMU myself, could someone try to reproduce ?
>>> >> > > using cd63.iso from OPENBSD_63 gives no problems.
>>> >> > >
>>> >> > > Cheers.
>>> >> > > Elias.
>>> >>
>>> >>
>>> >> This may be related to the recent speculation/lfence fix that went in
>>> >> a week or so ago. It reads an MSR that should be present on that CPU (and
>>> >> if it isn't, we won't read it).
>>> >>
>>> >> I have one of those same Opterons here, I'll update it to -current and 
>>> >> see
>>> >> if I can repro this on real hardware. My guess is that kvm is failing the
>>> >> RDMSR because the knowledge of it post-dates the time that kvm was built.
>>> >>
>>> >> -ml
>>> >>
>>> >
>>> > PS, "show registers" at ddb> prompt would confirm that is indeed this fix,
>>> > if you could do that and report the output it would be appreciated.
>>> >
>>
>> Yes, this is related to the change I thought. I still haven't had a chance to
>> check on real hardware yet, though.



Re: Snapshots in QEMU

2018-08-04 Thread Elias M. Mariani
Hi Mike,
I saw Bryan's changes in the source about this.
This also hit OPENBSD_63 via a patch.
Shouldn't there be another patch or something?

Cheers.
Elias.

2018-08-01 13:23 GMT-03:00 Mike Larkin :
> On Wed, Aug 01, 2018 at 12:04:55AM -0300, Elias M. Mariani wrote:
>> (sorry, I forgot to add the list)
>> Here you go.
>> Cheers.
>>
>> 2018-07-31 20:47 GMT-03:00 Mike Larkin :
>> > On Tue, Jul 31, 2018 at 04:43:22PM -0700, Mike Larkin wrote:
>> >> On Tue, Jul 31, 2018 at 04:59:54PM -0300, Elias M. Mariani wrote:
>> >> > And segmentation fault after applying the patches on OPENBSD_63.
>> >> > Maybe is just a coincidence, according to the data:
>> >> > HOST: ubuntu
>> >> > QEMU: Running OPENBSD_63 + patches
>> >> > (was working OK until patched).
>> >> >
>> >> > Cheers.
>> >> > Elias.
>> >> >
>> >> > 2018-07-31 16:36 GMT-03:00 Elias M. Mariani :
>> >> > > Trying to boot in QEMU from snapshots/amd64/cd63.iso or from
>> >> > > snapshots/amd64/bsd.rd from the current disk:
>> >> > > The booting starts but the machine gets rebooted almost immediately.
>> >> > >
>> >> > > I'm reporting this pretty bad. But is because I'm using 6.3 in that
>> >> > > cloud server and just wanted to test if the snapshot was booting
>> >> > > correctly. QEMU is running in godsknowswhat.
>> >> > > I have never used QEMU myself, could someone try to reproduce ?
>> >> > > using cd63.iso from OPENBSD_63 gives no problems.
>> >> > >
>> >> > > Cheers.
>> >> > > Elias.
>> >>
>> >>
>> >> This may be related to the recent speculation/lfence fix that went in
>> >> a week or so ago. It reads an MSR that should be present on that CPU (and
>> >> if it isn't, we won't read it).
>> >>
>> >> I have one of those same Opterons here, I'll update it to -current and see
>> >> if I can repro this on real hardware. My guess is that kvm is failing the
>> >> RDMSR because the knowledge of it post-dates the time that kvm was built.
>> >>
>> >> -ml
>> >>
>> >
>> > PS, "show registers" at ddb> prompt would confirm that is indeed this fix,
>> > if you could do that and report the output it would be appreciated.
>> >
>
> Yes, this is related to the change I thought. I still haven't had a chance to
> check on real hardware yet, though.



Re: Snapshots in QEMU

2018-08-01 Thread Elias M. Mariani
(sorry, I forgot to add the list)
Here you go.
Cheers.

2018-07-31 20:47 GMT-03:00 Mike Larkin :
> On Tue, Jul 31, 2018 at 04:43:22PM -0700, Mike Larkin wrote:
>> On Tue, Jul 31, 2018 at 04:59:54PM -0300, Elias M. Mariani wrote:
>> > And segmentation fault after applying the patches on OPENBSD_63.
>> > Maybe is just a coincidence, according to the data:
>> > HOST: ubuntu
>> > QEMU: Running OPENBSD_63 + patches
>> > (was working OK until patched).
>> >
>> > Cheers.
>> > Elias.
>> >
>> > 2018-07-31 16:36 GMT-03:00 Elias M. Mariani :
>> > > Trying to boot in QEMU from snapshots/amd64/cd63.iso or from
>> > > snapshots/amd64/bsd.rd from the current disk:
>> > > The booting starts but the machine gets rebooted almost immediately.
>> > >
>> > > I'm reporting this pretty bad. But is because I'm using 6.3 in that
>> > > cloud server and just wanted to test if the snapshot was booting
>> > > correctly. QEMU is running in godsknowswhat.
>> > > I have never used QEMU myself, could someone try to reproduce ?
>> > > using cd63.iso from OPENBSD_63 gives no problems.
>> > >
>> > > Cheers.
>> > > Elias.
>>
>>
>> This may be related to the recent speculation/lfence fix that went in
>> a week or so ago. It reads an MSR that should be present on that CPU (and
>> if it isn't, we won't read it).
>>
>> I have one of those same Opterons here, I'll update it to -current and see
>> if I can repro this on real hardware. My guess is that kvm is failing the
>> RDMSR because the knowledge of it post-dates the time that kvm was built.
>>
>> -ml
>>
>
> PS, "show registers" at ddb> prompt would confirm that is indeed this fix,
> if you could do that and report the output it would be appreciated.
>


Re: Snapshots in QEMU

2018-07-31 Thread Mike Larkin
On Tue, Jul 31, 2018 at 04:43:22PM -0700, Mike Larkin wrote:
> On Tue, Jul 31, 2018 at 04:59:54PM -0300, Elias M. Mariani wrote:
> > And segmentation fault after applying the patches on OPENBSD_63.
> > Maybe is just a coincidence, according to the data:
> > HOST: ubuntu
> > QEMU: Running OPENBSD_63 + patches
> > (was working OK until patched).
> > 
> > Cheers.
> > Elias.
> > 
> > 2018-07-31 16:36 GMT-03:00 Elias M. Mariani :
> > > Trying to boot in QEMU from snapshots/amd64/cd63.iso or from
> > > snapshots/amd64/bsd.rd from the current disk:
> > > The booting starts but the machine gets rebooted almost immediately.
> > >
> > > I'm reporting this pretty bad. But is because I'm using 6.3 in that
> > > cloud server and just wanted to test if the snapshot was booting
> > > correctly. QEMU is running in godsknowswhat.
> > > I have never used QEMU myself, could someone try to reproduce ?
> > > using cd63.iso from OPENBSD_63 gives no problems.
> > >
> > > Cheers.
> > > Elias.
> 
> 
> This may be related to the recent speculation/lfence fix that went in
> a week or so ago. It reads an MSR that should be present on that CPU (and
> if it isn't, we won't read it).
> 
> I have one of those same Opterons here, I'll update it to -current and see
> if I can repro this on real hardware. My guess is that kvm is failing the
> RDMSR because the knowledge of it post-dates the time that kvm was built.
> 
> -ml
> 

PS, "show registers" at ddb> prompt would confirm that is indeed this fix,
if you could do that and report the output it would be appreciated.



Re: Snapshots in QEMU

2018-07-31 Thread Mike Larkin
On Tue, Jul 31, 2018 at 04:59:54PM -0300, Elias M. Mariani wrote:
> And segmentation fault after applying the patches on OPENBSD_63.
> Maybe is just a coincidence, according to the data:
> HOST: ubuntu
> QEMU: Running OPENBSD_63 + patches
> (was working OK until patched).
> 
> Cheers.
> Elias.
> 
> 2018-07-31 16:36 GMT-03:00 Elias M. Mariani :
> > Trying to boot in QEMU from snapshots/amd64/cd63.iso or from
> > snapshots/amd64/bsd.rd from the current disk:
> > The booting starts but the machine gets rebooted almost immediately.
> >
> > I'm reporting this pretty bad. But is because I'm using 6.3 in that
> > cloud server and just wanted to test if the snapshot was booting
> > correctly. QEMU is running in godsknowswhat.
> > I have never used QEMU myself, could someone try to reproduce ?
> > using cd63.iso from OPENBSD_63 gives no problems.
> >
> > Cheers.
> > Elias.


This may be related to the recent speculation/lfence fix that went in
a week or so ago. It reads an MSR that should be present on that CPU (and
if it isn't, we won't read it).

I have one of those same Opterons here, I'll update it to -current and see
if I can repro this on real hardware. My guess is that kvm is failing the
RDMSR because the knowledge of it post-dates the time that kvm was built.

-ml
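The failure mode described above is host-side: KVM injects a fault for MSRs it does not implement, and the guest's read becomes fatal. One possible host-side workaround (an assumption on my part, not something tested in this thread) is the documented `ignore_msrs` parameter of the Linux kvm module, which makes KVM silently ignore unknown MSR accesses instead:

```shell
# Host-side workaround sketch (Linux host): have KVM ignore guest
# accesses to MSRs it does not implement, instead of faulting the guest.
# ignore_msrs is a documented kvm module parameter; whether it helps
# this exact case is an assumption, not verified in this thread.
echo 1 | sudo tee /sys/module/kvm/parameters/ignore_msrs

# Or persistently, so it survives module reloads and host reboots:
echo 'options kvm ignore_msrs=1' | sudo tee /etc/modprobe.d/kvm-ignore-msrs.conf
```

Note that this masks the symptom rather than making KVM emulate the MSR, which is what the thread argues KVM should do.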



Snapshots in QEMU

2018-07-31 Thread Elias M. Mariani
Trying to boot in QEMU from snapshots/amd64/cd63.iso or from
snapshots/amd64/bsd.rd on the current disk:
booting starts, but the machine gets rebooted almost immediately.

I'm reporting this badly, but that's because I'm using 6.3 on that
cloud server and just wanted to test whether the snapshot boots
correctly. QEMU is running on god knows what.
I have never used QEMU myself; could someone try to reproduce?
Using cd63.iso from OPENBSD_63 gives no problems.

Cheers.
Elias.



Re: openbsd-current recent snapshots fail to boot on virtualbox/qemu

2018-04-15 Thread faisal saadatmand
Hi, just wanted to confirm that the latest snapshot (MP#187) has fixed
the issue for me on both VirtualBox and QEMU.  Thank you!

On Wed, Apr 11, 2018 at 10:56 AM, faisal saadatmand <cdude...@gmail.com> wrote:
> MBR in my case.
>
> Thank you, Janne.   "boot hd0a:/bsd" worked with cd63.iso.  The issue
> is increasing pointing in the direction of the bootloader.  cd63.iso
> uses CDBOOT 3.29 instead of BOOT 3.34 that is included in the recent
> snapshots.
>
> On Wed, Apr 11, 2018 at 10:23 AM, Joel Sing <j...@sing.id.au> wrote:
>> On Wednesday 11 April 2018 13:13:15 Janne Johansson wrote:
>>> 2018-04-11 12:29 GMT+02:00 Stuart Henderson <s...@spacehopper.org>:
>>> > Posting for additional information - relating to the same problem, David
>>> >
>>> > Higgs reports seeing it with the following VirtualBox configuration:
>>> > > VM setup:
>>> > > - "64-bit Other" OS
>>> > > - VT-x/AMD-V, Nested Paging, PAE/NX
>>> > > - Intel PRO/1000 MT Desktop adapter
>>> > >
>>> > >
>>> > > Running VirtualBox 5.2.8 on High Sierra MBP w/ Intel Core i5.
>>>
>>> Having almost exact the same setup at work and the same issue.
>>>
>>> Got a workaround booting cd63.iso then asking that boot> prompt
>>> to "boot hd0a:/bsd". If anyone gets stuck and don't want to downgrade.
>>
>> Is this booting with MBR or EFI?



Re: openbsd-current recent snapshots fail to boot on virtualbox/qemu

2018-04-11 Thread faisal saadatmand
MBR in my case.

Thank you, Janne.   "boot hd0a:/bsd" worked with cd63.iso.  The issue
increasingly points in the direction of the bootloader.  cd63.iso
uses CDBOOT 3.29 instead of BOOT 3.34, which is included in the recent
snapshots.

On Wed, Apr 11, 2018 at 10:23 AM, Joel Sing  wrote:
> On Wednesday 11 April 2018 13:13:15 Janne Johansson wrote:
>> 2018-04-11 12:29 GMT+02:00 Stuart Henderson :
>> > Posting for additional information - relating to the same problem, David
>> >
>> > Higgs reports seeing it with the following VirtualBox configuration:
>> > > VM setup:
>> > > - "64-bit Other" OS
>> > > - VT-x/AMD-V, Nested Paging, PAE/NX
>> > > - Intel PRO/1000 MT Desktop adapter
>> > >
>> > >
>> > > Running VirtualBox 5.2.8 on High Sierra MBP w/ Intel Core i5.
>>
>> Having almost exact the same setup at work and the same issue.
>>
>> Got a workaround booting cd63.iso then asking that boot> prompt
>> to "boot hd0a:/bsd". If anyone gets stuck and don't want to downgrade.
>
> Is this booting with MBR or EFI?



Re: openbsd-current recent snapshots fail to boot on virtualbox/qemu

2018-04-11 Thread Janne Johansson
2018-04-11 16:23 GMT+02:00 Joel Sing :

> > Having almost exact the same setup at work and the same issue.
> >
> > Got a workaround booting cd63.iso then asking that boot> prompt
> > to "boot hd0a:/bsd". If anyone gets stuck and don't want to downgrade.
>
> Is this booting with MBR or EFI?
>

EFI in VBox turned off


-- 
May the most significant bit of your life be positive.


Re: openbsd-current recent snapshots fail to boot on virtualbox/qemu

2018-04-11 Thread Joel Sing
On Wednesday 11 April 2018 13:13:15 Janne Johansson wrote:
> 2018-04-11 12:29 GMT+02:00 Stuart Henderson :
> > Posting for additional information - relating to the same problem, David
> > 
> > Higgs reports seeing it with the following VirtualBox configuration:
> > > VM setup:
> > > - "64-bit Other" OS
> > > - VT-x/AMD-V, Nested Paging, PAE/NX
> > > - Intel PRO/1000 MT Desktop adapter
> > >
> > >
> > > Running VirtualBox 5.2.8 on High Sierra MBP w/ Intel Core i5.
> 
> Having almost exact the same setup at work and the same issue.
> 
> Got a workaround booting cd63.iso then asking that boot> prompt
> to "boot hd0a:/bsd". If anyone gets stuck and don't want to downgrade.

Is this booting with MBR or EFI?



Re: openbsd-current recent snapshots fail to boot on virtualbox/qemu

2018-04-11 Thread Janne Johansson
2018-04-11 12:29 GMT+02:00 Stuart Henderson :

> Posting for additional information - relating to the same problem, David
> Higgs reports seeing it with the following VirtualBox configuration:
>
> > VM setup:
> > - "64-bit Other" OS
> > - VT-x/AMD-V, Nested Paging, PAE/NX
> > - Intel PRO/1000 MT Desktop adapter
> > Running VirtualBox 5.2.8 on High Sierra MBP w/ Intel Core i5.
>


Having almost exactly the same setup at work and the same issue.

Got a workaround: boot cd63.iso, then at its boot> prompt enter
"boot hd0a:/bsd", in case anyone gets stuck and doesn't want to downgrade.

-- 
May the most significant bit of your life be positive.
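For anyone reproducing the workaround above under QEMU rather than VirtualBox, here is a sketch of the invocation (the disk image and ISO file names are examples, not from this thread):

```shell
# Boot the older cd63.iso (whose bootloader still works) with the
# installed disk attached; file names here are examples.
qemu-system-x86_64 -m 1024 \
    -drive file=disk.img,format=raw \
    -cdrom cd63.iso -boot d
# Then, at the CD bootloader's "boot>" prompt, hand off to the kernel
# on the installed disk:
#   boot hd0a:/bsd
```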


Re: Degraded timing performance - QEMU, KVM - OpenBSD 6.2

2018-02-02 Thread Edd Barrett
Hi,

I'm experiencing this issue too.

On Tue, Dec 26, 2017 at 12:27:31PM -0500, Andrew Davis wrote:
> Virtualization software: QEMU + KVM (2.10.0-1.fc27)

FWIW, there are reports that this bug is absent from qemu-2.11.0.

-- 
Best Regards
Edd Barrett

http://www.theunixzoo.co.uk
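Since the bug is reported absent from qemu-2.11.0, checking the host's QEMU version is a reasonable first step. A small sketch of the comparison using sort -V; the version string below is an example, standing in for the output of `qemu-system-x86_64 --version` on a real host:

```shell
# Compare a QEMU version string against the 2.11.0 threshold mentioned
# above. "2.10.1" is an example value, standing in for the version
# reported by `qemu-system-x86_64 --version` on the host.
ver="2.10.1"
if [ "$(printf '%s\n' "$ver" 2.11.0 | sort -V | head -n1)" = "2.11.0" ]; then
    echo "qemu $ver: 2.11.0 or newer, timing fix expected"
else
    echo "qemu $ver: older than 2.11.0, may show the timing bug"
fi
```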



Re: Unable to boot OpenBSD within QEMU on an Intel Platinum 8176M

2018-01-30 Thread Todd T. Fries
Penned by Matthieu Herrb on 20180102 14:55.22, we have:
| On Tue, Jan 02, 2018 at 08:37:04PM +0100, Landry Breuil wrote:
| > On Sat, Dec 30, 2017 at 09:23:03PM -0800, Mike Larkin wrote:
| > > On Tue, Jan 02, 2018 at 11:30:47AM -0500, Brian Rak wrote:
| > > > 
| > > > 
| > > The only thing I can say is that recently I've been noticing an uptick in the
| > > quantity of KVM related issues on OpenBSD. Whether this is due to some recent
| > > changes in KVM, or maybe due to more people running OpenBSD on KVM (and thus
| > > increasing the number of reports), I'm not sure. But kettenis@ did note a few
| > > days ago in a reply to a different KVM related issue that it seems their local
| > > APIC emulation code isn't behaving exactly as we expect. But that code hasn't
| > > changed in OpenBSD since, well, forever, so it's likely a KVM issue there.
| > > Whether this is your issue or not I don't know. You might bring this up on
| > > the KVM mailing lists and see if someone can shed light on it. If you search
| > > the tech@/misc@ archives for proxmox related threads, there was a KVM option
| > > reported a week or so back that seemed to fix the issue kettenis@ was commenting
| > > on; perhaps this can help you.
| > 
| > ftr that option was kvm-intel.preemption_timer=0 on the host kernel
| > commandline.
| 
| FWIW, I'm running OpenBSD-current in qemu / KVM using libvirt and
| virt-manager for some years now. Current -current still works OK for
| me. /var/run/ntpd.drift is 0.47 which doesn't look bad.
| 
| Here are dmesg from the VM and the xml file for libvirt.
| 
| OpenBSD 6.2-current (GENERIC.MP) #28: Fri Dec 29 18:10:50 CET 2017
| matth...@obsd64-current.herrb.net:/usr/obj/GENERIC.MP
| real mem = 2130698240 (2031MB)
| avail mem = 2059264000 (1963MB)
| mpath0 at root
| scsibus0 at mpath0: 256 targets
| mainbus0 at root
| bios0 at mainbus0: SMBIOS rev. 2.4 @ 0xf1430 (13 entries)
| bios0: vendor Bochs version "Bochs" date 01/01/2011
| bios0: Bochs Bochs
| acpi0 at bios0: rev 0
| acpi0: sleep states S3 S4 S5
| acpi0: tables DSDT FACP SSDT APIC HPET
| acpi0: wakeup devices
| acpitimer0 at acpi0: 3579545 Hz, 24 bits
| acpimadt0 at acpi0 addr 0xfee0: PC-AT compat
| cpu0 at mainbus0: apid 0 (boot processor)
| cpu0: Intel Xeon E3-12xx v2 (Ivy Bridge), 772.12 MHz
| cpu0: FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,MMX,FXSR,SSE,SSE2,SS,SSE3,PCLMUL,VMX,SSSE3,CX16,PCID,SSE4.1,SSE4.2,x2APIC,POPCNT,DEADLINE,AES,XSAVE,AVX,F16C,RDRAND,HV,NXE,RDTSCP,LONG,LAHF,PERF,FSGSBASE,SMEP,ERMS
| cpu0: 64KB 64b/line 2-way I-cache, 64KB 64b/line 2-way D-cache, 512KB 64b/line 16-way L2 cache
| cpu0: ITLB 255 4KB entries direct-mapped, 255 4MB entries direct-mapped
| cpu0: DTLB 255 4KB entries direct-mapped, 255 4MB entries direct-mapped
| cpu0: smt 0, core 0, package 0
| mtrr: Pentium Pro MTRR support, 8 var ranges, 88 fixed ranges
| cpu0: apic clock running at 1000MHz
| cpu1 at mainbus0: apid 1 (application processor)
| cpu1: Intel Xeon E3-12xx v2 (Ivy Bridge), 919.08 MHz
| cpu1: FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,MMX,FXSR,SSE,SSE2,SS,SSE3,PCLMUL,VMX,SSSE3,CX16,PCID,SSE4.1,SSE4.2,x2APIC,POPCNT,DEADLINE,AES,XSAVE,AVX,F16C,RDRAND,HV,NXE,RDTSCP,LONG,LAHF,PERF,FSGSBASE,SMEP,ERMS
| cpu1: 64KB 64b/line 2-way I-cache, 64KB 64b/line 2-way D-cache, 512KB 64b/line 16-way L2 cache
| cpu1: ITLB 255 4KB entries direct-mapped, 255 4MB entries direct-mapped
| cpu1: DTLB 255 4KB entries direct-mapped, 255 4MB entries direct-mapped
| cpu1: smt 0, core 0, package 1
| cpu2 at mainbus0: apid 2 (application processor)
| cpu2: Intel Xeon E3-12xx v2 (Ivy Bridge), 1016.77 MHz
| cpu2: FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,MMX,FXSR,SSE,SSE2,SS,SSE3,PCLMUL,VMX,SSSE3,CX16,PCID,SSE4.1,SSE4.2,x2APIC,POPCNT,DEADLINE,AES,XSAVE,AVX,F16C,RDRAND,HV,NXE,RDTSCP,LONG,LAHF,PERF,FSGSBASE,SMEP,ERMS
| cpu2: 64KB 64b/line 2-way I-cache, 64KB 64b/line 2-way D-cache, 512KB 64b/line 16-way L2 cache
| cpu2: ITLB 255 4KB entries direct-mapped, 255 4MB entries direct-mapped
| cpu2: DTLB 255 4KB entries direct-mapped, 255 4MB entries direct-mapped
| cpu2: smt 0, core 0, package 2
| cpu3 at mainbus0: apid 3 (application processor)
| cpu3: Intel Xeon E3-12xx v2 (Ivy Bridge), 968.46 MHz
| cpu3: FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,MMX,FXSR,SSE,SSE2,SS,SSE3,PCLMUL,VMX,SSSE3,CX16,PCID,SSE4.1,SSE4.2,x2APIC,POPCNT,DEADLINE,AES,XSAVE,AVX,F16C,RDRAND,HV,NXE,RDTSCP,LONG,LAHF,PERF,FSGSBASE,SMEP,ERMS
| cpu3: 64KB 64b/line 2-way I-cache, 64KB 64b/line 2-way D-cache, 512KB 64b/line 16-way L2 cache
| cpu3: ITLB 255 4KB entries direct-mapped, 255 4MB entries direct-mapped
| cpu3: DTLB 255 4KB entries direct-mapped, 255 4MB entries direct-mapped
| cpu3: smt 0, core 0, package 3
| ioapic0 at mainbus0: api
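The kvm-intel.preemption_timer=0 option mentioned above goes on the Linux host's kernel command line; a sketch of two ways to apply it (file paths and tool names are distro-dependent examples):

```shell
# Permanently, via the bootloader (GRUB example; paths vary by distro):
#   in /etc/default/grub, append to GRUB_CMDLINE_LINUX:
#     kvm-intel.preemption_timer=0
#   then regenerate the config and reboot the host:
sudo grub2-mkconfig -o /boot/grub2/grub.cfg

# Or, if no VMs are running, by reloading the module with the parameter:
sudo modprobe -r kvm_intel && sudo modprobe kvm_intel preemption_timer=0
```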

Re: Unable to boot OpenBSD within QEMU on an Intel Platinum 8176M

2018-01-12 Thread Mike Larkin
On Sat, Dec 30, 2017 at 09:23:03PM -0800, Mike Larkin wrote:
> On Tue, Jan 02, 2018 at 11:30:47AM -0500, Brian Rak wrote:
> > 
> > 
> > On 12/30/2017 5:13 PM, Mike Larkin wrote:
> > > On Wed, Dec 27, 2017 at 01:10:13PM -0500, Brian Rak wrote:
> > > > I have a server with an Intel Platinum CPU: 
> > > > https://ark.intel.com/products/120505/Intel-Xeon-Platinum-8176M-Processor-38_5M-Cache-2_10-GHz
> > > > 
> > > > It's running Fedora 27 Server, kernel version 4.14.8-300.fc27.x86_64,
> > > > qemu-system-x86-core-2.10.1-2.fc27.x86_64
> > > > 
> > > > I'm starting qemu like this:
> > > > 
> > > > /usr/bin/qemu-system-x86_64 -machine accel=kvm -name
> > > > guest=test,debug-threads=on -S -object 
> > > > secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-6-test/master-key.aes
> > > > -machine pc-i440fx-2.10,accel=kvm,usb=off,dump-guest-core=off -cpu
> > > > Skylake-Client,hypervisor=on -m 32768 -realtime mlock=off -smp
> > > > 16,sockets=2,cores=8,threads=1 -uuid 
> > > > 6427e485-5aee-4fb6-b5e5-a80c1dc0f4af
> > > > -no-user-config -nodefaults -chardev 
> > > > socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-6-test/monitor.sock,server,nowait
> > > > -mon chardev=charmonitor,id=monitor,mode=control -rtc 
> > > > base=utc,driftfix=slew
> > > > -global kvm-pit.lost_tick_policy=delay -no-shutdown -boot strict=on 
> > > > -device
> > > > piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive
> > > > file=/var/tmp/cd62.iso,format=raw,if=none,id=drive-ide0-1-0,readonly=on
> > > > -device 
> > > > ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0,bootindex=1
> > > > -netdev tap,fd=29,id=hostnet0,vhost=on,vhostfd=32 -device 
> > > > virtio-net-pci,netdev=hostnet0,id=net0,mac=56:00:00:27:d6:3f,bus=pci.0,addr=0x3,rombar=0,bootindex=3
> > > > -device usb-tablet,id=input0,bus=usb.0,port=1 -vnc
> > > > 127.0.0.1:4788,websocket=40688 -device
> > > > cirrus-vga,id=video0,bus=pci.0,addr=0x2 -device
> > > > virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5 -object
> > > > rng-random,id=objrng0,filename=/dev/random -device
> > > > virtio-rng-pci,rng=objrng0,id=rng0,bus=pci.0,addr=0x6 -msg timestamp=on
> > > Could you try a simpler config? There's a lot in there that's not needed 
> > > for
> > > OpenBSD, and I'm wondering if there is some option you've chosen that's
> > > causing problems.
> > > 
> > > For what it's worth, there have been a number of reports of OpenBSD guests
> > > recently failing when run on KVM (mainly clock related issues but other
> > > things as well). You are using a very new CPU on a very new host OS, I'm 
> > > not
> > > surprised some things are behaving a bit strangely.
> > > 
> > > -ml
> > The simplest command line I can come up with is:
> > 
> > /usr/bin/qemu-system-x86_64 -machine accel=kvm -name guest=test -machine
> > pc-i440fx-2.10 -cpu Skylake-Client -m 32768 -drive
> > file=/var/tmp/cd62.iso,format=raw,if=none,id=drive-ide0-1-0,readonly=on
> > -device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0,bootindex=1
> > -device cirrus-vga,id=video0,bus=pci.0,addr=0x3 -vnc 127.0.0.1:4788
> > 
> > I still see the same hang.  Even using more ancient virtual CPUs (-cpu
> > pentium3) doesn't seem to help.
> > 
> > Switching to '-machine pc-q35-2.10' doesn't appear to help either.
> > 
> > If I disable KVM, the instance appears to boot normally.  However, I'm not
> > sure if this points to a KVM issue or if it's just masking a timing issue
> > (because it runs so much slower emulated)
> > 
> > In this case, we went with Fedora 27 to rule out a problem that would be
> > fixed by upgrading some part of the host OS.  This isn't an OS we normally
> > use, and we've seen issues with older versions of qemu.
> > 
> 
> The only thing I can say is that recently I've been noticing an uptick in the
> quantity of KVM related issues on OpenBSD. Whether this is due to some recent
> changes in KVM, or maybe due to more people running OpenBSD on KVM (and thus
> increasing the number of reports), I'm not sure. But kettenis@ did note a few
> days ago in a reply to a different KVM related issue that it seems their local
> APIC emulation code isn't behaving exactly as we expect. But that code hasn't
> changed in OpenBSD since, well, forever, so it's likely a KVM issue there.
> Whether this is your issue or not I don't know. You might bring this up on
> the KVM mailing lists and see if someone can shed light on it. If you search
> the tech@/misc@ archives for proxmox related threads, there was a KVM option
> reported a week or so back that seemed to fix the issue kettenis@ was 
> commenting
> on; perhaps this can help you.
> 
> -ml
> 

Following up on old email threads; see the recent message by sf@ regarding
disabling certain KVM features that may help here.

-ml



Re: Degraded timing performance - QEMU, KVM - OpenBSD 6.2

2018-01-10 Thread srutherford
Would this be consistent with the PIT taking longer to respond? The mode of
KVM used here (mentioned on the KVM list) moves the PIT to userspace and
would make it less accurate. If I'm reading OpenBSD's LAPIC calibration code
right, this might be the case. I believe Linux uses one of the PM Timer or
TSC to do the calibration.

(The obvious solution here is to just disable that mode if you are using
OpenBSD, which apparently works.)



--
Sent from: 
http://openbsd-archive.7691.n7.nabble.com/openbsd-dev-bugs-f183916.html



Re: Unable to boot OpenBSD within QEMU on an Intel Platinum 8176M

2018-01-02 Thread Matthieu Herrb
On Tue, Jan 02, 2018 at 08:37:04PM +0100, Landry Breuil wrote:
> On Sat, Dec 30, 2017 at 09:23:03PM -0800, Mike Larkin wrote:
> > On Tue, Jan 02, 2018 at 11:30:47AM -0500, Brian Rak wrote:
> > > 
> > > 
> > The only thing I can say is that recently I've been noticing an uptick in 
> > the
> > quantity of KVM related issues on OpenBSD. Whether this is due to some 
> > recent
> > changes in KVM, or maybe due to more people running OpenBSD on KVM (and thus
> > increasing the number of reports), I'm not sure. But kettenis@ did note a 
> > few
> > days ago in a reply to a different KVM related issue that it seems their 
> > local
> > APIC emulation code isn't behaving exactly as we expect. But that code 
> > hasn't
> > changed in OpenBSD since, well, forever, so it's likely a KVM issue there.
> > Whether this is your issue or not I don't know. You might bring this up on
> > the KVM mailing lists and see if someone can shed light on it. If you search
> > the tech@/misc@ archives for proxmox related threads, there was a KVM option
> > reported a week or so back that seemed to fix the issue kettenis@ was 
> > commenting
> > on; perhaps this can help you.
> 
> ftr that option was kvm-intel.preemption_timer=0 on the host kernel
> commandline.

FWIW, I'm running OpenBSD-current in qemu / KVM using libvirt and
virt-manager for some years now. Current -current still works OK for
me. /var/run/ntpd.drift is 0.47 which doesn't look bad.

Here are dmesg from the VM and the xml file for libvirt.

OpenBSD 6.2-current (GENERIC.MP) #28: Fri Dec 29 18:10:50 CET 2017
matth...@obsd64-current.herrb.net:/usr/obj/GENERIC.MP
real mem = 2130698240 (2031MB)
avail mem = 2059264000 (1963MB)
mpath0 at root
scsibus0 at mpath0: 256 targets
mainbus0 at root
bios0 at mainbus0: SMBIOS rev. 2.4 @ 0xf1430 (13 entries)
bios0: vendor Bochs version "Bochs" date 01/01/2011
bios0: Bochs Bochs
acpi0 at bios0: rev 0
acpi0: sleep states S3 S4 S5
acpi0: tables DSDT FACP SSDT APIC HPET
acpi0: wakeup devices
acpitimer0 at acpi0: 3579545 Hz, 24 bits
acpimadt0 at acpi0 addr 0xfee0: PC-AT compat
cpu0 at mainbus0: apid 0 (boot processor)
cpu0: Intel Xeon E3-12xx v2 (Ivy Bridge), 772.12 MHz
cpu0: 
FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,MMX,FXSR,SSE,SSE2,SS,SSE3,PCLMUL,VMX,SSSE3,CX16,PCID,SSE4.1,SSE4.2,x2APIC,POPCNT,DEADLINE,AES,XSAVE,AVX,F16C,RDRAND,HV,NXE,RDTSCP,LONG,LAHF,PERF,FSGSBASE,SMEP,ERMS
cpu0: 64KB 64b/line 2-way I-cache, 64KB 64b/line 2-way D-cache, 512KB 64b/line 
16-way L2 cache
cpu0: ITLB 255 4KB entries direct-mapped, 255 4MB entries direct-mapped
cpu0: DTLB 255 4KB entries direct-mapped, 255 4MB entries direct-mapped
cpu0: smt 0, core 0, package 0
mtrr: Pentium Pro MTRR support, 8 var ranges, 88 fixed ranges
cpu0: apic clock running at 1000MHz
cpu1 at mainbus0: apid 1 (application processor)
cpu1: Intel Xeon E3-12xx v2 (Ivy Bridge), 919.08 MHz
cpu1: 
FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,MMX,FXSR,SSE,SSE2,SS,SSE3,PCLMUL,VMX,SSSE3,CX16,PCID,SSE4.1,SSE4.2,x2APIC,POPCNT,DEADLINE,AES,XSAVE,AVX,F16C,RDRAND,HV,NXE,RDTSCP,LONG,LAHF,PERF,FSGSBASE,SMEP,ERMS
cpu1: 64KB 64b/line 2-way I-cache, 64KB 64b/line 2-way D-cache, 512KB 64b/line 
16-way L2 cache
cpu1: ITLB 255 4KB entries direct-mapped, 255 4MB entries direct-mapped
cpu1: DTLB 255 4KB entries direct-mapped, 255 4MB entries direct-mapped
cpu1: smt 0, core 0, package 1
cpu2 at mainbus0: apid 2 (application processor)
cpu2: Intel Xeon E3-12xx v2 (Ivy Bridge), 1016.77 MHz
cpu2: 
FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,MMX,FXSR,SSE,SSE2,SS,SSE3,PCLMUL,VMX,SSSE3,CX16,PCID,SSE4.1,SSE4.2,x2APIC,POPCNT,DEADLINE,AES,XSAVE,AVX,F16C,RDRAND,HV,NXE,RDTSCP,LONG,LAHF,PERF,FSGSBASE,SMEP,ERMS
cpu2: 64KB 64b/line 2-way I-cache, 64KB 64b/line 2-way D-cache, 512KB 64b/line 
16-way L2 cache
cpu2: ITLB 255 4KB entries direct-mapped, 255 4MB entries direct-mapped
cpu2: DTLB 255 4KB entries direct-mapped, 255 4MB entries direct-mapped
cpu2: smt 0, core 0, package 2
cpu3 at mainbus0: apid 3 (application processor)
cpu3: Intel Xeon E3-12xx v2 (Ivy Bridge), 968.46 MHz
cpu3: 
FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,MMX,FXSR,SSE,SSE2,SS,SSE3,PCLMUL,VMX,SSSE3,CX16,PCID,SSE4.1,SSE4.2,x2APIC,POPCNT,DEADLINE,AES,XSAVE,AVX,F16C,RDRAND,HV,NXE,RDTSCP,LONG,LAHF,PERF,FSGSBASE,SMEP,ERMS
cpu3: 64KB 64b/line 2-way I-cache, 64KB 64b/line 2-way D-cache, 512KB 64b/line 
16-way L2 cache
cpu3: ITLB 255 4KB entries direct-mapped, 255 4MB entries direct-mapped
cpu3: DTLB 255 4KB entries direct-mapped, 255 4MB entries direct-mapped
cpu3: smt 0, core 0, package 3
ioapic0 at mainbus0: apid 0 pa 0xfec0, version 11, 24 pins
acpihpet0 at acpi0: 1 Hz
acpiprt0 at acpi0: bus 0 (PCI0)
acpicpu0 at acpi0: C1(@1 halt!)
acpicpu1 at

Re: Unable to boot OpenBSD within QEMU on an Intel Platinum 8176M

2018-01-02 Thread Mike Larkin
On Tue, Jan 02, 2018 at 08:37:04PM +0100, Landry Breuil wrote:
> On Sat, Dec 30, 2017 at 09:23:03PM -0800, Mike Larkin wrote:
> > On Tue, Jan 02, 2018 at 11:30:47AM -0500, Brian Rak wrote:
> > > 
> > > 
> > The only thing I can say is that recently I've been noticing an uptick in 
> > the
> > quantity of KVM related issues on OpenBSD. Whether this is due to some 
> > recent
> > changes in KVM, or maybe due to more people running OpenBSD on KVM (and thus
> > increasing the number of reports), I'm not sure. But kettenis@ did note a 
> > few
> > days ago in a reply to a different KVM related issue that it seems their 
> > local
> > APIC emulation code isn't behaving exactly as we expect. But that code 
> > hasn't
> > changed in OpenBSD since, well, forever, so it's likely a KVM issue there.
> > Whether this is your issue or not I don't know. You might bring this up on
> > the KVM mailing lists and see if someone can shed light on it. If you search
> > the tech@/misc@ archives for proxmox related threads, there was a KVM option
> > reported a week or so back that seemed to fix the issue kettenis@ was 
> > commenting
> > on; perhaps this can help you.
> 
> ftr that option was kvm-intel.preemption_timer=0 on the host kernel
> commandline.
> 

Thanks Landry, I knew someone would chime in :)



Re: Unable to boot OpenBSD within QEMU on an Intel Platinum 8176M

2018-01-02 Thread Landry Breuil
On Sat, Dec 30, 2017 at 09:23:03PM -0800, Mike Larkin wrote:
> On Tue, Jan 02, 2018 at 11:30:47AM -0500, Brian Rak wrote:
> > 
> > 
> The only thing I can say is that recently I've been noticing an uptick in the
> quantity of KVM related issues on OpenBSD. Whether this is due to some recent
> changes in KVM, or maybe due to more people running OpenBSD on KVM (and thus
> increasing the number of reports), I'm not sure. But kettenis@ did note a few
> days ago in a reply to a different KVM related issue that it seems their local
> APIC emulation code isn't behaving exactly as we expect. But that code hasn't
> changed in OpenBSD since, well, forever, so it's likely a KVM issue there.
> Whether this is your issue or not I don't know. You might bring this up on
> the KVM mailing lists and see if someone can shed light on it. If you search
> the tech@/misc@ archives for proxmox related threads, there was a KVM option
> reported a week or so back that seemed to fix the issue kettenis@ was 
> commenting
> on; perhaps this can help you.

ftr that option was kvm-intel.preemption_timer=0 on the host kernel
commandline.
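[Editor's note: to make that host-side workaround persistent, a common approach on a Linux host is a modprobe option file; the parameter name is the one given above, but the paths and reload steps below are the usual defaults and should be treated as a sketch, not distribution-specific advice.]

```shell
# Make kvm-intel.preemption_timer=0 persistent via modprobe config.
# Note the module is named kvm_intel (underscore) in /sys and modprobe;
# the dash form is only used on the kernel command line.
cat > /etc/modprobe.d/kvm-intel.conf <<'EOF'
options kvm_intel preemption_timer=0
EOF

# Reload the module (all KVM guests must be shut down first):
modprobe -r kvm_intel && modprobe kvm_intel

# Verify the running value:
cat /sys/module/kvm_intel/parameters/preemption_timer
```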



Re: Unable to boot OpenBSD within QEMU on an Intel Platinum 8176M

2018-01-02 Thread Mike Larkin
On Tue, Jan 02, 2018 at 11:30:47AM -0500, Brian Rak wrote:
> 
> 
> On 12/30/2017 5:13 PM, Mike Larkin wrote:
> > On Wed, Dec 27, 2017 at 01:10:13PM -0500, Brian Rak wrote:
> > > I have a server with an Intel Platinum CPU: 
> > > https://ark.intel.com/products/120505/Intel-Xeon-Platinum-8176M-Processor-38_5M-Cache-2_10-GHz
> > > 
> > > It's running Fedora 27 Server, kernel version 4.14.8-300.fc27.x86_64,
> > > qemu-system-x86-core-2.10.1-2.fc27.x86_64
> > > 
> > > I'm starting qemu like this:
> > > 
> > > /usr/bin/qemu-system-x86_64 -machine accel=kvm -name
> > > guest=test,debug-threads=on -S -object 
> > > secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-6-test/master-key.aes
> > > -machine pc-i440fx-2.10,accel=kvm,usb=off,dump-guest-core=off -cpu
> > > Skylake-Client,hypervisor=on -m 32768 -realtime mlock=off -smp
> > > 16,sockets=2,cores=8,threads=1 -uuid 6427e485-5aee-4fb6-b5e5-a80c1dc0f4af
> > > -no-user-config -nodefaults -chardev 
> > > socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-6-test/monitor.sock,server,nowait
> > > -mon chardev=charmonitor,id=monitor,mode=control -rtc 
> > > base=utc,driftfix=slew
> > > -global kvm-pit.lost_tick_policy=delay -no-shutdown -boot strict=on 
> > > -device
> > > piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive
> > > file=/var/tmp/cd62.iso,format=raw,if=none,id=drive-ide0-1-0,readonly=on
> > > -device 
> > > ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0,bootindex=1
> > > -netdev tap,fd=29,id=hostnet0,vhost=on,vhostfd=32 -device 
> > > virtio-net-pci,netdev=hostnet0,id=net0,mac=56:00:00:27:d6:3f,bus=pci.0,addr=0x3,rombar=0,bootindex=3
> > > -device usb-tablet,id=input0,bus=usb.0,port=1 -vnc
> > > 127.0.0.1:4788,websocket=40688 -device
> > > cirrus-vga,id=video0,bus=pci.0,addr=0x2 -device
> > > virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5 -object
> > > rng-random,id=objrng0,filename=/dev/random -device
> > > virtio-rng-pci,rng=objrng0,id=rng0,bus=pci.0,addr=0x6 -msg timestamp=on
> > Could you try a simpler config? There's a lot in there that's not needed for
> > OpenBSD, and I'm wondering if there is some option you've chosen that's
> > causing problems.
> > 
> > For what it's worth, there have been a number of reports of OpenBSD guests
> > recently failing when run on KVM (mainly clock related issues but other
> > things as well). You are using a very new CPU on a very new host OS, I'm not
> > surprised some things are behaving a bit strangely.
> > 
> > -ml
> The simplest command line I can come up with is:
> 
> /usr/bin/qemu-system-x86_64 -machine accel=kvm -name guest=test -machine
> pc-i440fx-2.10 -cpu Skylake-Client -m 32768 -drive
> file=/var/tmp/cd62.iso,format=raw,if=none,id=drive-ide0-1-0,readonly=on
> -device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0,bootindex=1
> -device cirrus-vga,id=video0,bus=pci.0,addr=0x3 -vnc 127.0.0.1:4788
> 
> I still see the same hang.  Even using more ancient virtual CPUs (-cpu
> pentium3) doesn't seem to help.
> 
> Switching to '-machine pc-q35-2.10' doesn't appear to help either.
> 
> If I disable KVM, the instance appears to boot normally.  However, I'm not
> sure if this points to a KVM issue or if it's just masking a timing issue
> (because it runs so much slower emulated)
> 
> In this case, we went with Fedora 27 to rule out a problem that would be
> fixed by upgrading some part of the host OS.  This isn't an OS we normally
> use, and we've seen issues with older versions of qemu.
> 

The only thing I can say is that recently I've been noticing an uptick in the
quantity of KVM related issues on OpenBSD. Whether this is due to some recent
changes in KVM, or maybe due to more people running OpenBSD on KVM (and thus
increasing the number of reports), I'm not sure. But kettenis@ did note a few
days ago in a reply to a different KVM related issue that it seems their local
APIC emulation code isn't behaving exactly as we expect. But that code hasn't
changed in OpenBSD since, well, forever, so it's likely a KVM issue there.
Whether this is your issue or not I don't know. You might bring this up on
the KVM mailing lists and see if someone can shed light on it. If you search
the tech@/misc@ archives for proxmox related threads, there was a KVM option
reported a week or so back that seemed to fix the issue kettenis@ was commenting
on; perhaps this can help you.

-ml



Re: Unable to boot OpenBSD within QEMU on an Intel Platinum 8176M

2018-01-02 Thread Brian Rak



On 12/30/2017 5:13 PM, Mike Larkin wrote:

On Wed, Dec 27, 2017 at 01:10:13PM -0500, Brian Rak wrote:

I have a server with an Intel Platinum CPU: 
https://ark.intel.com/products/120505/Intel-Xeon-Platinum-8176M-Processor-38_5M-Cache-2_10-GHz

It's running Fedora 27 Server, kernel version 4.14.8-300.fc27.x86_64,
qemu-system-x86-core-2.10.1-2.fc27.x86_64

I'm starting qemu like this:

/usr/bin/qemu-system-x86_64 -machine accel=kvm -name
guest=test,debug-threads=on -S -object 
secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-6-test/master-key.aes
-machine pc-i440fx-2.10,accel=kvm,usb=off,dump-guest-core=off -cpu
Skylake-Client,hypervisor=on -m 32768 -realtime mlock=off -smp
16,sockets=2,cores=8,threads=1 -uuid 6427e485-5aee-4fb6-b5e5-a80c1dc0f4af
-no-user-config -nodefaults -chardev 
socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-6-test/monitor.sock,server,nowait
-mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc,driftfix=slew
-global kvm-pit.lost_tick_policy=delay -no-shutdown -boot strict=on -device
piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive
file=/var/tmp/cd62.iso,format=raw,if=none,id=drive-ide0-1-0,readonly=on
-device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0,bootindex=1
-netdev tap,fd=29,id=hostnet0,vhost=on,vhostfd=32 -device 
virtio-net-pci,netdev=hostnet0,id=net0,mac=56:00:00:27:d6:3f,bus=pci.0,addr=0x3,rombar=0,bootindex=3
-device usb-tablet,id=input0,bus=usb.0,port=1 -vnc
127.0.0.1:4788,websocket=40688 -device
cirrus-vga,id=video0,bus=pci.0,addr=0x2 -device
virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5 -object
rng-random,id=objrng0,filename=/dev/random -device
virtio-rng-pci,rng=objrng0,id=rng0,bus=pci.0,addr=0x6 -msg timestamp=on

Could you try a simpler config? There's a lot in there that's not needed for
OpenBSD, and I'm wondering if there is some option you've chosen that's
causing problems.

For what it's worth, there have been a number of reports of OpenBSD guests
recently failing when run on KVM (mainly clock related issues but other
things as well). You are using a very new CPU on a very new host OS, I'm not
surprised some things are behaving a bit strangely.

-ml

The simplest command line I can come up with is:

/usr/bin/qemu-system-x86_64 -machine accel=kvm -name guest=test -machine 
pc-i440fx-2.10 -cpu Skylake-Client -m 32768 -drive 
file=/var/tmp/cd62.iso,format=raw,if=none,id=drive-ide0-1-0,readonly=on 
-device 
ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0,bootindex=1 
-device cirrus-vga,id=video0,bus=pci.0,addr=0x3 -vnc 127.0.0.1:4788


I still see the same hang.  Even using more ancient virtual CPUs (-cpu 
pentium3) doesn't seem to help.


Switching to '-machine pc-q35-2.10' doesn't appear to help either.

If I disable KVM, the instance appears to boot normally.  However, I'm 
not sure if this points to a KVM issue or if it's just masking a timing 
issue (because it runs so much slower emulated)


In this case, we went with Fedora 27 to rule out a problem that would be 
fixed by upgrading some part of the host OS.  This isn't an OS we 
normally use, and we've seen issues with older versions of qemu.






I do not have a functional OpenBSD install here, this happens even when I
try to boot off the ISO.

OpenBSD hangs after printing:

pciide0 at pci0 dev 1 function 1 "Intel 82371SB IDE" rev 0x00: DMA, channel
0 wired to compatibility, channel 1 wired to compatibility

There is a flashing "_" cursor, but I'm unable to interact with it in any
way.

I tried setting this to use an older CPU type, which changed the -cpu flag
to be "-cpu Nehalem,vme=off,x2apic=off,hypervisor=off", but this did not
seem to have any effect.

If I boot the VM with 'boot -c', then 'disable pciide*', I can actually get
to the 'Welcome to the OpenBSD/amd64 6.2 installation program' prompt, but
then the machine hangs whenever I type 'A'. If I choose another option
('I'), I can get partially through the install before it hangs.  The hangs
at that point seem to be random.

I've attached a screenshot of where it's hanging during the initial boot.

So far we've been able to reproduce this on all of our Intel Scalable
processors, which includes a few other Gold CPUs.  This does work ok on our
older E5 CPUs





Re: Unable to boot OpenBSD within QEMU on an Intel Platinum 8176M

2018-01-02 Thread Mike Larkin
On Wed, Dec 27, 2017 at 01:10:13PM -0500, Brian Rak wrote:
> I have a server with an Intel Platinum CPU: 
> https://ark.intel.com/products/120505/Intel-Xeon-Platinum-8176M-Processor-38_5M-Cache-2_10-GHz
> 
> It's running Fedora 27 Server, kernel version 4.14.8-300.fc27.x86_64,
> qemu-system-x86-core-2.10.1-2.fc27.x86_64
> 
> I'm starting qemu like this:
> 
> /usr/bin/qemu-system-x86_64 -machine accel=kvm -name
> guest=test,debug-threads=on -S -object 
> secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-6-test/master-key.aes
> -machine pc-i440fx-2.10,accel=kvm,usb=off,dump-guest-core=off -cpu
> Skylake-Client,hypervisor=on -m 32768 -realtime mlock=off -smp
> 16,sockets=2,cores=8,threads=1 -uuid 6427e485-5aee-4fb6-b5e5-a80c1dc0f4af
> -no-user-config -nodefaults -chardev 
> socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-6-test/monitor.sock,server,nowait
> -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc,driftfix=slew
> -global kvm-pit.lost_tick_policy=delay -no-shutdown -boot strict=on -device
> piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive
> file=/var/tmp/cd62.iso,format=raw,if=none,id=drive-ide0-1-0,readonly=on
> -device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0,bootindex=1
> -netdev tap,fd=29,id=hostnet0,vhost=on,vhostfd=32 -device 
> virtio-net-pci,netdev=hostnet0,id=net0,mac=56:00:00:27:d6:3f,bus=pci.0,addr=0x3,rombar=0,bootindex=3
> -device usb-tablet,id=input0,bus=usb.0,port=1 -vnc
> 127.0.0.1:4788,websocket=40688 -device
> cirrus-vga,id=video0,bus=pci.0,addr=0x2 -device
> virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5 -object
> rng-random,id=objrng0,filename=/dev/random -device
> virtio-rng-pci,rng=objrng0,id=rng0,bus=pci.0,addr=0x6 -msg timestamp=on

Could you try a simpler config? There's a lot in there that's not needed for
OpenBSD, and I'm wondering if there is some option you've chosen that's
causing problems.

For what it's worth, there have been a number of reports of OpenBSD guests
recently failing when run on KVM (mainly clock related issues but other
things as well). You are using a very new CPU on a very new host OS, I'm not
surprised some things are behaving a bit strangely.

-ml


> 
> I do not have a functional OpenBSD install here, this happens even when I
> try to boot off the ISO.
> 
> OpenBSD hangs after printing:
> 
> pciide0 at pci0 dev 1 function 1 "Intel 82371SB IDE" rev 0x00: DMA, channel
> 0 wired to compatibility, channel 1 wired to compatibility
> 
> There is a flashing "_" cursor, but I'm unable to interact with it in any
> way.
> 
> I tried setting this to use an older CPU type, which changed the -cpu flag
> to be "-cpu Nehalem,vme=off,x2apic=off,hypervisor=off", but this did not
> seem to have any effect.
> 
> If I boot the VM with 'boot -c', then 'disable pciide*', I can actually get
> to the 'Welcome to the OpenBSD/amd64 6.2 installation program' prompt, but
> then the machine hangs whenever I type 'A'. If I choose another option
> ('I'), I can get partially through the install before it hangs.  The hangs
> at that point seem to be random.
> 
> I've attached a screenshot of where it's hanging during the initial boot.
> 
> So far we've been able to reproduce this on all of our Intel Scalable
> processors, which includes a few other Gold CPUs.  This does work ok on our
> older E5 CPUs
> 



Re: Degraded timing performance - QEMU, KVM - OpenBSD 6.2

2017-12-27 Thread Mark Kettenis
> From: Andrew Davis <ada...@gameservers.com>
> Date: Wed, 27 Dec 2017 11:39:54 -0500
> 
> Hello again,
> 
> I tested with each of the "acpihpet0", "acpitimer0", and "i8254" timers. 
> The timing problem manifested when using all 3 timers. I ran the date 
> loop with "acpihpet0" and "acpitimer0" until the issue manifested, and 
> let "i8254" run overnight.
> 
> Here are some snippets from the date logs from where I started logging 
> the date loop, and where the timing issue became present.
> 
> acpitimer0:
> 
>      Tue Dec 26 23:57:57 UTC 2017
>      Tue Dec 26 23:57:58 UTC 2017
>      ...
>      Wed Dec 27 00:10:10 UTC 2017
>      Wed Dec 27 00:10:12 UTC 2017
>      Wed Dec 27 00:10:14 UTC 2017
> 
> i8254:
> 
>      Wed Dec 27 00:14:23 UTC 2017
>      Wed Dec 27 00:14:24 UTC 2017
>      ...
>      Wed Dec 27 00:59:30 UTC 2017
>      Wed Dec 27 00:59:31 UTC 2017
>      Wed Dec 27 00:59:33 UTC 2017
> 
> acpihpet0:
> 
>      Wed Dec 27 16:20:54 UTC 2017
>      Wed Dec 27 16:20:55 UTC 2017
>      ...
>      Wed Dec 27 16:32:44 UTC 2017
>      Wed Dec 27 16:32:45 UTC 2017
>      Wed Dec 27 16:32:47 UTC 2017
>      Wed Dec 27 16:32:49 UTC 2017
> 
> The i8254 timer hit a point where the system stopped reporting the 
> proper time altogether. I ran these commands this morning after my 
> OpenBSD VM ran with i8254 overnight, and this is what the "date" command 
> displayed. The proper time is shown below.
> 
>      # sysctl | grep -i timecounter
>      kern.timecounter.tick=1
>      kern.timecounter.timestepwarnings=0
>      kern.timecounter.hardware=i8254
>      kern.timecounter.choice=i8254(0) acpihpet0(1000) acpitimer0(1000) 
> dummy(-100)
> 
>      # date
>      Wed Dec 27 01:35:51 UTC 2017
> 
>      [root@local-linux ~]# date
>      Wed Dec 27 16:11:05 UTC 2017

Your test results are consistent with the local APIC emulation being
broken in Linux/KVM.  Regardless of what hardware is used for the
timecounter, the clock interrupts use the local APIC timer in OpenBSD.

OpenBSD programs the local APIC to interrupt every 10ms in so-called
repeated mode.  The clock interrupt is then responsible for reading
the timecounter to update the current wall clock time and for running
things like timeouts that wake up tasks that are sleeping.  If we get
no clock interrupts those wakeups don't happen, and your sleeps take
longer than what you intended.  But as long as the timecounter doesn't
wrap, the wall clock time will be correctly updated once another clock
interrupt comes in.  And that's what happens with the i8254
timecounter.  It wraps fairly quickly, so if the clock interrupts
don't come in for a while, OpenBSD's idea of wall clock time starts to
get out of sync with reality.

So why do other systems not suffer from this problem?  I'm fairly
certain they also use the local APIC for clock interrupts.  But the
systems you tested (Linux, FreeBSD) probably don't run it in repeated
mode.  Some people consider running the local APIC in repeated mode a
bad idea.  And they might even be right.  Waking a system up at
regular intervals even if there is no real work to do is a bit silly
and wastes power.  Although one could argue that 10ms between wakeups
is long enough for this not to matter much on modern systems.

Maybe we'll change the way we do clock interrupts at some point in the
future.  It would probably help vmm(4).  But this is not a trivial
task and won't happen overnight.  Working around bugs in someone
else's software certainly isn't enough motivation for me to implement
it.  

Cheers,

Mark
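[Editor's note: the practical guest-side consequence of the explanation above is that a wide, slow-wrapping timecounter tolerates missed ticks, while i8254 does not. Selecting one is a one-liner on OpenBSD; the device names below are taken from the sysctl output quoted in this thread and will vary by machine.]

```shell
# Show the available timecounters and their quality ratings:
sysctl kern.timecounter.choice

# Pin a slow-wrapping source instead of i8254 (name from this thread's
# sysctl output; check your own kern.timecounter.choice first):
sysctl kern.timecounter.hardware=acpitimer0

# Persist the selection across reboots:
echo 'kern.timecounter.hardware=acpitimer0' >> /etc/sysctl.conf
```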


> On 12/26/2017 5:44 PM, Mike Larkin wrote:
> > On Tue, Dec 26, 2017 at 03:24:03PM -0500, Andrew Davis wrote:
> >> Hello,
> >>
> >> No, I didn't change the kern.timecounter selection directly. I had tried
> >> disabling the HPET on qemu/kvm (which may have affected this selection?).
> >>
> >> Two of my boxes, both OpenBSD 6.1 report this:
> >>
> >> # sysctl kern.timecounter
> >> kern.timecounter.tick=1
> >> kern.timecounter.timestepwarnings=0
> >> kern.timecounter.hardware=acpihpet0
> >> kern.timecounter.choice=i8254(0) acpihpet0(1000) acpitimer0(1000)
> >> dummy(-100)
> >>
> >> Best,
> >> Andrew
> >>
> > Could you try one of the others and let us know if it helps, please?
> >
> > -ml
> >
> >> On 12/26/2017 2:36 PM, Mike Larkin wrote:
> >>> On Tue, Dec 26, 2017 at 12:27:31PM -0500, Andrew Davis wrote:
> >>>> Hello,
> >>>>
> >>>> I'm experi

Re: Degraded timing performance - QEMU, KVM - OpenBSD 6.2

2017-12-26 Thread Mike Larkin
On Tue, Dec 26, 2017 at 03:24:03PM -0500, Andrew Davis wrote:
> Hello,
> 
> No, I didn't change the kern.timecounter selection directly. I had tried
> disabling the HPET on qemu/kvm (which may have affected this selection?).
> 
> Two of my boxes, both OpenBSD 6.1 report this:
> 
> # sysctl kern.timecounter
> kern.timecounter.tick=1
> kern.timecounter.timestepwarnings=0
> kern.timecounter.hardware=acpihpet0
> kern.timecounter.choice=i8254(0) acpihpet0(1000) acpitimer0(1000)
> dummy(-100)
> 
> Best,
> Andrew
> 

Could you try one of the others and let us know if it helps, please?

-ml

> On 12/26/2017 2:36 PM, Mike Larkin wrote:
> > On Tue, Dec 26, 2017 at 12:27:31PM -0500, Andrew Davis wrote:
> > > Hello,
> > > 
> > > I'm experiencing some odd timing issues on OpenBSD 6.2 (and 6.1) on the
> > > system listed below. This is preventing me from running OpenBSD on my
> > > servers. Can you determine if this is a bug in the OpenBSD operating 
> > > system?
> > > I can provide more information if needed.
> > > 
> > > Virtualized environment.
> > > 
> > > Host CPU: 2 x Intel E5-2630 v3 2.4 Ghz
> > > Host OS: Fedora 27
> > > Virtualization software: QEMU + KVM (2.10.0-1.fc27)
> > > Guest Machine: default (pc-i440fx-2.10)
> > > Guest OS: OpenBSD 6.2 (and 6.1).
> > > 
> > > Basically, OpenBSD processes degrade over time to the point where they're
> > > completely unresponsive. This simple date printout script is a good 
> > > example.
> > > It should print out the date once per second, but after roughly ~20 mins 
> > > on
> > > this hardware configuration, it takes 2 seconds to print each line, then 4
> > > seconds to print each line, and so on. After running for about 24 hours, 
> > > the
> > > delay is about 1 minute between line printouts.
> > > 
> > >      while sleep 1; do date; done
> > > 
> > > I've tried tweaking some different settings on the guest and host, such as
> > > disabling the HPET timer and x2apic, neither of which has proven 
> > > effective.
> > > 
> > > I saw mention of adding "kvm-intel.preemption_timer=0" in another recent
> > > thread. This seems to resolve the slowdown issue.
> > > 
> > > However, I have run other guest operating systems on this hardware
> > > configuration (CentOS, Ubuntu, FreeBSD) - none of which required any of
> > > these tweaks, or experienced timing issues. This leads me to believe that 
> > > it
> > > could be related to a bug in OpenBSD.
> > > 
> > > I have access to several machines with this hardware configuration and
> > > tested on multiple machines, to rule out a possible one-off hardware 
> > > issue.
> > > Each host displayed the same behavior.
> > > 
> > > Best regards,
> > > Andrew
> > > 
> > What timecounter source did the OpenBSD guests pick? Did you try selecting
> > one of the other choices to see if this helps?
> > 
> > sysctl kern.timecounter if you're not sure what I'm talking about.
> > 
> > -ml
> 



Re: Degraded timing performance - QEMU, KVM - OpenBSD 6.2

2017-12-26 Thread Andrew Davis

Hello,

No, I didn't change the kern.timecounter selection directly. I had 
tried disabling the HPET on qemu/kvm (which may have affected this 
selection?).


Two of my boxes, both OpenBSD 6.1 report this:

# sysctl kern.timecounter
kern.timecounter.tick=1
kern.timecounter.timestepwarnings=0
kern.timecounter.hardware=acpihpet0
kern.timecounter.choice=i8254(0) acpihpet0(1000) acpitimer0(1000) 
dummy(-100)


Best,
Andrew

On 12/26/2017 2:36 PM, Mike Larkin wrote:

On Tue, Dec 26, 2017 at 12:27:31PM -0500, Andrew Davis wrote:

Hello,

I'm experiencing some odd timing issues on OpenBSD 6.2 (and 6.1) on the
system listed below. This is preventing me from running OpenBSD on my
servers. Can you determine if this is a bug in the OpenBSD operating system?
I can provide more information if needed.

Virtualized environment.

Host CPU: 2 x Intel E5-2630 v3 2.4 Ghz
Host OS: Fedora 27
Virtualization software: QEMU + KVM (2.10.0-1.fc27)
Guest Machine: default (pc-i440fx-2.10)
Guest OS: OpenBSD 6.2 (and 6.1).

Basically, OpenBSD processes degrade over time to the point where they're
completely unresponsive. This simple date printout script is a good example.
It should print out the date once per second, but after roughly ~20 mins on
this hardware configuration, it takes 2 seconds to print each line, then 4
seconds to print each line, and so on. After running for about 24 hours, the
delay is about 1 minute between line printouts.

     while sleep 1; do date; done

I've tried tweaking some different settings on the guest and host, such as
disabling the HPET timer and x2apic, neither of which has proven effective.

I saw mention of adding "kvm-intel.preemption_timer=0" in another recent
thread. This seems to resolve the slowdown issue.

However, I have run other guest operating systems on this hardware
configuration (CentOS, Ubuntu, FreeBSD) - none of which required any of
these tweaks, or experienced timing issues. This leads me to believe that it
could be related to a bug in OpenBSD.

I have access to several machines with this hardware configuration and
tested on multiple machines, to rule out a possible one-off hardware issue.
Each host displayed the same behavior.

Best regards,
Andrew


What timecounter source did the OpenBSD guests pick? Did you try selecting
one of the other choices to see if this helps?

sysctl kern.timecounter if you're not sure what I'm talking about.

-ml
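Concretely, the suggestion above amounts to something like the following (a
minimal sketch in my words, not mlarkin's; the kern.timecounter sysctls exist
only on OpenBSD, so on other systems this falls through to the explanatory
echo):

```shell
# Inspect the active timecounter and the available choices on an OpenBSD
# guest; elsewhere, print a note instead of failing.
tc="$(sysctl kern.timecounter 2>/dev/null || echo 'kern.timecounter: OpenBSD-only sysctl')"
echo "$tc"
# To pin one of the other sources listed in kern.timecounter.choice, e.g.:
#   sysctl kern.timecounter.hardware=acpitimer0
```

The device name acpitimer0 is just an example taken from the choice list
earlier in this thread.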




Re: Degraded timing performance - QEMU, KVM - OpenBSD 6.2

2017-12-26 Thread Mike Larkin
On Tue, Dec 26, 2017 at 12:27:31PM -0500, Andrew Davis wrote:
> Hello,
> 
> I'm experiencing some odd timing issues on OpenBSD 6.2 (and 6.1) on the
> system listed below. This is preventing me from running OpenBSD on my
> servers. Can you determine if this is a bug in the OpenBSD operating system?
> I can provide more information if needed.
> 
> Virtualized environment.
> 
> Host CPU: 2 x Intel E5-2630 v3 2.4 Ghz
> Host OS: Fedora 27
> Virtualization software: QEMU + KVM (2.10.0-1.fc27)
> Guest Machine: default (pc-i440fx-2.10)
> Guest OS: OpenBSD 6.2 (and 6.1).
> 
> Basically, OpenBSD processes degrade over time to the point where they're
> completely unresponsive. This simple date printout script is a good example.
> It should print out the date once per second, but after roughly ~20 mins on
> this hardware configuration, it takes 2 seconds to print each line, then 4
> seconds to print each line, and so on. After running for about 24 hours, the
> delay is about 1 minute between line printouts.
> 
>     while sleep 1; do date; done
> 
> I've tried tweaking some different settings on the guest and host, such as
> disabling the HPET timer and x2apic, neither of which has proven effective.
> 
> I saw mention of adding "kvm-intel.preemption_timer=0" in another recent
> thread. This seems to resolve the slowdown issue.
> 
> However, I have run other guest operating systems on this hardware
> configuration (CentOS, Ubuntu, FreeBSD), none of which required any of
> these tweaks or experienced timing issues. This leads me to believe that it
> could be related to a bug in OpenBSD.
> 
> I have access to several machines with this hardware configuration and
> tested on multiple machines, to rule out a possible one-off hardware issue.
> Each host displayed the same behavior.
> 
> Best regards,
> Andrew
> 

What timecounter source did the OpenBSD guests pick? Did you try selecting
one of the other choices to see if this helps?

sysctl kern.timecounter if you're not sure what I'm talking about.

-ml



Degraded timing performance - QEMU, KVM - OpenBSD 6.2

2017-12-26 Thread Andrew Davis

Hello,

I'm experiencing some odd timing issues on OpenBSD 6.2 (and 6.1) on the 
system listed below. This is preventing me from running OpenBSD on my 
servers. Can you determine if this is a bug in the OpenBSD operating 
system? I can provide more information if needed.


Virtualized environment.

Host CPU: 2 x Intel E5-2630 v3 2.4 Ghz
Host OS: Fedora 27
Virtualization software: QEMU + KVM (2.10.0-1.fc27)
Guest Machine: default (pc-i440fx-2.10)
Guest OS: OpenBSD 6.2 (and 6.1).

Basically, OpenBSD processes degrade over time to the point where 
they're completely unresponsive. This simple date printout script is a 
good example. It should print out the date once per second, but after 
roughly ~20 mins on this hardware configuration, it takes 2 seconds to 
print each line, then 4 seconds to print each line, and so on. After 
running for about 24 hours, the delay is about 1 minute between line 
printouts.


    while sleep 1; do date; done
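A slightly more quantitative variant of the loop above (my own sketch, not
part of the original report) times a few iterations, so the drift shows up as
a single number instead of having to eyeball the timestamps:

```shell
#!/bin/sh
# Rough drift check: time a few iterations of the sleep/date loop. On a
# healthy guest the total stays close to the iteration count in seconds;
# on an affected guest it grows steadily over time.
start=$(date +%s)
i=0
while [ "$i" -lt 3 ]; do
    sleep 1
    date
    i=$((i + 1))
done
end=$(date +%s)
echo "3 iterations took $((end - start))s"
```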

I've tried tweaking some different settings on the guest and host, such 
as disabling the HPET timer and x2apic, neither of which has proven 
effective.


I saw mention of adding "kvm-intel.preemption_timer=0" in another recent 
thread. This seems to resolve the slowdown issue.
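For reference, the host-side workaround mentioned above can be made persistent
as a modprobe option. This is a sketch based on the thread's kernel
command-line parameter; the /etc/modprobe.d path is the usual Fedora
convention and an assumption on my part, not something stated in the report:

```shell
# Write the kvm_intel module option that corresponds to booting the host
# with kvm-intel.preemption_timer=0 on the kernel command line.
echo "options kvm_intel preemption_timer=0" > kvm-intel.conf.example
cat kvm-intel.conf.example
# Install as /etc/modprobe.d/kvm-intel.conf on the host and reload the
# kvm_intel module (or reboot) for it to take effect.
```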


However, I have run other guest operating systems on this hardware 
configuration (CentOS, Ubuntu, FreeBSD), none of which required any 
of these tweaks or experienced timing issues. This leads me to believe 
that it could be related to a bug in OpenBSD.


I have access to several machines with this hardware configuration and 
tested on multiple machines, to rule out a possible one-off hardware 
issue. Each host displayed the same behavior.


Best regards,
Andrew



Re: QEMU

2017-11-20 Thread Mike Larkin
On Mon, Nov 20, 2017 at 02:42:36PM -0500, Joshua Brand wrote:
> Hello,
> 
> I'm experiencing performance issues with OpenBSD 6.1 and 6.2 when running on
> qemu 2.10 (machine type "pc-i440fx-2.10").
> 
> The symptoms are that some simple commands become quite slow or
> unresponsive. For example: "vmstat -w 1", or "iostat -w 1", and "top -s 1".
> I assume this is something related to timing within the kernel. I've tried
> compiling a "CUSTOM" kernel to change kern.clockrate "hz" from "100" to
> "1000", but this didn't eliminate the latency.
> I'm not sure if you generally support qemu, but any advice or suggestions
> would be appreciated. This behavior wasn't present when running OpenBSD 6.1
> or 6.2 with previous versions of qemu 2.x.
> 
> Below are the steps taken to reproduce this issue:
> 
> 1. Install qemu on a host server.
> 2. Create a VM with hardware type "pc-i440fx-2.10".
> 3. Install OpenBSD from the "install61.iso" or "install62.iso"
> 
> Let me know if you require any additional information.
> 

What is the host system?

> Thanks,
> 
> -- 
> Joshua Brand
> Systems Administrator
> 



QEMU

2017-11-20 Thread Joshua Brand

Hello,

I'm experiencing performance issues with OpenBSD 6.1 and 6.2 when 
running on qemu 2.10 (machine type "pc-i440fx-2.10").


The symptoms are that some simple commands become quite slow or 
unresponsive. For example: "vmstat -w 1", or "iostat -w 1", and "top -s 
1". I assume this is something related to timing within the kernel. I've 
tried compiling a "CUSTOM" kernel to change kern.clockrate "hz" from 
"100" to "1000", but this didn't eliminate the latency.
I'm not sure if you generally support qemu, but any advice or 
suggestions would be appreciated. This behavior wasn't present when 
running OpenBSD 6.1 or 6.2 with previous versions of qemu 2.x.


Below are the steps taken to reproduce this issue:

1. Install qemu on a host server.
2. Create a VM with hardware type "pc-i440fx-2.10".
3. Install OpenBSD from the "install61.iso" or "install62.iso"

Let me know if you require any additional information.

Thanks,

--
Joshua Brand
Systems Administrator



Re: crash with performance counter (RDPMC) on OpenBSD as QEMU guest VM

2016-03-15 Thread Hiltjo Posthuma
On Tue, Mar 15, 2016 at 12:08 PM, Mike Larkin <mlar...@azathoth.net> wrote:
> On Sat, Mar 12, 2016 at 01:49:08PM +0100, Hiltjo Posthuma wrote:
>> >Synopsis: crash with performance counter (RDPMC) on OpenBSD as QEMU
>> >guest VM
>> >Category: Crash / system hang
>> >Environment:
>>   System  : OpenBSD 5.8 and -current (snapshot: 2016-03-11)
>>   Details : OpenBSD 5.8 (GENERIC) #0: Fri Oct 23 11:15:05 CEST 2015
>>
>> hil...@cow.my.domain:/usr/src/sys/arch/amd64/compile/GENERIC
>>
>>   Architecture: OpenBSD.amd64
>>   Machine : amd64
>> >Description:
>>   I run OpenBSD on my VPS as a QEMU guest VM; when I run `pctr` as a user
>>   the system hangs and shows the ddb console.
>> >How-To-Repeat:
>>   run as user in QEMU VM the command: pctr
>> >Fix:
>>   I don't know the correct fix for this issue, below is a workaround
>> and additional information:
>>
>>   trace from ddb console:
>>
>>   kernel: protection fault trap, code=0
>>   Stopped at  pctrioctl+0x140:rdpmc
>>   ddb> trace
>>   pctrioctl() at pctrioctl+0x140
>>   VOP_IOCTL() at VOP_IOCTL+0x44
>>   vn_ioctl() at vn_ioctl+0x77
>>   sys_ioctl() at sys_ioctl+0x18b
>>   syscall() at syscall+0x19e
>>   --- syscall (number 54) ---
>>   end of kernel
>>   end trace frame: 0x3, count: -5
>>
>
> I looked through the pctr code and it appears that we are only querying
> counters 0 and 1, which appear to be valid counters on all supported CPUs.
> (At least for Intel, which is what your qemu instance reports)
>
> Can you repeat this crash and do a "show registers" after it breaks into
> DDB? I'm interested in ECX/RCX content.
>
> Possible related issue: http://www.spinics.net/lists/kvm/msg128775.html
>
> -ml

Thanks for looking into this. The rcx register is 0:

kernel: privileged instruction fault trap, code=0
Stopped at  pctrioctl+0x140:rdpmc
ddb{0}> show registers
rdi  0x4
rsi   0x8e4cad90
rbp   0x8e4cac10
rbx 0x19
rdx 0xca27d5dda2
rcx0
rax   0x27d5dda2
r80xc001
r90x8e4cad90
r10   0x10ec0a0b428a
r11   0x815b4880pctrioctl
r12   0xff001e7e3390
r13   0xff001d6269f0
r14   0x40386301
r15   0xff001f531010
rip   0x815b49c0pctrioctl+0x140
cs   0x8
rflags  0x46
rsp   0x8e4cac00
ss  0x10
pctrioctl+0x140:rdpmc

Kind regards,
Hiltjo



crash with performance counter (RDPMC) on OpenBSD as QEMU guest VM

2016-03-12 Thread Hiltjo Posthuma
>Synopsis:  crash with performance counter (RDPMC) on OpenBSD as QEMU guest
>VM
>Category:  Crash / system hang
>Environment:
System  : OpenBSD 5.8 and -current (snapshot: 2016-03-11)
Details : OpenBSD 5.8 (GENERIC) #0: Fri Oct 23 11:15:05 CEST 2015
 
hil...@cow.my.domain:/usr/src/sys/arch/amd64/compile/GENERIC

Architecture: OpenBSD.amd64
Machine : amd64
>Description:
I run OpenBSD on my VPS as a QEMU guest VM; when I run `pctr` as a user
the system hangs and shows the ddb console.
>How-To-Repeat:
    run as user in QEMU VM the command: pctr
>Fix:
I don't know the correct fix for this issue, below is a workaround
and additional information:

trace from ddb console:

kernel: protection fault trap, code=0
Stopped at  pctrioctl+0x140:rdpmc
ddb> trace
pctrioctl() at pctrioctl+0x140
VOP_IOCTL() at VOP_IOCTL+0x44
vn_ioctl() at vn_ioctl+0x77
sys_ioctl() at sys_ioctl+0x18b
syscall() at syscall+0x19e
--- syscall (number 54) ---
end of kernel
end trace frame: 0x3, count: -5

When the `usepctr` macro's CPU family check in
/usr/src/sys/arch/amd64/amd64/pctr.c is changed to `#define usepctr 0`,
it doesn't hang (workaround).

I also found CPU Erratum 26 ("RDPMC cannot be used in conjunction with
SMM"), but it's probably unrelated.

Linux commit: 
http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=e97df76377b8b3b1f7dfd5d6f8a1d5a31438b140
Errata PDF: 
http://download.intel.com/design/archives/processors/pro/docs/24268935.pdf

Additional system info (of VM) which might be useful:

$ sysctl | grep cpuid
machdep.cpuid=132801

$ sysctl | grep hw
hw.machine=amd64
hw.model=Westmere E56xx/L56xx/X56xx (Nehalem-C)
hw.ncpu=1
hw.byteorder=1234
hw.pagesize=4096
hw.disknames=sd0:a9def653718cd57f,fd0:
hw.diskcount=2
hw.sensors.viomb0.raw0=0 (desired)
hw.sensors.viomb0.raw1=0 (current)
hw.cpuspeed=2200
hw.vendor=QEMU
hw.product=Standard PC (i440FX + PIIX, 1996)
hw.version=pc-i440fx-2.4
hw.uuid=a5b29157-fdc3-2806-6b94-1fd2b1b4
hw.physmem=1056833536
hw.usermem=1056821248
hw.ncpufound=1
hw.allowpowerdown=1

dmesg:
OpenBSD 5.8 (GENERIC) #0: Fri Oct 23 11:15:05 CEST 2015
hil...@cow.my.domain:/usr/src/sys/arch/amd64/compile/GENERIC
real mem = 1056833536 (1007MB)
avail mem = 1021014016 (973MB)
mpath0 at root
scsibus0 at mpath0: 256 targets
mainbus0 at root
bios0 at mainbus0: SMBIOS rev. 2.8 @ 0xf6300 (9 entries)
bios0: vendor SeaBIOS version "rel-1.8.2-0-g33fbe13 by
qemu-project.org" date 04/01/2014
bios0: QEMU Standard PC (i440FX + PIIX, 1996)
acpi0 at bios0: rev 0
acpi0: sleep states S3 S4 S5
acpi0: tables DSDT FACP SSDT APIC HPET
acpi0: wakeup devices
acpitimer0 at acpi0: 3579545 Hz, 24 bits
acpimadt0 at acpi0 addr 0xfee00000: PC-AT compat
cpu0 at mainbus0: apid 0 (boot processor)
cpu0: Westmere E56xx/L56xx/X56xx (Nehalem-C), 2200.34 MHz
cpu0: 
FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,MMX,FXSR,SSE,SSE2,SSE3,PCLMUL,SSSE3,CX16,SSE4.1,SSE4.2,x2APIC,POPCNT,AES,HV,NXE,LONG,LAHF,ARAT
cpu0: 64KB 64b/line 2-way I-cache, 64KB 64b/line 2-way D-cache, 512KB
64b/line 16-way L2 cache
cpu0: ITLB 255 4KB entries direct-mapped, 255 4MB entries direct-mapped
cpu0: DTLB 255 4KB entries direct-mapped, 255 4MB entries direct-mapped
cpu0: smt 0, core 0, package 0
mtrr: Pentium Pro MTRR support, 8 var ranges, 88 fixed ranges
cpu0: apic clock running at 999MHz
ioapic0 at mainbus0: apid 0 pa 0xfec00000, version 11, 24 pins
acpihpet0 at acpi0: 100000000 Hz
acpiprt0 at acpi0: bus 0 (PCI0)
acpicpu0 at acpi0: C1(@1 halt!)
pvbus0 at mainbus0: KVM
pci0 at mainbus0 bus 0
pchb0 at pci0 dev 0 function 0 "Intel 82441FX" rev 0x02
pcib0 at pci0 dev 1 function 0 "Intel 82371SB ISA" rev 0x00
pciide0 at pci0 dev 1 function 1 "Intel 82371SB IDE" rev 0x00: DMA,
channel 0 wired to compatibility, channel 1 wired to compatibility
pciide0: channel 0 disabled (no drives)
pciide0: channel 1 disabled (no drives)
uhci0 at pci0 dev 1 function 2 "Intel 82371SB USB" rev 0x01: apic 0 int 11
piixpm0 at pci0 dev 1 function 3 "Intel 82371AB Power" rev 0x03: apic 0 int 9
iic0 at piixpm0
vga1 at pci0 dev 2 function 0 "Bochs VGA" rev 0x02
wsdisplay0 at vga1 mux 1: console (80x25, vt100 emulation)
wsdisplay0: screen 1-5 added (80x25, vt100 emulation)
vi

Re: SMP OpenBSD VM on OpenBSD host freezes when qemu sends it the ACPI power-button-pressed event

2014-02-13 Thread Robert Urban
Hello Folks,

this bug can be reproduced using OpenBSD 5.4 as the host, with the stock qemu
package, and an OpenBSD 5.4 guest.

Booting the GENERIC.MP kernel in the VM was painfully slow, but it eventually
came up to multi-user mode.

The script to run the OpenBSD 5.4 guest on the OpenBSD 5.4 host:
-snip-
#!/bin/sh

qemu-system-x86_64 \
-S \
-m 2048 \
-smp 2,sockets=2,cores=1,threads=1 \
-monitor stdio \
-vnc :0 \
-no-fd-bootchk \
-net nic \
-net user \
-cdrom /space/install54.iso \
-drive file=/space/obsd54test.raw,index=0,media=disk,cache=none,format=raw
-snip-

qemu will start, but not start the VM until c is entered in the monitor. A vnc
client must be started to get the console before starting the VM:

vncviewer localhost:5900

As usual, issuing the system_powerdown command in the monitor caused the guest
to freeze totally.

The guest VM container file can be found here:

http://www.spielwiese.de/OpenBSD/bsd-host-obsd54test.raw.7z

It's 143MB compressed, and 10G uncompressed.  The root password is x.

Rob Urban

original bug report:
 Synopsis: OpenBSD VM freezes when qemu sends it the ACPI
 power-button-pressed event
 Category: kernel
 Environment:
 System  : OpenBSD 5.4
 Details : OpenBSD 5.4 (GENERIC.MP) #0: Mon Jan 20 19:07:21 MET 2014
  root at 
 dna54.y42.org:/usr/src/sys/arch/amd64/compile/GENERIC.MP

 Architecture: OpenBSD.amd64
 Machine : amd64
 Description:
 On an SMP v5.4 system running in a qemu/KVM VM, the qemu
 system_powerdown command (which is the equivalent of issuing
 "virsh shutdown <guest>" if qemu is under libvirt control) will
 cause the OpenBSD guest to freeze totally.
 The host is linux x86_64 (Debian and Fedora). This can be reproduced with
 OpenBSD (amd64) versions 5.3, 5.4 and 5.5 (snapshot), and also with qemu
 versions 0.12.5, 1.6.1, 1.7.5 (snapshot from 2014-02-12).

 I do not know if qemu or OpenBSD is at fault. I also opened a qemu bug,
 which can be found here:
 https://bugs.launchpad.net/qemu/+bug/1279500
 How-To-Repeat:
 Install a SMP OpenBSD in a qemu VM and send it an ACPI 
 power-button-pressed
 event.
 Fix:
 The only workaround is to disable mpbios(4), but this is no workaround
 on an SMP system.
 

 dmesg:
 OpenBSD 5.4 (GENERIC.MP) #0: Mon Jan 20 19:07:21 MET 2014
 root at dna54.y42.org:/usr/src/sys/arch/amd64/compile/GENERIC.MP
 real mem = 3204427776 (3055MB)
 avail mem = 3111464960 (2967MB)
 mainbus0 at root
 bios0 at mainbus0: SMBIOS rev. 2.4  at  0xbec0 (11 entries)
 bios0: vendor Bochs version Bochs date 01/01/2007
 bios0: Bochs Bochs
 acpi0 at bios0: rev 0
 acpi0: sleep states S3 S4 S5
 acpi0: tables DSDT FACP SSDT APIC HPET
 acpi0: wakeup devices
 acpitimer0 at acpi0: 3579545 Hz, 24 bits
 acpimadt0 at acpi0 addr 0xfee00000: PC-AT compat
 acpihpet0 at acpi0: 100000000 Hz
 acpiprt0 at acpi0: bus 0 (PCI0)
 mpbios0 at bios0: Intel MP Specification 1.4
 cpu0 at mainbus0: apid 0 (boot processor)
 cpu0: QEMU Virtual CPU version 0.12.5, 3411.92 MHz
 cpu0:
 FPU,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,MMX,FXSR,SSE,SSE2,SSE3,CX16,POPCNT,NXE,LONG,LAHF,PERF
 cpu0: 64KB 64b/line 2-way I-cache, 64KB 64b/line 2-way D-cache, 512KB 64b/line
 16-way L2 cache
 cpu0: ITLB 255 4KB entries direct-mapped, 255 4MB entries direct-mapped
 cpu0: DTLB 255 4KB entries direct-mapped, 255 4MB entries direct-mapped
 cpu0: smt 0, core 0, package 0
 cpu0: apic clock running at 1000MHz
 cpu1 at mainbus0: apid 1 (application processor)
 cpu1: QEMU Virtual CPU version 0.12.5, 3411.60 MHz
 cpu1:
 FPU,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,MMX,FXSR,SSE,SSE2,SSE3,CX16,POPCNT,NXE,LONG,LAHF,PERF
 cpu1: 64KB 64b/line 2-way I-cache, 64KB 64b/line 2-way D-cache, 512KB 64b/line
 16-way L2 cache
 cpu1: ITLB 255 4KB entries direct-mapped, 255 4MB entries direct-mapped
 cpu1: DTLB 255 4KB entries direct-mapped, 255 4MB entries direct-mapped
 cpu1: smt 0, core 0, package 1
 mpbios0: bus 0 is type PCI   
 mpbios0: bus 1 is type ISA   
 ioapic0 at mainbus0: apid 2 pa 0xfec00000, version 11, 24 pins
 ioapic0: misconfigured as apic 0, remapped to apid 2
 pci0 at mainbus0 bus 0
 pchb0 at pci0 dev 0 function 0 Intel 82441FX rev 0x02
 pcib0 at pci0 dev 1 function 0 Intel 82371SB ISA rev 0x00
 pciide0 at pci0 dev 1 function 1 Intel 82371SB IDE rev 0x00: DMA, channel 0
 wired to compatibility,
 channel 1 wired to compatibility
 pciide0: channel 0 disabled (no drives)
 atapiscsi0 at pciide0 channel 1 drive 0
 scsibus0 at atapiscsi0: 2 targets
 cd0 at scsibus0 targ 0 lun 0: QEMU, QEMU DVD-ROM, 0.12 ATAPI 5/cdrom 
 removable
 cd0(pciide0:1:0): using PIO mode 0
 uhci0 at pci0 dev 1 function 2 Intel 82371SB USB rev 0x01: apic 2 int 11
 piixpm0 at pci0 dev 1 function 3 Intel 82371AB Power rev 0x03: apic 2 int 9
 iic0 at piixpm0
 iic0: addr 0x4c 48=00 words 00

Re: SMP OpenBSD VM on OpenBSD host freezes when qemu sends it the ACPI power-button-pressed event

2014-02-13 Thread Robert Urban
Hello again,

it occurred to me that the problem is probably reproducible using the
uniprocessor kernel, as long as mpbios is enabled, and this is indeed the case.

So change the line to -smp 1,sockets=1,cores=1,threads=1 and boot /bsd --
this will speed up execution enormously.

Rob Urban

On 02/14/2014 01:26 AM, Robert Urban wrote:
 Hello Folks,

 this bug can be reproduced using OpenBSD 5.4 as the host, with the stock qemu
 package, and an OpenBSD 5.4 guest.

 Booting the GENERIC.MP kernel in the VM was painfully slow, but it eventually
 came up to multi-user mode.

 The script to run the OpenBSD 5.4 guest on the OpenBSD 5.4 host:
 -snip-
 #!/bin/sh

 qemu-system-x86_64 \
 -S \
 -m 2048 \
 -smp 2,sockets=2,cores=1,threads=1 \
 -monitor stdio \
 -vnc :0 \
 -no-fd-bootchk \
 -net nic \
 -net user \
 -cdrom /space/install54.iso \
 -drive 
 file=/space/obsd54test.raw,index=0,media=disk,cache=none,format=raw
 -snip-

 qemu will start, but not start the VM until c is entered in the monitor. A 
 vnc
 client must be started to get the console before starting the VM:

 vncviewer localhost:5900

 As usual, issuing the system_powerdown command in the monitor caused the 
 guest
 to freeze totally.

 The guest VM container file can be found here:

 http://www.spielwiese.de/OpenBSD/bsd-host-obsd54test.raw.7z

 It's 143MB compressed, and 10G uncompressed.  The root password is x.

 Rob Urban

 original bug report:
 Synopsis:OpenBSD VM freezes when qemu sends it the ACPI
 power-button-pressed event
 Category:kernel
 Environment:
 System  : OpenBSD 5.4
 Details : OpenBSD 5.4 (GENERIC.MP) #0: Mon Jan 20 19:07:21 MET 2014
  root at 
 dna54.y42.org:/usr/src/sys/arch/amd64/compile/GENERIC.MP

 Architecture: OpenBSD.amd64
 Machine : amd64
 Description:
 On an SMP v5.4 system running in a qemu/KVM VM, the qemu
 system_powerdown command (which is the equivalent of issuing
 virsh shutdown guest if qemu is under libvirt control) will
 cause the OpenBSD guest to freeze totally.
 The host is linux x86_64 (Debian and Fedora). This can be reproduced with
 OpenBSD (amd64) versions 5.3, 5.4 and 5.5 (snapshot), and also with qemu
 versions 0.12.5, 1.6.1, 1.7.5 (snapshot from 2014-02-12).

 I do not know if qemu or OpenBSD is at fault. I also opened a qemu bug,
 which can be found here:
 https://bugs.launchpad.net/qemu/+bug/1279500
 How-To-Repeat:
 Install a SMP OpenBSD in a qemu VM and send it an ACPI 
 power-button-pressed
 event.
 Fix:
 The only workaround is to disable mpbios(4), but this is no workaround
 on an SMP system.
 

 dmesg:
 OpenBSD 5.4 (GENERIC.MP) #0: Mon Jan 20 19:07:21 MET 2014
 root at dna54.y42.org:/usr/src/sys/arch/amd64/compile/GENERIC.MP
 real mem = 3204427776 (3055MB)
 avail mem = 3111464960 (2967MB)
 mainbus0 at root
 bios0 at mainbus0: SMBIOS rev. 2.4  at  0xbec0 (11 entries)
 bios0: vendor Bochs version Bochs date 01/01/2007
 bios0: Bochs Bochs
 acpi0 at bios0: rev 0
 acpi0: sleep states S3 S4 S5
 acpi0: tables DSDT FACP SSDT APIC HPET
 acpi0: wakeup devices
 acpitimer0 at acpi0: 3579545 Hz, 24 bits
 acpimadt0 at acpi0 addr 0xfee0: PC-AT compat
 acpihpet0 at acpi0: 1 Hz
 acpiprt0 at acpi0: bus 0 (PCI0)
 mpbios0 at bios0: Intel MP Specification 1.4
 cpu0 at mainbus0: apid 0 (boot processor)
 cpu0: QEMU Virtual CPU version 0.12.5, 3411.92 MHz
 cpu0:
 FPU,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,MMX,FXSR,SSE,SSE2,SSE3,CX16,POPCNT,NXE,LONG,LAHF,PERF
 cpu0: 64KB 64b/line 2-way I-cache, 64KB 64b/line 2-way D-cache, 512KB 
 64b/line
 16-way L2 cache
 cpu0: ITLB 255 4KB entries direct-mapped, 255 4MB entries direct-mapped
 cpu0: DTLB 255 4KB entries direct-mapped, 255 4MB entries direct-mapped
 cpu0: smt 0, core 0, package 0
 cpu0: apic clock running at 1000MHz
 cpu1 at mainbus0: apid 1 (application processor)
 cpu1: QEMU Virtual CPU version 0.12.5, 3411.60 MHz
 cpu1:
 FPU,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,MMX,FXSR,SSE,SSE2,SSE3,CX16,POPCNT,NXE,LONG,LAHF,PERF
 cpu1: 64KB 64b/line 2-way I-cache, 64KB 64b/line 2-way D-cache, 512KB 
 64b/line
 16-way L2 cache
 cpu1: ITLB 255 4KB entries direct-mapped, 255 4MB entries direct-mapped
 cpu1: DTLB 255 4KB entries direct-mapped, 255 4MB entries direct-mapped
 cpu1: smt 0, core 0, package 1
 mpbios0: bus 0 is type PCI   
 mpbios0: bus 1 is type ISA   
 ioapic0 at mainbus0: apid 2 pa 0xfec0, version 11, 24 pins
 ioapic0: misconfigured as apic 0, remapped to apid 2
 pci0 at mainbus0 bus 0
 pchb0 at pci0 dev 0 function 0 Intel 82441FX rev 0x02
 pcib0 at pci0 dev 1 function 0 Intel 82371SB ISA rev 0x00
 pciide0 at pci0 dev 1 function 1 Intel 82371SB IDE rev 0x00: DMA, channel 0
 wired to compatibility,
 channel 1 wired to compatibility
 pciide0: channel 0 disabled (no drives)
 atapiscsi0

kernel bug: On boot in KVM/QEMU, crash in Intel temperature detection routine (how necessary??) in source/arch/amd64/amd64/identcpu.c, + workaround patch.

2012-11-15 Thread Mikael
(Emailing from this address to help you avoid publishing my actual
email address by mistake. I'm reachable at this address for the
next few months.)


Hi,

On a QEMU/KVM 1.1.2-r2 machine with the libvirt CPU setting
<cpu mode='host-model'><model fallback='allow'/></cpu>, running on a host
with cpu0: Intel Core i7 9xx (Nehalem Class Core i7), 2800.70 MHz, I get the
following startup behavior from OpenBSD 5.2 when the newly installed system
is booted for the first time (funnily enough, the installation CD's kernel
boots fine; perhaps it runs in a compat mode, because on first glance I find
neither the nvram nor the mtrr row there):

 nvram: invalid checksum
 mtrr: Pentium Pro MTRR support
 kernel: protection fault trap, code=0
 Stopped at intelcore_update_sensor+0x17: rdmsr
 ddb>


This bug is caused in
http://fxr.watson.org/fxr/source/arch/amd64/amd64/identcpu.c?v=OPENBSD row
132.

It was introduced in 2007 here:
http://marc.info/?l=openbsd-tech&m=118063274617707

The patch at the bottom of this file provides a fix, though who knows maybe
it disables more things than necessary.

Please note that this bug could somewhat be anticipated, given the comments
in the offending function:

   115 /*
   116  * Temperature read on the CPU is relative to the maximum
   117  * temperature supported by the CPU, Tj(Max).
   118  * Poorly documented, refer to:
   119  * http://softwarecommunity.intel.com/isn/Community/
   120  * en-US/forums/thread/30228638.aspx
 ..
   124  */


By this I conclude this bug reported to you. From reading Intel's specs for
the RDMSR instruction, I am not certain whether this should be considered an
OpenBSD bug (i.e. the fact that OpenBSD uses the instruction) or a QEMU/KVM
bug (i.e. the fact that KVM faults on its invocation).

As a side note, there seem to be some Linux systems that run into this
instruction; they report it in the syslog and do not crash.

Thank you!
Mikael


Refs:
http://libvirt.org/formatdomain.html#elementsCPU
http://code.metager.de/source/xref/libvirt/src/cpu/cpu_map.xml


 --

Patch:

These are rows 125-147 of /source/arch/amd64/amd64/identcpu.c :

void
intelcore_update_sensor(void *args)
{
	struct cpu_info *ci = (struct cpu_info *) args;
	u_int64_t msr;
	int max = 100;

	if (rdmsr(MSR_TEMPERATURE_TARGET) & MSR_TEMPERATURE_TARGET_LOW_BIT)
		max = 85;

	msr = rdmsr(MSR_THERM_STATUS);
	if (msr & MSR_THERM_STATUS_VALID_BIT) {
		ci->ci_sensor.value = max - MSR_THERM_STATUS_TEMP(msr);
		/* micro degrees */
		ci->ci_sensor.value *= 1000000;
		/* kelvin */
		ci->ci_sensor.value += 273150000;
		ci->ci_sensor.flags &= ~SENSOR_FINVALID;
	} else {
		ci->ci_sensor.value = 0;
		ci->ci_sensor.flags |= SENSOR_FINVALID;
	}
}



Change them to:


void
intelcore_update_sensor(void *args)
{
	struct cpu_info *ci = (struct cpu_info *) args;
	// u_int64_t msr; - commented out so as not to produce an unused-variable error
	// int max = 100;

	// if (rdmsr(MSR_TEMPERATURE_TARGET) & MSR_TEMPERATURE_TARGET_LOW_BIT)
	//	max = 85;

	// msr = rdmsr(MSR_THERM_STATUS);
	// if (msr & MSR_THERM_STATUS_VALID_BIT) {
	//	ci->ci_sensor.value = max - MSR_THERM_STATUS_TEMP(msr);
	//	/* micro degrees */
	//	ci->ci_sensor.value *= 1000000;
	//	/* kelvin */
	//	ci->ci_sensor.value += 273150000;
	//	ci->ci_sensor.flags &= ~SENSOR_FINVALID;
	// } else {
	ci->ci_sensor.value = 0;
	ci->ci_sensor.flags |= SENSOR_FINVALID;
	// }
}







When running with workaround-patched kernel:

$ cat /var/run/dmesg.boot
OpenBSD 5.2-current (GENERIC.MP) #2: Thu Nov 15 13:05:13 CET 2012
r...@c67.it.su.se:/usr/src/sys/arch/amd64/compile/GENERIC.MP
real mem = 1072680960 (1022MB)
avail mem = 1021698048 (974MB)
mainbus0 at root
bios0 at mainbus0: SMBIOS rev. 2.4 @ 0x3980 (41 entries)
bios0: vendor Bochs version Bochs date 01/01/2007
bios0: Bochs Bochs
acpi0 at bios0: rev 0
acpi0: sleep states S3 S4 S5
acpi0: tables DSDT FACP SSDT APIC HPET SSDT
acpi0: wakeup devices
acpitimer0 at acpi0: 3579545 Hz, 24 bits
acpimadt0 at acpi0 addr 0xfee00000: PC-AT compat
acpihpet0 at acpi0: 100000000 Hz
acpiprt0 at acpi0: bus 0 (PCI0)
mpbios0 at bios0: Intel MP Specification 1.4
cpu0 at mainbus0: apid 0 (boot processor)
cpu0: Intel Core i7 9xx (Nehalem Class Core i7), 2963.42 MHz
cpu0:
FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,MMX,FXSR,SSE,SSE2,SS,SSE3,SSSE3,CX16,SSE4.1,SSE4.2,POPCNT,NXE,LONG,LAHF,PERF
cpu0: 64KB 64b/line 2-way I-cache, 64KB 64b/line 2-way D-cache, 512KB
64b/line 16-way L2 cache
cpu0: ITLB 255 4KB entries direct-mapped, 255 4MB entries direct