Re: OpenBSD 6-stable vmd

2016-10-27 Thread Mike Larkin
On Thu, Oct 27, 2016 at 11:02:12AM -0500, Ax0n wrote:
> To circle back: I can reproduce the VM lock-up 100% of the time by typing
> too quickly into the VM virtual serial console, such as my password and
> longer command strings that I know by muscle memory.
> 
> I tried a few things such as slowly typing several kilobytes of text into
> the console, one character at a time.
> 
> If I mash the keyboard inside cu, the VM locks up. I went to the text
> console of the VM host (my daily-driver laptop), and slowly decreased the
> keyboard repeat time with:
> 
> wsconsctl keyboard.repeat.deln=
> 
> And then attached to the vm virtual console using "doas vmctl console 1"
> 
> I proceeded to hold down a key and let a few lines of text show up before
> exiting the console, decreasing the deln delay further, and repeating the
> experiment.
> 
> The default deln value is 100, so holding a key down (past the default
> 400 msec initial delay of del1) results in a 100 msec delay between
> repeated keystrokes.
> 
> I reduced this first to 75, then to 50, 25, 15, 10, and 5.
> 
> With a repeat delay of 5 msec on the virtual console, I was able to reliably
> lock up VMs within a few dozen "keystrokes" (a second or two of holding a
> key down).
> 
> I was able to get three different VMs to lock up: one running the October
> 22 snapshot, and two others running OpenBSD 6.0-release, one i386, the
> other amd64.
> 
> I cannot reproduce this, even with a high keyboard repeat rate, through an
> SSH session to any of the VMs.
> 
> Mike and I have been in touch off-list (Thanks again!), but I thought the
> results of my testing were relevant to misc@.
> 

Thanks for testing. I'll see about repro'ing it and take a look presently.

-ml



Re: OpenBSD 6-stable vmd

2016-10-27 Thread Ax0n
To circle back: I can reproduce the VM lock-up 100% of the time by typing
too quickly into the VM virtual serial console, such as my password and
longer command strings that I know by muscle memory.

I tried a few things such as slowly typing several kilobytes of text into
the console, one character at a time.

If I mash the keyboard inside cu, the VM locks up. I went to the text
console of the VM host (my daily-driver laptop), and slowly decreased the
keyboard repeat time with:

wsconsctl keyboard.repeat.deln=

And then attached to the vm virtual console using "doas vmctl console 1"

I proceeded to hold down a key and let a few lines of text show up before
exiting the console, decreasing the deln delay further, and repeating the
experiment.

The default deln value is 100, so holding a key down (past the default
400 msec initial delay of del1) results in a 100 msec delay between
repeated keystrokes.

I reduced this first to 75, then to 50, 25, 15, 10, and 5.

With a repeat delay of 5 msec on the virtual console, I was able to reliably
lock up VMs within a few dozen "keystrokes" (a second or two of holding a
key down).

I was able to get three different VMs to lock up: one running the October
22 snapshot, and two others running OpenBSD 6.0-release, one i386, the
other amd64.

I cannot reproduce this, even with a high keyboard repeat rate, through an
SSH session to any of the VMs.

Mike and I have been in touch off-list (Thanks again!), but I thought the
results of my testing were relevant to misc@.
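
For reference, the sweep described above can be captured as a small dry-run
script. This is only a sketch: it prints the command sequence (values taken
from this thread) rather than running it, since wsconsctl needs root on the
host and the key-holding step is manual.

```shell
# Dry-run sketch of the keyboard-repeat sweep described above: print each
# wsconsctl/vmctl step (delay values from the thread).  Running the commands
# for real needs root on the host, and the key-hold inside cu is manual.
cmds=""
for deln in 100 75 50 25 15 10 5; do
    cmds="${cmds}wsconsctl keyboard.repeat.deln=${deln}
doas vmctl console 1
"
done
printf '%s' "$cmds"
```

Each printed pair is one round of the experiment: set the repeat delay on the
host console, attach to the guest console, hold a key for a second or two,
then detach.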



On Wed, Oct 26, 2016 at 7:15 PM, Mike Larkin  wrote:

> On Wed, Oct 26, 2016 at 06:36:25PM -0500, Ax0n wrote:
> > I'm running vmd with the options you specified, and using tee(1) to peel
> it
> > off to a file while I can still watch what happens in the foreground. It
> > hasn't happened again yet, but I haven't been messing with the VMs as
> much
> > this week as I was over the weekend.
> >
> > One thing of interest: inside the VM running the Oct 22 snapshot, top(1)
> > reports the CPU utilization hovering over 1.0 load, with nearly 100% in
> > interrupt state, which seems pretty odd to me.  I am also running an i386
> > and amd64 vm at the same time, both on 6.0-Release and neither of them
> are
> > exhibiting this high load. I'll probably update the snapshot of the
> > -CURRENT(ish) VM tonight, and the snapshot of my host system (which is
> also
> > my daily driver) this weekend.
> >
>
> I've seen that (and have seen it reported) from time to time as well. This
> is unlikely to be actual time spent in interrupt; it's more likely a
> time-accounting error that makes the guest think it's spending more time
> servicing interrupts than it actually is. This happens because both the
> statclock and hardclock are running at 100Hz (or close to it), since the
> host is unable to inject more frequent interrupts.
>
> You might try running the host at 1000Hz and see if that fixes the problem.
> It did, for me. Note that such an adjustment is really a hack and should
> just be viewed as a temporary workaround. Of course, don't run your guests
> at 1000Hz as well (that would defeat the purpose of cranking the host).
> That
> parameter can be adjusted in param.c.
>
> -ml
>
> > load averages:  1.07,  1.09,  0.94   vmmbsd.labs.h-i-r.net
> > 05:05:27
> > 26 processes: 1 running, 24 idle, 1 on processor   up
> >  0:28
> > CPU states:  0.0% user,  0.0% nice,  0.4% system, 99.6% interrupt,  0.0%
> > idle
> > Memory: Real: 21M/130M act/tot Free: 355M Cache: 74M Swap: 0K/63M
> >
> >   PID USERNAME PRI NICE  SIZE   RES STATE WAIT      TIME    CPU COMMAND
> > 1 root  100  420K  496K idle  wait  0:01  0.00% init
> > 13415 _ntp   2  -20  888K 2428K sleep poll  0:00  0.00% ntpd
> > 15850 axon   30  724K  760K sleep ttyin 0:00  0.00% ksh
> > 42990 _syslogd   20  972K 1468K sleep kqread0:00  0.00%
> syslogd
> > 89057 _pflogd40  672K  424K sleep bpf   0:00  0.00%
> pflogd
> >  2894 root   20  948K 3160K sleep poll  0:00  0.00% sshd
> > 85054 _ntp   20  668K 2316K idle  poll  0:00  0.00% ntpd
> >
> >
> >
> > On Tue, Oct 25, 2016 at 2:09 AM, Mike Larkin 
> wrote:
> >
> > > On Mon, Oct 24, 2016 at 11:07:32PM -0500, Ax0n wrote:
> > > > Thanks for the update, ml.
> > > >
> > > > The VM Just did it again in the middle of backspacing over uname
> -a...
> > > >
> > > > $ uname -a
> > > > OpenBSD vmmbsd.labs.h-i-r.net 6.0 GENERIC.MP#0 amd64
> > > > $ un   <-- frozen
> > > >
> > > > Spinning like mad.
> > > >
> > >
> > > Bizarre. If it were I, I'd next try killing all vmd processes and
> > > running vmd -dvvv from a root console window and look for what it dumps
> > > out when it hangs like this (if anything).
> > >
> > > You'll see a fair number of "vmd: unknown exit code 1" (and 48), those
> > > are harmless and can be ignored, as can anything that vmd dumps out
> > > before the vm gets stuck like this.

Re: OpenBSD 6-stable vmd

2016-10-26 Thread Mike Larkin
On Wed, Oct 26, 2016 at 06:36:25PM -0500, Ax0n wrote:
> I'm running vmd with the options you specified, and using tee(1) to peel it
> off to a file while I can still watch what happens in the foreground. It
> hasn't happened again yet, but I haven't been messing with the VMs as much
> this week as I was over the weekend.
> 
> One thing of interest: inside the VM running the Oct 22 snapshot, top(1)
> reports the CPU utilization hovering over 1.0 load, with nearly 100% in
> interrupt state, which seems pretty odd to me.  I am also running an i386
> and amd64 vm at the same time, both on 6.0-Release and neither of them are
> exhibiting this high load. I'll probably update the snapshot of the
> -CURRENT(ish) VM tonight, and the snapshot of my host system (which is also
> my daily driver) this weekend.
> 

I've seen that (and have seen it reported) from time to time as well. This
is unlikely to be actual time spent in interrupt; it's more likely a
time-accounting error that makes the guest think it's spending more time
servicing interrupts than it actually is. This happens because both the
statclock and hardclock are running at 100Hz (or close to it), since the
host is unable to inject more frequent interrupts.

You might try running the host at 1000Hz and see if that fixes the problem.
It did, for me. Note that such an adjustment is really a hack and should
just be viewed as a temporary workaround. Of course, don't run your guests
at 1000Hz as well (that would defeat the purpose of cranking the host). That
parameter can be adjusted in param.c.
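
A toy model may make this sampling artifact concrete. This is illustrative
arithmetic, not OpenBSD's accounting code: it assumes a 100 Hz hardclock whose
interrupt handler runs for about 10 microseconds per tick (a true interrupt
load of only 0.1%), and counts how many statclock samples land inside that
window.

```shell
# Toy model of statclock/hardclock aliasing (not OpenBSD code).  A 100 Hz
# hardclock interrupt runs for ~10us after every tick; we count how many
# statclock samples fall inside that window over one simulated second.
hard_period=10000   # 100 Hz hardclock period, in microseconds
irq_us=10           # handler runtime per tick: a true load of only 0.1%
for stat_period in 10000 1000; do   # 100 Hz (lockstep) vs 1000 Hz sampling
    hits=0 samples=0 t=0
    while [ "$t" -lt 1000000 ]; do
        [ $((t % hard_period)) -lt "$irq_us" ] && hits=$((hits + 1))
        samples=$((samples + 1))
        t=$((t + stat_period))
    done
    echo "statclock period ${stat_period}us: $((100 * hits / samples))% of samples in interrupt"
done
```

In the lockstep case every sample lands inside the handler, so the model
reports 100% interrupt time, mirroring the ~99.6% interrupt figure in the
top(1) output quoted below even though the modeled handler runs only 0.1% of
the time; at a 1000 Hz statclock the figure drops to 10%, and the jitter of a
real decoupled clock reduces it further.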

-ml

> load averages:  1.07,  1.09,  0.94   vmmbsd.labs.h-i-r.net
> 05:05:27
> 26 processes: 1 running, 24 idle, 1 on processor   up
>  0:28
> CPU states:  0.0% user,  0.0% nice,  0.4% system, 99.6% interrupt,  0.0%
> idle
> Memory: Real: 21M/130M act/tot Free: 355M Cache: 74M Swap: 0K/63M
> 
>   PID USERNAME PRI NICE  SIZE   RES STATE WAIT      TIME    CPU COMMAND
> 1 root  100  420K  496K idle  wait  0:01  0.00% init
> 13415 _ntp   2  -20  888K 2428K sleep poll  0:00  0.00% ntpd
> 15850 axon   30  724K  760K sleep ttyin 0:00  0.00% ksh
> 42990 _syslogd   20  972K 1468K sleep kqread0:00  0.00% syslogd
> 89057 _pflogd40  672K  424K sleep bpf   0:00  0.00% pflogd
>  2894 root   20  948K 3160K sleep poll  0:00  0.00% sshd
> 85054 _ntp   20  668K 2316K idle  poll  0:00  0.00% ntpd
> 
> 
> 
> On Tue, Oct 25, 2016 at 2:09 AM, Mike Larkin  wrote:
> 
> > On Mon, Oct 24, 2016 at 11:07:32PM -0500, Ax0n wrote:
> > > Thanks for the update, ml.
> > >
> > > The VM Just did it again in the middle of backspacing over uname -a...
> > >
> > > $ uname -a
> > > OpenBSD vmmbsd.labs.h-i-r.net 6.0 GENERIC.MP#0 amd64
> > > $ un   <-- frozen
> > >
> > > Spinning like mad.
> > >
> >
> > Bizarre. If it were I, I'd next try killing all vmd processes and
> > running vmd -dvvv from a root console window and look for what it dumps
> > out when it hangs like this (if anything).
> >
> > You'll see a fair number of "vmd: unknown exit code 1" (and 48), those
> > are harmless and can be ignored, as can anything that vmd dumps out
> > before the vm gets stuck like this.
> >
> > If you capture this and post somewhere I can take a look. You may need to
> > extract the content out of /var/log/messages if a bunch gets printed.
> >
> > If this fails to diagnose what happens, I can work with you off-list on
> > how to debug further.
> >
> > -ml
> >
> > > [axon@transient ~]$ vmctl status
> > >ID   PID VCPUSMAXMEMCURMEM  TTY NAME
> > > 2  2769 1 512MB 149MB   /dev/ttyp3 -c
> > > 1 48245 1 512MB 211MB   /dev/ttyp0 obsdvmm.vm
> > > [axon@transient ~]$ ps aux | grep 48245
> > > _vmd 48245 98.5  2.3 526880 136956 ??  Rp 1:54PM   47:08.30 vmd:
> > > obsdvmm.vm (vmd)
> > >
> > > load averages:  2.43,  2.36,
> > > 2.26
> > > transient.my.domain 18:29:10
> > > 56 processes: 53 idle, 3 on
> > > processor
> > > up  4:35
> > > CPU0 states:  3.8% user,  0.0% nice, 15.4% system,  0.6% interrupt, 80.2%
> > > idle
> > > CPU1 states: 15.3% user,  0.0% nice, 49.3% system,  0.0% interrupt, 35.4%
> > > idle
> > > CPU2 states:  6.6% user,  0.0% nice, 24.3% system,  0.0% interrupt, 69.1%
> > > idle
> > > CPU3 states:  4.7% user,  0.0% nice, 18.1% system,  0.0% interrupt, 77.2%
> > > idle
> > > Memory: Real: 1401M/2183M act/tot Free: 3443M Cache: 536M Swap: 0K/4007M
> > >
> >   PID USERNAME PRI NICE  SIZE   RES STATE WAIT      TIME    CPU COMMAND
> > > 48245 _vmd  430  515M  134M onprocthrslee  47:37 98.00% vmd
> > >  7234 axon   20  737M  715M sleep poll 33:18 19.14%
> > firefox
> > > 42481 _x11  550   16M   42M onproc- 2:53  9.96% Xorg
> > >  2769 _vmd  290  514M   62M idle  thrslee   2:29  9.62% vmd
> > > 13503 axon  100  512K 2496K sleep nanosle   0:52  1.12% wmapm

Re: OpenBSD 6-stable vmd

2016-10-26 Thread Ax0n
I'm running vmd with the options you specified, and using tee(1) to peel it
off to a file while I can still watch what happens in the foreground. It
hasn't happened again yet, but I haven't been messing with the VMs as much
this week as I was over the weekend.

One thing of interest: inside the VM running the Oct 22 snapshot, top(1)
reports the CPU utilization hovering over 1.0 load, with nearly 100% in
interrupt state, which seems pretty odd to me.  I am also running an i386
and amd64 vm at the same time, both on 6.0-Release and neither of them are
exhibiting this high load. I'll probably update the snapshot of the
-CURRENT(ish) VM tonight, and the snapshot of my host system (which is also
my daily driver) this weekend.

load averages:  1.07,  1.09,  0.94   vmmbsd.labs.h-i-r.net
05:05:27
26 processes: 1 running, 24 idle, 1 on processor   up
 0:28
CPU states:  0.0% user,  0.0% nice,  0.4% system, 99.6% interrupt,  0.0%
idle
Memory: Real: 21M/130M act/tot Free: 355M Cache: 74M Swap: 0K/63M

  PID USERNAME PRI NICE  SIZE   RES STATE WAIT      TIME    CPU COMMAND
1 root  100  420K  496K idle  wait  0:01  0.00% init
13415 _ntp   2  -20  888K 2428K sleep poll  0:00  0.00% ntpd
15850 axon   30  724K  760K sleep ttyin 0:00  0.00% ksh
42990 _syslogd   20  972K 1468K sleep kqread0:00  0.00% syslogd
89057 _pflogd40  672K  424K sleep bpf   0:00  0.00% pflogd
 2894 root   20  948K 3160K sleep poll  0:00  0.00% sshd
85054 _ntp   20  668K 2316K idle  poll  0:00  0.00% ntpd



On Tue, Oct 25, 2016 at 2:09 AM, Mike Larkin  wrote:

> On Mon, Oct 24, 2016 at 11:07:32PM -0500, Ax0n wrote:
> > Thanks for the update, ml.
> >
> > The VM Just did it again in the middle of backspacing over uname -a...
> >
> > $ uname -a
> > OpenBSD vmmbsd.labs.h-i-r.net 6.0 GENERIC.MP#0 amd64
> > $ un   <-- frozen
> >
> > Spinning like mad.
> >
>
> Bizarre. If it were I, I'd next try killing all vmd processes and
> running vmd -dvvv from a root console window and look for what it dumps
> out when it hangs like this (if anything).
>
> You'll see a fair number of "vmd: unknown exit code 1" (and 48), those
> are harmless and can be ignored, as can anything that vmd dumps out
> before the vm gets stuck like this.
>
> If you capture this and post somewhere I can take a look. You may need to
> extract the content out of /var/log/messages if a bunch gets printed.
>
> If this fails to diagnose what happens, I can work with you off-list on
> how to debug further.
>
> -ml
>
> > [axon@transient ~]$ vmctl status
> >ID   PID VCPUSMAXMEMCURMEM  TTY NAME
> > 2  2769 1 512MB 149MB   /dev/ttyp3 -c
> > 1 48245 1 512MB 211MB   /dev/ttyp0 obsdvmm.vm
> > [axon@transient ~]$ ps aux | grep 48245
> > _vmd 48245 98.5  2.3 526880 136956 ??  Rp 1:54PM   47:08.30 vmd:
> > obsdvmm.vm (vmd)
> >
> > load averages:  2.43,  2.36,
> > 2.26
> > transient.my.domain 18:29:10
> > 56 processes: 53 idle, 3 on
> > processor
> > up  4:35
> > CPU0 states:  3.8% user,  0.0% nice, 15.4% system,  0.6% interrupt, 80.2%
> > idle
> > CPU1 states: 15.3% user,  0.0% nice, 49.3% system,  0.0% interrupt, 35.4%
> > idle
> > CPU2 states:  6.6% user,  0.0% nice, 24.3% system,  0.0% interrupt, 69.1%
> > idle
> > CPU3 states:  4.7% user,  0.0% nice, 18.1% system,  0.0% interrupt, 77.2%
> > idle
> > Memory: Real: 1401M/2183M act/tot Free: 3443M Cache: 536M Swap: 0K/4007M
> >
> >   PID USERNAME PRI NICE  SIZE   RES STATE WAIT      TIME    CPU COMMAND
> > 48245 _vmd  430  515M  134M onprocthrslee  47:37 98.00% vmd
> >  7234 axon   20  737M  715M sleep poll 33:18 19.14%
> firefox
> > 42481 _x11  550   16M   42M onproc- 2:53  9.96% Xorg
> >  2769 _vmd  290  514M   62M idle  thrslee   2:29  9.62% vmd
> > 13503 axon  100  512K 2496K sleep nanosle   0:52  1.12% wmapm
> > 76008 axon  100  524K 2588K sleep nanosle   0:10  0.73% wmmon
> > 57059 axon  100  248M  258M sleep nanosle   0:08  0.34% wmnet
> > 23088 axon   20  580K 2532K sleep select0:10  0.00%
> > wmclockmon
> > 64041 axon   20 3752K   10M sleep poll  0:05  0.00%
> wmaker
> > 16919 axon   20 7484K   20M sleep poll  0:04  0.00%
> > xfce4-terminal
> > 1 root  100  408K  460K idle  wait  0:01  0.00% init
> > 80619 _ntp   2  -20  880K 2480K sleep poll  0:01  0.00% ntpd
> >  9014 _pflogd40  672K  408K sleep bpf   0:01  0.00%
> pflogd
> > 58764 root  100 2052K 7524K idle  wait  0:01  0.00% slim
> >
> >
> >
> > On Mon, Oct 24, 2016 at 10:47 PM, Mike Larkin 
> wrote:
> >
> > > On Mon, Oct 24, 2016 at 07:36:48PM -0500, Ax0n wrote:
> > > > I suppose I'll ask here since it seems on-topic for this thread. Let
> me
> > > know if I shouldn't do this in the future.

Re: OpenBSD 6-stable vmd

2016-10-25 Thread Mike Larkin
On Mon, Oct 24, 2016 at 11:07:32PM -0500, Ax0n wrote:
> Thanks for the update, ml.
> 
> The VM Just did it again in the middle of backspacing over uname -a...
> 
> $ uname -a
> OpenBSD vmmbsd.labs.h-i-r.net 6.0 GENERIC.MP#0 amd64
> $ un   <-- frozen
> 
> Spinning like mad.
> 

Bizarre. If it were I, I'd next try killing all vmd processes and
running vmd -dvvv from a root console window and look for what it dumps
out when it hangs like this (if anything).

You'll see a fair number of "vmd: unknown exit code 1" (and 48), those
are harmless and can be ignored, as can anything that vmd dumps out
before the vm gets stuck like this.

If you capture this and post somewhere I can take a look. You may need to
extract the content out of /var/log/messages if a bunch gets printed.

If this fails to diagnose what happens, I can work with you off-list on
how to debug further.

-ml

> [axon@transient ~]$ vmctl status
>ID   PID VCPUSMAXMEMCURMEM  TTY NAME
> 2  2769 1 512MB 149MB   /dev/ttyp3 -c
> 1 48245 1 512MB 211MB   /dev/ttyp0 obsdvmm.vm
> [axon@transient ~]$ ps aux | grep 48245
> _vmd 48245 98.5  2.3 526880 136956 ??  Rp 1:54PM   47:08.30 vmd:
> obsdvmm.vm (vmd)
> 
> load averages:  2.43,  2.36,
> 2.26
> transient.my.domain 18:29:10
> 56 processes: 53 idle, 3 on
> processor
> up  4:35
> CPU0 states:  3.8% user,  0.0% nice, 15.4% system,  0.6% interrupt, 80.2%
> idle
> CPU1 states: 15.3% user,  0.0% nice, 49.3% system,  0.0% interrupt, 35.4%
> idle
> CPU2 states:  6.6% user,  0.0% nice, 24.3% system,  0.0% interrupt, 69.1%
> idle
> CPU3 states:  4.7% user,  0.0% nice, 18.1% system,  0.0% interrupt, 77.2%
> idle
> Memory: Real: 1401M/2183M act/tot Free: 3443M Cache: 536M Swap: 0K/4007M
> 
>   PID USERNAME PRI NICE  SIZE   RES STATE WAIT      TIME    CPU COMMAND
> 48245 _vmd  430  515M  134M onprocthrslee  47:37 98.00% vmd
>  7234 axon   20  737M  715M sleep poll 33:18 19.14% firefox
> 42481 _x11  550   16M   42M onproc- 2:53  9.96% Xorg
>  2769 _vmd  290  514M   62M idle  thrslee   2:29  9.62% vmd
> 13503 axon  100  512K 2496K sleep nanosle   0:52  1.12% wmapm
> 76008 axon  100  524K 2588K sleep nanosle   0:10  0.73% wmmon
> 57059 axon  100  248M  258M sleep nanosle   0:08  0.34% wmnet
> 23088 axon   20  580K 2532K sleep select0:10  0.00%
> wmclockmon
> 64041 axon   20 3752K   10M sleep poll  0:05  0.00% wmaker
> 16919 axon   20 7484K   20M sleep poll  0:04  0.00%
> xfce4-terminal
> 1 root  100  408K  460K idle  wait  0:01  0.00% init
> 80619 _ntp   2  -20  880K 2480K sleep poll  0:01  0.00% ntpd
>  9014 _pflogd40  672K  408K sleep bpf   0:01  0.00% pflogd
> 58764 root  100 2052K 7524K idle  wait  0:01  0.00% slim
> 
> 
> 
> On Mon, Oct 24, 2016 at 10:47 PM, Mike Larkin  wrote:
> 
> > On Mon, Oct 24, 2016 at 07:36:48PM -0500, Ax0n wrote:
> > > I suppose I'll ask here since it seems on-topic for this thread. Let me
> > > know if I shouldn't do this in the future. I've been testing vmm for
> > > exactly a week on two different snapshots. I have two VMs: One running
> > the
> > > same snapshot (amd64, Oct 22) I'm running on the host vm, the other
> > running
> > > amd64 6.0-RELEASE with no patches of any kind.
> > >
> > > For some reason, the vm running a recent snapshot locks up occasionally
> > > while I'm interacting with it via cu or occasionally ssh. Should I
> > expect a
> > > ddb prompt and/or kernel panic messages via the virtualized serial
> > console?
> > > Is there some kind of "break" command on the console to get into ddb when
> > > it appears to hang? A "No" or "Not yet" on those two questions would
> > > suffice if not possible. I know this isn't supported, and appreciate the
> > > hard work.
> > >
> > > Host dmesg:
> > > http://stuff.h-i-r.net/2016-10-22.Aspire5733Z.dmesg.txt
> > >
> > > VM (Oct 22 Snapshot) dmesg:
> > > http://stuff.h-i-r.net/2016-10-22.vmm.dmesg.txt
> > >
> >
> > These look fine. Not sure why it would have locked up. Is the associated
> > vmd
> > process idle, or spinning like mad?
> >
> > -ml
> >
> > > Second:
> > > I'm using vm.conf (contents below) to start the aforementioned snapshot
> > vm
> > > at boot.
> > > There's a "disable" line inside vm.conf to keep one VM from spinning up
> > > with vmd.  Is there a way to start this one with vmctl aside from passing
> > > all the options to vmctl as below?
> > >
> > > doas vmctl start -c -d OBSD-RELa -i 1 -k /home/axon/obsd/amd64/bsd -m
> > 512M
> > >
> > > I've tried stuff along the lines of:
> > > doas vmctl start OBSD-RELa.vm
> > >
> > > vm "obsdvmm.vm" {
> > > memory 512M
> > > kernel "bsd"
> > > disk "/home/axon/vmm/OBSD6"
> > > interface tap
> > > }
> > > vm "OBSD-RELa.vm" {
> > > memory 512M
> > > kernel "/

Re: OpenBSD 6-stable vmd

2016-10-24 Thread Ax0n
Thanks for the update, ml.

The VM Just did it again in the middle of backspacing over uname -a...

$ uname -a
OpenBSD vmmbsd.labs.h-i-r.net 6.0 GENERIC.MP#0 amd64
$ un   <-- frozen

Spinning like mad.

[axon@transient ~]$ vmctl status
   ID   PID VCPUSMAXMEMCURMEM  TTY NAME
2  2769 1 512MB 149MB   /dev/ttyp3 -c
1 48245 1 512MB 211MB   /dev/ttyp0 obsdvmm.vm
[axon@transient ~]$ ps aux | grep 48245
_vmd 48245 98.5  2.3 526880 136956 ??  Rp 1:54PM   47:08.30 vmd:
obsdvmm.vm (vmd)

load averages:  2.43,  2.36,
2.26
transient.my.domain 18:29:10
56 processes: 53 idle, 3 on
processor
up  4:35
CPU0 states:  3.8% user,  0.0% nice, 15.4% system,  0.6% interrupt, 80.2%
idle
CPU1 states: 15.3% user,  0.0% nice, 49.3% system,  0.0% interrupt, 35.4%
idle
CPU2 states:  6.6% user,  0.0% nice, 24.3% system,  0.0% interrupt, 69.1%
idle
CPU3 states:  4.7% user,  0.0% nice, 18.1% system,  0.0% interrupt, 77.2%
idle
Memory: Real: 1401M/2183M act/tot Free: 3443M Cache: 536M Swap: 0K/4007M

  PID USERNAME PRI NICE  SIZE   RES STATE WAIT      TIME    CPU COMMAND
48245 _vmd  430  515M  134M onprocthrslee  47:37 98.00% vmd
 7234 axon   20  737M  715M sleep poll 33:18 19.14% firefox
42481 _x11  550   16M   42M onproc- 2:53  9.96% Xorg
 2769 _vmd  290  514M   62M idle  thrslee   2:29  9.62% vmd
13503 axon  100  512K 2496K sleep nanosle   0:52  1.12% wmapm
76008 axon  100  524K 2588K sleep nanosle   0:10  0.73% wmmon
57059 axon  100  248M  258M sleep nanosle   0:08  0.34% wmnet
23088 axon   20  580K 2532K sleep select0:10  0.00%
wmclockmon
64041 axon   20 3752K   10M sleep poll  0:05  0.00% wmaker
16919 axon   20 7484K   20M sleep poll  0:04  0.00%
xfce4-terminal
1 root  100  408K  460K idle  wait  0:01  0.00% init
80619 _ntp   2  -20  880K 2480K sleep poll  0:01  0.00% ntpd
 9014 _pflogd40  672K  408K sleep bpf   0:01  0.00% pflogd
58764 root  100 2052K 7524K idle  wait  0:01  0.00% slim



On Mon, Oct 24, 2016 at 10:47 PM, Mike Larkin  wrote:

> On Mon, Oct 24, 2016 at 07:36:48PM -0500, Ax0n wrote:
> > I suppose I'll ask here since it seems on-topic for this thread. Let me
> > know if I shouldn't do this in the future. I've been testing vmm for
> > exactly a week on two different snapshots. I have two VMs: One running
> the
> > same snapshot (amd64, Oct 22) I'm running on the host vm, the other
> running
> > amd64 6.0-RELEASE with no patches of any kind.
> >
> > For some reason, the vm running a recent snapshot locks up occasionally
> > while I'm interacting with it via cu or occasionally ssh. Should I
> expect a
> > ddb prompt and/or kernel panic messages via the virtualized serial
> console?
> > Is there some kind of "break" command on the console to get into ddb when
> > it appears to hang? A "No" or "Not yet" on those two questions would
> > suffice if not possible. I know this isn't supported, and appreciate the
> > hard work.
> >
> > Host dmesg:
> > http://stuff.h-i-r.net/2016-10-22.Aspire5733Z.dmesg.txt
> >
> > VM (Oct 22 Snapshot) dmesg:
> > http://stuff.h-i-r.net/2016-10-22.vmm.dmesg.txt
> >
>
> These look fine. Not sure why it would have locked up. Is the associated
> vmd
> process idle, or spinning like mad?
>
> -ml
>
> > Second:
> > I'm using vm.conf (contents below) to start the aforementioned snapshot
> vm
> > at boot.
> > There's a "disable" line inside vm.conf to keep one VM from spinning up
> > with vmd.  Is there a way to start this one with vmctl aside from passing
> > all the options to vmctl as below?
> >
> > doas vmctl start -c -d OBSD-RELa -i 1 -k /home/axon/obsd/amd64/bsd -m
> 512M
> >
> > I've tried stuff along the lines of:
> > doas vmctl start OBSD-RELa.vm
> >
> > vm "obsdvmm.vm" {
> > memory 512M
> > kernel "bsd"
> > disk "/home/axon/vmm/OBSD6"
> > interface tap
> > }
> > vm "OBSD-RELa.vm" {
> > memory 512M
> > kernel "/home/axon/obsd/amd64/bsd"
> > disk "/home/axon/vmm/OBSD-RELa"
> > interface tap
> > disable
> > }
> >
>
> I think this is being worked on, but not done yet.
>
> -ml



Re: OpenBSD 6-stable vmd

2016-10-24 Thread Mike Larkin
On Mon, Oct 24, 2016 at 07:36:48PM -0500, Ax0n wrote:
> I suppose I'll ask here since it seems on-topic for this thread. Let me
> know if I shouldn't do this in the future. I've been testing vmm for
> exactly a week on two different snapshots. I have two VMs: One running the
> same snapshot (amd64, Oct 22) that I'm running on the host, the other running
> amd64 6.0-RELEASE with no patches of any kind.
> 
> For some reason, the vm running a recent snapshot locks up occasionally
> while I'm interacting with it via cu or occasionally ssh. Should I expect a
> ddb prompt and/or kernel panic messages via the virtualized serial console?
> Is there some kind of "break" command on the console to get into ddb when
> it appears to hang? A "No" or "Not yet" on those two questions would
> suffice if not possible. I know this isn't supported, and appreciate the
> hard work.
> 
> Host dmesg:
> http://stuff.h-i-r.net/2016-10-22.Aspire5733Z.dmesg.txt
> 
> VM (Oct 22 Snapshot) dmesg:
> http://stuff.h-i-r.net/2016-10-22.vmm.dmesg.txt
> 

These look fine. Not sure why it would have locked up. Is the associated vmd
process idle, or spinning like mad?

-ml

> Second:
> I'm using vm.conf (contents below) to start the aforementioned snapshot vm
> at boot.
> There's a "disable" line inside vm.conf to keep one VM from spinning up
> with vmd.  Is there a way to start this one with vmctl aside from passing
> all the options to vmctl as below?
> 
> doas vmctl start -c -d OBSD-RELa -i 1 -k /home/axon/obsd/amd64/bsd -m 512M
> 
> I've tried stuff along the lines of:
> doas vmctl start OBSD-RELa.vm
> 
> vm "obsdvmm.vm" {
> memory 512M
> kernel "bsd"
> disk "/home/axon/vmm/OBSD6"
> interface tap
> }
> vm "OBSD-RELa.vm" {
> memory 512M
> kernel "/home/axon/obsd/amd64/bsd"
> disk "/home/axon/vmm/OBSD-RELa"
> interface tap
> disable
> }
> 

I think this is being worked on, but not done yet.

-ml



Re: OpenBSD 6-stable vmd

2016-10-24 Thread Ax0n
I suppose I'll ask here since it seems on-topic for this thread. Let me
know if I shouldn't do this in the future. I've been testing vmm for
exactly a week on two different snapshots. I have two VMs: One running the
same snapshot (amd64, Oct 22) that I'm running on the host, the other running
amd64 6.0-RELEASE with no patches of any kind.

For some reason, the vm running a recent snapshot locks up occasionally
while I'm interacting with it via cu or occasionally ssh. Should I expect a
ddb prompt and/or kernel panic messages via the virtualized serial console?
Is there some kind of "break" command on the console to get into ddb when
it appears to hang? A "No" or "Not yet" on those two questions would
suffice if not possible. I know this isn't supported, and appreciate the
hard work.

Host dmesg:
http://stuff.h-i-r.net/2016-10-22.Aspire5733Z.dmesg.txt

VM (Oct 22 Snapshot) dmesg:
http://stuff.h-i-r.net/2016-10-22.vmm.dmesg.txt

Second:
I'm using vm.conf (contents below) to start the aforementioned snapshot vm
at boot.
There's a "disable" line inside vm.conf to keep one VM from spinning up
with vmd.  Is there a way to start this one with vmctl aside from passing
all the options to vmctl as below?

doas vmctl start -c -d OBSD-RELa -i 1 -k /home/axon/obsd/amd64/bsd -m 512M

I've tried stuff along the lines of:
doas vmctl start OBSD-RELa.vm

vm "obsdvmm.vm" {
memory 512M
kernel "bsd"
disk "/home/axon/vmm/OBSD6"
interface tap
}
vm "OBSD-RELa.vm" {
memory 512M
kernel "/home/axon/obsd/amd64/bsd"
disk "/home/axon/vmm/OBSD-RELa"
interface tap
disable
}
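
As a stopgap, the full-flag invocation above can be wrapped in a small shell
function so it doesn't have to be retyped. The function name is my own, and
this sketch prints the command instead of executing it (drop the echo to run
it for real; it needs doas and the paths above).

```shell
# Hypothetical wrapper (function name is mine) replaying the full vmctl
# invocation for the disabled vm.conf entry.  Printed rather than executed.
start_obsd_rela() {
    echo "doas vmctl start -c -d OBSD-RELa -i 1 -k /home/axon/obsd/amd64/bsd -m 512M"
}
start_obsd_rela
```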





On Mon, Oct 24, 2016 at 4:58 PM, R0me0 ***  wrote:

> Hey @Peter, once again thanks so much for the heads-up :)
>
> For those interested:
>
> I'm running OpenBSD-current under VMware Workstation 12 (you just need to set
> the processor properties to virtualize Intel VT-x/EPT or AMD-V/RVI)
>
> And have fun testing vmd
>
> :)
>
> Thank you
>
>
>
>
>
> 2016-10-22 8:43 GMT-02:00 R0me0 *** :
>
> > Hey Peter ,
> >
> > Thank you for the advice, I'll get current
> >
> > Cheers dude !
> >
> > (:
> >
> >
> > 2016-10-22 6:44 GMT-02:00 Peter Hessler :
> >
> >> This isn't expected to work at all.  That is why it was disabled.
> >> You'll need to upgrade the Hypervisor to -current, or to 6.1 when it is
> >> released.
> >>
> >>
> >>
> >> On 2016 Oct 22 (Sat) at 00:06:08 -0200 (-0200), R0me0 *** wrote:
> >> :Hello misc.
> >> :
> >> :For testing purposes
> >> :
> >> :I compiled kernel with vmd support.
> >> :
> >> :After starting the vm -> vmctl start "myvm" -m 512M -i 1 -d disk.img -k /bsd.rd
> >> :
> >> :I created a bridge and added vether0 and tap0
> >> :
> >> :In the vm I have configured an ip 192.168.1.30
> >> :
> >> :If I perform a ping from the OpenBSD hypervisor -> ping 192.168.1.30,
> >> :all packets are sent and received "on the fly"
> >> :
> >> :But if I perform the same step from "myvm", there is no packet loss,
> >> :but the packets take a long time to be sent and replied to
> >> :
> >> :I am performing these tests on Linux running VMware Workstation 12.
> >> :
> >> :Is this behavior expected?
> >> :
> >> :Any directions will be appreciated.
> >> :
> >> :Thank you
> >> :
> >> :myvm dmesg:
> >> :
> >> :OpenBSD 6.0 (RAMDISK_CD) #2100: Tue Jul 26 13:05:59 MDT 2016
> >> :   dera...@amd64.openbsd.org:/usr/src/sys/arch/amd64/compile/
> RAMDISK_CD
> >> :RTC BIOS diagnostic error 20
> >> :real mem = 520093696 (496MB)
> >> :avail mem = 502673408 (479MB)
> >> :mainbus0 at root
> >> :bios0 at mainbus0
> >> :acpi at bios0 not configured
> >> :cpu0 at mainbus0: (uniprocessor)
> >> :cpu0: Intel(R) Core(TM) i7-4810MQ CPU @ 2.80GHz, 14335.74 MHz
> >> :cpu0:
> >> :FPU,VME,DE,PSE,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,
> >> PAT,PSE36,CFLUSH,DS,MMX,FXSR,SSE,SSE2,SS,SSE3,PCLMUL,SSSE3,
> >> FMA3,CX16,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,DEADLINE,AES,AVX,F1
> >> :6C,RDRAND,HV,ITSC,FSGSBASE,BMI1,AVX2,SMEP,BMI2,ERMS,INVPCID,ARAT
> >> :pvbus0 at mainbus0: OpenBSD
> >> :pci0 at mainbus0 bus 0
> >> :pchb0 at pci0 dev 0 function 0 "OpenBSD VMM PCI Host Bridge" rev 0x00
> >> :virtio0 at pci0 dev 1 function 0 "Qumranet Virtio RNG" rev 0x00
> >> :viornd0 at virtio0
> >> :virtio0: irq 3
> >> :virtio1 at pci0 dev 2 function 0 "Qumranet Virtio Storage" rev 0x00
> >> :vioblk0 at virtio1
> >> :scsibus0 at vioblk0: 2 targets
> >> :sd0 at scsibus0 targ 0 lun 0:  SCSI3 0/direct
> >> fixed
> >> :sd0: 5120MB, 512 bytes/sector, 10485760 sectors
> >> :virtio1: irq 5
> >> :virtio2 at pci0 dev 3 function 0 "Qumranet Virtio Network" rev 0x00
> >> :vio0 at virtio2: address fe:e1:ba:d0:d0:94
> >> :virtio2: irq 9
> >> :isa0 at mainbus0
> >> :com0 at isa0 port 0x3f8/8 irq 4: ns8250, no fifo
> >> :com0: console
> >> :softraid0 at root
> >> :scsibus1 at softraid0: 256 targets
> >> :root on rd0a swap on rd0b dump on rd0b
> >> :WARNING: invalid time in clock chip
> >> :WARNING: CHECK AND RESET THE DATE!
> >> :
> >> :openbsd hypervis

Re: OpenBSD 6-stable vmd

2016-10-24 Thread R0me0 ***
Hey @Peter, once again thanks so much for the heads-up :)

For those interested:

I'm running OpenBSD-current under VMware Workstation 12 (you just need to set
the processor properties to virtualize Intel VT-x/EPT or AMD-V/RVI)

And have fun testing vmd

:)

Thank you
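The Workstation GUI checkbox mentioned above corresponds to a one-line setting
in the guest's .vmx file -- a sketch, assuming you edit the file while the VM
is powered off:

```shell
# Equivalent of the "Virtualize Intel VT-x/EPT or AMD-V/RVI" checkbox
# in VMware Workstation's processor settings (edit with the guest
# powered off, then restart it):
vhv.enable = "TRUE"
```

Without this, vmm(4) will not see VMX/SVM inside the guest and vmd cannot run.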





2016-10-22 8:43 GMT-02:00 R0me0 *** :

> Hey Peter ,
>
> Thank you for the advice, I'll get current
>
> Cheers dude !
>
> (:
>
>
> 2016-10-22 6:44 GMT-02:00 Peter Hessler :
>
>> This isn't expected to work at all.  That is why it was disabled.
>> You'll need to upgrade the Hypervisor to -current, or to 6.1 when it is
>> released.
>>
>>
>>
>> On 2016 Oct 22 (Sat) at 00:06:08 -0200 (-0200), R0me0 *** wrote:
>> :Hello misc.
>> :
>> :For testing purposes
>> :
>> :I compiled kernel with vmd support.
>> :
>> :After start the vm -> vmctl start "myvm" -m 512M -i 1 -d disk.img -k
>> /bsd.rd
>> :
>> :I created a bridge and added vether0 and tap0
>> :
>> :In the vm I have configured an ip 192.168.1.30
>> :
>> :If I perform ping from OpenBSD Hypervisor -> ping 192.168.1.30 all
>> packages
>> :are send and received "on the fly"
>> :
>> :But if I perform the same step from "myvm", there is no packet loss but
>> the
>> :packets take so long to be send and consecutively replied
>> :
>> :I am performing this tests on Linux  running Vmware Workstation 12 .
>> :
>> :Is this behavior expected ?
>> :
>> :Any directions will be appreciated.
>> :
>> :Thank you
>> :
>> :myvm dmesg:
>> :
>> :OpenBSD 6.0 (RAMDISK_CD) #2100: Tue Jul 26 13:05:59 MDT 2016
>> :   dera...@amd64.openbsd.org:/usr/src/sys/arch/amd64/compile/RAMDISK_CD
>> :RTC BIOS diagnostic error 20
>> :real mem = 520093696 (496MB)
>> :avail mem = 502673408 (479MB)
>> :mainbus0 at root
>> :bios0 at mainbus0
>> :acpi at bios0 not configured
>> :cpu0 at mainbus0: (uniprocessor)
>> :cpu0: Intel(R) Core(TM) i7-4810MQ CPU @ 2.80GHz, 14335.74 MHz
>> :cpu0:
>> :FPU,VME,DE,PSE,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,
>> PAT,PSE36,CFLUSH,DS,MMX,FXSR,SSE,SSE2,SS,SSE3,PCLMUL,SSSE3,
>> FMA3,CX16,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,DEADLINE,AES,AVX,F1
>> :6C,RDRAND,HV,ITSC,FSGSBASE,BMI1,AVX2,SMEP,BMI2,ERMS,INVPCID,ARAT
>> :pvbus0 at mainbus0: OpenBSD
>> :pci0 at mainbus0 bus 0
>> :pchb0 at pci0 dev 0 function 0 "OpenBSD VMM PCI Host Bridge" rev 0x00
>> :virtio0 at pci0 dev 1 function 0 "Qumranet Virtio RNG" rev 0x00
>> :viornd0 at virtio0
>> :virtio0: irq 3
>> :virtio1 at pci0 dev 2 function 0 "Qumranet Virtio Storage" rev 0x00
>> :vioblk0 at virtio1
>> :scsibus0 at vioblk0: 2 targets
>> :sd0 at scsibus0 targ 0 lun 0:  SCSI3 0/direct
>> fixed
>> :sd0: 5120MB, 512 bytes/sector, 10485760 sectors
>> :virtio1: irq 5
>> :virtio2 at pci0 dev 3 function 0 "Qumranet Virtio Network" rev 0x00
>> :vio0 at virtio2: address fe:e1:ba:d0:d0:94
>> :virtio2: irq 9
>> :isa0 at mainbus0
>> :com0 at isa0 port 0x3f8/8 irq 4: ns8250, no fifo
>> :com0: console
>> :softraid0 at root
>> :scsibus1 at softraid0: 256 targets
>> :root on rd0a swap on rd0b dump on rd0b
>> :WARNING: invalid time in clock chip
>> :WARNING: CHECK AND RESET THE DATE!
>> :
>> :openbsd hypervisor :
>> :
>> :
>> :OpenBSD 6.0-stable (GENERIC.MP) #0: Fri Oct 21 20:07:42 BRST 2016
>> :   root@puffysor.localdomain:/usr/src/sys/arch/amd64/compile/GENERIC.MP
>> :real mem = 2130640896 (2031MB)
>> :avail mem = 2061631488 (1966MB)
>> :mpath0 at root
>> :scsibus0 at mpath0: 256 targets
>> :mainbus0 at root
>> :bios0 at mainbus0: SMBIOS rev. 2.7 @ 0xe0010 (242 entries)
>> :bios0: vendor Phoenix Technologies LTD version "6.00" date 07/02/2015
>> :bios0: VMware, Inc. VMware Virtual Platform
>> :acpi0 at bios0: rev 2
>> :acpi0: sleep states S0 S1 S4 S5
>> :acpi0: tables DSDT FACP BOOT APIC MCFG SRAT HPET WAET
>> :acpi0: wakeup devices PCI0(S3) USB_(S1) P2P0(S3) S1F0(S3) S2F0(S3)
>> S3F0(S3)
>> :S4F0(S3) S5F0(S3) S6F0(S3) S7F0(S3) S8F0(S3) S9F0(S3) S10F(S3) S11F(S3)
>> :S12F(S3) S13F(S3) [...]
>> :acpitimer0 at acpi0: 3579545 Hz, 24 bits
>> :acpimadt0 at acpi0 addr 0xfee0: PC-AT compat
>> :cpu0 at mainbus0: apid 0 (boot processor)
>> :cpu0: Intel(R) Core(TM) i7-4810MQ CPU @ 2.80GHz, 3800.69 MHz
>> :cpu0:
>> :FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CM
>> OV,PAT,PSE36,CFLUSH,DS,MMX,FXSR,SSE,SSE2,SS,HTT,SSE3,PCLMUL,
>> VMX,SSSE3,FMA3,CX16,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,DEADLIN
>> :E,AES,XSAVE,AVX,F16C,RDRAND,HV,NXE,PAGE1GB,LONG,LAHF,ABM,PE
>> RF,ITSC,FSGSBASE,BMI1,AVX2,SMEP,BMI2,ERMS,INVPCID,SENSOR,ARAT
>> :
>> :cpu0: 256KB 64b/line 8-way L2 cache
>> :cpu0: smt 0, core 0, package 0
>> :mtrr: Pentium Pro MTRR support, 8 var ranges, 88 fixed ranges
>> :cpu0: apic clock running at 65MHz
>> :cpu1 at mainbus0: apid 1 (application processor)
>> :cpu1: Intel(R) Core(TM) i7-4810MQ CPU @ 2.80GHz, 3810.50 MHz
>> :cpu1:
>> :FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CM
>> OV,PAT,PSE36,CFLUSH,DS,MMX,FXSR,SSE,SSE2,SS,HTT,SSE3,PCLMUL,
>> VMX,SSSE3,FMA3,CX16,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,DEADLIN
>> :E,AES,XSAVE,AVX,F16C,RDRAND,HV,NXE,PAGE1GB,LO

Re: OpenBSD 6-stable vmd

2016-10-22 Thread R0me0 ***
Hey Peter,

Thank you for the advice, I'll get current

Cheers, dude!

(:


2016-10-22 6:44 GMT-02:00 Peter Hessler :

> This isn't expected to work at all.  That is why it was disabled.
> You'll need to upgrade the Hypervisor to -current, or to 6.1 when it is
> released.
>
>
>
> On 2016 Oct 22 (Sat) at 00:06:08 -0200 (-0200), R0me0 *** wrote:
> :Hello misc.
> :
> :For testing purposes
> :
> :I compiled kernel with vmd support.
> :
> :After start the vm -> vmctl start "myvm" -m 512M -i 1 -d disk.img -k
> /bsd.rd
> :
> :I created a bridge and added vether0 and tap0
> :
> :In the vm I have configured an ip 192.168.1.30
> :
> :If I perform ping from OpenBSD Hypervisor -> ping 192.168.1.30 all
> packages
> :are send and received "on the fly"
> :
> :But if I perform the same step from "myvm", there is no packet loss but
> the
> :packets take so long to be send and consecutively replied
> :
> :I am performing this tests on Linux  running Vmware Workstation 12 .
> :
> :Is this behavior expected ?
> :
> :Any directions will be appreciated.
> :
> :Thank you
> :
> :myvm dmesg:
> :
> :OpenBSD 6.0 (RAMDISK_CD) #2100: Tue Jul 26 13:05:59 MDT 2016
> :   dera...@amd64.openbsd.org:/usr/src/sys/arch/amd64/compile/RAMDISK_CD
> :RTC BIOS diagnostic error 20
> :real mem = 520093696 (496MB)
> :avail mem = 502673408 (479MB)
> :mainbus0 at root
> :bios0 at mainbus0
> :acpi at bios0 not configured
> :cpu0 at mainbus0: (uniprocessor)
> :cpu0: Intel(R) Core(TM) i7-4810MQ CPU @ 2.80GHz, 14335.74 MHz
> :cpu0:
> :FPU,VME,DE,PSE,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,
> CMOV,PAT,PSE36,CFLUSH,DS,MMX,FXSR,SSE,SSE2,SS,SSE3,PCLMUL,
> SSSE3,FMA3,CX16,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,DEADLINE,AES,AVX,F1
> :6C,RDRAND,HV,ITSC,FSGSBASE,BMI1,AVX2,SMEP,BMI2,ERMS,INVPCID,ARAT
> :pvbus0 at mainbus0: OpenBSD
> :pci0 at mainbus0 bus 0
> :pchb0 at pci0 dev 0 function 0 "OpenBSD VMM PCI Host Bridge" rev 0x00
> :virtio0 at pci0 dev 1 function 0 "Qumranet Virtio RNG" rev 0x00
> :viornd0 at virtio0
> :virtio0: irq 3
> :virtio1 at pci0 dev 2 function 0 "Qumranet Virtio Storage" rev 0x00
> :vioblk0 at virtio1
> :scsibus0 at vioblk0: 2 targets
> :sd0 at scsibus0 targ 0 lun 0:  SCSI3 0/direct
> fixed
> :sd0: 5120MB, 512 bytes/sector, 10485760 sectors
> :virtio1: irq 5
> :virtio2 at pci0 dev 3 function 0 "Qumranet Virtio Network" rev 0x00
> :vio0 at virtio2: address fe:e1:ba:d0:d0:94
> :virtio2: irq 9
> :isa0 at mainbus0
> :com0 at isa0 port 0x3f8/8 irq 4: ns8250, no fifo
> :com0: console
> :softraid0 at root
> :scsibus1 at softraid0: 256 targets
> :root on rd0a swap on rd0b dump on rd0b
> :WARNING: invalid time in clock chip
> :WARNING: CHECK AND RESET THE DATE!
> :
> :openbsd hypervisor :
> :
> :
> :OpenBSD 6.0-stable (GENERIC.MP) #0: Fri Oct 21 20:07:42 BRST 2016
> :   root@puffysor.localdomain:/usr/src/sys/arch/amd64/compile/GENERIC.MP
> :real mem = 2130640896 (2031MB)
> :avail mem = 2061631488 (1966MB)
> :mpath0 at root
> :scsibus0 at mpath0: 256 targets
> :mainbus0 at root
> :bios0 at mainbus0: SMBIOS rev. 2.7 @ 0xe0010 (242 entries)
> :bios0: vendor Phoenix Technologies LTD version "6.00" date 07/02/2015
> :bios0: VMware, Inc. VMware Virtual Platform
> :acpi0 at bios0: rev 2
> :acpi0: sleep states S0 S1 S4 S5
> :acpi0: tables DSDT FACP BOOT APIC MCFG SRAT HPET WAET
> :acpi0: wakeup devices PCI0(S3) USB_(S1) P2P0(S3) S1F0(S3) S2F0(S3)
> S3F0(S3)
> :S4F0(S3) S5F0(S3) S6F0(S3) S7F0(S3) S8F0(S3) S9F0(S3) S10F(S3) S11F(S3)
> :S12F(S3) S13F(S3) [...]
> :acpitimer0 at acpi0: 3579545 Hz, 24 bits
> :acpimadt0 at acpi0 addr 0xfee0: PC-AT compat
> :cpu0 at mainbus0: apid 0 (boot processor)
> :cpu0: Intel(R) Core(TM) i7-4810MQ CPU @ 2.80GHz, 3800.69 MHz
> :cpu0:
> :FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,
> CMOV,PAT,PSE36,CFLUSH,DS,MMX,FXSR,SSE,SSE2,SS,HTT,SSE3,
> PCLMUL,VMX,SSSE3,FMA3,CX16,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,DEADLIN
> :E,AES,XSAVE,AVX,F16C,RDRAND,HV,NXE,PAGE1GB,LONG,LAHF,ABM,
> PERF,ITSC,FSGSBASE,BMI1,AVX2,SMEP,BMI2,ERMS,INVPCID,SENSOR,ARAT
> :
> :cpu0: 256KB 64b/line 8-way L2 cache
> :cpu0: smt 0, core 0, package 0
> :mtrr: Pentium Pro MTRR support, 8 var ranges, 88 fixed ranges
> :cpu0: apic clock running at 65MHz
> :cpu1 at mainbus0: apid 1 (application processor)
> :cpu1: Intel(R) Core(TM) i7-4810MQ CPU @ 2.80GHz, 3810.50 MHz
> :cpu1:
> :FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,
> CMOV,PAT,PSE36,CFLUSH,DS,MMX,FXSR,SSE,SSE2,SS,HTT,SSE3,
> PCLMUL,VMX,SSSE3,FMA3,CX16,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,DEADLIN
> :E,AES,XSAVE,AVX,F16C,RDRAND,HV,NXE,PAGE1GB,LONG,LAHF,ABM,
> PERF,ITSC,FSGSBASE,BMI1,AVX2,SMEP,BMI2,ERMS,INVPCID,SENSOR,ARAT
> :
> :cpu1: 256KB 64b/line 8-way L2 cache
> :cpu1: smt 0, core 1, package 0
> :ioapic0 at mainbus0: apid 2 pa 0xfec0, version 11, 24 pins
> :acpimcfg0 at acpi0 addr 0xf000, bus 0-127
> :acpihpet0 at acpi0: 14318179 Hz
> :acpiprt0 at acpi0: bus 0 (PCI0)
> :acpicpu0 at acpi0: C1(@1 halt!)
> :acpicpu1 at acpi0: C1(@1 halt!)
> :"PNP0001" at acpi0 not configured
>

Re: OpenBSD 6-stable vmd

2016-10-22 Thread Peter Hessler
This isn't expected to work at all.  That is why it was disabled.
You'll need to upgrade the Hypervisor to -current, or to 6.1 when it is
released.



On 2016 Oct 22 (Sat) at 00:06:08 -0200 (-0200), R0me0 *** wrote:
:Hello misc.
:
:For testing purposes
:
:I compiled kernel with vmd support.
:
:After start the vm -> vmctl start "myvm" -m 512M -i 1 -d disk.img -k /bsd.rd
:
:I created a bridge and added vether0 and tap0
:
:In the vm I have configured an ip 192.168.1.30
:
:If I perform ping from OpenBSD Hypervisor -> ping 192.168.1.30 all packages
:are send and received "on the fly"
:
:But if I perform the same step from "myvm", there is no packet loss but the
:packets take so long to be send and consecutively replied
:
:I am performing this tests on Linux  running Vmware Workstation 12 .
:
:Is this behavior expected ?
:
:Any directions will be appreciated.
:
:Thank you
:
:myvm dmesg:
:
:OpenBSD 6.0 (RAMDISK_CD) #2100: Tue Jul 26 13:05:59 MDT 2016
:   dera...@amd64.openbsd.org:/usr/src/sys/arch/amd64/compile/RAMDISK_CD
:RTC BIOS diagnostic error 20
:real mem = 520093696 (496MB)
:avail mem = 502673408 (479MB)
:mainbus0 at root
:bios0 at mainbus0
:acpi at bios0 not configured
:cpu0 at mainbus0: (uniprocessor)
:cpu0: Intel(R) Core(TM) i7-4810MQ CPU @ 2.80GHz, 14335.74 MHz
:cpu0:
:FPU,VME,DE,PSE,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,DS,MMX,FXSR,SSE,SSE2,SS,SSE3,PCLMUL,SSSE3,FMA3,CX16,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,DEADLINE,AES,AVX,F1
:6C,RDRAND,HV,ITSC,FSGSBASE,BMI1,AVX2,SMEP,BMI2,ERMS,INVPCID,ARAT
:pvbus0 at mainbus0: OpenBSD
:pci0 at mainbus0 bus 0
:pchb0 at pci0 dev 0 function 0 "OpenBSD VMM PCI Host Bridge" rev 0x00
:virtio0 at pci0 dev 1 function 0 "Qumranet Virtio RNG" rev 0x00
:viornd0 at virtio0
:virtio0: irq 3
:virtio1 at pci0 dev 2 function 0 "Qumranet Virtio Storage" rev 0x00
:vioblk0 at virtio1
:scsibus0 at vioblk0: 2 targets
:sd0 at scsibus0 targ 0 lun 0:  SCSI3 0/direct fixed
:sd0: 5120MB, 512 bytes/sector, 10485760 sectors
:virtio1: irq 5
:virtio2 at pci0 dev 3 function 0 "Qumranet Virtio Network" rev 0x00
:vio0 at virtio2: address fe:e1:ba:d0:d0:94
:virtio2: irq 9
:isa0 at mainbus0
:com0 at isa0 port 0x3f8/8 irq 4: ns8250, no fifo
:com0: console
:softraid0 at root
:scsibus1 at softraid0: 256 targets
:root on rd0a swap on rd0b dump on rd0b
:WARNING: invalid time in clock chip
:WARNING: CHECK AND RESET THE DATE!
:
:openbsd hypervisor :
:
:
:OpenBSD 6.0-stable (GENERIC.MP) #0: Fri Oct 21 20:07:42 BRST 2016
:   root@puffysor.localdomain:/usr/src/sys/arch/amd64/compile/GENERIC.MP
:real mem = 2130640896 (2031MB)
:avail mem = 2061631488 (1966MB)
:mpath0 at root
:scsibus0 at mpath0: 256 targets
:mainbus0 at root
:bios0 at mainbus0: SMBIOS rev. 2.7 @ 0xe0010 (242 entries)
:bios0: vendor Phoenix Technologies LTD version "6.00" date 07/02/2015
:bios0: VMware, Inc. VMware Virtual Platform
:acpi0 at bios0: rev 2
:acpi0: sleep states S0 S1 S4 S5
:acpi0: tables DSDT FACP BOOT APIC MCFG SRAT HPET WAET
:acpi0: wakeup devices PCI0(S3) USB_(S1) P2P0(S3) S1F0(S3) S2F0(S3) S3F0(S3)
:S4F0(S3) S5F0(S3) S6F0(S3) S7F0(S3) S8F0(S3) S9F0(S3) S10F(S3) S11F(S3)
:S12F(S3) S13F(S3) [...]
:acpitimer0 at acpi0: 3579545 Hz, 24 bits
:acpimadt0 at acpi0 addr 0xfee0: PC-AT compat
:cpu0 at mainbus0: apid 0 (boot processor)
:cpu0: Intel(R) Core(TM) i7-4810MQ CPU @ 2.80GHz, 3800.69 MHz
:cpu0:
:FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,DS,MMX,FXSR,SSE,SSE2,SS,HTT,SSE3,PCLMUL,VMX,SSSE3,FMA3,CX16,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,DEADLIN
:E,AES,XSAVE,AVX,F16C,RDRAND,HV,NXE,PAGE1GB,LONG,LAHF,ABM,PERF,ITSC,FSGSBASE,BMI1,AVX2,SMEP,BMI2,ERMS,INVPCID,SENSOR,ARAT
:
:cpu0: 256KB 64b/line 8-way L2 cache
:cpu0: smt 0, core 0, package 0
:mtrr: Pentium Pro MTRR support, 8 var ranges, 88 fixed ranges
:cpu0: apic clock running at 65MHz
:cpu1 at mainbus0: apid 1 (application processor)
:cpu1: Intel(R) Core(TM) i7-4810MQ CPU @ 2.80GHz, 3810.50 MHz
:cpu1:
:FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,DS,MMX,FXSR,SSE,SSE2,SS,HTT,SSE3,PCLMUL,VMX,SSSE3,FMA3,CX16,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,DEADLIN
:E,AES,XSAVE,AVX,F16C,RDRAND,HV,NXE,PAGE1GB,LONG,LAHF,ABM,PERF,ITSC,FSGSBASE,BMI1,AVX2,SMEP,BMI2,ERMS,INVPCID,SENSOR,ARAT
:
:cpu1: 256KB 64b/line 8-way L2 cache
:cpu1: smt 0, core 1, package 0
:ioapic0 at mainbus0: apid 2 pa 0xfec0, version 11, 24 pins
:acpimcfg0 at acpi0 addr 0xf000, bus 0-127
:acpihpet0 at acpi0: 14318179 Hz
:acpiprt0 at acpi0: bus 0 (PCI0)
:acpicpu0 at acpi0: C1(@1 halt!)
:acpicpu1 at acpi0: C1(@1 halt!)
:"PNP0001" at acpi0 not configured
:"PNP0303" at acpi0 not configured
:"VMW0003" at acpi0 not configured
:"PNP0A05" at acpi0 not configured
:acpiac0 at acpi0: AC unit online
:pvbus0 at mainbus0: VMware
:vmt0 at pvbus0
:pci0 at mainbus0 bus 0
:pchb0 at pci0 dev 0 function 0 "Intel 82443BX AGP" rev 0x01
:ppb0 at pci0 dev 1 function 0 "Intel 82443BX AGP" rev 0x01
:pci1 at ppb0 bus 1
:pcib0 at pci0 dev 7 function 0 "Intel 8

OpenBSD 6-stable vmd

2016-10-21 Thread R0me0 ***
Hello misc.

For testing purposes, I compiled a kernel with vmd support.

After starting the VM -> vmctl start "myvm" -m 512M -i 1 -d disk.img -k /bsd.rd

I created a bridge and added vether0 and tap0.

In the VM I have configured the IP address 192.168.1.30.
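The host-side network setup described above can be sketched as follows (the
bridge number, the vether0 address, and which tap device vmd attached are all
assumptions; run as root on the OpenBSD host):

```shell
# Create the host-side virtual ethernet interface and give it an
# address on the VM's subnet (192.168.1.1 is an assumed choice)
ifconfig vether0 create
ifconfig vether0 inet 192.168.1.1/24 up

# Create a bridge and join vether0 with the VM's tap interface
ifconfig bridge0 create
ifconfig bridge0 add vether0 add tap0 up
```

With this in place, the guest at 192.168.1.30 and the host's vether0 address
should be able to reach each other over the bridge.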

If I ping from the OpenBSD hypervisor -> ping 192.168.1.30, all packets are
sent and received "on the fly".

But if I perform the same step from "myvm", there is no packet loss, but the
packets take very long to be sent and, consequently, replied to.

I am performing these tests on Linux running VMware Workstation 12.

Is this behavior expected?

Any directions would be appreciated.

Thank you

myvm dmesg:

OpenBSD 6.0 (RAMDISK_CD) #2100: Tue Jul 26 13:05:59 MDT 2016
   dera...@amd64.openbsd.org:/usr/src/sys/arch/amd64/compile/RAMDISK_CD
RTC BIOS diagnostic error 20
real mem = 520093696 (496MB)
avail mem = 502673408 (479MB)
mainbus0 at root
bios0 at mainbus0
acpi at bios0 not configured
cpu0 at mainbus0: (uniprocessor)
cpu0: Intel(R) Core(TM) i7-4810MQ CPU @ 2.80GHz, 14335.74 MHz
cpu0:
FPU,VME,DE,PSE,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,DS,MMX,FXSR,SSE,SSE2,SS,SSE3,PCLMUL,SSSE3,FMA3,CX16,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,DEADLINE,AES,AVX,F1
6C,RDRAND,HV,ITSC,FSGSBASE,BMI1,AVX2,SMEP,BMI2,ERMS,INVPCID,ARAT
pvbus0 at mainbus0: OpenBSD
pci0 at mainbus0 bus 0
pchb0 at pci0 dev 0 function 0 "OpenBSD VMM PCI Host Bridge" rev 0x00
virtio0 at pci0 dev 1 function 0 "Qumranet Virtio RNG" rev 0x00
viornd0 at virtio0
virtio0: irq 3
virtio1 at pci0 dev 2 function 0 "Qumranet Virtio Storage" rev 0x00
vioblk0 at virtio1
scsibus0 at vioblk0: 2 targets
sd0 at scsibus0 targ 0 lun 0:  SCSI3 0/direct fixed
sd0: 5120MB, 512 bytes/sector, 10485760 sectors
virtio1: irq 5
virtio2 at pci0 dev 3 function 0 "Qumranet Virtio Network" rev 0x00
vio0 at virtio2: address fe:e1:ba:d0:d0:94
virtio2: irq 9
isa0 at mainbus0
com0 at isa0 port 0x3f8/8 irq 4: ns8250, no fifo
com0: console
softraid0 at root
scsibus1 at softraid0: 256 targets
root on rd0a swap on rd0b dump on rd0b
WARNING: invalid time in clock chip
WARNING: CHECK AND RESET THE DATE!

openbsd hypervisor :


OpenBSD 6.0-stable (GENERIC.MP) #0: Fri Oct 21 20:07:42 BRST 2016
   root@puffysor.localdomain:/usr/src/sys/arch/amd64/compile/GENERIC.MP
real mem = 2130640896 (2031MB)
avail mem = 2061631488 (1966MB)
mpath0 at root
scsibus0 at mpath0: 256 targets
mainbus0 at root
bios0 at mainbus0: SMBIOS rev. 2.7 @ 0xe0010 (242 entries)
bios0: vendor Phoenix Technologies LTD version "6.00" date 07/02/2015
bios0: VMware, Inc. VMware Virtual Platform
acpi0 at bios0: rev 2
acpi0: sleep states S0 S1 S4 S5
acpi0: tables DSDT FACP BOOT APIC MCFG SRAT HPET WAET
acpi0: wakeup devices PCI0(S3) USB_(S1) P2P0(S3) S1F0(S3) S2F0(S3) S3F0(S3)
S4F0(S3) S5F0(S3) S6F0(S3) S7F0(S3) S8F0(S3) S9F0(S3) S10F(S3) S11F(S3)
S12F(S3) S13F(S3) [...]
acpitimer0 at acpi0: 3579545 Hz, 24 bits
acpimadt0 at acpi0 addr 0xfee0: PC-AT compat
cpu0 at mainbus0: apid 0 (boot processor)
cpu0: Intel(R) Core(TM) i7-4810MQ CPU @ 2.80GHz, 3800.69 MHz
cpu0:
FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,DS,MMX,FXSR,SSE,SSE2,SS,HTT,SSE3,PCLMUL,VMX,SSSE3,FMA3,CX16,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,DEADLIN
E,AES,XSAVE,AVX,F16C,RDRAND,HV,NXE,PAGE1GB,LONG,LAHF,ABM,PERF,ITSC,FSGSBASE,BMI1,AVX2,SMEP,BMI2,ERMS,INVPCID,SENSOR,ARAT

cpu0: 256KB 64b/line 8-way L2 cache
cpu0: smt 0, core 0, package 0
mtrr: Pentium Pro MTRR support, 8 var ranges, 88 fixed ranges
cpu0: apic clock running at 65MHz
cpu1 at mainbus0: apid 1 (application processor)
cpu1: Intel(R) Core(TM) i7-4810MQ CPU @ 2.80GHz, 3810.50 MHz
cpu1:
FPU,VME,DE,PSE,TSC,MSR,PAE,MCE,CX8,APIC,SEP,MTRR,PGE,MCA,CMOV,PAT,PSE36,CFLUSH,DS,MMX,FXSR,SSE,SSE2,SS,HTT,SSE3,PCLMUL,VMX,SSSE3,FMA3,CX16,PCID,SSE4.1,SSE4.2,x2APIC,MOVBE,POPCNT,DEADLIN
E,AES,XSAVE,AVX,F16C,RDRAND,HV,NXE,PAGE1GB,LONG,LAHF,ABM,PERF,ITSC,FSGSBASE,BMI1,AVX2,SMEP,BMI2,ERMS,INVPCID,SENSOR,ARAT

cpu1: 256KB 64b/line 8-way L2 cache
cpu1: smt 0, core 1, package 0
ioapic0 at mainbus0: apid 2 pa 0xfec0, version 11, 24 pins
acpimcfg0 at acpi0 addr 0xf000, bus 0-127
acpihpet0 at acpi0: 14318179 Hz
acpiprt0 at acpi0: bus 0 (PCI0)
acpicpu0 at acpi0: C1(@1 halt!)
acpicpu1 at acpi0: C1(@1 halt!)
"PNP0001" at acpi0 not configured
"PNP0303" at acpi0 not configured
"VMW0003" at acpi0 not configured
"PNP0A05" at acpi0 not configured
acpiac0 at acpi0: AC unit online
pvbus0 at mainbus0: VMware
vmt0 at pvbus0
pci0 at mainbus0 bus 0
pchb0 at pci0 dev 0 function 0 "Intel 82443BX AGP" rev 0x01
ppb0 at pci0 dev 1 function 0 "Intel 82443BX AGP" rev 0x01
pci1 at ppb0 bus 1
pcib0 at pci0 dev 7 function 0 "Intel 82371AB PIIX4 ISA" rev 0x08
pciide0 at pci0 dev 7 function 1 "Intel 82371AB IDE" rev 0x01: DMA, channel
0 configured to compatibility, channel 1 configured to compatibility
pciide0: channel 0 disabled (no drives)
atapiscsi0 at pciide0 channel 1 drive 0
scsibus1 at atapiscsi0: 2 targets
cd0 at scsibus1 targ 0 lun 0:  ATAPI
5/cdrom removab