Re: OpenBSD 6-stable vmd

2016-10-26 Thread Mike Larkin
On Wed, Oct 26, 2016 at 06:36:25PM -0500, Ax0n wrote:
> I'm running vmd with the options you specified, and using tee(1) to peel it
> off to a file while I can still watch what happens in the foreground. It
> hasn't happened again yet, but I haven't been messing with the VMs as much
> this week as I was over the weekend.
> 
> One thing of interest: inside the VM running the Oct 22 snapshot, top(1)
> reports the load average hovering over 1.0, with nearly 100% of CPU time in
> interrupt state, which seems pretty odd to me. I am also running an i386 VM
> and an amd64 VM at the same time, both on 6.0-release, and neither of them
> is exhibiting this high load. I'll probably update the snapshot of the
> -CURRENT(ish) VM tonight, and the snapshot of my host system (which is also
> my daily driver) this weekend.
> 

I've seen that (and have seen it reported) from time to time as well. This is
unlikely to be time actually spent in interrupt; it's more likely a time
accounting error that makes the guest think it's spending more time in
interrupt servicing than it actually is. This is due to the fact that both the
statclock and hardclock are running at 100Hz (or close to it), because the
host is unable to inject interrupts any more frequently than that.

You might try running the host at 1000Hz and see if that fixes the problem.
It did, for me. Note that such an adjustment is really a hack and should
just be viewed as a temporary workaround. Of course, don't run your guests
at 1000Hz as well (that would defeat the purpose of cranking the host). That
parameter can be adjusted in param.c.
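Roughly, the adjustment looks like this (a sketch only; the exact define in
param.c and the kernel build steps can differ by release and architecture):

```shell
# Check the host's current tick rates first; hz and stathz both sitting
# at 100 match the situation described above.
sysctl kern.clockrate

# Bump the HZ definition in sys/conf/param.c (e.g. change
# "#define HZ 100" to "#define HZ 1000"), then rebuild the kernel.
# This assumes a GENERIC.MP amd64 kernel and sources in /usr/src:
cd /usr/src/sys/arch/amd64/compile/GENERIC.MP
make obj && make config && make && make install   # then reboot
```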

-ml

> load averages:  1.07,  1.09,  0.94   vmmbsd.labs.h-i-r.net 05:05:27
> 26 processes: 1 running, 24 idle, 1 on processor   up  0:28
> CPU states:  0.0% user,  0.0% nice,  0.4% system, 99.6% interrupt,  0.0% idle
> Memory: Real: 21M/130M act/tot Free: 355M Cache: 74M Swap: 0K/63M
> 
>   PID USERNAME PRI NICE  SIZE   RES STATE WAIT  TIME    CPU COMMAND
> 1 root  100  420K  496K idle  wait  0:01  0.00% init
> 13415 _ntp   2  -20  888K 2428K sleep poll  0:00  0.00% ntpd
> 15850 axon   30  724K  760K sleep ttyin 0:00  0.00% ksh
> 42990 _syslogd   20  972K 1468K sleep kqread0:00  0.00% syslogd
> 89057 _pflogd40  672K  424K sleep bpf   0:00  0.00% pflogd
>  2894 root   20  948K 3160K sleep poll  0:00  0.00% sshd
> 85054 _ntp   20  668K 2316K idle  poll  0:00  0.00% ntpd
> 
> 
> 
> On Tue, Oct 25, 2016 at 2:09 AM, Mike Larkin  wrote:
> 
> > On Mon, Oct 24, 2016 at 11:07:32PM -0500, Ax0n wrote:
> > > Thanks for the update, ml.
> > >
> > > The VM just did it again in the middle of backspacing over uname -a...
> > >
> > > $ uname -a
> > > OpenBSD vmmbsd.labs.h-i-r.net 6.0 GENERIC.MP#0 amd64
> > > $ un   <-- frozen
> > >
> > > Spinning like mad.
> > >
> >
> > Bizarre. If it were I, I'd next try killing all vmd processes and
> > running vmd -dvvv from a root console window and look for what it dumps
> > out when it hangs like this (if anything).
> >
> > You'll see a fair number of "vmd: unknown exit code 1" (and 48), those
> > are harmless and can be ignored, as can anything that vmd dumps out
> > before the vm gets stuck like this.
> >
> > If you capture this and post it somewhere, I can take a look. You may need
> > to extract the content out of /var/log/messages if a bunch gets printed.
> >
> > If this fails to diagnose what happens, I can work with you off-list on
> > how to debug further.
> >
> > -ml
> >
> > > [axon@transient ~]$ vmctl status
> > >ID   PID VCPUSMAXMEMCURMEM  TTY NAME
> > > 2  2769 1 512MB 149MB   /dev/ttyp3 -c
> > > 1 48245 1 512MB 211MB   /dev/ttyp0 obsdvmm.vm
> > > [axon@transient ~]$ ps aux | grep 48245
> > > _vmd 48245 98.5  2.3 526880 136956 ??  Rp 1:54PM   47:08.30 vmd:
> > > obsdvmm.vm (vmd)
> > >
> > > load averages:  2.43,  2.36,  2.26   transient.my.domain 18:29:10
> > > 56 processes: 53 idle, 3 on processor   up  4:35
> > > CPU0 states:  3.8% user,  0.0% nice, 15.4% system,  0.6% interrupt, 80.2% idle
> > > CPU1 states: 15.3% user,  0.0% nice, 49.3% system,  0.0% interrupt, 35.4% idle
> > > CPU2 states:  6.6% user,  0.0% nice, 24.3% system,  0.0% interrupt, 69.1% idle
> > > CPU3 states:  4.7% user,  0.0% nice, 18.1% system,  0.0% interrupt, 77.2% idle
> > > Memory: Real: 1401M/2183M act/tot Free: 3443M Cache: 536M Swap: 0K/4007M
> > >
> > >   PID USERNAME PRI NICE  SIZE   RES STATE WAIT  TIME    CPU COMMAND
> > > 48245 _vmd  430  515M  134M onproc thrslee  47:37 98.00% vmd
> > >  7234 axon   20  737M  715M sleep poll 33:18 19.14% firefox
> > > 42481 _x11  550   16M   42M onproc- 2:53  9.96% Xorg
> > >  2769 _vmd  290  514M   62M idle  thrslee   2:29  9.62% vmd
> > > 13503 axon  10

Re: OpenBSD 6-stable vmd

2016-10-26 Thread Ax0n
I'm running vmd with the options you specified, and using tee(1) to peel it
off to a file while I can still watch what happens in the foreground. It
hasn't happened again yet, but I haven't been messing with the VMs as much
this week as I was over the weekend.
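For reference, the capture setup looks roughly like this (the log path is
just an example; this mirrors ml's suggestion to kill the running vmd
processes and rerun with full debug output in the foreground):

```shell
# Stop the currently running vmd processes, then restart vmd in the
# foreground (-d) with maximum verbosity (-vvv), duplicating its output
# to a file with tee(1) so it can be watched live and reviewed later.
pkill vmd
vmd -dvvv 2>&1 | tee /tmp/vmd-debug.log
```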

One thing of interest: inside the VM running the Oct 22 snapshot, top(1)
reports the load average hovering over 1.0, with nearly 100% of CPU time in
interrupt state, which seems pretty odd to me. I am also running an i386 VM
and an amd64 VM at the same time, both on 6.0-release, and neither of them
is exhibiting this high load. I'll probably update the snapshot of the
-CURRENT(ish) VM tonight, and the snapshot of my host system (which is also
my daily driver) this weekend.

load averages:  1.07,  1.09,  0.94   vmmbsd.labs.h-i-r.net 05:05:27
26 processes: 1 running, 24 idle, 1 on processor   up  0:28
CPU states:  0.0% user,  0.0% nice,  0.4% system, 99.6% interrupt,  0.0% idle
Memory: Real: 21M/130M act/tot Free: 355M Cache: 74M Swap: 0K/63M

  PID USERNAME PRI NICE  SIZE   RES STATE WAIT  TIME    CPU COMMAND
1 root  100  420K  496K idle  wait  0:01  0.00% init
13415 _ntp   2  -20  888K 2428K sleep poll  0:00  0.00% ntpd
15850 axon   30  724K  760K sleep ttyin 0:00  0.00% ksh
42990 _syslogd   20  972K 1468K sleep kqread0:00  0.00% syslogd
89057 _pflogd40  672K  424K sleep bpf   0:00  0.00% pflogd
 2894 root   20  948K 3160K sleep poll  0:00  0.00% sshd
85054 _ntp   20  668K 2316K idle  poll  0:00  0.00% ntpd



On Tue, Oct 25, 2016 at 2:09 AM, Mike Larkin  wrote:

> On Mon, Oct 24, 2016 at 11:07:32PM -0500, Ax0n wrote:
> > Thanks for the update, ml.
> >
> > The VM just did it again in the middle of backspacing over uname -a...
> >
> > $ uname -a
> > OpenBSD vmmbsd.labs.h-i-r.net 6.0 GENERIC.MP#0 amd64
> > $ un   <-- frozen
> >
> > Spinning like mad.
> >
>
> Bizarre. If it were I, I'd next try killing all vmd processes and
> running vmd -dvvv from a root console window and look for what it dumps
> out when it hangs like this (if anything).
>
> You'll see a fair number of "vmd: unknown exit code 1" (and 48), those
> are harmless and can be ignored, as can anything that vmd dumps out
> before the vm gets stuck like this.
>
> If you capture this and post it somewhere, I can take a look. You may need
> to extract the content out of /var/log/messages if a bunch gets printed.
>
> If this fails to diagnose what happens, I can work with you off-list on
> how to debug further.
>
> -ml
>
> > [axon@transient ~]$ vmctl status
> >ID   PID VCPUSMAXMEMCURMEM  TTY NAME
> > 2  2769 1 512MB 149MB   /dev/ttyp3 -c
> > 1 48245 1 512MB 211MB   /dev/ttyp0 obsdvmm.vm
> > [axon@transient ~]$ ps aux | grep 48245
> > _vmd 48245 98.5  2.3 526880 136956 ??  Rp 1:54PM   47:08.30 vmd:
> > obsdvmm.vm (vmd)
> >
> > load averages:  2.43,  2.36,  2.26   transient.my.domain 18:29:10
> > 56 processes: 53 idle, 3 on processor   up  4:35
> > CPU0 states:  3.8% user,  0.0% nice, 15.4% system,  0.6% interrupt, 80.2% idle
> > CPU1 states: 15.3% user,  0.0% nice, 49.3% system,  0.0% interrupt, 35.4% idle
> > CPU2 states:  6.6% user,  0.0% nice, 24.3% system,  0.0% interrupt, 69.1% idle
> > CPU3 states:  4.7% user,  0.0% nice, 18.1% system,  0.0% interrupt, 77.2% idle
> > Memory: Real: 1401M/2183M act/tot Free: 3443M Cache: 536M Swap: 0K/4007M
> >
> >   PID USERNAME PRI NICE  SIZE   RES STATE WAIT  TIME    CPU COMMAND
> > 48245 _vmd  430  515M  134M onproc thrslee  47:37 98.00% vmd
> >  7234 axon   20  737M  715M sleep poll 33:18 19.14% firefox
> > 42481 _x11  550   16M   42M onproc -  2:53  9.96% Xorg
> >  2769 _vmd  290  514M   62M idle  thrslee   2:29  9.62% vmd
> > 13503 axon  100  512K 2496K sleep nanosle   0:52  1.12% wmapm
> > 76008 axon  100  524K 2588K sleep nanosle   0:10  0.73% wmmon
> > 57059 axon  100  248M  258M sleep nanosle   0:08  0.34% wmnet
> > 23088 axon   20  580K 2532K sleep select  0:10  0.00% wmclockmon
> > 64041 axon   20 3752K   10M sleep poll  0:05  0.00% wmaker
> > 16919 axon   20 7484K   20M sleep poll  0:04  0.00% xfce4-terminal
> > 1 root  100  408K  460K idle  wait  0:01  0.00% init
> > 80619 _ntp   2  -20  880K 2480K sleep poll  0:01  0.00% ntpd
> >  9014 _pflogd40  672K  408K sleep bpf   0:01  0.00% pflogd
> > 58764 root  100 2052K 7524K idle  wait  0:01  0.00% slim
> > 58764 root  100 2052K 7524K idle  wait  0:01  0.00% slim
> >
> >
> >
> > > On Mon, Oct 24, 2016 at 10:47 PM, Mike Larkin  wrote:
> >
> > > On Mon, Oct 24, 2016 at 07:36:48PM -0500, Ax0n wrote:
> > > > I suppose I'll ask here since it seems on-topic for this 

OpenBSD 6.0-stable: uvm_mapent_alloc: out of static map entries

2016-10-26 Thread mxb
Hey,
seeing the following in dmesg:

uvm_mapent_alloc: out of static map entries

Wasn't this fixed so that the system adjusts the limit dynamically, or do I
still need to increase it and recompile the kernel?

P.S.
Have plenty of RAM (15G free) on this box.


//mxb



route(8): default and ::/0

2016-10-26 Thread Delan Azabani
route(8) says:

> The route is assumed to be to a network if any of
> the following apply to destination:
>
> •   it is the word "default", equivalent to 0/0

Consistent with this, you can substitute "0/0" for "default":

> # netstat -rnf inet | grep default
> default192.0.2.1  UGS[...]

> # route delete 0/0
> delete net 0/0

> # netstat -rnf inet | grep -c default
> 0

> # route add 0/0 192.0.2.1
> add net 0/0: gateway 192.0.2.1

> # netstat -rnf inet | grep default
> default192.0.2.1  UGS[...]

> # route delete default
> delete net default

> # netstat -rnf inet | grep -c default 
> 0

Back in OpenBSD 5.7, I found it convenient to substitute "::/0" for
"-inet6 default", and I did so in some of my old hostname.if(5) files,
but this doesn't seem to work in OpenBSD 6.0:

> # netstat -rnf inet6 | grep default
> defaultfe80::1234%em0  [...]

> # route delete ::/0
> delete net ::/0: not in table

> # netstat -rnf inet6 | grep default
> defaultfe80::1234%em0  [...]

> # route delete -inet6 default
> delete net default

> # netstat -rnf inet6 | grep -c default
> 0

> # route add ::/0 fe80::1234%em0
> add net ::/0: gateway fe80::1234%em0: File exists

> # netstat -rnf inet6 | grep -c default
> 0

> # route add -inet6 default fe80::1234%em0
> add net default: gateway fe80::1234%em0

> # netstat -rnf inet6 | grep default
> defaultfe80::1234%em0  [...]

I've been looking in and around sbin/route/route.c, sys/net/route.c,
and sys/net/rt* for a relevant change with little success.

Is this a regression, or was the equivalence between "-inet6 default"
and "::/0" just a transient implementation detail?
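In the meantime I've switched my hostname.if(5) files to the spelling that
works on 6.0; for example (gateway as in the output above):

```shell
# /etc/hostname.em0 -- fragment; address/interface are from the
# example above, not a recommendation.
# The 5.7-era form no longer installs the route on 6.0:
#   !route add ::/0 fe80::1234%em0
# Spelling the destination as "-inet6 default" does work:
!route add -inet6 default fe80::1234%em0
```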