Re: Routing socket issue?

2021-01-30 Thread Frank Kardel

Hi Roy!


On 01/31/21 03:27, Roy Marples wrote:

On 30/01/2021 22:01, Frank Kardel wrote:

"why it needs to be interested i..."

Ntpd needs to know the local address being used when sending to peers 
(authentication, which socket to use). That is why it does not just react to 
address information but also redetermines which local addresses (and 
sockets) are being used for reaching its peers.


The interaction with the routing socket is purposely simple. ntpd 
just needs to know that *something* has changed. It will then, after a 
grace period, rescan the interfaces and reevaluate 
the interface/local address/socket setup. It does not need to be 
extremely snappy but it needs to happen.


Dropping that might delay ntpd's detection of changed local addresses 
for peers.


For example I fail to see how RTM_LOSING helps that because it won't 
change how ntpd would configure itself.

Well if I read the comment right I am inclined to differ here:
In in_pcb.c we find:
/*
 * Check for alternatives when higher level complains
 * about service problems.  For now, invalidate cached
 * routing information.  If the route was created dynamically
 * (by a redirect), time to try a default gateway again.
 */
in_losing(struct inpcb *inp)

and the call is in tcp_timer.c:
/*
 * If losing, let the lower level know and try for
 * a better route.  Also, if we backed off this far,
 * our srtt estimate is probably bogus.  Clobber it
 * so we'll take the next rtt measurement as our srtt;
 * move the current srtt into rttvar to keep the current
 * retransmit times until then.
 */
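
To make the call chain explicit, the call site looks roughly like this 
(paraphrased from memory and simplified, not a verbatim copy of tcp_timer.c; 
the exact backoff threshold is an approximation):

if (tp->t_rxtshift > TCP_MAXRXTSHIFT / 4) {	/* "backed off this far" */
	/*
	 * srtt is probably bogus by now; park it in rttvar so the
	 * current retransmit times survive until the next measurement.
	 */
	tp->t_rttvar += (tp->t_srtt >> TCP_RTT_SHIFT);
	tp->t_srtt = 0;
	in_losing(tp->t_inpcb);	/* "higher level complains" -> drop cached route */
}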

As ntpd acts after a grace period the routing engine may have corrected 
this situation and routing may indeed change.
ntpd's interactions with peers can take up to 1024s so it is good to 
attempt in a best effort way to keep the internal 
local address/socket state close to the current state.
It is likely though that there have been routing messages like 
RTM_CHANGE/ADD/DELETE before that and RTM_LOSING is not providing 
additional information at that point.



As NTP doesn't bring interfaces up or down, RTM_IFANNOUNCE is useless 
as well.
If the interface does vanish, any addresses on it will be reported via 
RTM_DELADDR.
RTM_IFINFO is also questionable as the commentary in the code says it 
only cares about addresses.



Well I read
ntp_io.c
/*
 * we are keen on new and deleted addresses and
 * if an interface goes up and down or routing
 * changes
 */
not as being interested in addresses only.

Also keep in mind that at this point routing messages are processed in a 
loop and the action here 

timer_interfacetimeout(current_time + UPDATE_GRACE);

just sets the variable for the next interface+local address update run. 
This is very cheap. The grace period 
will batch multiple routing messages together. An explicit routing 
message flush is from my point of view 
code clutter here, as the socket is effectively drained in the loop at 
the cost of examining the msg_type and setting 
a variable. Not much gained here.
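
For illustration only, the shape of that loop as a minimal hedged sketch (this 
is not the actual ntp_io.c code; schedule_rescan() stands in for ntpd's 
timer_interfacetimeout(current_time + UPDATE_GRACE)):

#include <sys/types.h>
#include <sys/socket.h>
#include <net/route.h>
#include <string.h>
#include <unistd.h>

static int rescan_pending;

static void
schedule_rescan(void)			/* stand-in for timer_interfacetimeout() */
{
	rescan_pending = 1;
}

static void
drain_routing_socket(int rtsock)
{
	char buf[2048];
	ssize_t n;
	int want_update = 0;

	/* the socket is non-blocking: read until it runs dry */
	while ((n = read(rtsock, buf, sizeof(buf))) > 0) {
		struct rt_msghdr rtm;

		if ((size_t)n < sizeof(rtm))
			continue;
		memcpy(&rtm, buf, sizeof(rtm));	/* avoid alignment traps */

		switch (rtm.rtm_type) {
		case RTM_NEWADDR:	/* address appeared */
		case RTM_DELADDR:	/* address vanished */
		case RTM_IFINFO:	/* interface went up/down */
			want_update = 1;	/* just remember; very cheap */
			break;
		default:
			break;		/* uninteresting message type */
		}
	}

	if (want_update)
		schedule_rescan();	/* batched by the grace period */
}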

NOTE TO SELF: our kernel doesn't seem to report RTM_CHGADDR anymore 
looking at nxr.netbsd.org


I mean, if you want to argue against any of that then I would suggest 
why even bother filtering or looking at overflow at all?
Shrink the code - any activity on the routing socket, drain it 
ignoring all errors, start the interface update timer.
That would be an option but we should react only on known events. There 
may be one or two events that could be removed from 
the list after examination as other messages can cover for them. Keep in 
mind that this is a portable code section and the 
code tries to be on the fail-safe, robust side for the goal of 
address/routing tracking, so adjusting it to a particular implementation 
may break on other OS implementations.




As for the message: IMHO it does not need to be logged at all 
(DPRINTF/maybe LOGDEBUG at most) because the overflow should and does 
just trigger ntpd to reevaluate the interface/routing configuration.


This information is not important at all for normal operation as the 
effects are correctly mitigated.


Great.

BTW: does the current code revert to (fail safe) periodic interface 
scanning if the routing socket is being disabled (happens when an 
unexpected error code is returned from read(2))?


No.

The socket is non blocking so the only error to ignore here would be 
EINTR.

Any other errors are due to bad programming IMO.
Could be bad programming, but I prefer ntpd to be forgiving about 
hiccups by reverting to periodic scanning when we 
disable the routing socket. That is a fail-safe strategy and would also 
warrant a log message as it is an unusual event.
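
A minimal sketch of what such a fallback could look like (not existing ntpd 
code; the interval and the helper's shape are illustrative assumptions):

#include <errno.h>
#include <syslog.h>
#include <unistd.h>

#define FALLBACK_SCAN_INTERVAL	300	/* seconds; illustrative value */

static int routing_socket = -1;
static int scan_interval;		/* 0 = purely event driven */

static void
routing_socket_error(int error)
{
	switch (error) {
	case EINTR:
	case EAGAIN:		/* non-blocking socket just ran dry */
		return;		/* harmless, keep listening */
	default:
		/* unusual event: say so once, then degrade gracefully */
		syslog(LOG_WARNING,
		    "routing socket error %d - falling back to periodic interface scans",
		    error);
		close(routing_socket);
		routing_socket = -1;
		scan_interval = FALLBACK_SCAN_INTERVAL;
		break;
	}
}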


Roy

Frank


Re: Help with libcurses and lynx under NetBSD-9 and -current?

2021-01-30 Thread RVP

On Wed, 27 Jan 2021, Brian Buhrow wrote:


1.  How do I get pkgsrc/www/lynx to compile using ncurses instead of the 
native curses library?  I tried setting various options in /etc/mk.conf, 
but it looks like it really wants to compile using the native curses 
library.  I tried changing options.mk in the pkgsrc directory, but I 
apparently don't fully understand the maze of pkgsrc Makefiles.



I'm not a pkgsrc expert either, but have a look at:

For ncurses:
Makefile.common
http://cvsweb.netbsd.org/bsdweb.cgi/pkgsrc/devel/ncurses/Makefile.common?rev=1.47&content-type=text/x-cvsweb-markup
Add:
CONFIGURE_ARGS+=--enable-termcap

For lynx:
Makefile
http://cvsweb.netbsd.org/bsdweb.cgi/pkgsrc/www/lynx/Makefile?rev=1.137&content-type=text/x-cvsweb-markup
Set SCREENTYPE to ncursesw in /etc/mk.conf?

I usually build both ncurses and lynx on my own using the
./configure; make; make install idiom:

ncurses:

1. CFLAGS="${CFLAGS/-flto -fpie /-fpic }" CXXFLAGS="${CXXFLAGS/-flto -fpie /-fpic }" \
   LDFLAGS="${LDFLAGS/-pie /}" ./configure \
--prefix=/opt/ncurses --enable-symlinks --with-manpage-symlinks \
--with-x11-rgb=/usr/X11R7/lib/X11/rgb.txt \
--enable-widec --enable-sp-funcs --enable-const --enable-ext-colors \
--enable-ext-mouse --enable-ext-putwin \
--enable-sigwinch --enable-wgetch-events --enable-tcap-names \
--enable-bsdpad --enable-colorfgbg --enable-termcap \
--with-pthread --enable-pthreads-eintr --enable-reentrant --enable-weak-symbols \
--without-debug --disable-overwrite --without-curses-h --with-termlib \
--with-cxx-shared --with-shared --disable-echo

2. make

3. make install


lynx:

1. PATH=$PATH:/opt/ncurses/bin CFLAGS="${CFLAGS/-flto /}" ./configure \
--prefix=/opt/lynx --disable-echo --enable-vertrace \
--disable-nls --enable-ipv6 --with-screen=ncursesw \
--enable-widec --enable-local-docs --with-ssl --enable-cjk \
--enable-japanese-utf8 --enable-wcwidth-support \
--enable-default-colors --enable-kbd-layout --enable-nested-tables \
--enable-charset-choice --enable-externs --enable-change-exec \
--enable-internal-links --enable-nsl-fork --enable-syslog \
--enable-underlines --with-bzlib --with-zlib

2. make

3. make install install-doc install-help



-RVP


Re: Help with libcurses and lynx under NetBSD-9 and -current?

2021-01-30 Thread RVP

On Wed, 27 Jan 2021, Christos Zoulas wrote:


I think we can make our libterminfo do the same by shuffling a few ifdefs
around :-)



Is that how libterminfo is going to be built from now on (with $TERMCAP
support restored)?

-RVP


daily CVS update output

2021-01-30 Thread NetBSD source update


Updating src tree:
P src/distrib/sets/lists/tests/mi
P src/share/misc/acronyms.comp
P src/sys/arch/arm/nxp/imx6_clk.c
P src/sys/arch/arm/nxp/imx6_usb.c
P src/sys/dev/i2c/dstemp.c
P src/sys/dev/i2c/rs5c372.c
P src/sys/dev/i2c/s390.c
P src/sys/dev/ic/gem.c
P src/sys/dev/ic/gemvar.h
P src/sys/dev/pci/files.pci
P src/sys/dev/pci/if_mcx.c
P src/sys/dev/pci/pcidevs
P src/sys/dev/pci/pcidevs.h
P src/sys/dev/pci/pcidevs_data.h
P src/sys/net/files.net
U src/sys/net/toeplitz.c
U src/sys/net/toeplitz.h
P src/tests/usr.bin/xlint/lint1/msg_129.c
U src/tests/usr.bin/xlint/lint1/msg_129.exp
U src/tests/usr.bin/xlint/lint1/msg_189.c
U src/tests/usr.bin/xlint/lint1/msg_189.exp
P src/tests/usr.bin/xlint/lint1/msg_191.c
U src/tests/usr.bin/xlint/lint1/msg_191.exp
P src/tests/usr.bin/xlint/lint1/msg_192.c
U src/tests/usr.bin/xlint/lint1/msg_192.exp
P src/tests/usr.bin/xlint/lint1/msg_193.c
U src/tests/usr.bin/xlint/lint1/msg_193.exp
P src/tests/usr.bin/xlint/lint1/msg_194.c
U src/tests/usr.bin/xlint/lint1/msg_194.exp
P src/tests/usr.bin/xlint/lint1/msg_216.c
U src/tests/usr.bin/xlint/lint1/msg_216.exp
P src/tests/usr.bin/xlint/lint1/msg_217.c
U src/tests/usr.bin/xlint/lint1/msg_217.exp
P src/tests/usr.bin/xlint/lint1/msg_220.c
U src/tests/usr.bin/xlint/lint1/msg_220.exp
P src/tests/usr.bin/xlint/lint1/msg_223.c
U src/tests/usr.bin/xlint/lint1/msg_223.exp
P src/tests/usr.bin/xlint/lint1/msg_224.c
U src/tests/usr.bin/xlint/lint1/msg_224.exp
P src/tests/usr.bin/xlint/lint1/msg_231.c
U src/tests/usr.bin/xlint/lint1/msg_231.exp
P src/tests/usr.bin/xlint/lint1/msg_259.c
U src/tests/usr.bin/xlint/lint1/msg_259.exp
P src/usr.bin/make/Makefile
P src/usr.bin/make/buf.c
P src/usr.bin/make/buf.h
P src/usr.bin/make/cond.c
P src/usr.bin/make/dir.c
P src/usr.bin/make/enum.h
P src/usr.bin/make/for.c
P src/usr.bin/make/job.c
P src/usr.bin/make/main.c
P src/usr.bin/make/make.c
P src/usr.bin/make/parse.c
P src/usr.bin/make/suff.c
P src/usr.bin/make/var.c
P src/usr.bin/make/unit-tests/Makefile
P src/usr.bin/make/unit-tests/jobs-empty-commands.mk
P src/usr.bin/make/unit-tests/lint.mk
U src/usr.bin/make/unit-tests/opt-no-action-touch.exp
U src/usr.bin/make/unit-tests/opt-no-action-touch.mk
P src/usr.bin/make/unit-tests/opt-touch-jobs.mk
P src/usr.bin/xlint/lint1/decl.c
P src/usr.bin/xlint/lint1/err.c
P src/usr.bin/xlint/lint1/externs1.h
P src/usr.bin/xlint/lint1/func.c
P src/usr.bin/xlint/lint1/init.c
P src/usr.bin/xlint/lint1/tree.c
P src/usr.sbin/isibootd/isibootd.c
P src/usr.sbin/tprof/tprof_analyze.c

Updating xsrc tree:


Killing core files:




Updating file list:
-rw-rw-r--  1 srcmastr  netbsd  38842115 Jan 31 03:03 ls-lRA.gz


Re: Routing socket issue?

2021-01-30 Thread Roy Marples

On 30/01/2021 22:01, Frank Kardel wrote:

"why it needs to be interested i..."

Ntpd needs to know the local address being used when sending to peers 
(authentication, which socket to use). That is why it does not just react to 
address information but also redetermines which local addresses (and sockets) are 
being used for reaching its peers.


The interaction with the routing socket is purposely simple. ntpd just needs to 
know that *something* has changed. It will then, after a grace period, rescan the 
interfaces and reevaluate 
the interface/local address/socket setup. It does not need to be extremely 
snappy but it needs to happen.


Dropping that might delay ntpd's detection of changed local addresses for peers.


For example I fail to see how RTM_LOSING helps that because it won't change
how ntpd would configure itself.

As NTP doesn't bring interfaces up or down, RTM_IFANNOUNCE is useless as well.
If the interface does vanish, any addresses on it will be reported via 
RTM_DELADDR.
RTM_IFINFO is also questionable as the commentary in the code says it only cares 
about addresses.


NOTE TO SELF: our kernel doesn't seem to report RTM_CHGADDR anymore looking at 
nxr.netbsd.org


I mean, if you want to argue against any of that then I would suggest why even 
bother filtering or looking at overflow at all?
Shrink the code - any activity on the routing socket, drain it ignoring all 
errors, start the interface update timer.




As for the message: IMHO it does not need to be logged at all (DPRINTF/maybe 
LOGDEBUG at most) because the overflow should and does just trigger ntpd to 
reevaluate the interface/routing configuration.


This information is not important at all for normal operation as the effects are 
correctly mitigated.


Great.

BTW: does the current code revert to (fail safe) periodic interface scanning if 
the routing socket is being disabled (happens when an unexpected error code is 
returned from read(2))?


No.

The socket is non blocking so the only error to ignore here would be EINTR.
Any other errors are due to bad programming IMO.

Roy


Re: Routing socket issue?

2021-01-30 Thread Paul Goyette

On Sat, 30 Jan 2021, Roy Marples wrote:


On 30/01/2021 18:27, Paul Goyette wrote:

On Sat, 30 Jan 2021, Roy Marples wrote:


On 30/01/2021 15:12, Paul Goyette wrote:

I thought we took care of the buffer-space issue a long time ago, but
today I've gotten about a dozen of these:

...
Jan 30 05:20:11 speedy ntpd[3146]: routing socket reports: No buffer
space available


I recently added a patch to enable the diagnostic AND take action on it.
We can change the upstream default from LOG_ERR to LOG_DEBUG or maybe 
their custom DPRINTF though if you think that would help reduce the noise.


Not concerned about noise, just wanted to make sure we didn't have a
regression slip by.  As long as the message is deliberate, I'm not too
worried.


Well, currently other apps such as dhcpcd still log an error when the routing 
socket overflows, but with a more helpful message.


I think we can just change it to:
  routing socket overflowed - will update interfaces

Happy with that?


Sure.



To alleviate the issue we could also stop ntpd from listening to routing 
changes as that has no bearing on how it discovers interfaces and addresses 
as far as I can tell.

Frank ok with that?

Roy





++--+---+
| Paul Goyette   | PGP Key fingerprint: | E-mail addresses: |
| (Retired)  | FA29 0E3B 35AF E8AE 6651 | p...@whooppee.com |
| Software Developer | 0786 F758 55DE 53BA 7731 | pgoye...@netbsd.org   |
++--+---+

Re: Routing socket issue?

2021-01-30 Thread Frank Kardel

"why it needs to be interested i..."

Ntpd needs to know the local address being used when sending to peers 
(authentication, which socket to use). That is why it does not just react to 
address information but also redetermines which local addresses (and 
sockets) are being used for reaching its peers.


The interaction with the routing socket is purposely simple. ntpd just 
needs to know that *something* has changed. It will then, after a grace 
period, rescan the interfaces and reevaluate 
the interface/local address/socket setup. It does not need to be 
extremely snappy but it needs to happen.


Dropping that might delay ntpd's detection of changed local addresses 
for peers.


As for the message: IMHO it does not need to be logged at all 
(DPRINTF/maybe LOGDEBUG at most) because the overflow should and does 
just trigger ntpd to reevaluate the interface/routing configuration.


This information is not important at all for normal operation as the 
effects are correctly mitigated.


BTW: does the current code revert to (fail safe) periodic interface 
scanning if the routing socket is being disabled (happens when an 
unexpected error code is returned from read(2))?


Frank


On 01/30/21 21:41, Roy Marples wrote:

On 30/01/2021 18:27, Paul Goyette wrote:

On Sat, 30 Jan 2021, Roy Marples wrote:


On 30/01/2021 15:12, Paul Goyette wrote:

I thought we took care of the buffer-space issue a long time ago, but
today I've gotten about a dozen of these:

...
Jan 30 05:20:11 speedy ntpd[3146]: routing socket reports: No buffer
space available


I recently added a patch to enable the diagnostic AND take action 
on it.
We can change the upstream default from LOG_ERR to LOG_DEBUG or 
maybe their custom DPRINTF though if you think that would help 
reduce the noise.


Not concerned about noise, just wanted to make sure we didn't have a
regression slip by.  As long as the message is deliberate, I'm not too
worried.


Just to be clear on this, we have the framework to filter out routing 
messages we don't need, to stop overflow from happening, and we can also 
detect when overflow still happens.
Currently ntpd does both; before, it just filtered out, but I 
didn't change what it was interested in, and now I'm curious why it 
needs to be interested in actual routing changes for interface/address 
discovery, as I'm pretty sure we can drop that.


As we enable this in more applications we just have to make some 
choices - filter more out vs increasing buffer size vs just discarding 
the error if the prior two are not feasible.


Roy




Re: Routing socket issue?

2021-01-30 Thread Roy Marples

On 30/01/2021 18:27, Paul Goyette wrote:

On Sat, 30 Jan 2021, Roy Marples wrote:


On 30/01/2021 15:12, Paul Goyette wrote:

I thought we took care of the buffer-space issue a long time ago, but
today I've gotten about a dozen of these:

...
Jan 30 05:20:11 speedy ntpd[3146]: routing socket reports: No buffer
space available


I recently added a patch to enable the diagnostic AND take action on it.
We can change the upstream default from LOG_ERR to LOG_DEBUG or maybe their 
custom DPRINTF though if you think that would help reduce the noise.


Not concerned about noise, just wanted to make sure we didn't have a
regression slip by.  As long as the message is deliberate, I'm not too
worried.


Just to be clear on this, we have the framework to filter out routing messages 
we don't need, to stop overflow from happening, and we can also detect when 
overflow still happens.
Currently ntpd does both; before, it just filtered out, but I didn't change 
what it was interested in, and now I'm curious why it needs to be interested in 
actual routing changes for interface/address discovery, as I'm pretty sure we can 
drop that.


As we enable this in more applications we just have to make some choices - 
filter more out vs increasing buffer size vs just discarding the error if the 
prior two are not feasible.
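
For illustration, the detection half can be as small as this (a hedged sketch, 
not the actual ntpd or dhcpcd code; rescan_needed is a hypothetical flag the 
main loop would pick up):

#include <sys/types.h>
#include <errno.h>
#include <unistd.h>

static int rescan_needed;

/* Returns 0 when handled, -1 on a genuinely unexpected error. */
static int
read_routing_message(int rtsock, void *buf, size_t len)
{
	ssize_t n = read(rtsock, buf, len);

	if (n == -1) {
		if (errno == ENOBUFS) {
			/*
			 * The kernel dropped messages; we cannot know what
			 * changed, so treat it as "something changed" and
			 * reevaluate the whole interface/address setup.
			 */
			rescan_needed = 1;
			return 0;
		}
		if (errno == EINTR || errno == EAGAIN)
			return 0;	/* nothing to do right now */
		return -1;		/* unexpected */
	}

	/* n > 0: one complete routing message is now in buf */
	return 0;
}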


Roy


Re: Routing socket issue?

2021-01-30 Thread Roy Marples

On 30/01/2021 18:27, Paul Goyette wrote:

On Sat, 30 Jan 2021, Roy Marples wrote:


On 30/01/2021 15:12, Paul Goyette wrote:

I thought we took care of the buffer-space issue a long time ago, but
today I've gotten about a dozen of these:

...
Jan 30 05:20:11 speedy ntpd[3146]: routing socket reports: No buffer
space available


I recently added a patch to enable the diagnostic AND take action on it.
We can change the upstream default from LOG_ERR to LOG_DEBUG or maybe their 
custom DPRINTF though if you think that would help reduce the noise.


Not concerned about noise, just wanted to make sure we didn't have a
regression slip by.  As long as the message is deliberate, I'm not too
worried.


Well, currently other apps such as dhcpcd still log an error when the routing 
socket overflows, but with a more helpful message.


I think we can just change it to:
   routing socket overflowed - will update interfaces

Happy with that?

To alleviate the issue we could also stop ntpd from listening to routing changes 
as that has no bearing on how it discovers interfaces and addresses as far as I 
can tell.

Frank ok with that?

Roy


Re: Routing socket issue?

2021-01-30 Thread Paul Goyette

On Sat, 30 Jan 2021, Roy Marples wrote:


On 30/01/2021 15:12, Paul Goyette wrote:

I thought we took care of the buffer-space issue a long time ago, but
today I've gotten about a dozen of these:

...
Jan 30 05:20:11 speedy ntpd[3146]: routing socket reports: No buffer
space available


I recently added a patch to enable the diagnostic AND take action on it.
We can change the upstream default from LOG_ERR to LOG_DEBUG or maybe their 
custom DPRINTF though if you think that would help reduce the noise.


Not concerned about noise, just wanted to make sure we didn't have a
regression slip by.  As long as the message is deliberate, I'm not too
worried.

Perhaps LOG_DEBUG would be better, though, as well as some indication of
what "action" is taken and whether or not the action was successful?


++--+---+
| Paul Goyette   | PGP Key fingerprint: | E-mail addresses: |
| (Retired)  | FA29 0E3B 35AF E8AE 6651 | p...@whooppee.com |
| Software Developer | 0786 F758 55DE 53BA 7731 | pgoye...@netbsd.org   |
++--+---+


Re: Routing socket issue?

2021-01-30 Thread Roy Marples

On 30/01/2021 15:12, Paul Goyette wrote:

I thought we took care of the buffer-space issue a long time ago, but
today I've gotten about a dozen of these:

...
Jan 30 05:20:11 speedy ntpd[3146]: routing socket reports: No buffer
space available


I recently added a patch to enable the diagnostic AND take action on it.
We can change the upstream default from LOG_ERR to LOG_DEBUG or maybe their 
custom DPRINTF though if you think that would help reduce the noise.


Roy


Routing socket issue?

2021-01-30 Thread Paul Goyette

I thought we took care of the buffer-space issue a long time ago, but
today I've gotten about a dozen of these:

...
Jan 30 05:20:11 speedy ntpd[3146]: routing socket reports: No buffer
space available
...



++--+---+
| Paul Goyette   | PGP Key fingerprint: | E-mail addresses: |
| (Retired)  | FA29 0E3B 35AF E8AE 6651 | p...@whooppee.com |
| Software Developer | 0786 F758 55DE 53BA 7731 | pgoye...@netbsd.org   |
++--+---+


Re: panic: _bus_virt_to_bus for vioif on GCE with GENERIC kernel

2021-01-30 Thread Paul Ripke
On Sat, Jan 30, 2021 at 12:37:31AM +0100, Reinoud Zandijk wrote:
> On Thu, Jan 28, 2021 at 11:56:30PM +1100, Paul Ripke wrote:
> > Just tried running a newly built kernel on a GCE instance, and ran into
> > this panic. The previously running kernel is 9.99.73 from back around
> > October last year.
> > 
> > Anyone else tried booting -current on GCE recently? My suspicion is
> > the VirtIO changes committed around Jan 20. I'll sync back prior to
> > those and retry, if nobody else beats me to it.
> 
> Not on GCE no. Have you tried the earlier version?

From the old, old kernel:

piixpm0: SMBus disabled
virtio0 at pci0 dev 3 function 0
virtio0: Virtio SCSI Device (rev. 0x00)
vioscsi0 at virtio0: Features: 0
virtio0: allocated 221184 byte for virtqueue 0 for control, size 8192
virtio0: allocated 221184 byte for virtqueue 1 for event, size 8192
virtio0: allocated 221184 byte for virtqueue 2 for request, size 8192
vioscsi0: cmd_per_lun 8 qsize 8192 seg_max 64 max_target 253 max_lun 1
virtio0: config interrupting at msix0 vec 0
virtio0: queues interrupting at msix0 vec 1
scsibus0 at vioscsi0: 16 targets, 1 lun per target
virtio1 at pci0 dev 4 function 0
virtio1: Virtio Network Device (rev. 0x00)
vioif0 at virtio1: Features: 0x30020
vioif0: Ethernet address 42:01:0a:98:00:02
virtio1: allocated 114688 byte for virtqueue 0 for rx0, size 4096
virtio1: allocated 114688 byte for virtqueue 1 for tx0, size 4096
virtio1: config interrupting at msix1 vec 0
virtio1: queues interrupting at msix1 vec 1
virtio2 at pci0 dev 5 function 0
virtio2: Virtio Memory Balloon Device (rev. 0x00)
viomb0 at virtio2virtio2: allocated 12288 byte for virtqueue 0 for inflate, 
size 256
virtio2: allocated 12288 byte for virtqueue 1 for deflate, size 256
: Features: 0
virtio2: interrupting at ioapic0 pin 10
virtio3 at pci0 dev 6 function 0
virtio3: Virtio Entropy Device (rev. 0x00)
viornd0 at virtio3: Features: 0
virtio3: allocated 12288 byte for virtqueue 0 for Entropy request, size 256
virtio3: interrupting at ioapic0 pin 10
isa0 at pcib0

Confirmed that a kernel built immediately prior to the following commit works,
and fails after this commit:
https://github.com/NetBSD/src/commit/7bca0bcf21c9b3465a6ee4eef6c01be32c9de1eb

> > [   1.0303647] piixpm0: SMBus disabled
> > [   1.0303647] virtio0 at pci0 dev 3 function 0
> > [   1.0303647] virtio0: SCSI device (rev. 0x00)
> > [   1.0303647] vioscsi0 at virtio0: features: 0
> > [   1.0303647] vioscsi0: cmd_per_lun 8 qsize 8192 seg_max 64 max_target 253 
> > max_lun 1
> > [   1.0303647] virtio0: config interrupting at msix0 vec 0
> > [   1.0303647] virtio0: queues interrupting at msix0 vec 1
> > [   1.0303647] scsibus0 at vioscsi0: 16 targets, 1 lun per target
> > [   1.0303647] virtio1 at pci0 dev 4 function 0
> > [   1.0303647] virtio1: network device (rev. 0x00)
> > [   1.0303647] vioif0 at virtio1: features: 
> > 0x20030020
> 
> Could you A) test with virtio v1 PCI devices? ie without legacy and if that
> fails too could you B) test with src/sys/dev/pci/if_vioif.c:832 commented out
> and see if that makes a difference? That's a new virtio 1.0 feature that was
> apparently negotiated and should work in transitional devices and should not
> be accepted in older. It could be that GCE is making a mistake there but
> negotiating EVENT_IDX shifts registers so has a big impact if it goes wrong.

A) Erm, how? Read thru some of the source and saw mentions of v1.0 vs v0.9,
but didn't see a way of just disabling legacy support?

B)

[   1.0265446] piixpm0: SMBus disabled
[   1.0265446] virtio0 at pci0 dev 3 function 0
[   1.0265446] virtio0: SCSI device (rev. 0x00)
[   1.0265446] vioscsi0 at virtio0: features: 0
[   1.0265446] vioscsi0: cmd_per_lun 8 qsize 8192 seg_max 64 max_target 253 
max_lun 1
[   1.0265446] virtio0: config interrupting at msix0 vec 0
[   1.0265446] virtio0: queues interrupting at msix0 vec 1
[   1.0265446] scsibus0 at vioscsi0: 16 targets, 1 lun per target
[   1.0265446] virtio1 at pci0 dev 4 function 0
[   1.0265446] virtio1: network device (rev. 0x00)
[   1.0265446] vioif0 at virtio1: features: 0x30020
[   1.0265446] vioif0: Ethernet address 42:01:0a:98:00:02
[   1.0265446] panic: _bus_virt_to_bus
[   1.0265446] cpu0: Begin traceback...
[   1.0265446] vpanic() at netbsd:vpanic+0x156
[   1.0265446] snprintf() at netbsd:snprintf
[   1.0265446] _bus_dma_alloc_bouncebuf() at netbsd:_bus_dma_alloc_bouncebuf
[   1.0265446] bus_dmamap_load() at netbsd:bus_dmamap_load+0x9c
[   1.0265446] vioif_dmamap_create_load.constprop.0() at 
netbsd:vioif_dmamap_create_load.constprop.0+0x7e
[   1.0265446] vioif_attach() at netbsd:vioif_attach+0x1088
[   1.0265446] config_attach_loc() at netbsd:config_attach_loc+0x17e
[   1.0265446] virtio_pci_rescan() at netbsd:virtio_pci_rescan+0x48
[   1.0265446] virtio_pci_attach() at netbsd:virtio_pci_attach+0x23a
[   1.0265446] config_attach_loc() at netbsd:config_attach_loc+0x17e
[   1.0265446] pci_probe_device() at netbsd:pci_probe_device+0x585
[   1.0265446] pc