Re: divert packet kernel lock

2022-05-06 Thread Alexander Bluhm
On Fri, May 06, 2022 at 10:16:35PM +0200, Mark Kettenis wrote:
> > Date: Fri, 6 May 2022 14:48:59 +0200
> > From: Alexander Bluhm 
> > 
> > On Thu, May 05, 2022 at 11:10:54PM +0200, Mark Kettenis wrote:
> > > > Date: Thu, 5 May 2022 22:41:01 +0200
> > > > From: Alexander Bluhm 
> > > >
> > > > Hi,
> > > >
> > The easiest way to make divert_packet() MP safe for now is to
> > > > protect sbappendaddr() with kernel lock.
> > >
> > > All other invocations of sbappendaddr() run with the kernel lock held?
> > 
> > No.  Only this place takes kernel lock.
> > 
> > > If so, maybe that should be asserted inside sbappendaddr()?
> > 
> > This is only a temporary hack.  The clean solution would be a socket
> > mutex.  I have marked it with XXXSMP.  Maybe this place is a
> > good start to implement and test such a lock.
> > 
> > > If not, I don't understand how this would help...
> > 
> > All other places call sbappendaddr() with exclusive net lock.
> > divert_packet() holds the shared net lock, so it cannot run in
> > parallel with the other callers.
> > 
> > What is left is protection between multiple divert_packet() running
> > and calling sbappendaddr().  For that kernel lock helps.
> > 
> > Of course that is a dirty hack.  But we have a race in the committed
> > codebase that I want to plug quickly.  A proper solution needs more
> > thought.
> 
> Ouch.  I suppose using the kernel lock here makes sense since you're
> going to take it after the call anyway.
> 
> Maybe change the comment to state that in other places sbappendaddr()
> is always called with an exclusive net lock and therefore can't run
> while we're holding a shared net lock?

Is this easier to understand?

Index: netinet/ip_divert.c
===
RCS file: /data/mirror/openbsd/cvs/src/sys/netinet/ip_divert.c,v
retrieving revision 1.67
diff -u -p -r1.67 ip_divert.c
--- netinet/ip_divert.c 5 May 2022 16:44:22 -   1.67
+++ netinet/ip_divert.c 6 May 2022 20:45:43 -
@@ -222,11 +222,19 @@ divert_packet(struct mbuf *m, int dir, u
}
 
so = inp->inp_socket;
+   /*
+* XXXSMP sbappendaddr() is not MP safe and this function is called
+* from pf with shared netlock.  To call only one sbappendaddr() from
+* divert_packet(), protect it with kernel lock.  All other places
+* call sbappendaddr() with exclusive net lock.  This blocks
+* divert_packet() as we have the shared lock.
+*/
+   KERNEL_LOCK();
if (sbappendaddr(so, &so->so_rcv, sintosa(&sin), m, NULL) == 0) {
+   KERNEL_UNLOCK();
divstat_inc(divs_fullsock);
goto bad;
}
-   KERNEL_LOCK();
sorwakeup(inp->inp_socket);
KERNEL_UNLOCK();
 
Index: netinet6/ip6_divert.c
===
RCS file: /data/mirror/openbsd/cvs/src/sys/netinet6/ip6_divert.c,v
retrieving revision 1.66
diff -u -p -r1.66 ip6_divert.c
--- netinet6/ip6_divert.c   5 May 2022 16:44:22 -   1.66
+++ netinet6/ip6_divert.c   6 May 2022 20:45:11 -
@@ -228,11 +228,19 @@ divert6_packet(struct mbuf *m, int dir, 
}
 
so = inp->inp_socket;
+   /*
+* XXXSMP sbappendaddr() is not MP safe and this function is called
+* from pf with shared netlock.  To call only one sbappendaddr() from
+* divert6_packet(), protect it with kernel lock.  All other places
+* call sbappendaddr() with exclusive net lock.  This blocks
+* divert6_packet() as we have the shared lock.
+*/
+   KERNEL_LOCK();
if (sbappendaddr(so, &so->so_rcv, sin6tosa(&sin6), m, NULL) == 0) {
+   KERNEL_UNLOCK();
div6stat_inc(div6s_fullsock);
goto bad;
}
-   KERNEL_LOCK();
sorwakeup(inp->inp_socket);
KERNEL_UNLOCK();
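
For comparison, here is a minimal sketch (not part of the diff) of the
invariant the comment describes, written as the kind of assertion that was
suggested earlier in the thread for sbappendaddr().  It assumes the global
netlock rwlock and rw_status(9); treat it as an illustration, not a proposal:

#include <sys/systm.h>
#include <sys/rwlock.h>

extern struct rwlock netlock;

/*
 * Sketch: sbappendaddr() may be entered either with the exclusive net
 * lock, or with the shared net lock plus the kernel lock (the
 * divert_packet() case above).
 */
static inline void
sbappend_assertlocked(void)
{
	if (rw_status(&netlock) != RW_WRITE)
		KERNEL_ASSERT_LOCKED();
}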
 



Re: divert packet kernel lock

2022-05-06 Thread Mark Kettenis
> Date: Fri, 6 May 2022 14:48:59 +0200
> From: Alexander Bluhm 
> 
> On Thu, May 05, 2022 at 11:10:54PM +0200, Mark Kettenis wrote:
> > > Date: Thu, 5 May 2022 22:41:01 +0200
> > > From: Alexander Bluhm 
> > >
> > > Hi,
> > >
> > > > The easiest way to make divert_packet() MP safe for now is to
> > > protect sbappendaddr() with kernel lock.
> >
> > All other invocations of sbappendaddr() run with the kernel lock held?
> 
> No.  Only this place takes kernel lock.
> 
> > If so, maybe that should be asserted inside sbappendaddr()?
> 
> This is only a temporary hack.  The clean solution would be a socket
> mutex.  I have marked it with XXXSMP.  Maybe this place is a
> good start to implement and test such a lock.
> 
> > If not, I don't understand how this would help...
> 
> All other places call sbappendaddr() with exclusive net lock.
> divert_packet() holds the shared net lock, so it cannot run in
> parallel with the other callers.
> 
> What is left is protection between multiple divert_packet() running
> and calling sbappendaddr().  For that kernel lock helps.
> 
> Of course that is a dirty hack.  But we have a race in the committed
> codebase that I want to plug quickly.  A proper solution needs more
> thought.

Ouch.  I suppose using the kernel lock here makes sense since you're
going to take it after the call anyway.

Maybe change the comment to state that in other places sbappendaddr()
is always called with an exclusive net lock and therefore can't run
while we're holding a shared net lock?

> > > Index: netinet/ip_divert.c
> > > ===
> > > RCS file: /data/mirror/openbsd/cvs/src/sys/netinet/ip_divert.c,v
> > > retrieving revision 1.67
> > > diff -u -p -r1.67 ip_divert.c
> > > --- netinet/ip_divert.c   5 May 2022 16:44:22 -   1.67
> > > +++ netinet/ip_divert.c   5 May 2022 20:36:23 -
> > > @@ -222,11 +222,18 @@ divert_packet(struct mbuf *m, int dir, u
> > >   }
> > >
> > >   so = inp->inp_socket;
> > > + /*
> > > +  * XXXSMP sbappendaddr() is not MP safe and this function is called
> > > +  * from pf with shared netlock.  To run only one sbappendaddr()
> > > +  * protect it with kernel lock.  Socket buffer access from system
> > > +  * call is protected with exclusive net lock.
> > > +  */
> > > + KERNEL_LOCK();
> > >   if (sbappendaddr(so, &so->so_rcv, sintosa(&sin), m, NULL) == 0) {
> > > + KERNEL_UNLOCK();
> > >   divstat_inc(divs_fullsock);
> > >   goto bad;
> > >   }
> > > - KERNEL_LOCK();
> > >   sorwakeup(inp->inp_socket);
> > >   KERNEL_UNLOCK();
> > >
> > > Index: netinet6/ip6_divert.c
> > > ===
> > > RCS file: /data/mirror/openbsd/cvs/src/sys/netinet6/ip6_divert.c,v
> > > retrieving revision 1.66
> > > diff -u -p -r1.66 ip6_divert.c
> > > --- netinet6/ip6_divert.c 5 May 2022 16:44:22 -   1.66
> > > +++ netinet6/ip6_divert.c 5 May 2022 20:36:23 -
> > > @@ -228,11 +228,18 @@ divert6_packet(struct mbuf *m, int dir,
> > >   }
> > >
> > >   so = inp->inp_socket;
> > > + /*
> > > +  * XXXSMP sbappendaddr() is not MP safe and this function is called
> > > +  * from pf with shared netlock.  To run only one sbappendaddr()
> > > +  * protect it with kernel lock.  Socket buffer access from system
> > > +  * call is protected with exclusive net lock.
> > > +  */
> > > + KERNEL_LOCK();
> > >   if (sbappendaddr(so, &so->so_rcv, sin6tosa(&sin6), m, NULL) == 0) {
> > > + KERNEL_UNLOCK();
> > >   div6stat_inc(div6s_fullsock);
> > >   goto bad;
> > >   }
> > > - KERNEL_LOCK();
> > >   sorwakeup(inp->inp_socket);
> > >   KERNEL_UNLOCK();
> > >
> > >
> > >
> 



net lock priority

2022-05-06 Thread Alexander Bluhm
Hi,

When creating network load by forwarding packets, SSH becomes unusable
and ping times rise above 10 seconds.

The problem is that while multiple forwarding threads are running with
the shared net lock, the exclusive lock cannot be acquired.  This is
unfair.

The diff below prevents a read lock from being granted while another
thread is waiting for the exclusive lock.  With that, ping times stay
under 300 ms.
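
To illustrate the idea, here is a minimal userland model of the changed
reader fast path (my own sketch, not the kernel code): a reader backs off
to the slow path as soon as either the write-locked bit or the
writer-waiting bit is set, so pending writers are no longer starved by a
steady stream of readers.

#include <stdatomic.h>
#include <stdbool.h>

#define WRLOCK	0x1UL		/* write-locked */
#define WRWANT	0x2UL		/* a writer is waiting */
#define RD_INCR	0x4UL		/* reader count increment */

static _Atomic unsigned long owner;

/* Fast path only; returns false when the caller must take the slow path. */
static bool
try_read_fast(void)
{
	unsigned long o = atomic_load(&owner);

	if (o & (WRLOCK | WRWANT))
		return (false);
	return (atomic_compare_exchange_strong(&owner, &o, o + RD_INCR));
}

The kernel diff below does the equivalent by adding RWLOCK_WRWANT to the
check mask both in the rw_enter_read() fast path and in the RW_READ entry
of the op table.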

Does this read/write lock priority change make sense?

bluhm

Index: kern/kern_rwlock.c
===
RCS file: /data/mirror/openbsd/cvs/src/sys/kern/kern_rwlock.c,v
retrieving revision 1.47
diff -u -p -r1.47 kern_rwlock.c
--- kern/kern_rwlock.c  8 Feb 2021 08:18:45 -   1.47
+++ kern/kern_rwlock.c  6 May 2022 12:08:01 -
@@ -81,7 +81,7 @@ static const struct rwlock_op {
},
{   /* RW_READ */
RWLOCK_READ_INCR,
-   RWLOCK_WRLOCK,
+   RWLOCK_WRLOCK | RWLOCK_WRWANT,
RWLOCK_WAIT,
0,
PLOCK
@@ -103,7 +103,7 @@ rw_enter_read(struct rwlock *rwl)
 {
unsigned long owner = rwl->rwl_owner;
 
-   if (__predict_false((owner & RWLOCK_WRLOCK) ||
+   if (__predict_false((owner & (RWLOCK_WRLOCK | RWLOCK_WRWANT)) ||
rw_cas(&rwl->rwl_owner, owner, owner + RWLOCK_READ_INCR)))
rw_enter(rwl, RW_READ);
else {



Re: divert packet kernel lock

2022-05-06 Thread Vitaliy Makkoveev
sbappendaddr() has no sleep points, so this should work. I have no
objections to committing this as a quick and dirty fix. Otherwise the
network stack parallelisation diff should be reverted.

> On 6 May 2022, at 15:48, Alexander Bluhm  wrote:
> 
> On Thu, May 05, 2022 at 11:10:54PM +0200, Mark Kettenis wrote:
>>> Date: Thu, 5 May 2022 22:41:01 +0200
>>> From: Alexander Bluhm 
>>> 
>>> Hi,
>>> 
>>> The easiest way to make divert_packet() MP safe for now is to
>>> protect sbappendaddr() with kernel lock.
>> 
>> All other invocations of sbappendaddr() run with the kernel lock held?
> 
> No.  Only this place takes kernel lock.
> 
>> If so, maybe that should be asserted inside sbappendaddr()?
> 
> This is only a temporary hack.  The clean solution would be a socket
> mutex.  I have marked it with XXXSMP.  Maybe this place is a
> good start to implement and test such a lock.
> 
>> If not, I don't understand how this would help...
> 
> All other places call sbappendaddr() with exclusive net lock.
> divert_packet() holds the shared net lock, so it cannot run in
> parallel with the other callers.
> 
> What is left is protection between multiple divert_packet() running
> and calling sbappendaddr().  For that kernel lock helps.
> 
> Of course that is a dirty hack.  But we have a race in the committed
> codebase that I want to plug quickly.  A proper solution needs more
> thought.
> 
>>> Index: netinet/ip_divert.c
>>> ===
>>> RCS file: /data/mirror/openbsd/cvs/src/sys/netinet/ip_divert.c,v
>>> retrieving revision 1.67
>>> diff -u -p -r1.67 ip_divert.c
>>> --- netinet/ip_divert.c 5 May 2022 16:44:22 -   1.67
>>> +++ netinet/ip_divert.c 5 May 2022 20:36:23 -
>>> @@ -222,11 +222,18 @@ divert_packet(struct mbuf *m, int dir, u
>>> }
>>> 
>>> so = inp->inp_socket;
>>> +   /*
>>> +* XXXSMP sbappendaddr() is not MP safe and this function is called
>>> +* from pf with shared netlock.  To run only one sbappendaddr()
>>> +* protect it with kernel lock.  Socket buffer access from system
>>> +* call is protected with exclusive net lock.
>>> +*/
>>> +   KERNEL_LOCK();
>>> if (sbappendaddr(so, &so->so_rcv, sintosa(&sin), m, NULL) == 0) {
>>> +   KERNEL_UNLOCK();
>>> divstat_inc(divs_fullsock);
>>> goto bad;
>>> }
>>> -   KERNEL_LOCK();
>>> sorwakeup(inp->inp_socket);
>>> KERNEL_UNLOCK();
>>> 
>>> Index: netinet6/ip6_divert.c
>>> ===
>>> RCS file: /data/mirror/openbsd/cvs/src/sys/netinet6/ip6_divert.c,v
>>> retrieving revision 1.66
>>> diff -u -p -r1.66 ip6_divert.c
>>> --- netinet6/ip6_divert.c   5 May 2022 16:44:22 -   1.66
>>> +++ netinet6/ip6_divert.c   5 May 2022 20:36:23 -
>>> @@ -228,11 +228,18 @@ divert6_packet(struct mbuf *m, int dir, 
>>> }
>>> 
>>> so = inp->inp_socket;
>>> +   /*
>>> +* XXXSMP sbappendaddr() is not MP safe and this function is called
>>> +* from pf with shared netlock.  To run only one sbappendaddr()
>>> +* protect it with kernel lock.  Socket buffer access from system
>>> +* call is protected with exclusive net lock.
>>> +*/
>>> +   KERNEL_LOCK();
>>> if (sbappendaddr(so, &so->so_rcv, sin6tosa(&sin6), m, NULL) == 0) {
>>> +   KERNEL_UNLOCK();
>>> div6stat_inc(div6s_fullsock);
>>> goto bad;
>>> }
>>> -   KERNEL_LOCK();
>>> sorwakeup(inp->inp_socket);
>>> KERNEL_UNLOCK();
>>> 
>>> 
>>> 
> 



Re: divert packet kernel lock

2022-05-06 Thread Alexander Bluhm
On Thu, May 05, 2022 at 11:10:54PM +0200, Mark Kettenis wrote:
> > Date: Thu, 5 May 2022 22:41:01 +0200
> > From: Alexander Bluhm 
> > 
> > Hi,
> > 
> > The easiest way to make divert_packet() MP safe for now is to
> > protect sbappendaddr() with kernel lock.
> 
> All other invocations of sbappendaddr() run with the kernel lock held?

No.  Only this place takes kernel lock.

> If so, maybe that should be asserted inside sbappendaddr()?

This is only a temporary hack.  The clean solution would be a socket
mutex.  I have marked it with XXXSMP.  Maybe this place is a
good start to implement and test such a lock.

> If not, I don't understand how this would help...

All other places call sbappendaddr() with exclusive net lock.
divert_packet() holds the shared net lock, so it cannot run in
parallel with the other callers.

What is left is protection between multiple divert_packet() running
and calling sbappendaddr().  For that kernel lock helps.

Of course that is a dirty hack.  But we have a race in the committed
codebase that I want to plug quickly.  A proper solution needs more
thought.

> > Index: netinet/ip_divert.c
> > ===
> > RCS file: /data/mirror/openbsd/cvs/src/sys/netinet/ip_divert.c,v
> > retrieving revision 1.67
> > diff -u -p -r1.67 ip_divert.c
> > --- netinet/ip_divert.c 5 May 2022 16:44:22 -   1.67
> > +++ netinet/ip_divert.c 5 May 2022 20:36:23 -
> > @@ -222,11 +222,18 @@ divert_packet(struct mbuf *m, int dir, u
> > }
> >  
> > so = inp->inp_socket;
> > +   /*
> > +* XXXSMP sbappendaddr() is not MP safe and this function is called
> > +* from pf with shared netlock.  To run only one sbappendaddr()
> > +* protect it with kernel lock.  Socket buffer access from system
> > +* call is protected with exclusive net lock.
> > +*/
> > +   KERNEL_LOCK();
> > if (sbappendaddr(so, &so->so_rcv, sintosa(&sin), m, NULL) == 0) {
> > +   KERNEL_UNLOCK();
> > divstat_inc(divs_fullsock);
> > goto bad;
> > }
> > -   KERNEL_LOCK();
> > sorwakeup(inp->inp_socket);
> > KERNEL_UNLOCK();
> >  
> > Index: netinet6/ip6_divert.c
> > ===
> > RCS file: /data/mirror/openbsd/cvs/src/sys/netinet6/ip6_divert.c,v
> > retrieving revision 1.66
> > diff -u -p -r1.66 ip6_divert.c
> > --- netinet6/ip6_divert.c   5 May 2022 16:44:22 -   1.66
> > +++ netinet6/ip6_divert.c   5 May 2022 20:36:23 -
> > @@ -228,11 +228,18 @@ divert6_packet(struct mbuf *m, int dir, 
> > }
> >  
> > so = inp->inp_socket;
> > +   /*
> > +* XXXSMP sbappendaddr() is not MP safe and this function is called
> > +* from pf with shared netlock.  To run only one sbappendaddr()
> > +* protect it with kernel lock.  Socket buffer access from system
> > +* call is protected with exclusive net lock.
> > +*/
> > +   KERNEL_LOCK();
> > if (sbappendaddr(so, &so->so_rcv, sin6tosa(&sin6), m, NULL) == 0) {
> > +   KERNEL_UNLOCK();
> > div6stat_inc(div6s_fullsock);
> > goto bad;
> > }
> > -   KERNEL_LOCK();
> > sorwakeup(inp->inp_socket);
> > KERNEL_UNLOCK();
> >  
> > 
> > 



Re: allow 240/4 in various network daemons

2022-05-06 Thread Tom Smyth
I would agree with the diff, @claudio (for what it is worth).

In principle 240.0.0.0/4 was reserved for future use in the past,
so changing that today makes sense to me.
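
For reference, a quick userland check (my own illustration, not part of
the diff) of what the removed IN_BADCLASS() test matches, assuming the
macro is visible from <netinet/in.h> as it is on OpenBSD; 240.0.0.1 trips
only the "experimental" check:

#include <stdio.h>
#include <stdint.h>
#include <arpa/inet.h>
#include <netinet/in.h>

int
main(void)
{
	/* 240/4: formerly "class E", reserved for future use */
	uint32_t a = ntohl(inet_addr("240.0.0.1"));

	printf("IN_MULTICAST: %d\n", IN_MULTICAST(a) != 0);	/* prints 0 */
	printf("IN_BADCLASS:  %d\n", IN_BADCLASS(a) != 0);	/* prints 1 */
	return (0);
}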


On Fri, 6 May 2022 at 13:20, Claudio Jeker  wrote:
>
> On Thu, May 05, 2022 at 11:37:24AM +0200, Claudio Jeker wrote:
> > So most routing daemons and other network daemons like pppd do not allow
> > 240/4 as IPs because they check the IP against IN_BADCLASS().
> > I think it is time to remove this restriction.
> >
> > Now there is another magical network 0.0.0.0/8 which is not allowed in
> > some but not all of the routing daemons. Not sure if that should be
> > removed or blocked in all daemons.
>
> The discussion about this diff totally derailed, so let's try again. Anyone
> wants to OK this?
>
> --
> :wq Claudio
>
> Index: usr.sbin/bgpd/kroute.c
> ===
> RCS file: /cvs/src/usr.sbin/bgpd/kroute.c,v
> retrieving revision 1.244
> diff -u -p -r1.244 kroute.c
> --- usr.sbin/bgpd/kroute.c  8 Mar 2022 12:58:57 -   1.244
> +++ usr.sbin/bgpd/kroute.c  5 May 2022 08:48:27 -
> @@ -1448,12 +1448,11 @@ kr_redistribute(int type, struct ktable
> return;
>
> /*
> -* We consider the loopback net, multicast and experimental addresses
> +* We consider the loopback net and multicast addresses
>  * as not redistributable.
>  */
> a = ntohl(kr->prefix.s_addr);
> -   if (IN_MULTICAST(a) || IN_BADCLASS(a) ||
> -   (a >> IN_CLASSA_NSHIFT) == IN_LOOPBACKNET)
> +   if (IN_MULTICAST(a) || (a >> IN_CLASSA_NSHIFT) == IN_LOOPBACKNET)
> return;
>
> /* Check if the nexthop is the loopback addr. */
> Index: usr.sbin/bgpd/rde.c
> ===
> RCS file: /cvs/src/usr.sbin/bgpd/rde.c,v
> retrieving revision 1.544
> diff -u -p -r1.544 rde.c
> --- usr.sbin/bgpd/rde.c 22 Mar 2022 10:53:08 -  1.544
> +++ usr.sbin/bgpd/rde.c 5 May 2022 08:48:49 -
> @@ -1790,10 +1790,10 @@ bad_flags:
> UPD_READ(&nexthop.v4.s_addr, p, plen, 4);
> /*
>  * Check if the nexthop is a valid IP address. We consider
> -* multicast and experimental addresses as invalid.
> +* multicast addresses as invalid.
>  */
> tmp32 = ntohl(nexthop.v4.s_addr);
> -   if (IN_MULTICAST(tmp32) || IN_BADCLASS(tmp32)) {
> +   if (IN_MULTICAST(tmp32)) {
> rde_update_err(peer, ERR_UPDATE, ERR_UPD_NEXTHOP,
> op, len);
> return (-1);
> Index: usr.sbin/eigrpd/util.c
> ===
> RCS file: /cvs/src/usr.sbin/eigrpd/util.c,v
> retrieving revision 1.10
> diff -u -p -r1.10 util.c
> --- usr.sbin/eigrpd/util.c  7 Dec 2018 08:40:54 -   1.10
> +++ usr.sbin/eigrpd/util.c  5 May 2022 08:53:31 -
> @@ -224,7 +224,7 @@ bad_addr_v4(struct in_addr addr)
>
> if (((a >> IN_CLASSA_NSHIFT) == 0) ||
> ((a >> IN_CLASSA_NSHIFT) == IN_LOOPBACKNET) ||
> -   IN_MULTICAST(a) || IN_BADCLASS(a))
> +   IN_MULTICAST(a))
> return (1);
>
> return (0);
> Index: usr.sbin/ldpd/util.c
> ===
> RCS file: /cvs/src/usr.sbin/ldpd/util.c,v
> retrieving revision 1.5
> diff -u -p -r1.5 util.c
> --- usr.sbin/ldpd/util.c7 Dec 2018 08:40:54 -   1.5
> +++ usr.sbin/ldpd/util.c5 May 2022 08:54:03 -
> @@ -223,7 +223,7 @@ bad_addr_v4(struct in_addr addr)
>
> if (((a >> IN_CLASSA_NSHIFT) == 0) ||
> ((a >> IN_CLASSA_NSHIFT) == IN_LOOPBACKNET) ||
> -   IN_MULTICAST(a) || IN_BADCLASS(a))
> +   IN_MULTICAST(a))
> return (1);
>
> return (0);
> Index: usr.sbin/mrouted/inet.c
> ===
> RCS file: /cvs/src/usr.sbin/mrouted/inet.c,v
> retrieving revision 1.6
> diff -u -p -r1.6 inet.c
> --- usr.sbin/mrouted/inet.c 21 Apr 2013 06:42:43 -  1.6
> +++ usr.sbin/mrouted/inet.c 5 May 2022 08:57:09 -
> @@ -36,7 +36,6 @@ inet_valid_host(u_int32_t naddr)
>  addr = ntohl(naddr);
>
>  return (!(IN_MULTICAST(addr) ||
> - IN_BADCLASS (addr) ||
>   (addr & 0xff000000) == 0));
>  }
>
> @@ -83,7 +82,7 @@ inet_valid_subnet(u_int32_t nsubnet, u_i
> (subnet & 0xff000000) == 0x7f000000 ||
> (subnet & 0xff000000) == 0x00000000) return (FALSE);
>  }
> -else if (IN_CLASSD(subnet) || IN_BADCLASS(subnet)) {
> +else if (IN_CLASSD(subnet)) {
> /* Above Class C address space */
> return (FALSE);
>  }
> Index: usr.sbin/ospfd/kroute.c
> ===
> RCS file: 

Re: clang-local.1: document support for source-based code coverage

2022-05-06 Thread Theo de Raadt
Frederic Cambus  wrote:

> On a more serious note though, building from ports was the only way
> to have -stable packages before we started to offer -stable binary
> packages with OpenBSD 6.5, and it is still the only way for users of
> architectures for which those packages are not provided. It's thus
> reasonable to assume most of our users are familiar with the ports
> tree hierarchy terminology.

I have no idea where you are coming from.  The vast majority of our
users did NOT care about newer ports, and stuck with the provided
packages through the 6 month release cycle. 

The percentage of users who build ports is in the low single digits.



Re: allow 240/4 in various network daemons

2022-05-06 Thread Theo Buehler
On Fri, May 06, 2022 at 02:11:46PM +0200, Claudio Jeker wrote:
> On Thu, May 05, 2022 at 11:37:24AM +0200, Claudio Jeker wrote:
> > So most routing daemons and other network daemons like pppd do not allow
> > 240/4 as IPs because they check the IP against IN_BADCLASS().
> > I think it is time to remove this restriction.
> > 
> > Now there is another magical network 0.0.0.0/8 which is not allowed in
> > some but not all of the routing daemons. Not sure if that should be
> > removed or blocked in all daemons.
> 
> The discussion about this diff totally derailed, so let's try again. Anyone
> wants to OK this?

ok tb



Re: allow 240/4 in various network daemons

2022-05-06 Thread Claudio Jeker
On Thu, May 05, 2022 at 11:37:24AM +0200, Claudio Jeker wrote:
> So most routing daemons and other network daemons like pppd do not allow
> 240/4 as IPs because they check the IP against IN_BADCLASS().
> I think it is time to remove this restriction.
> 
> Now there is another magical network 0.0.0.0/8 which is not allowed in
> some but not all of the routing daemons. Not sure if that should be
> removed or blocked in all daemons.

The discussion about this diff totally derailed, so let's try again. Anyone
wants to OK this?
 
-- 
:wq Claudio

Index: usr.sbin/bgpd/kroute.c
===
RCS file: /cvs/src/usr.sbin/bgpd/kroute.c,v
retrieving revision 1.244
diff -u -p -r1.244 kroute.c
--- usr.sbin/bgpd/kroute.c  8 Mar 2022 12:58:57 -   1.244
+++ usr.sbin/bgpd/kroute.c  5 May 2022 08:48:27 -
@@ -1448,12 +1448,11 @@ kr_redistribute(int type, struct ktable 
return;
 
/*
-* We consider the loopback net, multicast and experimental addresses
+* We consider the loopback net and multicast addresses
 * as not redistributable.
 */
a = ntohl(kr->prefix.s_addr);
-   if (IN_MULTICAST(a) || IN_BADCLASS(a) ||
-   (a >> IN_CLASSA_NSHIFT) == IN_LOOPBACKNET)
+   if (IN_MULTICAST(a) || (a >> IN_CLASSA_NSHIFT) == IN_LOOPBACKNET)
return;
 
/* Check if the nexthop is the loopback addr. */
Index: usr.sbin/bgpd/rde.c
===
RCS file: /cvs/src/usr.sbin/bgpd/rde.c,v
retrieving revision 1.544
diff -u -p -r1.544 rde.c
--- usr.sbin/bgpd/rde.c 22 Mar 2022 10:53:08 -  1.544
+++ usr.sbin/bgpd/rde.c 5 May 2022 08:48:49 -
@@ -1790,10 +1790,10 @@ bad_flags:
UPD_READ(&nexthop.v4.s_addr, p, plen, 4);
/*
 * Check if the nexthop is a valid IP address. We consider
-* multicast and experimental addresses as invalid.
+* multicast addresses as invalid.
 */
tmp32 = ntohl(nexthop.v4.s_addr);
-   if (IN_MULTICAST(tmp32) || IN_BADCLASS(tmp32)) {
+   if (IN_MULTICAST(tmp32)) {
rde_update_err(peer, ERR_UPDATE, ERR_UPD_NEXTHOP,
op, len);
return (-1);
Index: usr.sbin/eigrpd/util.c
===
RCS file: /cvs/src/usr.sbin/eigrpd/util.c,v
retrieving revision 1.10
diff -u -p -r1.10 util.c
--- usr.sbin/eigrpd/util.c  7 Dec 2018 08:40:54 -   1.10
+++ usr.sbin/eigrpd/util.c  5 May 2022 08:53:31 -
@@ -224,7 +224,7 @@ bad_addr_v4(struct in_addr addr)
 
if (((a >> IN_CLASSA_NSHIFT) == 0) ||
((a >> IN_CLASSA_NSHIFT) == IN_LOOPBACKNET) ||
-   IN_MULTICAST(a) || IN_BADCLASS(a))
+   IN_MULTICAST(a))
return (1);
 
return (0);
Index: usr.sbin/ldpd/util.c
===
RCS file: /cvs/src/usr.sbin/ldpd/util.c,v
retrieving revision 1.5
diff -u -p -r1.5 util.c
--- usr.sbin/ldpd/util.c7 Dec 2018 08:40:54 -   1.5
+++ usr.sbin/ldpd/util.c5 May 2022 08:54:03 -
@@ -223,7 +223,7 @@ bad_addr_v4(struct in_addr addr)
 
if (((a >> IN_CLASSA_NSHIFT) == 0) ||
((a >> IN_CLASSA_NSHIFT) == IN_LOOPBACKNET) ||
-   IN_MULTICAST(a) || IN_BADCLASS(a))
+   IN_MULTICAST(a))
return (1);
 
return (0);
Index: usr.sbin/mrouted/inet.c
===
RCS file: /cvs/src/usr.sbin/mrouted/inet.c,v
retrieving revision 1.6
diff -u -p -r1.6 inet.c
--- usr.sbin/mrouted/inet.c 21 Apr 2013 06:42:43 -  1.6
+++ usr.sbin/mrouted/inet.c 5 May 2022 08:57:09 -
@@ -36,7 +36,6 @@ inet_valid_host(u_int32_t naddr)
 addr = ntohl(naddr);
 
 return (!(IN_MULTICAST(addr) ||
- IN_BADCLASS (addr) ||
  (addr & 0xff000000) == 0));
 }
 
@@ -83,7 +82,7 @@ inet_valid_subnet(u_int32_t nsubnet, u_i
(subnet & 0xff000000) == 0x7f000000 ||
(subnet & 0xff000000) == 0x00000000) return (FALSE);
 }
-else if (IN_CLASSD(subnet) || IN_BADCLASS(subnet)) {
+else if (IN_CLASSD(subnet)) {
/* Above Class C address space */
return (FALSE);
 }
Index: usr.sbin/ospfd/kroute.c
===
RCS file: /cvs/src/usr.sbin/ospfd/kroute.c,v
retrieving revision 1.114
diff -u -p -r1.114 kroute.c
--- usr.sbin/ospfd/kroute.c 20 Aug 2020 03:09:28 -  1.114
+++ usr.sbin/ospfd/kroute.c 5 May 2022 08:54:30 -
@@ -565,12 +565,11 @@ kr_redist_eval(struct kroute *kr, struct
goto dont_redistribute;
 
/*
-* We consider the loopback net, multicast and experimental addresses
+* We consider the 

Re: wsmouse(4): tapping

2022-05-06 Thread Ulf Brosziewski
On 5/3/22 10:03, Ulf Brosziewski wrote:
> The implementation of the tapping mechanism in wsmouse(4) has a bug
> concerning the transitions into the hold/drag state, see
> https://marc.info/...
> for details.  The patch proposed in that message is obsolete.  I've
> been made aware that there is another problem: the transition only
> works with left-button taps.
> 
> This patch merges tap detection and the handling of hold/drag states,
> which is a cleaner and more generic approach.  It fixes the problems
> mentioned above, and it permits a better synchronization of button
> states when tap inputs and the use of hardware buttons are mixed.
> 
> OK?
> 
> 
> Index: dev/wscons/wstpad.c
> ===
> RCS file: /cvs/src/sys/dev/wscons/wstpad.c,v
> retrieving revision 1.30
> [...]


This version of the patch extends the synchronization of button states
to the tapping timeouts.  With this extension, even quite exotic
combinations of button and tapping inputs will produce proper pairs
of button-up and button-down events.

OK?


Index: dev/wscons/wstpad.c
===
RCS file: /cvs/src/sys/dev/wscons/wstpad.c,v
retrieving revision 1.30
diff -u -p -r1.30 wstpad.c
--- dev/wscons/wstpad.c 24 Mar 2021 18:28:24 -  1.30
+++ dev/wscons/wstpad.c 6 May 2022 08:39:21 -
@@ -82,18 +82,17 @@ enum tap_state {
TAP_DETECT,
TAP_IGNORE,
TAP_LIFTED,
-   TAP_2ND_TOUCH,
TAP_LOCKED,
-   TAP_NTH_TOUCH,
+   TAP_LOCKED_DRAG,
 };

 enum tpad_cmd {
CLEAR_MOTION_DELTAS,
SOFTBUTTON_DOWN,
SOFTBUTTON_UP,
+   TAPBUTTON_SYNC,
TAPBUTTON_DOWN,
TAPBUTTON_UP,
-   TAPBUTTON_DOUBLECLK,
VSCROLL,
HSCROLL,
 };
@@ -212,8 +211,10 @@ struct wstpad {
struct {
enum tap_state state;
int contacts;
-   int centered;
+   int valid;
+   u_int pending;
u_int button;
+   int masked;
int maxdist;
struct timeout to;
/* parameters: */
@@ -651,6 +652,7 @@ wstpad_softbuttons(struct wsmouseinput *
}
 }

+/* Check whether the duration of t is within the tap limit. */
 int
 wstpad_is_tap(struct wstpad *tp, struct tpad_touch *t)
 {
@@ -675,7 +677,7 @@ wstpad_tap_filter(struct wstpad *tp, str
dy = abs(t->y - t->orig.y) * tp->ratio;
dist = (dx >= dy ? dx + 3 * dy / 8 : dy + 3 * dx / 8);
}
-   tp->tap.centered = (CENTERED(t) && dist <= (tp->tap.maxdist << 12));
+   tp->tap.valid = (CENTERED(t) && dist <= (tp->tap.maxdist << 12));
 }


@@ -694,7 +696,7 @@ wstpad_tap_touch(struct wsmouseinput *in
lifted = (input->mt.sync[MTS_TOUCH] & ~input->mt.touches);
FOREACHBIT(lifted, slot) {
s = &tp->tpad_touches[slot];
-   if (tp->tap.state == TAP_DETECT && !tp->tap.centered)
+   if (tp->tap.state == TAP_DETECT && !tp->tap.valid)
wstpad_tap_filter(tp, s);
if (t == NULL || timespeccmp(&t->orig.time,
&s->orig.time, >))
@@ -703,7 +705,7 @@ wstpad_tap_touch(struct wsmouseinput *in
} else {
if (tp->t->state == TOUCH_END) {
t = tp->t;
-   if (tp->tap.state == TAP_DETECT && !tp->tap.centered)
+   if (tp->tap.state == TAP_DETECT && !tp->tap.valid)
wstpad_tap_filter(tp, t);
}
}
@@ -711,30 +713,31 @@ wstpad_tap_touch(struct wsmouseinput *in
return (t);
 }

+/* Determine the "tap button", keep track of whether a touch is masked. */
+u_int
+wstpad_tap_button(struct wstpad *tp)
+{
+   int n = tp->tap.contacts - tp->contacts - 1;
+
+   tp->tap.masked = tp->contacts;
+
+   return (n >= 0 && n < TAP_BTNMAP_SIZE ? tp->tap.btnmap[n] : 0);
+}
+
 /*
- * If each contact in a sequence of contacts that overlap in time
- * is a tap, a button event may be generated when the number of
- * contacts drops to zero, or to one if there is a masked touch.
+ * In the hold/drag state, do not mask touches if no masking was involved
+ * in the preceding tap gesture.
  */
 static inline int
-tap_finished(struct wstpad *tp, int nmasked)
+tap_unmask(struct wstpad *tp)
 {
-   return (tp->contacts == nmasked
-   && (nmasked == 0 || !wstpad_is_tap(tp, tp->t)));
-}
-
-static inline u_int
-tap_btn(struct wstpad *tp, int nmasked)
-{
-   int n = tp->tap.contacts - nmasked;
-
-   return (n <= TAP_BTNMAP_SIZE ? tp->tap.btnmap[n - 1] : 0);
+   return ((tp->tap.button || tp->tap.pending) && tp->tap.masked == 0);
 }

 /*
- * This handler supports one-, two-, and three-finger-taps, which
- * are mapped to left-button, right-button and middle-button 

Re: athn(4) USB question: Where is Tx interrupt handler?

2022-05-06 Thread Stefan Sperling
On Thu, May 05, 2022 at 01:19:08PM -0400, Farhan Khan wrote:
> Hi all,
> 
> Summary Question:
> 
> Broadly, I am trying to understand where an interrupt callback is specified if 
> not already specified by usbd_open_pipe_intr(9). Specifically, for the 
> athn(4) 
> driver, I am trying to understand if/how athn_usb_intr() is executed for Tx 
> interrupts. This seems necessary yet I do not see a callback specified by 
> usbd_setup_xfer(9) or by usbd_open_pipe_intr(9).
> 
> All code is located in sys/dev/usb/if_athn_usb.c.
> 
> Question Walk-through:
> 
> From reading the code, it seems that the athn_usb_intr() function is called 
> whenever a Tx interrupt is triggered. The reason I think this is that there 
> is a tsleep_nsec(9) for a Tx interrupt that waits for a wakeup(9) that only 
> happens in athn_usb_intr().
> 
> The 3 relevant steps are listed below in athn_usb_htc_setup() under the 
> comment "Set credits for WLAN Tx pipe":
> 
> 1. athn_usb_htc_msg(), which runs usbd_setup_xfer(9) and usbd_transfer(9) for 
> a Tx interrupt. The callback is set to NULL.
> 2. usc->wait_msg_id is set to AR_HTC_MSG_CONF_PIPE_RSP.
> 3. A tsleep_nsec() on &usc->wait_msg_id
> 
> The only place I see a wakeup(9) on &usc->wait_msg_id is within 
> athn_usb_intr(), on condition that usc->wait_msg_id is set to 
> AR_HTC_MSG_CONF_PIPE_RSP. Seems like a perfect match. Additionally, I do not 
> see an Rx interrupt anywhere else. But even if it does happen somewhere and I 
> am just missing it, the only place AR_HTC_MSG_CONF_PIPE_RSP is used is step 2.
> 
> Rx interrupt callbacks to athn_usb_intr() are specified by the 
> usbd_open_pipe_intr(9) call in athn_usb_open_pipes(). That seems explicit. 
> But 
> for the Tx interrupt, I do not see where the mapping to athn_usb_intr() is.
> 
> Please assist, thank you.
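
The handshake described above is the standard tsleep_nsec(9)/wakeup(9)
pattern.  A hedged sketch of its shape, following the three steps listed in
the question; send_conf_pipe_msg() is a hypothetical placeholder for step 1,
the real code lives in athn_usb_htc_setup() and athn_usb_intr():

/* submitter side: issue the command, then sleep until the response */
error = send_conf_pipe_msg(usc);	/* hypothetical wrapper for step 1 */
usc->wait_msg_id = AR_HTC_MSG_CONF_PIPE_RSP;
if (error == 0)
	error = tsleep_nsec(&usc->wait_msg_id, 0, "athnmsg", SEC_TO_NSEC(1));
usc->wait_msg_id = 0;

/* interrupt side, in athn_usb_intr(): match the id and wake the sleeper */
if (usc->wait_msg_id == AR_HTC_MSG_CONF_PIPE_RSP) {
	usc->wait_msg_id = 0;
	wakeup(&usc->wait_msg_id);
}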

Everything related to Tx is happening in athn_usb_tx() and athn_usb_txeof().

usbd_setup_xfer() gets a function pointer to call when the USB transfer
has completed. This function pointer is athn_usb_txeof():

usbd_setup_xfer(data->xfer, usc->tx_data_pipe, data, data->buf,
xferlen, USBD_FORCE_SHORT_XFER | USBD_NO_COPY, ATHN_USB_TX_TIMEOUT,
athn_usb_txeof);

athn_usb_txeof() puts the buffer associated with the Tx attempt back onto
the list of free buffers, and schedules more Tx attempts if needed by
calling if_start().

The associated mbuf is freed quite early, before the Tx attempt is even made,
because the entire packet gets copied into the Tx command sent to the device:

/* Copy payload. */
m_copydata(m, 0, m->m_pkthdr.len, frm);
frm += m->m_pkthdr.len;
m_freem(m);
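
The shape of such a completion handler is dictated by the usbd_callback
type, void (*)(struct usbd_xfer *, void *, usbd_status).  Here is a sketch
of the two duties described above (illustrative only; the structure and
field names are assumptions, the real code is athn_usb_txeof()):

void
example_txeof(struct usbd_xfer *xfer, void *priv, usbd_status status)
{
	struct athn_usb_tx_data *data = priv;	/* cookie given to usbd_setup_xfer() */
	struct athn_usb_softc *usc = data->sc;	/* back-pointer, field name assumed */
	int s;

	s = splnet();
	/* 1. Give the Tx buffer back to the driver's free list. */
	TAILQ_INSERT_TAIL(&usc->tx_free_list, data, next);
	if (status != USBD_NORMAL_COMPLETION) {
		/* count the error, clear a stalled pipe if required */
	}
	/* 2. Kick the interface queue so further packets get submitted. */
	if_start(&usc->sc_sc.sc_ic.ic_if);	/* path to the ifnet assumed */
	splx(s);
}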



Re: clang-local.1: document support for source-based code coverage

2022-05-06 Thread Frederic Cambus
On Wed, May 04, 2022 at 09:50:47AM -0600, Theo de Raadt wrote:

> > > Isn't the purpose of the clang-local(1) to document local changes to the
> > > system compiler, -fsanitize-minimal-runtime feels like a special case as
> > > we do not have the complete sanitizer runtime.
> > 
> > What do you suggest as a good location where people will 
> > find that information easily ?
> 
> Who knows, but probably not in clang-local.1
> 
> I actually find it a bit offensive when a base document has to mention
> something not in base. While here, let me make a comment on the

This is a valid objection, and there were other concerns about
clang-local.1 not being an adequate place to mention this, so I'm
withdrawing this diff.

> proposal's use of the token "devel/llvm" -- that is so completely obtuse
> and out of touch with the potential user base.  The average person will
> not understand that at all.  It is hugely presumptuous that anyone
> searching for this compiler tooling will be familiar with the "ports"
> tree hierarchy; the reality is NOONE uses ports, instead they use
> packages, which have a completely different namespace, and thus
> "devel/llvm" is completely meaningless to a person who uses packages.

That's a nice slogan idea for a future ports hackathon t-shirt:
"the reality is NOONE uses ports".

On a more serious note though, building from ports was the only way
to have -stable packages before we started to offer -stable binary
packages with OpenBSD 6.5, and it is still the only way for users of
architectures for which those packages are not provided. It's thus
reasonable to assume most of our users are familiar with the ports
tree hierarchy terminology.



Re: rc.subr - allow setting daemon start directory

2022-05-06 Thread Antoine Jacoutot
On Thu, May 05, 2022 at 04:49:10PM -0600, Aaron Bieber wrote:
> 
> 
> On Thu, May 5, 2022, at 4:46 PM, Antoine Jacoutot wrote:
> > Can you elaborate?
> > Do they need to start from a specific directory or from a directory 
> > they have write access to?
> > Because we could cd /tmp unconditionally 
> >
> 
> From a specific directory. I can’t think of which ones off the top of my 
> head, but for sure I have some personal rc scripts that need it for nodejs 
> things.

Ok. Let me think about it.

-- 
Antoine