Re: parallel ip forwarding

2021-12-23 Thread YASUOKA Masahiko
Hello,

On Fri, 24 Dec 2021 00:55:04 +0100
Alexander Bluhm  wrote:
> On Fri, Dec 03, 2021 at 08:35:45PM +0100, Alexander Bluhm wrote:
>> Note that IPsec still has the workaround to disable multiple queues.
> 
> I think we can remove the ipsec_in_use workaround now.  The IPsec
> path is protected with the kernel lock.
> 
> There are some issues left:
> - npppd l2tp ipsecflowinfo is not MP safe

Does this refer to what we are discussing in the "Fix
ipsp_spd_lookup() for transport mode" thread?  I wonder if there is
another issue.

> - the acquire SA feature is not MP safe
> - Hrvoje has seen a panic with sasync



Re: parallel ip forwarding

2021-12-23 Thread Alexander Bluhm
On Fri, Dec 03, 2021 at 08:35:45PM +0100, Alexander Bluhm wrote:
> Note that IPsec still has the workaround to disable multiple queues.

I think we can remove the ipsec_in_use workaround now.  The IPsec
path is protected with the kernel lock.

There are some issues left:
- npppd l2tp ipsecflowinfo is not MP safe
- the acquire SA feature is not MP safe
- Hrvoje has seen a panic with sasync

If you use one of these, the diff below should trigger crashes faster.
If you use only regular IPsec or forwarding, I hope it is stable.

bluhm

Index: net/if.c
===
RCS file: /cvs/src/sys/net/if.c,v
retrieving revision 1.644
diff -u -p -r1.644 if.c
--- net/if.c	11 Nov 2021 10:03:10 -0000	1.644
+++ net/if.c	23 Dec 2021 23:11:20 -0000
@@ -108,6 +108,10 @@
 #include 
 #endif
 
+#ifdef IPSEC
+#include 
+#endif
+
 #ifdef MPLS
 #include 
 #endif
@@ -237,7 +241,7 @@ int ifq_congestion;
 
 int netisr;
 
-#define	NET_TASKQ	1
+#define	NET_TASKQ	4
 struct taskq   *nettqmp[NET_TASKQ];
 
 struct task if_input_task_locked = TASK_INITIALIZER(if_netisr, NULL);
@@ -834,15 +838,10 @@ if_input_process(struct ifnet *ifp, stru
 * lists and the socket layer.
 */
 
-   /*
-* XXXSMP IPsec data structures are not ready to be accessed
-* by multiple network threads in parallel.  In this case
-* use an exclusive lock.
-*/
-   NET_LOCK();
+   NET_RLOCK_IN_SOFTNET();
while ((m = ml_dequeue(ml)) != NULL)
(*ifp->if_input)(ifp, m);
-   NET_UNLOCK();
+   NET_RUNLOCK_IN_SOFTNET();
 }
 
 void
@@ -899,6 +898,12 @@ if_netisr(void *unused)
arpintr();
KERNEL_UNLOCK();
}
+#endif
+   if (n & (1 << NETISR_IP))
+   ipintr();
+#ifdef INET6
+   if (n & (1 << NETISR_IPV6))
+   ip6intr();
 #endif
 #if NPPP > 0
if (n & (1 << NETISR_PPP)) {
Index: net/if_ethersubr.c
===
RCS file: /cvs/src/sys/net/if_ethersubr.c,v
retrieving revision 1.276
diff -u -p -r1.276 if_ethersubr.c
--- net/if_ethersubr.c	19 Aug 2021 10:22:00 -0000	1.276
+++ net/if_ethersubr.c	23 Dec 2021 23:11:20 -0000
@@ -222,7 +222,10 @@ ether_resolve(struct ifnet *ifp, struct 
 
switch (af) {
case AF_INET:
+   KERNEL_LOCK();
+   /* XXXSMP there is a MP race in arpresolve() */
error = arpresolve(ifp, rt, m, dst, eh->ether_dhost);
+   KERNEL_UNLOCK();
if (error)
return (error);
eh->ether_type = htons(ETHERTYPE_IP);
@@ -245,7 +248,10 @@ ether_resolve(struct ifnet *ifp, struct 
break;
 #ifdef INET6
case AF_INET6:
+   KERNEL_LOCK();
+   /* XXXSMP there is a MP race in nd6_resolve() */
error = nd6_resolve(ifp, rt, m, dst, eh->ether_dhost);
+   KERNEL_UNLOCK();
if (error)
return (error);
eh->ether_type = htons(ETHERTYPE_IPV6);
@@ -271,13 +277,19 @@ ether_resolve(struct ifnet *ifp, struct 
break;
 #ifdef INET6
case AF_INET6:
+   KERNEL_LOCK();
+   /* XXXSMP there is a MP race in nd6_resolve() */
error = nd6_resolve(ifp, rt, m, dst, eh->ether_dhost);
+   KERNEL_UNLOCK();
if (error)
return (error);
break;
 #endif
case AF_INET:
+   KERNEL_LOCK();
+   /* XXXSMP there is a MP race in arpresolve() */
error = arpresolve(ifp, rt, m, dst, eh->ether_dhost);
+   KERNEL_UNLOCK();
if (error)
return (error);
break;
@@ -529,12 +541,14 @@ ether_input(struct ifnet *ifp, struct mb
case ETHERTYPE_PPPOE:
if (m->m_flags & (M_MCAST | M_BCAST))
goto dropanyway;
+   KERNEL_LOCK();
 #ifdef PIPEX
if (pipex_enable) {
struct pipex_session *session;
 
if ((session = pipex_pppoe_lookup_session(m)) != NULL) {
pipex_pppoe_input(m, session);
+   KERNEL_UNLOCK();
return;
}
}
@@ -543,6 +557,7 @@ ether_input(struct ifnet *ifp, struct mb
pppoe_disc_input(m);
else
pppoe_data_input(m);
+   KERNEL_UNLOCK();
return;
 #endif
 #ifdef MPLS
Index: net/ifq.c
===

warning for -current builders: update your kernel before libc/ld.so!

2021-12-23 Thread Philip Guenther
For those building -current themselves, when you update past the commit
below you must be sure to build *and reboot to* a new kernel with the
change before you install a new libc or ld.so!

If you fail to do so then anything using the newer-than-kernel libc/ld.so
will coredump immediately, generally on the first mmap(2), and you'll need
to reboot to a bsd.rd or similar and put a matching kernel+libc+ld.so in
place.

This might be a good time to just install an official snapshot instead.

-- Forwarded message -
From: Philip Guenther 
Date: Thu, Dec 23, 2021 at 10:51 AM
Subject: CVS: cvs.openbsd.org: src
To: 


CVSROOT:	/cvs
Module name:	src
Changes by:	guent...@cvs.openbsd.org	2021/12/23 11:50:33

Modified files:
sys/kern   : syscalls.master vfs_syscalls.c kern_ktrace.c
 kern_pledge.c
sys/arch/sh/sh : trap.c
sys/arch/hppa/hppa: trap.c
sys/uvm: uvm_mmap.c
lib/libc/sys   : Makefile.inc
libexec/ld.so  : Makefile loader.c
libexec/ld.so/m88k: rtld_machine.c
usr.bin/kdump  : kdump.c
Added files:
libexec/ld.so  : syscall.h
Removed files:
lib/libc/sys   : ftruncate.c lseek.c mmap.c mquery.c pread.c
 preadv.c pwrite.c pwritev.c truncate.c
libexec/ld.so/m88k: syscall.h
libexec/ld.so/aarch64: syscall.h
libexec/ld.so/alpha: syscall.h
libexec/ld.so/amd64: syscall.h
libexec/ld.so/arm: syscall.h
libexec/ld.so/hppa: syscall.h
libexec/ld.so/i386: syscall.h
libexec/ld.so/mips64: syscall.h
libexec/ld.so/powerpc: syscall.h
libexec/ld.so/powerpc64: syscall.h
libexec/ld.so/riscv64: syscall.h
libexec/ld.so/sh: syscall.h
libexec/ld.so/sparc64: syscall.h

Log message:
Roll the syscalls that have an off_t argument to remove the explicit
padding.
Switch libc and ld.so to the generic stubs for these calls.
WARNING: reboot to updated kernel before installing libc or ld.so!

Time for a story...

When gcc (back in 1.x days) first implemented long long, it didn't
(always) pass 64bit arguments in 'aligned' registers/stack slots, with
the result that argument offsets didn't match structure offsets.  This
affected the nine system calls that pass off_t arguments:
ftruncate lseek mmap mquery pread preadv pwrite pwritev truncate

To avoid having to do custom ASM wrappers for those, BSD put an explicit
pad argument in so that the off_t argument would always start on an even
slot and thus be naturally aligned.  Thus those odd wrappers in
lib/libc/sys/ that use __syscall() and pass an extra '0' argument.
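
As a rough sketch of what those wrappers looked like (illustrative only,
not the verbatim lib/libc/sys source; the __syscall() prototype is
spelled out here for the sketch and is an assumption):

#include <sys/types.h>
#include <sys/syscall.h>
#include <unistd.h>

/* libc-internal varargs syscall stub; prototype assumed for this sketch */
quad_t __syscall(quad_t, ...);

/*
 * Old-style padded wrapper: the literal 0 after fd keeps the 64-bit
 * off_t starting on an even argument slot on every ABI.
 */
off_t
lseek(int fd, off_t offset, int whence)
{
	return (__syscall((quad_t)SYS_lseek, fd, 0, offset, whence));
}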

The ABIs for different CPUs eventually settled how things should be
passed on each and gcc 2.x followed them.  The only arch now where it
helps is landisk, which needs to skip the last argument register if it
would be the first half of a 64bit argument.  So: add new syscalls
without the pad argument and on landisk do that skipping directly in
the syscall handler in the kernel.  Keep compat support for the
existing syscalls long enough for the transition.

ok deraadt@


Re: pax(1): Don't open files that will be skipped

2021-12-23 Thread Jeremy Evans
On 11/22 11:15, Jeremy Evans wrote:
> On 10/19 09:34, Jeremy Evans wrote:
> > Currently, when creating an archive file with pax(1), pax will attempt
> > to open a file even if the file will be skipped due to an -s
> > replacement with the empty string. With this change, pax will not
> > attempt to open files that it knows will be skipped.
> > 
> > When doing direct copies to a directory (-rw), pax already skips
> > the file before attempting to open it. So this makes the behavior
> > more consistent.
> > 
> > My main reason for this change is to be able to silence the following
> > warning when backing up:
> > 
> >   pax: Unable to open /etc/spwd.db to read: Operation not permitted
> > 
> > One other possible benefit is if you are skipping a lot of files,
> > avoiding the open syscall for each file may make pax faster.
> > 
> > OKs?
> 
> Ping.  I still think this is worth doing.  I've tested and it is also
> measurably faster (25%) if you are excluding a lot of files:
> 
> $ doas time obj/pax -f /dev/null -w -x cpio -s '/^.*$//' /some-path
> 2.65 real 1.95 user 0.73 sys
> $ doas time pax -f /dev/null -w -x cpio -s '/^.*$//' /some-path
> 3.34 real 2.04 user 1.30 sys
> 
> OKs?

Still looking for a review/OK for this.

Thanks,
Jeremy

> 
> Thanks,
> Jeremy
> 
> > 
> > Jeremy
> > 
> > Index: ar_subs.c
> > ===
> > RCS file: /cvs/src/bin/pax/ar_subs.c,v
> > retrieving revision 1.49
> > diff -u -p -r1.49 ar_subs.c
> > --- ar_subs.c   28 Jun 2019 13:34:59 -0000  1.49
> > +++ ar_subs.c   19 Oct 2021 16:05:31 -0000
> > @@ -441,6 +441,23 @@ wr_archive(ARCHD *arcn, int is_app)
> > if (hlk && (chk_lnk(arcn) < 0))
> > break;
> >  
> > +   /*
> > +* Modify the name as requested by the user
> > +*/
> > +   if ((res = mod_name(arcn)) < 0) {
> > +   /*
> > +* pax finished, purge link table entry and stop
> > +*/
> > +   purg_lnk(arcn);
> > +   break;
> > +   } else if (res > 0) {
> > +   /*
> > +* skipping file, purge link table entry
> > +*/
> > +   purg_lnk(arcn);
> > +   continue;
> > +   }
> > +
> > if (PAX_IS_REG(arcn->type) || (arcn->type == PAX_HRG)) {
> > /*
> >  * we will have to read this file. by opening it now we
> > @@ -456,20 +473,7 @@ wr_archive(ARCHD *arcn, int is_app)
> > }
> > }
> >  
> > -   /*
> > -* Now modify the name as requested by the user
> > -*/
> > -   if ((res = mod_name(arcn)) < 0) {
> > -   /*
> > -* name modification says to skip this file, close the
> > -* file and purge link table entry
> > -*/
> > -   rdfile_close(arcn, &fd);
> > -   purg_lnk(arcn);
> > -   break;
> > -   }
> > -
> > -   if ((res > 0) || (docrc && (set_crc(arcn, fd) < 0))) {
> > +   if (docrc && (set_crc(arcn, fd) < 0)) {
> > /*
> >  * unable to obtain the crc we need, close the file,
> >  * purge link table entry



rpki-client refactor common repo code

2021-12-23 Thread Claudio Jeker
Create a common repo_done() function which does the entityq_flush and,
in the case of RRDP, the fallback to rsync. This simplifies the code
and will help to add the repo info to the parser process.

One difference between this and the original version is the case when
an RRDP repository merge fails. Before, the code would just give up,
while now it will also fall back to rsync. I think that's fine; in the
end, if anything with RRDP fails, rpki-client should try rsync before
giving up.

Apart from that, the code should behave the same.
-- 
:wq Claudio

Index: repo.c
===
RCS file: /cvs/src/usr.sbin/rpki-client/repo.c,v
retrieving revision 1.17
diff -u -p -r1.17 repo.c
--- repo.c	22 Dec 2021 09:35:14 -0000	1.17
+++ repo.c	23 Dec 2021 15:09:36 -0000
@@ -88,7 +88,7 @@ struct tarepo {
 };
 SLIST_HEAD(, tarepo)   tarepos = SLIST_HEAD_INITIALIZER(tarepos);
 
-struct repo {
+struct repo {
SLIST_ENTRY(repo)entry;
char*repouri;
char*notifyuri;
@@ -105,6 +105,8 @@ SLIST_HEAD(, repo)  repos = SLIST_HEAD_IN
 /* counter for unique repo id */
 unsigned int   repoid;
 
+static struct rsyncrepo*rsync_get(const char *, int);
+
 /*
  * Database of all file path accessed during a run.
  */
@@ -271,6 +273,49 @@ repo_mkpath(char *file)
 }
 
 /*
+ * Return the state of a repository.
+ */
+static enum repo_state
+repo_state(struct repo *rp)
+{
+   if (rp->ta)
+   return rp->ta->state;
+   if (rp->rsync)
+   return rp->rsync->state;
+   if (rp->rrdp)
+   return rp->rrdp->state;
+   errx(1, "%s: bad repo", rp->repouri);
+}
+
+/*
+ * Function called once a repository is done with the sync. Either
+ * successfully or after failure.
+ */
+static void
+repo_done(const void *vp, int ok)
+{
+   struct repo *rp;
+
+   SLIST_FOREACH(rp, &repos, entry) {
+   if (rp->ta == vp)
+   entityq_flush(&rp->queue, rp);
+   if (rp->rsync == vp)
+   entityq_flush(&rp->queue, rp);
+   if (rp->rrdp == vp) {
+   if (!ok) {
+   /* try to fall back to rsync */
+   rp->rrdp = NULL;
+   rp->rsync = rsync_get(rp->repouri, 0);
+   /* need to check if it was already loaded */
+   if (repo_state(rp) != REPO_LOADING)
+   entityq_flush(&rp->queue, rp);
+   } else
+   entityq_flush(&rp->queue, rp);
+   }
+   }
+}
+
+/*
  * Build TA file name based on the repo info.
  * If temp is set add Xs for mkostemp.
  */
@@ -344,14 +389,10 @@ ta_fetch(struct tarepo *tr)
}
 
if (tr->uriidx >= tr->urisz) {
-   struct repo *rp;
-
tr->state = REPO_FAILED;
logx("ta/%s: fallback to cache", tr->descr);
 
-   SLIST_FOREACH(rp, &repos, entry)
-   if (rp->ta == tr)
-   entityq_flush(&rp->queue, rp);
+   repo_done(tr, 0);
return;
}
 
@@ -406,9 +447,9 @@ ta_get(struct tal *tal)
tal->uri = NULL;
 
if (noop) {
-   tr->state = REPO_DONE;
+   tr->state = REPO_FAILED;
logx("ta/%s: using cache", tr->descr);
-   /* there is nothing in the queue so no need to flush */
+   repo_done(tr, 0);
} else {
/* try to create base directory */
if (mkpath(tr->basedir) == -1)
@@ -471,9 +512,9 @@ rsync_get(const char *uri, int nofetch)
rr->basedir = repo_dir(repo, "rsync", 0);
 
if (noop || nofetch) {
-   rr->state = REPO_DONE;
+   rr->state = REPO_FAILED;
logx("%s: using cache", rr->basedir);
-   /* there is nothing in the queue so no need to flush */
+   repo_done(rr, 0);
} else {
/* create base directory */
if (mkpath(rr->basedir) == -1) {
@@ -541,9 +582,9 @@ rrdp_get(const char *uri, int nofetch)
RB_INIT(&rr->deleted);
 
if (noop || nofetch) {
-   rr->state = REPO_DONE;
+   rr->state = REPO_FAILED;
logx("%s: using cache", rr->notifyuri);
-   /* there is nothing in the queue so no need to flush */
+   repo_done(rr, 0);
} else {
/* create base directory */
if (mkpath(rr->basedir) == -1) {
@@ -629,21 +670,6 @@ repo_alloc(int talid)
 }
 
 /*
- * Return the state of a repository.
- */
-static enum repo_state
-repo_state(struct repo *rp)
-{
-   if (rp->ta)
-   return rp->ta->state;
-   if (rp->rrdp)
-   return rp->rrdp->state;
-  

Handle knotes whose vnodes have been revoked

2021-12-23 Thread Visa Hankala
The system revokes open files for a tty device when the device is being
reused. The revoking has applied to the old poll/select implementations
sort of automagically because the scan code has always (re)walked the
path from file to vnode to device. With kqueue, the rewalking does not
happen, in particular when event registrations are preserved between
system calls.

Ideally, the revoking of a tty device should invalidate any associated
knotes. The invalidation could be done in the device close routine.
However, the revoking takes full effect only after vclean() has changed
the vnode v_op pointer to dead_vops. The kevent() system call is already
unlocked, so programs can in principle register new knotes until the
pointer update becomes visible to other threads.

To avoid going down the rabbit hole of vnode revoking right now, the diff
below adds just-in-time fixing of knotes that should be cleared. It stops
programs from keeping their knotes on revoked ttys. It is ugly but does
not litter the already complex tty code with kqueue-specific stopgaps.

OK?

Index: kern/kern_event.c
===
RCS file: src/sys/kern/kern_event.c,v
retrieving revision 1.177
diff -u -p -r1.177 kern_event.c
--- kern/kern_event.c	20 Dec 2021 16:24:32 -0000	1.177
+++ kern/kern_event.c	23 Dec 2021 14:20:54 -0000
@@ -55,6 +55,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 
 #ifdef DIAGNOSTIC
@@ -1381,6 +1396,31 @@ retry:
continue;
}
 
+   /* Fix knotes whose vnodes have been revoked. */
+   if ((kn->kn_fop->f_flags & FILTEROP_ISFD) &&
+   kn->kn_fp != NULL &&
+   kn->kn_fp->f_type == DTYPE_VNODE) {
+   struct vnode *vp = kn->kn_fp->f_data;
+
+   if (__predict_false(vp->v_op == &dead_vops &&
+   kn->kn_fop != &dead_filtops)) {
+   filter_detach(kn);
+   kn->kn_fop = &dead_filtops;
+
+   /*
+* Check if the event should be delivered.
+* Use f_event directly because this is
+* a special situation.
+*/
+   if (kn->kn_fop->f_event(kn, 0) == 0) {
+   filter_detach(kn);
+   knote_drop(kn, p);
+   mtx_enter(&kq->kq_lock);
+   continue;
+   }
+   }
+   }
+
memset(kevp, 0, sizeof(*kevp));
if (filter_process(kn, kevp) == 0) {
mtx_enter(&kq->kq_lock);



Adjust filt_solisten for poll and select

2021-12-23 Thread Visa Hankala
The listen version of EVFILT_READ for sockets differs somewhat from
the code in soo_poll(). The following patch adds a poll/select branch
to make the listen filter more consistent with the old code.

OK?

Index: kern/uipc_socket.c
===
RCS file: src/sys/kern/uipc_socket.c,v
retrieving revision 1.270
diff -u -p -r1.270 uipc_socket.c
--- kern/uipc_socket.c	13 Dec 2021 14:56:55 -0000	1.270
+++ kern/uipc_socket.c	23 Dec 2021 14:20:54 -0000
@@ -2318,11 +2318,23 @@ filt_soexceptprocess(struct knote *kn, s
 int
 filt_solisten_common(struct knote *kn, struct socket *so)
 {
+   int active;
+
soassertlocked(so);
 
kn->kn_data = so->so_qlen;
+   active = (kn->kn_data != 0);
+
+   if (kn->kn_flags & (__EV_POLL | __EV_SELECT)) {
+   if (so->so_state & SS_ISDISCONNECTED) {
+   kn->kn_flags |= __EV_HUP;
+   active = 1;
+   } else {
+   active = soreadable(so);
+   }
+   }
 
-   return (kn->kn_data != 0);
+   return (active);
 }
 
 int



Re: Fix ipsp_spd_lookup() for transport mode

2021-12-23 Thread YASUOKA Masahiko
Hi,

On Mon, 20 Dec 2021 13:20:46 +0100
Alexander Bluhm  wrote:
> On Tue, Dec 14, 2021 at 06:25:20PM +0900, YASUOKA Masahiko wrote:
>> Yes, if there is another better idea, it will be welcome.
>> For this moment, the diff is the best idea for me.
> 
> Sorry, no better idea.  I have no experience with l2tp.  Codewise
> the diff looks fine, but I don't understand the consequences.

Thank you for your review and comments.

>> +if (tdbflow != NULL)
>> +rn = rn_lookup((caddr_t)&tdbflow->tdb_filter,
>> +(caddr_t)&tdbflow->tdb_filtermask, rnh);
> 
> Does rn_lookup() modify the radix tree?  It looks like rn_lookup ->
> rn_addmask -> rn_insert() does that.  This will make it impossible
> to make IPsec MP capable.  The radix tree is not MP safe, art has
> been implemented as an alternative.  An ipsp_spd_lookup() should
> not modify the flows.  It is strange that a function named rn_lookup()
> does modifications.  Did I miss something?

rn_lookup() doesn't make any modification.  rn_lookup() calls
rn_addmask() with second argument search=1.

 183 /* return a perfect match if m_arg is set, else do a regular rn_match */
 184 struct radix_node *
 185 rn_lookup(void *v_arg, void *m_arg, struct radix_node_head *head)
 186 {
 187 struct radix_node *x, *tm;
 188 caddr_t netmask = 0;
 189 
 190 if (m_arg) {
 191 tm = rn_addmask(m_arg, 1, head->rnh_treetop->rn_off);

and then rn_addmask()

 416 struct radix_node *
 417 rn_addmask(void *n_arg, int search, int skip)
 418 {
 (snip)
 449 if (tm || search)
 450 return (tm);
 451 tm = malloc(max_keylen + 2 * sizeof(*tm), M_RTABLE, M_NOWAIT | M_ZERO);
 452 if (tm == NULL)
 453 return (0);
 454 saved_tm = tm;
 455 netmask = cp = (caddr_t)(tm + 2);
 456 memcpy(cp, addmask_key, mlen);
 457 tm = rn_insert(cp, mask_rnhead, &maskduplicated, tm);

returns at #449-450 before calling rn_insert().  It seems that
rn_addmask() does read-only operations when "search" is set.

> Why do you call rn_lookup() here?

Because rn_match() doesn't take a mask and returns the best match.

For an example, if there are multiple peers behind a NAT, flows like
below can be configured at the same time.

  (a) Windows:  REMOTE_IP:1701/udp <=> LOCAL_IP:1701/udp
  (b) Linux:REMOTE_IP:ANY/udp  <=> LOCAL_IP:1701/udp

If the source port of a packet from the Linux peer is 1701, rn_match()
will return (a) for it, and then ipsp_spd_lookup() will fail to verify
that the given tdb matches the policy.

Policies can be created with wildcards (any port, any protocol) and are
then compared with packets whose port and protocol are concrete.

Since rn_match() finds the best match, it can't find a wildcard
policy properly if there is a non-wildcard policy that is overlapped
by the wildcard.

So the diff uses rn_lookup() to find the correct policy.
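
As an illustrative fragment (not a drop-in change; the sockaddr_encap
key "dst" and the tdbflow names are taken from the surrounding code and
the quoted diff, so treat them as assumptions), the difference in call
shape is:

	/*
	 * rn_match() takes only a key and returns the best (most
	 * specific) leaf, so the non-wildcard flow (a) shadows the
	 * wildcard flow (b).
	 */
	rn = rn_match((caddr_t)&dst, rnh);

	/*
	 * rn_lookup() takes key and netmask and returns the entry that
	 * was inserted with exactly that mask, so the wildcard flow (b)
	 * the tdb was actually negotiated for can be found.
	 */
	rn = rn_lookup((caddr_t)&tdbflow->tdb_filter,
	    (caddr_t)&tdbflow->tdb_filtermask, rnh);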


> Could we add the masks earlier when the flows are added?
> 
>> +else if (tdbp != NULL)
>> +rn = rn_lookup((caddr_t)&tdbp->tdb_filter,
>> +(caddr_t)&tdbp->tdb_filtermask, rnh);
> 
> What are the consequences of this chunk for regular IPsec?

I have thought about that again.  Now I realize the problem is only
for transport mode.  For tunnel mode, the best match is always
preferred anyway.  I'll update the diff so that it uses rn_lookup()
for transport mode only.

>>  /* Match source/dest IDs. */
>> -if (ipo->ipo_ids)
>> -if (tdbp->tdb_ids == NULL ||
>> -!ipsp_ids_match(ipo->ipo_ids, 
>> tdbp->tdb_ids))
>> +if (ipo->ipo_ids != NULL) {
>> +if ((tdbp->tdb_flags & TDBF_TUNNELING) == 0 &&
>> +(tdbp->tdb_flags & TDBF_UDPENCAP) != 0) {
>> +/*
>> + * Skip IDs check for transport mode
>> + * with NAT-T.  Multiple clients (IDs)
>> + * can use a same policy.
>> + */
>> +} else if (tdbp->tdb_ids == NULL &&
>> +!ipsp_ids_match(ipo->ipo_ids,
>> +tdbp->tdb_ids))
>>  goto nomatchin;
>> +}
> 
> This was added to make IPsec/l2tp work in rev 1.85.  And now you
> change it to make it work.  I wish markus@ or mikeb@ could give a
> clue.

The 1.85 change introduced "ipsec-id bundles".  This provides a way
to select the same flow by an ipsecflowinfo after rekeying.

The diff above skips the check (TDB.ids == FLOW.ids) for the
transport mode NAT-T case.  In NAT-T cases, a flow is shared by
multiple peers (IDs), so the skip is needed.  Also it is OK since

Re: ipsec kernel lock

2021-12-23 Thread Hrvoje Popovski
On 22.12.2021. 14:52, Alexander Bluhm wrote:
> Hi,
> 
> IPsec is not MP safe yet.  To allow forwarding in parallel without
> dirty hacks, it is better to protect IPsec input and output with
> kernel lock.  We do not lose much as crypto needs the kernel lock
> anyway.  From here we can refine the lock later.
> 
> Note that there is no kernel lock in the SPD lookup path.  I want
> to keep that lock free to allow fast forwarding with non-IPsec
> traffic.
> 
> There are still some races in special cases, but in general it works
> with parallel IP input.
> 
> ok?

Hi,

I'm trying to panic the sasyncd setup with this and the parallel
forwarding diff, and I just can't :)




Re: uhidppctl(8)

2021-12-23 Thread Claudio Jeker
On Thu, Dec 23, 2021 at 07:50:24AM +, Raf Czlonka wrote:
> On Wed, Dec 22, 2021 at 08:32:16AM GMT, Claudio Jeker wrote:
> > On Tue, Dec 21, 2021 at 03:49:47PM -0500, jwinnie@tilde.institute wrote:
> > > 
> > > Hello OpenBSD developers,
> > > 
> > > I am interested in contributing to improve the uhidpp(4)
> > > (Logitech Unifying Receiver) support in OpenBSD.
> > > 
> > > Currently, the uhidpp(4) driver only handles detecting certain
> > > sensors, but I would like more robust support for these devices,
> > > including:
> > > 
> > > * pairing and unpairing devices on the command-line
> > > * controlling the scrolling speed/auxiliary buttons of wireless mice
> > > 
> > > Do you know where I should start? Should I work on the uhidpp(4)
> > > driver, or should I go ahead and start writing a command-line utility,
> > > like uhidppctl(8)?
> > 
> > I doubt we want a uhidppctl(8) unless it is really necessary.
> > I would expect wsmouse(4) to handle the controlling of scrolling
> > speed and auxiliary buttons.
> 
> Hi Claudio,
> 
> I had to pair some devices recently for both a Logitech Unifying
> receiver and a Dell Universal Pairing receiver (out of scope here),
> and it was a bit of a faff having to source another machine and OS.
> So, purely from a user perspective, given that a single receiver can
> pair up to six devices, it would be nice not to have to use another
> machine/OS to do the pairing step :^)

I agree with you, and I did not talk about pairing devices in my reply
because this is a more complex topic. We only had the now-removed netbt
stack that had pairing implemented, and it was rather bad to use.
Device pairing should work a bit like how joining a WiFi network works.
Another thing not to forget is that all of our wifi adapters use
ifconfig(8) for configuration. We don't ship an iwmctl, iwnctl, iwxctl,
bwfmctl, ralctl, runctl, wiconfig or mtwctl. Sure, currently this is
only about Logitech, so uhidppctl(8) works, but what happens when more
such adapters are added?

-- 
:wq Claudio