On Mon, Jun 27, 2022 at 12:39:11AM +0300, Vitaliy Makkoveev wrote:
> On Sun, Jun 26, 2022 at 11:26:13PM +0200, Jeremie Courreges-Anglas wrote:
> > On Mon, Jun 27 2022, Vitaliy Makkoveev <m...@openbsd.org> wrote:
> > > On Sun, Jun 26, 2022 at 10:39:10PM +0200, Alexander Bluhm wrote:
> > >> On Sun, Jun 26, 2022 at 07:45:57PM +0300, Vitaliy Makkoveev wrote:
> > >> > This extra serialization is not required. In the packet processing
> > >> > path we hold the shared netlock, but we only perform read-only
> > >> > access on the per-session `flags' and `ifindex'. We always modify
> > >> > them from the ioctl(2) path with the exclusive netlock held. The
> > >> > rest of the pipex(4) session is immutable or uses per-session locks.
> > >> 
> > >> I was wondering about the pipex_enable variable.  It is documented
> > >> as protected by the netlock.
> > >> net/pipex.c:int pipex_enable = 0;                       /* [N] */
> > >> 
> > >> But I did not find NET_LOCK() in pipex_sysctl() or its callers.
> > >> Should we grab the net lock there or in net_sysctl() in the write path?
> > >> Although only an integer is set atomically, it looks strange if
> > >> such a value is changed while we are in the middle of packet
> > >> processing.
> > >> 
> > >
> > > I guess it should be declared as atomic. We use the
> > > ifq_set_maxlen(..., 1) hack so we don't cross netlock borders in the
> > > pipex(4) output path, but it's expected that we could cross them. We
> > > never check `pipex_enable' within (*if_qstart)() and we don't worry
> > > that it's not serialized with the rest of the stack. Also, we will
> > > process already enqueued pipex(4) packets regardless of the
> > > `pipex_enable' state.
> > >
> > > But this means we need to use a local copy of `pipex_enable' within
> > > pppx_if_output(), otherwise we lose consistency.
> > 
> > FWIW pipex_enable_local = READ_ONCE(pipex_enable) would prevent the
> > compiler from reading the global value twice.
> > 
> 
> Such a case was already discussed, but as I remember the opinions
> differed. Did something change that I missed? The READ_ONCE() code
> hasn't changed since then.
> 
> Also, if I have to choose, I prefer to use atomic_load_int(9) because
> it is documented and more consistent with our atomic API.

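To make the race concrete: pppx_if_output() tests `pipex_enable' twice,
once when choosing the protocol value and once when choosing between
if_enqueue() and the M_PREPEND path. If the sysctl write lands between
those two reads, the packet is built for one path but sent down the
other. Either suggested spelling collapses this to a single load into a
local variable; a minimal sketch, assuming READ_ONCE() has its usual
single-volatile-read semantics:

	int pipex_enable_local;

	/* either spelling forces exactly one read of the global */
	pipex_enable_local = READ_ONCE(pipex_enable);
	/* or, via the documented atomic API */
	pipex_enable_local = atomic_load_int(&pipex_enable);
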
The diff with atomic_load_int(9):

Index: sys/net/if_pppx.c
===================================================================
RCS file: /cvs/src/sys/net/if_pppx.c,v
retrieving revision 1.116
diff -u -p -r1.116 if_pppx.c
--- sys/net/if_pppx.c   26 Jun 2022 15:50:21 -0000      1.116
+++ sys/net/if_pppx.c   26 Jun 2022 21:40:19 -0000
@@ -817,7 +817,9 @@ pppx_if_output(struct ifnet *ifp, struct
        struct pppx_if *pxi = (struct pppx_if *)ifp->if_softc;
        struct pppx_hdr *th;
        int error = 0;
-       int proto;
+       int pipex_enable_local, proto;
+
+       pipex_enable_local = atomic_load_int(&pipex_enable);
 
        NET_ASSERT_LOCKED();
 
@@ -831,7 +833,7 @@ pppx_if_output(struct ifnet *ifp, struct
        if (ifp->if_bpf)
                bpf_mtap_af(ifp->if_bpf, dst->sa_family, m, BPF_DIRECTION_OUT);
 #endif
-       if (pipex_enable) {
+       if (pipex_enable_local) {
                switch (dst->sa_family) {
 #ifdef INET6
                case AF_INET6:
@@ -856,7 +858,7 @@ pppx_if_output(struct ifnet *ifp, struct
        }
        *mtod(m, int *) = proto;
 
-       if (pipex_enable)
+       if (pipex_enable_local)
                error = if_enqueue(ifp, m);
        else {
                M_PREPEND(m, sizeof(struct pppx_hdr), M_DONTWAIT);
Index: sys/net/pipex.c
===================================================================
RCS file: /cvs/src/sys/net/pipex.c,v
retrieving revision 1.138
diff -u -p -r1.138 pipex.c
--- sys/net/pipex.c     26 Jun 2022 15:50:21 -0000      1.138
+++ sys/net/pipex.c     26 Jun 2022 21:40:19 -0000
@@ -94,7 +94,7 @@ struct pool mppe_key_pool;
  *       L       pipex_list_mtx
  */
 
-int    pipex_enable = 0;                       /* [N] */
+int    pipex_enable = 0;                       /* [A] */
 struct pipex_hash_head
     pipex_session_list,                                /* [L] master session list */
     pipex_close_wait_list,                     /* [L] expired session list */
