On 11/24/22 04:59, Alexandr Nedvedicky wrote:
Hello,

On Thu, Nov 24, 2022 at 11:23:51AM +1000, David Gwynne wrote:
</snip>
we're working toward dropping the need for NET_LOCK before PF_LOCK. could
we try the diff below as a compromise?

sashan@ and mvs@ have pushed that forward, so this diff should be enough
now.

     looks good to me.

OK sashan

Index: pf.c
===================================================================
RCS file: /cvs/src/sys/net/pf.c,v
retrieving revision 1.1153
diff -u -p -r1.1153 pf.c
--- pf.c        12 Nov 2022 02:48:14 -0000      1.1153
+++ pf.c        24 Nov 2022 01:21:48 -0000
@@ -1603,9 +1603,6 @@ pf_purge(void *null)
 {
        unsigned int interval = max(1, pf_default_rule.timeout[PFTM_INTERVAL]);
 
-       /* XXX is NET_LOCK necessary? */
-       NET_LOCK();
-
        PF_LOCK();
 
        pf_purge_expired_src_nodes();
@@ -1616,7 +1613,6 @@ pf_purge(void *null)
         * Fragments don't require PF_LOCK(), they use their own lock.
         */
        pf_purge_expired_fragments();
-       NET_UNLOCK();
 
        /* interpret the interval as idle time between runs */
        timeout_add_sec(&pf_purge_to, interval);
@@ -1891,7 +1887,6 @@ pf_purge_expired_states(const unsigned i
        if (SLIST_EMPTY(&gcl))
                return (scanned);
 
-       NET_LOCK();
        rw_enter_write(&pf_state_list.pfs_rwl);
        PF_LOCK();
        PF_STATE_ENTER_WRITE();
@@ -1904,7 +1899,6 @@ pf_purge_expired_states(const unsigned i
        PF_STATE_EXIT_WRITE();
        PF_UNLOCK();
        rw_exit_write(&pf_state_list.pfs_rwl);
-       NET_UNLOCK();
 
        while ((st = SLIST_FIRST(&gcl)) != NULL) {
                SLIST_REMOVE_HEAD(&gcl, gc_list);


With this diff against -current, my dmesg is spammed with:

splassert: pfsync_delete_state: want 2 have 0
Starting stack trace...
pfsync_delete_state(fffffd820af9f940) at pfsync_delete_state+0x58
pf_remove_state(fffffd820af9f940) at pf_remove_state+0x14b
pf_purge_expired_states(42,40) at pf_purge_expired_states+0x202
pf_purge_states(0) at pf_purge_states+0x1c
taskq_thread(ffffffff822c78f0) at taskq_thread+0x11a
