On Tue, Apr 27, 2021, at 21:18, Aaron Conole wrote:
> Gaetan Rivet <[email protected]> writes:
> 
> > The current rate limit is set to allow other threads to update the
> > connections when applicable. This was valid when taking the 'ct_lock'
> > was needed with a global critical section.
> >
> > Now that the size of the critical section for 'ct_lock' is reduced, it
> > is not necessary to rate limit calls to ct_sweep() anymore.
> >
> > Signed-off-by: Gaetan Rivet <[email protected]>
> > Reviewed-by: Eli Britstein <[email protected]>
> > ---
> 
> It's weird to see patch 8/11 and 9/11 set up this way.
> 
> Would it make sense to just squash them together?
> 
> >  lib/conntrack.c | 24 +++++++-----------------
> >  1 file changed, 7 insertions(+), 17 deletions(-)
> >
> > diff --git a/lib/conntrack.c b/lib/conntrack.c
> > index ea2e5b63b..8a7538b7b 100644
> > --- a/lib/conntrack.c
> > +++ b/lib/conntrack.c
> > @@ -1675,20 +1675,12 @@ conntrack_clean(struct conntrack *ct, long long now)
> >   * there is an actual connection that expires, or because a new connection
> >   * might be created with the minimum timeout).
> >   *
> > - * The logic below has two goals:
> > - *
> > - * - We want to reduce the number of wakeups and batch connection cleanup
> > - *   when the load is not very high.  CT_CLEAN_INTERVAL ensures that if we
> > - *   are coping with the current cleanup tasks, then we wait at least
> > - *   5 seconds to do further cleanup.
> > - *
> > - * - We don't want to keep the map locked too long, as we might prevent
> > - *   traffic from flowing.  CT_CLEAN_MIN_INTERVAL ensures that if cleanup is
> > - *   behind, there is at least some 200ms blocks of time when the map will be
> > - *   left alone, so the datapath can operate unhindered.
> > + * We want to reduce the number of wakeups and batch connection cleanup
> > + * when the load is not very high.  CT_CLEAN_INTERVAL ensures that if we
> > + * are coping with the current cleanup tasks, then we wait at least
> > + * 5 seconds to do further cleanup.
> >   */
> >  #define CT_CLEAN_INTERVAL 5000 /* 5 seconds */
> > -#define CT_CLEAN_MIN_INTERVAL 200  /* 0.2 seconds */
> >  
> >  static void *
> >  clean_thread_main(void *f_)
> > @@ -1705,12 +1697,10 @@ clean_thread_main(void *f_)
> >          long long now = time_msec();
> >          next_wake = conntrack_clean(ct, now);
> >  
> > -        if (next_wake < now) {
> > -            poll_immediate_wake();
> > -        } else if (next_wake < now + CT_CLEAN_MIN_INTERVAL) {
> > -            poll_timer_wait_until(now + CT_CLEAN_MIN_INTERVAL);
> > +        if (next_wake > now) {
> > +            poll_timer_wait_until(MIN(next_wake, now + CT_CLEAN_INTERVAL));
> >          } else {
> > -            poll_timer_wait_until(MAX(next_wake, now + CT_CLEAN_INTERVAL));
> > +            poll_immediate_wake();
> >          }
> >          latch_wait(&ct->clean_thread_exit);
> >          poll_block();
> 
> 

Hello Aaron,

Indeed, the reasoning is that avoiding the 0-ms timers should be less
controversial, whereas I expected this patch to be discussed and probably
changed or dropped. So to keep things simple, I preferred to split the patches.

It can be squashed if everyone agrees with the change.

William had some objections to it, and we did not reach a consensus during the
v1 review. If there are a lot of connections, removing the rate limit will make
the ct_clean thread more active. William said at the time that running a CPU at
100% to age connections might be a problem. I think that if we have ageing work
to do, it should be done as soon as possible: freeing connections reduces
memory usage and the general stress on the conntrack. Additionally,
poll_block() makes the thread yield, and the scheduler should handle fairness.
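
To make that last point concrete, below is roughly what the loop in
clean_thread_main() looks like after this patch. The logic is the patched code
quoted above; the surrounding declarations are condensed and the comments are
mine, added only for this discussion:

while (!latch_is_set(&ct->clean_thread_exit)) {
    long long now = time_msec();
    long long next_wake = conntrack_clean(ct, now);

    if (next_wake > now) {
        /* Caught up: sleep until the earliest expiration, but no
         * longer than CT_CLEAN_INTERVAL (5 seconds). */
        poll_timer_wait_until(MIN(next_wake, now + CT_CLEAN_INTERVAL));
    } else {
        /* Ageing work is still pending: ask for an immediate wake-up. */
        poll_immediate_wake();
    }
    latch_wait(&ct->clean_thread_exit);
    /* Even with a 0-ms timer armed, poll_block() goes back through the
     * poll loop, so the thread yields to the scheduler between sweeps
     * instead of spinning without a reschedule point. */
    poll_block();
}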

This patch is not essential to the series. It is here because I wanted to bring
attention to this bit of logic, which is no longer justified.

-- 
Gaetan Rivet
