On Fri, Jun 26, 2020 at 09:15:38PM +0200, Martin Pieuchot wrote:
> On 26/06/20(Fri) 17:53, Vitaliy Makkoveev wrote:
> > On Fri, Jun 26, 2020 at 02:29:03PM +0200, Martin Pieuchot wrote:
> > > On 26/06/20(Fri) 12:35, Vitaliy Makkoveev wrote:
> > > > On Fri, Jun 26, 2020 at 10:23:42AM +0200, Martin Pieuchot wrote:
> > > > > On 25/06/20(Thu) 19:56, Vitaliy Makkoveev wrote:
> > > > > > Updated diff. 
> > > > > > 
> > > > > > OpenBSD uses a 16 bit counter to allocate interface indexes. So we
> > > > > > can't store the index in the session and be sure that the `ifnet'
> > > > > > returned by if_get(9) is our original `ifnet'.
> > > > > 
> > > > > Why not?  The point of if_get(9) is to be sure.  If that doesn't work
> > > > > for whatever reason then the if_get(9) interface has to be fixed.  
> > > > > Which
> > > > > case doesn't work for you?  Do you have a reproducer?  
> > > > > 
> > > > > How do sessions stay around if their corresponding interface is
> > > > > destroyed?
> > > > 
> > > > We have `pipexinq' and `pipexoutq', which can store pointers to sessions.
> > > > pipexintr() processes these queues. pipexintr() and
> > > > pipex_destroy_session() *always* run in different contexts. This means we
> > > > *can't* free a pipex(4) session without being sure there is no reference
> > > > to this session left in `pipexinq' or `pipexoutq'. Otherwise we get a
> > > > use-after-free. Please look at net/pipex.c:846. The way pppx(4)
> > > > destroys sessions is wrong. The way pppac(4) destroys sessions through
> > > > pipex_iface_fini() is also wrong, because we don't check the `pipexinq'
> > > > and `pipexoutq' state. I have said this again and again.
> > > 
> > > I understand.  Why is it a problem?  Using reference counting the way
> > > you're suggesting is *one* possible solution to a problem we don't fully
> > > understand.  What are we trying to achieve?  Which problem are we trying
> > > to solve?
> > 
> > Sorry, maybe I misunderstand something.
> > 
> > `pipexoutq' case:
> > 
> > 1. pppac_start() calls pipex_output()
> > 2. pipex_output() calls pipex_ip_output()
> > 3. pipex_ip_output() calls pipex_ppp_enqueue()
> > 4. pipex_ppp_enqueue() calls schednetisr() which is task_add()
> > 
> > `pipexinq' cases:
> > 
> > 1.1. ether_input() calls pipex_pppoe_input()
> > 1.2. gre_input() calls gre_input_1()
> >          gre_input_1() calls pipex_pptp_input()
> > 1.3. udp_input() calls pipex_l2tp_input()
> > 
> > 2. pipex_{pppoe,pptp,l2tp}_input() calls pipex_common_input()
> > 3. pipex_common_input() calls schednetisr() which is task_add()
> > 
> > task_add(9) just schedules the execution of the work specified by `tq'.
> > So pipex_destroy_session() can run *between* schednetisr() and
> > pipexintr(). And this can happen right *now*, with our current locking.
> > This is the problem I'm trying to solve.
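
To make this concrete, here is a minimal sketch of the reference counting
I have in mind, using the refcnt(9) API.  The structure, member and helper
names below are made up for illustration; this is not a diff against
pipex(4):

/*
 * Sketch only: a per-session reference count that keeps the session
 * alive while mbufs pointing at it sit on `pipexinq' or `pipexoutq'.
 * "struct session_sketch", "ps_refcnt", session_ref() and
 * session_unref() are invented names.
 */
#include <sys/param.h>
#include <sys/malloc.h>
#include <sys/refcnt.h>

struct session_sketch {
	struct refcnt	ps_refcnt;	/* refcnt_init()ed at creation */
	/* ... the rest of the pipex(4) session state ... */
};

/* Taken whenever a session pointer is stored with a queued mbuf. */
void
session_ref(struct session_sketch *s)
{
	refcnt_take(&s->ps_refcnt);
}

/*
 * Released by pipexintr() once the mbuf has been processed, and by the
 * destroy path for its own reference; whoever drops the last reference
 * frees the session, so the free can no longer race the softnet task.
 */
void
session_unref(struct session_sketch *s)
{
	if (refcnt_rele(&s->ps_refcnt))
		free(s, M_DEVBUF, sizeof(*s));	/* or whatever the real allocator is */
}

The destroy side could also use refcnt_finalize(9) instead, to sleep
until the last queued reference is gone.
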
> > 
> > My apologies if I'm wrong above. Please point out where I'm wrong.
> > 
> > Also, before pipex_{pppoe,pptp,l2tp}_input() we call the corresponding
> > pipex_{pptp,l2tp}_lookup_session() to obtain a pointer to the pipex(4)
> > session. We should be sure `session' is still valid between
> > pipex_*_lookup() and pipex_*_input(). It's not required now, but will be
> > required in the future.
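
Again only as a sketch, the input path could then pin the session across
the lookup/input pair.  sketch_lookup_session() and sketch_input() stand
in for the real pipex_*_lookup_session() and pipex_*_input(), and
session_ref()/session_unref() are the invented helpers from the sketch
above:

#include <sys/mbuf.h>

/* Stand-ins for pipex_*_lookup_session() and pipex_*_input(). */
struct session_sketch *sketch_lookup_session(struct mbuf *);
struct mbuf *sketch_input(struct mbuf *, struct session_sketch *);

struct mbuf *
input_sketch(struct mbuf *m)
{
	struct session_sketch *s;

	s = sketch_lookup_session(m);
	if (s == NULL)
		return (m);		/* not a pipex(4) packet */

	session_ref(s);			/* keep `s' valid ...           */
	m = sketch_input(m, s);		/* ... across the input routine */
	session_unref(s);

	return (m);
}
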
> 
> Why not iterate over the queues and garbage collect the sessions that
> are about to be removed?  That's what the network stack was doing with
> mbuf queues prior to if_get(9) when interfaces were destroyed.
> 

Do you mean net/if.c:1185 and below? Those are the queues associated with
the `ifp' itself. But for pipex(4) we would have to go through all mbufs
queued by pipex(4), and that can be heavy if we have hundreds of sessions.
Also, this would only work as long as session destruction and `pr_input'
are serialized.

Please point me to the line in the source if I'm wrong about how the
`ifnet' mbuf queues are cleaned.
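
For reference, this is roughly how I read the suggestion: walk the two
queues and drop every mbuf that still points at the session being
destroyed.  A sketch only, assuming `pipexinq'/`pipexoutq' stay struct
mbuf_queue and the session pointer travels in m_pkthdr.ph_cookie;
sketch_gc_queue() is an invented name:

#include <sys/param.h>
#include <sys/mbuf.h>

void
sketch_gc_queue(struct mbuf_queue *mq, void *session)
{
	struct mbuf_list ml;
	struct mbuf *m;

	mq_delist(mq, &ml);		/* drain the queue into a local list */

	while ((m = ml_dequeue(&ml)) != NULL) {
		if (m->m_pkthdr.ph_cookie == session)
			m_freem(m);	/* still points at the dying session */
		else
			mq_enqueue(mq, m);	/* put the survivor back */
	}
}

This is exactly the per-packet walk I am worried about once there are
hundreds of sessions.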
