On Mon, Dec 12, 2016 at 03:22:36PM +0100, Martin Pieuchot wrote:
> @@ -56,15 +56,17 @@ struct bpf_d {
...
> - struct bpf_if * bd_bif; /* interface descriptor */
> + int __in_uiomove;
> +
> + struct bpf_if *bd_bif; /* interface descriptor */
If __in_uiomove
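One plausible use for such a flag (an assumption on my part, the hunk above only adds the field) is as a debug marker set while the hold buffer is handed to uiomove() with the mutex released, so that the paths that rotate or reset the buffers can assert they never run at the same time. A rough fragment of that idea, with only __in_uiomove taken from the diff, everything else illustrative:

	/* in bpfread(), around the copy to userland (sketch only) */
	mtx_enter(&d->bd_mtx);
	d->__in_uiomove = 1;
	mtx_leave(&d->bd_mtx);
	error = uiomove(d->bd_hbuf, d->bd_hlen, uio);	/* may sleep */
	mtx_enter(&d->bd_mtx);
	d->__in_uiomove = 0;
	mtx_leave(&d->bd_mtx);

	/* in the paths that rotate or reset the buffers (sketch only) */
	KASSERT(d->__in_uiomove == 0);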
On 10/12/16(Sat) 23:01, Jonathan Matthew wrote:
> On Mon, Nov 28, 2016 at 04:01:14PM +0100, Martin Pieuchot wrote:
> > Last diff to trade the KERNEL_LOCK for a mutex in order to protect data
> > accessed inside bpf_catchpacket().
> >
> > Note about the multiple data structures:
> >
> > - selwa
On 07/12/16(Wed) 00:44, Alexander Bluhm wrote:
> On Mon, Nov 28, 2016 at 04:01:14PM +0100, Martin Pieuchot wrote:
> > @@ -717,7 +753,9 @@ bpfioctl(dev_t dev, u_long cmd, caddr_t
> > *(u_int *)addr = size = bpf_maxbufsize;
> > else if (size < BPF_MINB
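The hunk quoted here is the BIOCSBLEN buffer-size handling. A small self-contained sketch of the clamping it performs; the helper name is illustrative, in bpf.c this logic sits inline in bpfioctl():

	#include <sys/param.h>
	#include <net/bpf.h>		/* BPF_MINBUFSIZE */

	extern int bpf_maxbufsize;	/* set in bpf.c, sysctl net.bpf.maxbufsize */

	/* Clamp a userland-requested buffer size into the allowed range. */
	u_int
	bpf_clamp_bufsize_sketch(u_int size)
	{
		if (size > (u_int)bpf_maxbufsize)
			size = bpf_maxbufsize;
		else if (size < BPF_MINBUFSIZE)
			size = BPF_MINBUFSIZE;
		return (size);
	}

The clamped value is also written back through ``addr'' in the real ioctl, so the caller sees the size actually used.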
On Mon, Nov 28, 2016 at 04:01:14PM +0100, Martin Pieuchot wrote:
> Last diff to trade the KERNEL_LOCK for a mutex in order to protect data
> accessed inside bpf_catchpacket().
>
> Note about the multiple data structures:
>
> - selwakeup() is called in a thread context (task) so we rely on the
On Mon, Nov 28, 2016 at 04:01:14PM +0100, Martin Pieuchot wrote:
> @@ -313,7 +319,13 @@ bpf_detachd(struct bpf_d *d)
> int error;
>
> d->bd_promisc = 0;
> +
> + bpf_get(d);
> + mtx_leave(&d->bd_mtx);
> error = ifpromisc(bp->bif_if
Last diff to trade the KERNEL_LOCK for a mutex in order to protect data
accessed inside bpf_catchpacket().
Note about the multiple data structures:
- selwakeup() is called in a thread context (task) so we rely on the
KERNEL_LOCK() to serialize access to kqueue(9) data.
- the global list
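A sketch of the wakeup arrangement described in the excerpt above, under these assumptions: bpf_catchpacket() runs with only a per-descriptor mutex held, and the selwakeup() call is pushed to a task so it executes in a thread that holds the KERNEL_LOCK(). Member and function names are illustrative, not the committed ones:

	#include <sys/param.h>
	#include <sys/systm.h>
	#include <sys/mutex.h>
	#include <sys/task.h>
	#include <sys/selinfo.h>

	/*
	 * Sketch only: bd_wake_task is assumed to have been initialised at
	 * open time with task_set(&d->bd_wake_task, bpf_wakeup_cb_sketch, d).
	 */
	struct bpf_d_sketch {
		struct mutex	bd_mtx;		/* protects the capture buffers */
		struct selinfo	bd_sel;		/* poll/kqueue state, KERNEL_LOCK'd */
		struct task	bd_wake_task;	/* defers the wakeup to a thread */
	};

	void
	bpf_wakeup_sketch(struct bpf_d_sketch *d)
	{
		MUTEX_ASSERT_LOCKED(&d->bd_mtx);

		/*
		 * Cannot call selwakeup() from here: this may run in
		 * interrupt context and without the KERNEL_LOCK().
		 * Defer the wakeup to the system task queue instead.
		 */
		task_add(systq, &d->bd_wake_task);
	}

	void
	bpf_wakeup_cb_sketch(void *xd)
	{
		struct bpf_d_sketch *d = xd;

		/*
		 * systq tasks run under the KERNEL_LOCK(), which still
		 * serializes access to the kqueue(9)/selinfo state.
		 */
		KERNEL_ASSERT_LOCKED();
		wakeup(d);
		selwakeup(&d->bd_sel);
	}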
On Tue, Nov 22, 2016 at 11:54:47AM +0100, Martin Pieuchot wrote:
> Next extracted diff that tweaks bpf_detachd().
>
> The goal here is to remove ``d'' from the list and NULLify ``d->bd_bif''
> before calling ifpromisc().
>
> The reason is that ifpromisc() can sleep. Think USB ;) So we'll have to
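A sketch of the ordering argued for here, matching the bpf_detachd() hunk quoted earlier: unlink ``d'' and clear bd_bif while the mutex is held, then drop the mutex around ifpromisc(), which may sleep, keeping a reference so ``d'' stays alive. bpf_get() appears in the quoted diff; bpf_put(), the sketch struct, and the omitted list removal and error handling are assumptions:

	#include <sys/param.h>
	#include <sys/systm.h>
	#include <sys/mutex.h>
	#include <net/if.h>
	#include <net/if_var.h>

	struct bpf_d_sketch {
		struct mutex	 bd_mtx;
		struct bpf_if	*bd_bif;	/* interface descriptor */
		int		 bd_promisc;
	};

	/* sketch prototypes for the reference helpers from this series */
	void	bpf_get(struct bpf_d_sketch *);
	void	bpf_put(struct bpf_d_sketch *);

	void
	bpf_detachd_sketch(struct bpf_d_sketch *d, struct ifnet *ifp)
	{
		MUTEX_ASSERT_LOCKED(&d->bd_mtx);

		/*
		 * Unlink ``d'' from the interface list (omitted here) and
		 * clear bd_bif first, so nobody can find a half-detached
		 * descriptor while we sleep below.
		 */
		d->bd_bif = NULL;

		if (d->bd_promisc) {
			d->bd_promisc = 0;

			bpf_get(d);			/* keep ``d'' alive */
			mtx_leave(&d->bd_mtx);
			(void)ifpromisc(ifp, 0);	/* may sleep, e.g. USB */
			mtx_enter(&d->bd_mtx);
			bpf_put(d);
		}
	}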
On 13/09/16(Tue) 12:23, Martin Pieuchot wrote:
> Here's the big scary diff I've been using for some months now to stop
> grabbing the KERNEL_LOCK() in bpf_mtap(9). It was originally
> written to prevent a lock ordering problem inside pf_test(). Now that we're
> heading toward using a rwlock, we won'
On Wed, Nov 16, 2016 at 12:18:48PM +0100, Martin Pieuchot wrote:
> Here's another extracted diff: Use goto in read & write and always
> increment the reference count in write.
>
> ok?
OK bluhm@
>
> Index: net/bpf.c
> ===
> RCS fi
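A sketch of the bpfwrite() shape referred to above (goto in read & write, always increment the reference count in write): grab a reference on the descriptor up front and route every exit through a single label so the reference is always dropped. bpf_get() appears in the quoted diffs; bpf_put(), bpfilter_lookup() and the elided mbuf handling are assumptions or simplifications:

	#include <sys/param.h>
	#include <sys/systm.h>
	#include <sys/errno.h>
	#include <sys/uio.h>
	#include <net/bpf.h>
	#include <net/bpfdesc.h>

	/* sketch prototypes; the real ones live in bpf.c / this series */
	struct bpf_d	*bpfilter_lookup(int);
	void		 bpf_get(struct bpf_d *);
	void		 bpf_put(struct bpf_d *);

	int
	bpfwrite_sketch(dev_t dev, struct uio *uio, int ioflag)
	{
		struct bpf_d *d = bpfilter_lookup(minor(dev));
		int error;

		if (d->bd_bif == NULL)
			return (ENXIO);

		bpf_get(d);		/* held on every path from here on */

		if (uio->uio_resid == 0) {
			error = 0;
			goto out;
		}

		/*
		 * ... build an mbuf from uio and hand it to the interface;
		 * every failure does ``error = ...; goto out;'' ...
		 */
		error = 0;
	out:
		bpf_put(d);
		return (error);
	}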
On Mon, Nov 14, 2016 at 10:07:30AM +0100, Martin Pieuchot wrote:
> Here's another extracted diff to move forward. Let bpf_allocbufs()
> fail when allocating memory, this way we can call it while holding
> a mutex.
>
> ok?
OK bluhm@
>
> Index: net/bpf.c
> ===
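A sketch of what letting bpf_allocbufs() fail boils down to: allocate both buffers M_NOWAIT so nothing sleeps while bd_mtx (the per-descriptor mutex added by this series) is held, and back out the first buffer if the second allocation fails. Field names follow bpfdesc.h; the exact body is an approximation, not the committed code:

	#include <sys/param.h>
	#include <sys/systm.h>
	#include <sys/malloc.h>
	#include <sys/mutex.h>
	#include <net/bpf.h>
	#include <net/bpfdesc.h>

	int
	bpf_allocbufs_sketch(struct bpf_d *d)
	{
		MUTEX_ASSERT_LOCKED(&d->bd_mtx);

		d->bd_fbuf = malloc(d->bd_bufsize, M_DEVBUF, M_NOWAIT);
		if (d->bd_fbuf == NULL)
			return (ENOMEM);

		d->bd_sbuf = malloc(d->bd_bufsize, M_DEVBUF, M_NOWAIT);
		if (d->bd_sbuf == NULL) {
			free(d->bd_fbuf, M_DEVBUF, d->bd_bufsize);
			d->bd_fbuf = NULL;
			return (ENOMEM);
		}

		d->bd_slen = 0;
		d->bd_hlen = 0;
		return (0);
	}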
Here's the big scary diff I've been using for some months now to stop
grabbing the KERNEL_LOCK() in bpf_mtap(9). It was originally
written to prevent a lock ordering problem inside pf_test(). Now that we're
heading toward using a rwlock, we won't have this problem, but fewer
usages of KERNEL_LOCK()
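For context, a sketch (not the real bpf_mtap()) of the locking shape the series is after: the capture path never takes the KERNEL_LOCK(); each matching descriptor is handled under its own mutex. The list walk and the filter/copy steps are reduced to hypothetical placeholders, and only bd_mtx and bd_fildrop are meant to correspond to real bpf_d members:

	/*
	 * Sketch of bpf_mtap(9) without the KERNEL_LOCK().  In the real
	 * code the walk over the interface's descriptor list is lock
	 * free (SRP); here it is a placeholder macro.
	 */
	int
	bpf_mtap_sketch(struct bpf_if *bp, const struct mbuf *m, u_int direction)
	{
		struct bpf_d *d;
		int drop = 0;

		if (m == NULL)
			return (0);

		FOREACH_DESCRIPTOR_SKETCH(d, bp) {
			if (!filter_matches_sketch(d, m, direction))
				continue;

			mtx_enter(&d->bd_mtx);
			catchpacket_sketch(d, m);	/* copy into the buffers */
			mtx_leave(&d->bd_mtx);

			if (d->bd_fildrop)
				drop = 1;		/* listener asked to drop */
		}

		return (drop);
	}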