On Mon, 4 Jan 2016, Stefan Fritsch wrote:
> On Sun, 3 Jan 2016, Theo de Raadt wrote:
> > >> dlg writes:
> > >> > should we just do it unconditionally? is there a downside to that?
> > >
> > >It may decrease performance a tiny bit. Since such bits tend to add 
> > >up, I would be hesitant to enable the check for drivers that don't 
> > >need it. OTOH, freebsd and linux seem to always do the check.
> > 
> > How does it decrease performance?
> 
> I am talking about this diff in ether_input():
> 
> diff --git sys/net/if_ethersubr.c sys/net/if_ethersubr.c
> --- sys/net/if_ethersubr.c
> +++ sys/net/if_ethersubr.c
> @@ -353,8 +353,7 @@ ether_input(struct ifnet *ifp, struct mbuf *m, void *cookie)
>        * If packet is unicast and we're in promiscuous mode, make sure it
>        * is for us.  Drop otherwise.
>        */
> -     if ((m->m_flags & (M_BCAST|M_MCAST)) == 0 &&
> -         (ifp->if_flags & IFF_PROMISC)) {
> +     if ((m->m_flags & (M_BCAST|M_MCAST)) == 0) {
>               if (memcmp(ac->ac_enaddr, eh->ether_dhost, ETHER_ADDR_LEN)) {
>                       m_freem(m);
>                       return (1);
> 
> For drivers other than vio(4) this means an additional 6-byte memcmp per
> packet, where one operand is already in the cache and the other may or
> may not be. 'tiny performance impact' seems about right.
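
For a rough feel for that cost in isolation, here is a minimal userland
sketch (an illustration only; cache behaviour in the kernel input path
will of course differ):

#include <stdio.h>
#include <string.h>
#include <time.h>

#define ETHER_ADDR_LEN	6

int
main(void)
{
	/* our address and an incoming destination address */
	unsigned char ours[ETHER_ADDR_LEN] =  { 0x52, 0x54, 0x00, 0x12, 0x34, 0x56 };
	unsigned char dhost[ETHER_ADDR_LEN] = { 0x52, 0x54, 0x00, 0x12, 0x34, 0x57 };
	volatile int sink = 0;		/* keep the loop from being elided */
	struct timespec t0, t1;
	long i, n = 100000000L;

	clock_gettime(CLOCK_MONOTONIC, &t0);
	for (i = 0; i < n; i++) {
		/* toggle last byte so the compiler cannot precompute memcmp */
		dhost[ETHER_ADDR_LEN - 1] ^= 1;
		sink += memcmp(ours, dhost, ETHER_ADDR_LEN);
	}
	clock_gettime(CLOCK_MONOTONIC, &t1);

	printf("%.2f ns per memcmp\n",
	    ((t1.tv_sec - t0.tv_sec) * 1e9 +
	    (t1.tv_nsec - t0.tv_nsec)) / (double)n);
	return (0);
}

Cache-hot, the comparison itself is a few nanoseconds per call; the
interesting cost is the potential cache miss on eh->ether_dhost, which
only forwarding tests like the ones below can show.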

Hrvoje Popovski was very helpful and ran some performance tests with the 
patch above. There is only a measurable difference at very high packet 
rates, but I must admit that there the difference is a bit larger than I 
expected:

> i'm sending 64-byte packets from a host connected to ix2 and counting
> them on a host connected to ix3

> without your patch

sending         receiving
400kpps         400kpps
600kpps         600kpps
800kpps         690kpps
1Mpps           610kpps
1.4Mpps         580kpps
12Mpps          580kpps


> with your patch

sending         receiving       loss
400kpps         400kpps         -   0%
600kpps         600kpps         -   0%
800kpps         680kpps         - 1.4%
1Mpps           600kpps         - 1.6%
1.4Mpps         560kpps         - 3.4%
12Mpps          560kpps         - 3.4%



So, which variant should I commit?

- do the check unconditionally
- add some flag so that ether_input() does the check for vio(4)
unconditionally (a rough sketch of this variant follows below)
- do the check in vio(4)

mpi fixed a similar bug in trunk(4) in July, but that does not use 
ether_input() and would not benefit from the flag approach.
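
To make the second variant concrete, here is a rough sketch of the flag
approach. The flag name and value are invented for illustration; they do
not exist in the tree and would need a proper definition in net/if.h:

/*
 * Hypothetical capability flag: the driver's receive MAC filter is
 * only best effort, so ether_input() must verify unicast addresses.
 */
#define IFXF_INACCURATE_FILTER	0x100	/* invented name and value */

/* in vio_attach(), the driver would advertise the limitation: */
ifp->if_xflags |= IFXF_INACCURATE_FILTER;

/* in ether_input(), the existing promiscuous test would be extended: */
if ((m->m_flags & (M_BCAST|M_MCAST)) == 0 &&
    ((ifp->if_flags & IFF_PROMISC) ||
    (ifp->if_xflags & IFXF_INACCURATE_FILTER))) {
	if (memcmp(ac->ac_enaddr, eh->ether_dhost, ETHER_ADDR_LEN)) {
		m_freem(m);
		return (1);
	}
}

This would keep the memcmp out of the hot path for drivers whose
hardware filter is exact, while avoiding the code duplication of the
driver-side variant shown below.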



> > Maybe you should show the diff which does the check in the driver
> > itself, then it will be easier to see the performance cost.  Right
> > now I can't perceive the problem.
> 
> It's not very expensive. But it's more code duplication than I like:
> 
> diff --git sys/dev/pci/if_vio.c sys/dev/pci/if_vio.c
> --- sys/dev/pci/if_vio.c
> +++ sys/dev/pci/if_vio.c
> @@ -1005,6 +1005,9 @@ vio_rxeof(struct vio_softc *sc)
>       int r = 0;
>       int slot, len, bufs_left;
>       struct virtio_net_hdr *hdr;
> +     int promisc = (ifp->if_flags & IFF_PROMISC);
> +     u_int8_t *enaddr = sc->sc_ac.ac_enaddr;
> +     struct ether_header *eh;
>  
>       while (virtio_dequeue(vsc, vq, &slot, &len) == 0) {
>               r = 1;
> @@ -1035,10 +1038,33 @@ vio_rxeof(struct vio_softc *sc)
>                       bufs_left--;
>               }
>  
> -             if (bufs_left == 0) {
> -                     ml_enqueue(&ml, m0);
> -                     m0 = NULL;
> +             if (bufs_left > 0)
> +                     continue;
> +
> +             /*
> +              * Unfortunately, mac filtering is only best effort
> +              * in virtio-net. Unwanted packets may still arrive.
> +              * If we are not promiscuous, we have to check if the
> +              * packet is actually for us.
> +              */
> +             if (!promisc) {
> +                     m0 = m_pullup(m0, ETHER_HDR_LEN);
> +                     /*
> +                      * no need to check m0 == NULL, we know m0 has enough
> +                      * space
> +                      */
> +
> +                     eh = mtod(m0, struct ether_header *);
> +                     if (!ETHER_IS_MULTICAST(eh->ether_dhost) &&
> +                         memcmp(enaddr, eh->ether_dhost, ETHER_ADDR_LEN) != 0) {
> +                             m_freem(m0);
> +                             m0 = NULL;
> +                             continue;
> +                     }
>               }
> +
> +             ml_enqueue(&ml, m0);
> +             m0 = NULL;
>       }
>       if (m0 != NULL) {
>               DBGPRINT("expected %d buffers, got %d", (int)hdr->num_buffers,
> 
> 
