Hi, Wael,

I think I know what's going on here.  You don't say how the reported data rate 
differed from expected, but I suspect it was higher than expected.  Packet 
sockets are a low-level packet delivery mechanism supported by the kernel.  
They allow the kernel to copy packets directly into memory that is mapped into 
the address space of a user process (e.g. hashpipe).  By default, the kernel 
does no filtering on the incoming packets before delivering them to the user 
process(es) that have requested them; the selection by port happens at the 
application layer.  This means that two hashpipe instances using packet 
sockets to listen on the same network interface will each receive copies of 
all packets, regardless of the destination UDP port, even if each one only 
wants a specific UDP destination port.  This is very similar to how two 
tcpdump instances on the same interface each get copies of all packets.

Alessio Magro has done some work to use the "Berkeley Packet Filter" 
(https://www.kernel.org/doc/html/latest/networking/filter.html) to perform 
low-level packet filtering in the kernel with packet sockets in hashpipe.  I 
think that approach could allow you to achieve the packet filtering that you 
want, but it's somewhat non-trivial to implement.

As for the 100% CPU utilization, that could be due to using the "busywait" 
versions of the status buffer locking and/or data buffer access functions, or 
it could just be that the net threads are genuinely busy processing packets.

HTH,
Dave


> On Dec 2, 2020, at 19:06, Wael Farah <[email protected]> wrote:
> 
> Hi Folks,
> 
> Hope everyone's doing well.
> 
> I have an application I am trying to develop using hashpipe, and one of the 
> solutions might be using multiple instances of hashpipe on a single 40 GbE 
> interface.
> 
> When I tried running 2 instances of hashpipe I faced a problem. The data rate 
> reported by the instances does not match that expected from the TXs. No 
> issues were seen if I reconfigured the TXs to send data to a single port, 
> rather than two, and initialised a single hashpipe thread. Can the 2 
> netthreads compete for resources on the NIC even if they are bound to 
> different ports? I've also noticed that the CPU usage for the 2 netthreads is 
> always 100%.
> I am using "hashpipe_pktsock_recv_udp_frame" for the acquisition.
> 
> Has anyone seen this/similar issue before?
> 
> Thanks!
> Wael
> 
> -- 
> You received this message because you are subscribed to the Google Groups 
> "[email protected]" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to [email protected].
> To view this discussion on the web visit 
> https://groups.google.com/a/lists.berkeley.edu/d/msgid/casper/CALO2pVe814yov06vb%3DeqSgXJdkN%2BDc3gEcF63Xwb7Kk_YGMy2Q%40mail.gmail.com
