Oh, sorry. I thought there was just some obvious quirk I was
overlooking.

I'm using a Silicom 82599-based card with two ports. The driver is
configured to present 8 RX queues for each port, all through DNA. The
application has 16 threads, each using
pfring_recv(wait_for_incoming_packet=0) to read packets from one pfring
and then pfring_send(flush_packet=1) to transmit each packet to one of
the *other* rings. The system has 8 cores, and each thread is pinned to
a specific core. (As it happens, the two threads handling the two rings
that are "working together" are assigned to the same core, but I don't
know whether that's relevant.)

The basic approach for my app was cloned from the pfcount_multicast.c
example. I'm using "active polling": sleeping for 10us whenever
pfring_recv() returns nothing, then trying again.

On transmit, I'm generally flushing packets, but I don't think that
matters one way or the other.

As I say, this works fine when one of the ports is idle. But when
packets are sent to both ports, all the packets are still received
fine, yet a few of the sends fail. The DNA send function that
pfring_send() calls doesn't return any error code, so I can't tell you
why it's failing.

These tests are all being done at high packet rates, so I'm presuming
that the failure has something to do with simultaneous calls into
pfring_recv() and pfring_send() on the same ring but from two
different threads. Since the DNA library is closed source, I can't look
at it to see why that might be a problem.

I was assuming I was just missing a technical detail, so I didn't
experiment very much with this, but I can if you think it will help.

-don

-----Original Message-----
From: [email protected]
[mailto:[email protected]] On Behalf Of Alfredo
Cardigliano
Sent: Saturday, October 15, 2011 12:41 AM
To: [email protected]
Subject: Re: [Ntop-misc] Using DNA devices, pfring_send() sometimes
fails to a ring that's also receiving

Don
can you explain your app configuration in more detail, maybe with an
example, so that I can better understand (or try to reproduce) the
issue? Your previous description is a little confusing, and I would
like to know exactly on which interface/thread you are sending and
receiving packets.

Regards
Alfredo

On Oct 15, 2011, at 12:23 AM, Don Provan wrote:

> I'm using a Silicom 82599-based NIC (close enough?) running with 8
> queues per port.
> -don
> 
> -----Original Message-----
> From: [email protected]
> [mailto:[email protected]] On Behalf Of Luca Deri
> Sent: Friday, October 14, 2011 1:48 PM
> To: [email protected]
> Cc: <[email protected]>
> Subject: Re: [Ntop-misc] Using DNA devices, pfring_send() sometimes
> fails to a ring that's also receiving
> 
> Don
> A few questions:
> - what NIC do you own?
> - are you using the driver in single or multi queue mode?
> 
> Regards Luca
> 
> Sent from my iPad
> 
> On 14/Oct/2011, at 21:02, "Don Provan" <[email protected]> wrote:
> 
>> I'm using ixgbe ports via DNA with pf_ring 5.1.0. My code uses
>> pfring_recv() to receive packets from one ring, then uses
>> pfring_send()
>> to transmit them via another ring. The code works fine *unless* the
>> other ring is *also* receiving packets at the same time on another
>> thread. In that case, pfring_send() fails a few times out of a
>> hundred.
>> Is there some ring locking requirement that I'm missing?
>> -don
>> _______________________________________________
>> Ntop-misc mailing list
>> [email protected]
>> http://listgateway.unipi.it/mailman/listinfo/ntop-misc
