Hi Cliff,
please see inline.

On May 18, 2013, at 5:30 AM, Cliff Burdick <[email protected]> wrote:
> I have an application configured with the DNA cluster running on core 0, with
> 8 threads running on cores 1-8 on a Xeon processor. I'm using a custom hash
> function which just picks off the last octet of the source IP, and sends it
> to threads 1-8. I'm loading the DNA driver using the following:
>
> insmod ixgbe.ko MQ=0,0 mtu=9000
>
> When I run pfdnacluster_multithread I can start 8 threads without any
> dropping of packets. My understanding is that to use zero-copy mode, I can
> only have a single thread operating on the packets at a time since the buffer
> is automatically freed when another pfring_recv call is made.

Yes, this statement is valid per-thread.

> Because of this, each of my slave threads makes a copy of the data before
> immediately returning to call pfring_recv again.

You can avoid the copy by allocating additional buffers per thread and
swapping buffers when receiving a packet. This way you can keep aside up to
K packets, where K is the number of additional buffers allocated in the
per-thread pool. To allocate these additional buffers, please have a look at
dna_cluster_low_level_settings().

To get a buffer from the per-thread pool:

pkt_handle = pfring_alloc_pkt_buff(ring[thread_id])

To swap a received packet with another buffer:

ret = pfring_recv_pkt_buff(ring[thread_id], pkt_handle, &hdr, wait_for_packet)

> For some reason, I am dropping what appears to be an increasing number of
> packets, depending on which thread it is. Usually the lower-numbered threads
> drop about 10%, while the higher-numbered ones drop around 90%.

Please also pay attention to logical vs. physical cores when setting core
affinity. Can I see the output of

cat /proc/cpuinfo | grep "processor\|model name\|physical id"

and the affinity you are using?

> I'm receiving about 230Kpps (1.3Gbps) evenly distributed between the threads,
> and my understanding was that DNA mode would handle this.
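To illustrate the buffer-swap idea described above: each thread pre-allocates K spare buffers; on receive, the ring keeps one of the spares for future DMA and the thread keeps the buffer holding the packet, so no memcpy of the payload is needed. Below is a self-contained sketch of just that exchange. Note the pool type and function names here are illustrative stand-ins, not PF_RING API; in real code the pool is sized via dna_cluster_low_level_settings(), spares come from pfring_alloc_pkt_buff(), and the swap happens inside pfring_recv_pkt_buff().

```c
#include <assert.h>
#include <stdlib.h>

/* Illustrative stand-in for the per-thread pool (not PF_RING API). */
#define K 4            /* number of additional buffers per thread */
#define BUF_LEN 2048   /* illustrative slot size */

typedef struct {
    unsigned char *spare[K];  /* buffers this thread can hand to the ring */
    int n_spare;
} pkt_pool;

static void pool_init(pkt_pool *p) {
    p->n_spare = K;
    for (int i = 0; i < K; i++)
        p->spare[i] = malloc(BUF_LEN);
}

/* Models the swap done by pfring_recv_pkt_buff(): one spare buffer is
 * handed to the ring for future DMA, and the caller keeps the buffer
 * holding the just-received packet -- no payload copy. Returns NULL when
 * the pool is exhausted (K packets are already held aside). */
static unsigned char *pool_swap(pkt_pool *p, unsigned char *filled_ring_buf) {
    if (p->n_spare == 0)
        return NULL;
    unsigned char *spare = p->spare[--p->n_spare];
    (void)spare;  /* in real code the ring now owns this buffer */
    return filled_ring_buf;
}

/* When the thread finishes with a packet, its buffer rejoins the pool. */
static void pool_release(pkt_pool *p, unsigned char *buf) {
    assert(p->n_spare < K);
    p->spare[p->n_spare++] = buf;
}
```

The key property is that the thread can hold at most K packets at once; once the pool is empty it must release a buffer before the next swap, which is why the pool size bounds how long a slave thread may sit on packets.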
> My code for the receiver is identical to the multithread example (8192
> buffers for rx/tx, receive only, wait_mode = 0).
>
> My slave thread makes the call using the following:
>
> pfring_recv_parsed(m_ring, &packet, 0, &header, 1, 0, 1, 0);

If you use pfring_recv_pkt_buff() you can still use the parsing
functionality by calling pfring_parse_pkt().

> Also, what is the preferred way of dropping packets inside of the hash
> function when I don't want them routed to any of my threads: return
> DNA_CLUSTER_FAIL, or send them to a queue that is not being processed?

return DNA_CLUSTER_DROP;

Best Regards,
Alfredo

> Any help is appreciated. Thanks.

_______________________________________________
Ntop-misc mailing list
[email protected]
http://listgateway.unipi.it/mailman/listinfo/ntop-misc
