Thanks Luca, that's got it. My bad, I didn't read far enough down the README; 
very sorry.

Slight typo for anyone trying this: it's "FdirMode=0".

I have also tried changing min_num_slots=2048 and letting RSS select a default 
value, which nets me 64 queues.
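For reference, the load described here would look roughly like this (a sketch 
only; the .ko paths are placeholders for wherever the modules were built, and 
the option values are the ones discussed in this thread):

```shell
# Sketch only -- paths are placeholders for the built driver trees.
# FdirMode=0 disables Flow Director so the RSS queue count takes effect
# on 82599-based adapters (per the ixgbe-3.3.9-DNA README):
insmod ./ixgbe.ko RSS=16 FdirMode=0
# Smaller rings keep many queues from exhausting vmalloc space:
insmod ./pf_ring.ko min_num_slots=2048
```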

When using 2048 slots, 64 queues and "pfcount", I max out at ~150 Kpps, which 
is well below your value, so something I am doing is not quite right, as I seem 
to be a factor of 20 slower.

When using 2048 slots and "pfcount_multichannel" I seem to get twice the packet 
count that is being sent; I wonder if hyperthreading is causing the application 
to double-count?

Andrew


From: [email protected] 
[mailto:[email protected]] On Behalf Of Luca Deri
Sent: Friday, August 12, 2011 4:29 PM
To: [email protected]
Subject: Re: [Ntop-misc] X520 what options for higher throughput with 
pfcount_multichannel?

Hi Andrew

On 08/12/2011 04:44 PM, [email protected] wrote:
Hi Luca,

>- you can create fewer RX queues when insmod ixgbe (suggested)

  Is there a simple way to reduce the number of queues? Just to confirm am I 
correct in assuming it's the RSS option? If I try RSS=0 I seem to just get one 
queue in pfcount_multichannel, but if I use RSS=16 I still get 48 queues.
I think that you need to use RSS=16,.... and also FDirMode=0,....

From the PF_RING/drivers/intel/ixgbe/ixgbe-3.3.9-DNA/README file:

NOTE: The RSS parameter has no effect on 82599-based adapters unless the
FdirMode parameter is simultaneously used to disable Flow Director. See
Intel(R) Ethernet Flow Director section below for more detail.
...
FdirMode
--------
Valid Range: 0-2 (0=off, 1=ATR, 2=Perfect filter mode)
Default Value: 1

  Flow Director filtering modes.

Cheers Luca



  Just looking at the code now to see what it is supposed to be doing....

  Thanks,

Andrew


From: [email protected] 
[mailto:[email protected]] On Behalf Of Luca Deri
Sent: Friday, August 12, 2011 2:20 PM
To: [email protected]<mailto:[email protected]>
Subject: Re: [Ntop-misc] X520 what options for higher throughput with 
pfcount_multichannel?

Andrew
it might be that pfcount_multichannel on your machine opens 64 rings and this 
exhausts the memory. Possible solutions include:

- you can reduce the amount of memory (reduce min_num_slots during insmod) in 
PF_RING (not suggested)

Note that you should also balance interrupts for better performance, so after 
you insmod, do:

PF_RING/drivers/intel/ixgbe/ixgbe-3.3.9-DNA/scripts/set_irq_affinity.sh ethX 
(or look as an example 
/home/deri/PF_RING/drivers/intel/ixgbe/ixgbe-3.3.9-DNA/src/load.sh)
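The affinity script essentially pins each queue's IRQ to its own core; a 
minimal hand-rolled equivalent (a sketch of the idea, not the actual script, 
and the eth4 default is only an example) would be:

```shell
# Sketch of what set_irq_affinity.sh does: give each queue IRQ of the
# interface its own CPU core via a one-hot affinity mask.
IFACE=${1:-eth4}
core=0
for irq in $(grep "$IFACE" /proc/interrupts | awk -F: '{print $1}' | tr -d ' '); do
  # Hex one-hot mask for this core (masks past ~32 cores need the
  # comma-grouped form; omitted here for brevity).
  printf "%x" $((1 << core)) > /proc/irq/$irq/smp_affinity
  core=$((core + 1))
done
```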

Luca

On 08/12/2011 03:06 PM, [email protected] wrote:
Hi Luca,


Clean booted the machine.

Built your ixgbe driver for the kernel version I am using.

Built your code as per the README instructions.

rmmod the existing ixgbe module loaded when the machine started up.

Inserted your ixgbe driver using insmod with default params.

Inserted the pf_ring module with insmod using default params.

Then ran pfcount_multichannel -i ethX.
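As a shell transcript, those steps would be roughly (module paths and eth4 are 
placeholders):

```shell
# Placeholder paths; run from the built driver trees.
rmmod ixgbe                       # drop the driver loaded at boot
insmod ./ixgbe.ko                 # PF_RING-aware driver, default params
insmod ./pf_ring.ko               # default params
./pfcount_multichannel -i eth4    # eth4 stands in for ethX
```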


The machine has 32 cores, so 64 'processors', I would guess due to 
hyperthreading.

Andrew






From: [email protected] 
[mailto:[email protected]] On Behalf Of Luca Deri
Sent: Friday, August 12, 2011 1:57 PM
To: [email protected]<mailto:[email protected]>
Subject: Re: [Ntop-misc] X520 what options for higher throughput with 
pfcount_multichannel?

Andrew
1. how did you insmod the pf_ring/ixgbe module (I mean what are the options you 
use)?
2. how do you start the multichannel app?

Luca

On 08/12/2011 02:51 PM, [email protected] wrote:



Hi Luca,

  I was getting some vmap errors; I wonder if this is the reason for the low 
throughput?

[ 7313.184192] device eth4 entered promiscuous mode
[ 7313.195911] device eth4 left promiscuous mode
[ 7313.200292] device eth4 entered promiscuous mode
[ 7313.385476] vmap allocation for size 2101248 failed: use vmalloc=<size> to 
increase size.
[ 7313.385483] [PF_RING] ERROR: not enough memory for ring
[ 7313.385487] [PF_RING] ring_mmap(): unable to allocate memory

  Is there any documentation on how to deal with this? Do I reduce the number 
of queues?
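As the dmesg line itself suggests, one way out (besides fewer queues or smaller 
rings) is enlarging the vmalloc arena via the kernel command line; the 512M 
value below is only an example, sized from the ~2 MB-per-ring figure in the 
error above:

```shell
# Example kernel boot parameter (e.g. appended to the kernel line in the
# bootloader config, then reboot); 64 rings of ~2 MB fit comfortably:
#   vmalloc=512M
# Or shrink each ring instead when loading pf_ring:
insmod ./pf_ring.ko min_num_slots=2048
```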

  Thanks,

Andrew


From: [email protected] 
[mailto:[email protected]] On Behalf Of Luca Deri
Sent: Friday, August 12, 2011 1:42 PM
To: [email protected]<mailto:[email protected]>
Subject: Re: [Ntop-misc] X520 what options for higher throughput with 
pfcount_multichannel?

Hi Andrew
nice to hear from you. The figures you have are bad, as in total you capture 
too little. Typical figures are shown here 
http://www.ntop.org/blog/pf_ring/packet-capture-performance-at-10-gbit-pf_ring-vs-tnapi/

I suggest going for DNA, as it is both fast and CPU-savvy (see 
http://www.ntop.org/blog/pf_ring/how-to-sendreceive-26mpps-using-pf_ring-on-commodity-hardware/)

Luca

On 08/12/2011 02:32 PM, [email protected] wrote:
Hi,

  I have recently purchased a machine with an Intel X520 NIC; having read the 
documentation, I am confused as to what options I have to maximize the 
performance of "pfcount_multichannel".

   My initial experiments using the released driver 
PF_RING-4.7.2/drivers/intel/ixgbe/ixgbe-3.1.15-FlowDirector-NoTNAPI and 
transparent mode = 2 show performance roughly as follows.

...

Absolute Stats: [channel=6][381188 pkts rcvd][0 pkts dropped]
Total Pkts=381188/Dropped=0.0 %
381188 pkts - 22871280 bytes [7401.6 pkt/sec - 3.55 Mbit/sec]
=========================
Actual Stats: [channel=6][12514 pkts][1003.1 ms][12475.0 pkt/sec]
=========================
Absolute Stats: [channel=7][381036 pkts rcvd][0 pkts dropped]
Total Pkts=381036/Dropped=0.0 %
381036 pkts - 22862160 bytes [7398.6 pkt/sec - 3.55 Mbit/sec]
=========================
Actual Stats: [channel=7][13010 pkts][1003.1 ms][12969.4 pkt/sec]
=========================
Absolute Stats: [channel=8][381835 pkts rcvd][0 pkts dropped]
Total Pkts=381835/Dropped=0.0 %
381835 pkts - 22910100 bytes [7414.1 pkt/sec - 3.56 Mbit/sec]
=========================
Actual Stats: [channel=8][13031 pkts][1003.1 ms][12990.4 pkt/sec]
=========================
Absolute Stats: [channel=9][381299 pkts rcvd][0 pkts dropped]
Total Pkts=381299/Dropped=0.0 %
381299 pkts - 22877940 bytes [7403.7 pkt/sec - 3.55 Mbit/sec]
=========================

...

Aggregate stats (all channels): [371781.1 pkt/sec][0 pkts dropped]

At 64-byte packets this, if my maths is correct, is ~190 Mbit/s.

I am using a 10 Gig source that is set to ignore PAUSE frames and deliver full 
line-rate 64-byte packets, so about 7.6 Gbit/s. I assume that the traffic is 
getting buffered, as no packets seem to be dropped and pfcount_multichannel 
keeps on processing even when the source has stopped sending.
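Both back-of-envelope figures can be checked with plain arithmetic (payload 
bits only, preamble and inter-frame gap excluded, matching the estimates 
above):

```python
# Check the two bandwidth figures quoted above.

def pps_to_mbit(pps: float, pkt_bytes: int = 64) -> float:
    """Packet rate -> payload bandwidth in Mbit/s."""
    return pps * pkt_bytes * 8 / 1e6

# Aggregate capture rate reported by pfcount_multichannel:
print(round(pps_to_mbit(371_781.1)))           # ~190 Mbit/s

# 64-byte line rate on 10 GbE is 10e9 / ((64 + 20) * 8) ~= 14.88 Mpps
# (the +20 bytes cover preamble + inter-frame gap); payload bandwidth:
line_pps = 10e9 / ((64 + 20) * 8)
print(round(pps_to_mbit(line_pps) / 1000, 1))  # ~7.6 Gbit/s
```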

# cat /proc/net/pf_ring/info

PF_RING Version     : 4.7.2 ($Revision: exported$)
Ring slots          : 4096
Slot version        : 13
Capture TX          : Yes [RX+TX]
IP Defragment       : No
Socket Mode         : Standard
Transparent mode    : No (mode 2)
Total rings         : 45
Total plugins       : 0

   Is this the correct/expected throughput for this driver in transparent mode? 
And what do I need to get to the next level? Do I need the TNAPI driver? Or is 
it better to get the Silicom card? (It is my understanding that Silicom 
supports DMA and will therefore be as fast as (or faster than) TNAPI while 
using fewer CPU cycles; am I correct?)

    Many thanks.

Andrew







_______________________________________________
Ntop-misc mailing list
[email protected]
http://listgateway.unipi.it/mailman/listinfo/ntop-misc






