Chris
with the latest code from SVN it is now possible to:

- Disable the allocation of additional per-consumer buffers (those available with alloc()/release()) via the dna_cluster_create() flags:

    dna_cluster_create(cluster_id, num_apps, DNA_CLUSTER_NO_ADDITIONAL_BUFFERS);
- Configure the number of per-slave rx/tx queue slots and additional buffers:

    dna_cluster_low_level_settings(dna_cluster_handle,
                                   8192 /* rx queue slots */,
                                   1024 /* tx queue slots */,
                                   0    /* additional buffers */);

  (call this just after dna_cluster_create())

Regards
Alfredo

On Oct 17, 2012, at 8:10 PM, Alfredo Cardigliano <[email protected]> wrote:

>
> On Oct 17, 2012, at 7:27 PM, Chris Wakelin <[email protected]> wrote:
>
>> On 17/10/12 17:39, Alfredo Cardigliano wrote:
>>> Chris
>>> please see inline
>>>
>>> On Oct 17, 2012, at 6:00 PM, Chris Wakelin <[email protected]> wrote:
>>>
>>>> I still can't get more than 12 cores used with Suricata on my Ubuntu
>>>> 12.04 machine with ixgbe. Even with DNA + RSS and Suricata using dna0@0
>>>> ... dna0@15, it fails in pfring_open on dna0@12 to dna0@15 (though
>>>> pfcount_aggregator manages the 16 queues in that case).
>>>
>>> You mean standard DNA (no DNA cluster, etc.), right?
>>> This is definitely strange, as DNA memory is allocated when loading the
>>> driver.
>>
>> Yes, I meant standard DNA.
>>
>> Hmm. Strangely, it's working now! Last night it didn't, but I can't see
>> why. I tried again this morning and thought it had failed, when in fact
>> it hadn't (silly me). I was probably mistaken about the discrepancy with
>> pfcount_aggregator.
>>
>>>
>>>> How is memory allocated in DNA? Are there kernel options I'm missing?
>>>
>>> No, there is no configuration for that.
>>>
>>>> With DNA clusters, I can't get pfdnacluster_master to manage more than
>>>> 16 queues either. I would have expected my custom one with duplication
>>>> to use only as much memory as it does without duplication, as the
>>>> duplicates are of course the same packets and therefore the same
>>>> memory.
>>>
>>> Even if you are using duplication, memory with DNA clusters is allocated
>>> when opening the socket.
>>> Actually, on my test system with 4 GB of RAM I can run up to two clusters
>>> with 32 queues each.
>>> Anyway, memory management in libzero is something we are working on
>>> (there is room for improvement).
>>
>> Is that with all the sockets open? I can certainly start
>> pfdnacluster_master with that many queues, but the applications fail.
>
> Yes, up and running. I forgot to tell you that I'm using the default
> num_rx_slots/num_tx_slots (I don't know if you are using higher values).
>
>> How much memory is used per socket? Strangely, I didn't have problems
>> when testing e1000e DNA + libzero on another 64-bit system with less
>> memory (16 GB instead of 32) but running Ubuntu 10.04 instead of 12.04.
>>
>> Is the memory used what is shown in ifconfig? :-
>
> No
>
>>
>> dna0      Link encap:Ethernet  HWaddr 00:1b:21:cd:a2:74
>>           inet6 addr: fe80::21b:21ff:fecd:a274/64 Scope:Link
>>           UP BROADCAST RUNNING PROMISC MULTICAST  MTU:1522  Metric:1
>>           RX packets:195271292 errors:0 dropped:0 overruns:0 frame:0
>>           TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
>>           collisions:0 txqueuelen:1000
>>           RX bytes:157968626881 (157.9 GB)  TX bytes:0 (0.0 B)
>>           Memory:d9280000-d9300000
>>
>> Does the setting of "ethtool -g" make a difference (presumably the same
>> as num_rx_slots= in the module parameters)?
>
> No, ethtool -g is not supported.
>
>>
>> Sorry for the inquisition :-) but it would be nice to understand what's
>> possible.
>
> np
>
> Alfredo
>
>>
>> Best Wishes,
>> Chris
>>
>> --
>> --+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+-
>> Christopher Wakelin, [email protected]
>> IT Services Centre, The University of Reading, Tel: +44 (0)118 378 2908
>> Whiteknights, Reading, RG6 6AF, UK   Fax: +44 (0)118 975 3094
>> _______________________________________________
>> Ntop-misc mailing list
>> [email protected]
>> http://listgateway.unipi.it/mailman/listinfo/ntop-misc
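
[Editor's sketch] The two libzero calls announced at the top of this thread could be combined roughly as below. This is a hedged, untested sketch, not a working program: only the two dna_cluster_* calls (with their arguments and the DNA_CLUSTER_NO_ADDITIONAL_BUFFERS flag) are quoted verbatim from the message; the header name, the dna_cluster handle type, the num_apps value, and dna_cluster_destroy() are assumptions, and building or running it would require a PF_RING build with libzero/DNA support and a DNA-capable NIC.

```c
/* Sketch only -- not compilable here: assumes the PF_RING libzero/DNA
 * API. The header name, handle type, and dna_cluster_destroy() are
 * guesses; the two settings calls are copied from the message above. */
#include <stdio.h>
#include "pfring.h"   /* assumed header exposing the dna_cluster_* API */

int main(void) {
  int cluster_id = 1;   /* example values, not from the message */
  int num_apps   = 2;

  /* Disable the per-consumer additional buffers (alloc()/release()) */
  dna_cluster *handle =
    dna_cluster_create(cluster_id, num_apps,
                       DNA_CLUSTER_NO_ADDITIONAL_BUFFERS);
  if (handle == NULL) {
    fprintf(stderr, "dna_cluster_create failed\n");
    return 1;
  }

  /* Per the message, call this just after dna_cluster_create() */
  dna_cluster_low_level_settings(handle,
                                 8192 /* rx queue slots */,
                                 1024 /* tx queue slots */,
                                 0    /* additional buffers */);

  /* ... register devices, enable the cluster, run the slaves ... */

  dna_cluster_destroy(handle);  /* assumed cleanup call */
  return 0;
}
```

Lower queue-slot counts and zero additional buffers should reduce the per-slave memory footprint, which is the sizing problem discussed in the quoted thread.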
