Chris
the segfault has been fixed, please update from SVN.
It only affected the multi-process cluster.

Thank you
Alfredo

On Oct 18, 2012, at 8:20 PM, Alfredo Cardigliano <[email protected]> wrote:

> 
> On Oct 18, 2012, at 8:03 PM, Chris Wakelin <[email protected]> wrote:
> 
>> Hi Alfredo,
>> 
>> Thanks for the extra options. However, I updated to SVN 5758 and now
>> pfcount (and pfcount_aggregator) segfaults when talking to a cluster
>> (started with "pfdnacluster_master -i dna0 -c 1 -n 12"):
>> 
>>> Core was generated by `/opt/RDGpfring/bin/pfcount -i dnacl:1@0'.
>>> Program terminated with signal 11, Segmentation fault.
>>> #0  0x000000000040e212 in pfring_dna_cluster_open ()
>>> (gdb) bt full
>>> #0  0x000000000040e212 in pfring_dna_cluster_open ()
>>> No symbol table info available.
>>> #1  0x00000000004087d3 in pfring_open ()
>>> No symbol table info available.
>>> #2  0x0000000000406a7b in main ()
>>> No symbol table info available.
>> 
>> pfcount on plain DNA is fine.
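>> 
>> For reference, the failing path is just the usual open on the cluster
>> queue; a minimal sketch of what pfcount does here (assuming the PF_RING
>> 5.x open signature, and the caplen/flags values are only examples):
>> 
>> /* attach to queue 0 of cluster id 1 created by pfdnacluster_master;
>>    the backtrace above shows the crash inside this call, in
>>    pfring_dna_cluster_open() */
>> pfring *ring = pfring_open("dnacl:1@0", 1500 /* caplen */, PF_RING_PROMISC);
>> if (ring == NULL) return -1;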
> 
> Ok, I'll try to reproduce this.
> 
>> 
>> A couple of other questions :-) :-
>> 
>> 1) I see there's also a DNA_CLUSTER_DIRECT_FORWARDING flag. I'm guessing
>> that's for disabling the wait-for-apps-to-retrieve-packets feature? Does
>> it save memory?
> 
> No, you can use this flag to send a packet directly to an interface from the
> distribution function (rather than handing it to a consumer queue).
> If anything, it requires slightly more memory (not much).
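> For example (just a sketch, reusing the create call shown further below and
> assuming it returns the handle used there):
> 
> /* with this flag the distribution function can transmit a packet straight
>    to an egress interface instead of (or in addition to) queueing it to a
>    consumer application */
> dna_cluster_handle = dna_cluster_create(cluster_id, num_apps,
>                                         DNA_CLUSTER_DIRECT_FORWARDING);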
> 
>> 
>> 2) What are slave rx and tx slots? If I'm only interested in receiving
>> packets, can tx slots be 0?
> 
> They are the number of slots in the master <-> consumer queues.
> The minimum is 512 slots; however, if you don't need tx you can set the
> recv_only_mode (no tx memory is allocated that way).
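> E.g., sticking to the low-level settings call shown further below
> (the values here are just an example):
> 
> dna_cluster_low_level_settings(dna_cluster_handle,
>                                8192 /* rx queue slots */,
>                                512  /* tx queue slots: the minimum */,
>                                0    /* additional buffers */);
> /* if you never transmit, also enable the recv_only_mode so that no tx
>    memory is allocated at all */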
> 
>> 
>> 3) What are the additional buffers used for? Does disabling them break
>> anything?
> 
> They are used for extra buffer allocation with pfring_alloc_pkt_buff() /
> pfring_release_pkt_buff()
> (for instance, if you want to put a packet aside for later access and receive
> the next one with pfring_recv_pkt_buff()).
> Usually you don't need them.
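> In code it looks roughly like this (a sketch; I am abbreviating, the exact
> return and parameter types are in pfring.h):
> 
> struct pfring_pkthdr hdr;
> /* grab one of the additional buffers */
> u_char *spare = pfring_alloc_pkt_buff(ring);
> /* receive into the spare buffer, so a packet you have put aside earlier
>    stays valid while you process the new one (parameter list from memory,
>    please check pfring.h) */
> pfring_recv_pkt_buff(ring, spare, &hdr, 1 /* wait for packet */);
> /* ... then hand the buffer back when done */
> pfring_release_pkt_buff(ring, spare);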
> 
> Regards
> Alfredo
> 
>> 
>> Best Wishes,
>> Chris
>> 
>> On 17/10/12 23:38, Alfredo Cardigliano wrote:
>>> Chris
>>> with the latest code from SVN it is now possible to:
>>> 
>>> - Disable the allocation of additional per-consumer buffers (those
>>> available with alloc()/release()) via the dna_cluster_create() flags:
>>> dna_cluster_create(cluster_id, num_apps, DNA_CLUSTER_NO_ADDITIONAL_BUFFERS);
>>> 
>>> - Configure the number of per-slave rx/tx queue slots and additional buffers:
>>> dna_cluster_low_level_settings(dna_cluster_handle,
>>>                                8192 /* rx queue slots */,
>>>                                1024 /* tx queue slots */,
>>>                                0 /* additional buffers */);
>>> (call this just after dna_cluster_create())
>>> 
>>> Regards
>>> Alfredo
>>> 
>>> On Oct 17, 2012, at 8:10 PM, Alfredo Cardigliano <[email protected]> 
>>> wrote:
>>> 
>>>> 
>>>> On Oct 17, 2012, at 7:27 PM, Chris Wakelin <[email protected]> 
>>>> wrote:
>>>> 
>>>>> On 17/10/12 17:39, Alfredo Cardigliano wrote:
>>>>>> Chris
>>>>>> please see inline
>>>>>> 
>>>>>> On Oct 17, 2012, at 6:00 PM, Chris Wakelin <[email protected]> 
>>>>>> wrote:
>>>>>> 
>>>>>>> I still can't get more than 12 cores used with Suricata on my Ubuntu
>>>>>>> 12.04 machine with ixgbe. Even with DNA + RSS and Suricata using dna0@0
>>>>>>> ... dna0@15, pfring_open fails on dna0@12 to dna0@15 (though
>>>>>>> pfcount_aggregator manages all 16 queues in that case).
>>>>>> 
>>>>>> You mean standard DNA (no DNA cluster, etc), right? 
>>>>>> This is definitely strange as DNA memory is allocated when loading the 
>>>>>> driver.
>>>>> 
>>>>> Yes I meant standard DNA.
>>>>> 
>>>>> Hmm. Strangely, it's working now! Last night it didn't, and I can't see
>>>>> why. I tried again this morning and thought it had failed, but it seems
>>>>> it hadn't (silly me). I was probably mistaken about the discrepancy with
>>>>> pfcount_aggregator.
>>>>> 
>>>>>> 
>>>>>>> How is memory allocated in DNA? Are there kernel options I'm missing?
>>>>>> 
>>>>>> No, there is no configuration for that.
>>>>>> 
>>>>>>> With DNA clusters, I can't get pfdnacluster_master to manage more than
>>>>>>> 16 queues either. I would have expected my custom one with duplication
>>>>>>> to use only as much memory as it does without duplication, since the
>>>>>>> duplicates are of course the same packets and therefore the same memory.
>>>>>> 
>>>>>> Even if you are using duplication, memory with DNA clusters is allocated
>>>>>> when opening the socket.
>>>>>> On my test system with 4 GB of RAM I can run up to two clusters
>>>>>> with 32 queues each.
>>>>>> In any case, memory management in libzero is something we are working on
>>>>>> (there is room for improvement).
>>>>> 
>>>>> Is that with all the sockets open? I can certainly start
>>>>> pfdnacluster_master with that many queues, but the applications fail.
>>>> 
>>>> Yes, up and running. I forgot to tell you I'm using the default 
>>>> num_rx_slots/num_tx_slots (I don't know if you are using higher values).
>>>> 
>>>>> How much memory is used per socket? Strangely, I didn't have problems
>>>>> testing e1000e DNA + libzero on another 64-bit system with less
>>>>> memory (16 GB instead of 32) but running Ubuntu 10.04 instead of 12.04.
>>>>> 
>>>>> Is the memory used what is shown in ifconfig? :-
>>>> 
>>>> No
>>>> 
>>>>> 
>>>>> dna0      Link encap:Ethernet  HWaddr 00:1b:21:cd:a2:74
>>>>>       inet6 addr: fe80::21b:21ff:fecd:a274/64 Scope:Link
>>>>>       UP BROADCAST RUNNING PROMISC MULTICAST  MTU:1522  Metric:1
>>>>>       RX packets:195271292 errors:0 dropped:0 overruns:0 frame:0
>>>>>       TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
>>>>>       collisions:0 txqueuelen:1000
>>>>>       RX bytes:157968626881 (157.9 GB)  TX bytes:0 (0.0 B)
>>>>>       Memory:d9280000-d9300000
>>>>> 
>>>>> Does the setting of "ethtool -g" make a difference (presumably the same
>>>>> as num_rx_slots= in the module parameters)?
>>>> 
>>>> No, ethtool -g is not supported.
>>>> 
>>>>> 
>>>>> Sorry for the inquisition :-) but it would be nice to understand what's
>>>>> possible.
>>>> 
>>>> np
>>>> 
>>>> Alfredo
>>>> 
>>>>> 
>>>>> Best Wishes,
>>>>> Chris
>>>>> 
>>>> 
>>> 
>> 
>> 
>> -- 
>> --+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+-
>> Christopher Wakelin,                           [email protected]
>> IT Services Centre, The University of Reading,  Tel: +44 (0)118 378 2908
>> Whiteknights, Reading, RG6 6AF, UK              Fax: +44 (0)118 975 3094
> 

_______________________________________________
Ntop-misc mailing list
[email protected]
http://listgateway.unipi.it/mailman/listinfo/ntop-misc
