Another question: it seems that Suricata can go into ZC mode
without using zbalance_ipc; however, the card I have (an Intel
82599) only supports RSS values of up to 16. Would I be able to
take advantage of all the cores I have with Suricata in this
case if I moved to a card that supports more RSS entries
(e.g., the fm10k, which supports 128)?
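
For reference, this is roughly the setup I have in mind (a sketch
only; ixgbe.ko here means the PF_RING ZC driver build, and the
suricata.yaml keys are my reading of the docs, so treat the exact
names as assumptions):

  # load the ZC ixgbe driver with 16 RSS queues (the 82599 maximum)
  insmod ixgbe.ko RSS=16

  # suricata.yaml, pfring section: one capture thread per RSS queue
  pfring:
    - interface: zc:ens5f0
      threads: 16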

Jim

On 10/14/2016 03:53 AM, Alfredo Cardigliano wrote:
> Hi Jim
> please note that when using distribution to multiple applications
> (using a comma-separated list in -n), the fan-out API is used,
> which supports up to 32 egress queues in total. In your case you
> are using 73 queues, so I guess only the first 32 instances are
> receiving traffic (and possibly duplicated traffic due to a wrong
> egress mask). I will add a check for this in zbalance_ipc to avoid
> this kind of misconfiguration.
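> 
> As a concrete example (a sketch only; adjust interface, IDs, and
> cores to your setup), staying within the fan-out limit would mean
> something like:
> 
>   # 31 snort queues + 1 extra application = 32 egress queues total
>   zbalance_ipc -i zc:ens5f0 -m 4 -n 31,1 -c 99 -g 0 -S 1
> 
> whereas -n 72,1 asks for 72 + 1 = 73 queues.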
> 
> Alfredo
> 
>> On 13 Oct 2016, at 22:35, Jim Hranicky <j...@ufl.edu> wrote:
>>
>> I'm testing out a new server (36 cores, 72 with HT) using
>> zbalance_ipc, and it seems occasionally some packets are
>> getting sent to multiple processes. 
>>
>> I'm currently running zbalance_ipc like so: 
>>
>>  /usr/local/pf/bin/zbalance_ipc -i zc:ens5f0 -m 4 -n 72,1 -c 99 -g 0 -S 1
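>>
>>  # (my annotations; roughly: -i ingress interface, -m the
>>  #  distribution/hash mode, -n the per-application egress queue
>>  #  counts (72 snorts + 1 extra), -c the cluster id, -g the core
>>  #  zbalance_ipc binds to, -S the core for its time-pulse thread)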
>>
>> with 72 snorts like so: 
>>
>>  /usr/sbin/snort -D -i zc:99@$i --daq-dir=/usr/lib64/daq \
>>  --daq-var clusterid=99 --daq-var bindcpu=$i --daq pfring_zc \
>>  -c /etc/snort/ufirt-snort-pf-ewan.conf -l /var/log/snort69 -R $((i + 1))
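>>
>> (launched from a wrapper loop along these lines; sketched from
>> memory, assuming the queues are numbered 0..71:)
>>
>>  for i in $(seq 0 71); do
>>    /usr/sbin/snort ...   # full command as above, with this $i
>>  done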
>>
>> I've got a custom HTTP rule to catch GETs with a particular
>> user-agent. I run 100 GETs, each with the run number and a
>> timestamp in the URL (GET /1/<ts>, GET /2/<ts>, etc.), and this
>> is what I end up getting when I check the GETs (the test loop
>> itself is sketched after the counts):
>>
>>      1 GET /11
>>      1 GET /2
>>      1 GET /30
>>      1 GET /34
>>      1 GET /37
>>      1 GET /5
>>      1 GET /59
>>      1 GET /62
>>      1 GET /70
>>      1 GET /8
>>      1 GET /83
>>      1 GET /84
>>      1 GET /9
>>      1 GET /90
>>      1 GET /94
>>      1 GET /95
>>     16 GET /97
>>     20 GET /12
>>     20 GET /38
>>
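>> (The test loop is essentially the following sketch; the target
>> host and user-agent string here are placeholders:)
>>
>>  for n in $(seq 1 100); do
>>    curl -A "test-agent" "http://testhost/$n/$(date +%Y-%m-%d.%H:%M:%S)"
>>  done
>>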
>> Obviously I'm still running into packet loss, but several of the
>> GETs are getting sent to multiple processes: 
>>
>>    ens5f0.33 GET /12/2016-10-13.14:04:49 HTTP/1.1
>>    ens5f0.53 GET /12/2016-10-13.14:04:49 HTTP/1.1
>>    ens5f0.42 GET /12/2016-10-13.14:04:49 HTTP/1.1
>>    ens5f0.44 GET /12/2016-10-13.14:04:49 HTTP/1.1
>>    ens5f0.46 GET /12/2016-10-13.14:04:49 HTTP/1.1
>>    ens5f0.35 GET /12/2016-10-13.14:04:49 HTTP/1.1
>>    ens5f0.67 GET /12/2016-10-13.14:04:49 HTTP/1.1
>>    ens5f0.34 GET /12/2016-10-13.14:04:49 HTTP/1.1
>>    ens5f0.36 GET /12/2016-10-13.14:04:49 HTTP/1.1
>>    ens5f0.62 GET /12/2016-10-13.14:04:49 HTTP/1.1
>>    ens5f0.70 GET /12/2016-10-13.14:04:49 HTTP/1.1
>>    ens5f0.65 GET /12/2016-10-13.14:04:49 HTTP/1.1
>>    ens5f0.57 GET /12/2016-10-13.14:04:49 HTTP/1.1
>>    ens5f0.63 GET /12/2016-10-13.14:04:49 HTTP/1.1
>>    ens5f0.68 GET /12/2016-10-13.14:04:49 HTTP/1.1
>>    ens5f0.38 GET /12/2016-10-13.14:04:49 HTTP/1.1
>>    ens5f0.49 GET /12/2016-10-13.14:04:49 HTTP/1.1
>>    ens5f0.61 GET /12/2016-10-13.14:04:49 HTTP/1.1
>>    ens5f0.32 GET /12/2016-10-13.14:04:49 HTTP/1.1
>>    ens5f0.72 GET /12/2016-10-13.14:04:49 HTTP/1.1
>>
>> Is this an issue with the zbalance_ipc hash? I tried using
>>
>>  -m 1
>>
>> but it seemed like I ended up dropping even more packets. 
>>
>> Any advice/pointers appreciated. 
>>
>> --
>> Jim Hranicky
>> Data Security Specialist
>> UF Information Technology
>> 105 NW 16TH ST Room #104 GAINESVILLE FL 32603-1826
>> 352-273-1341
_______________________________________________
Ntop-misc mailing list
Ntop-misc@listgateway.unipi.it
http://listgateway.unipi.it/mailman/listinfo/ntop-misc
