On Mar 5, 2013, at 4:48 PM, J of Core <[email protected]> wrote:

> I was wondering if someone would be so kind as to explain how the clustermode 
> / tuple hashing affects how pf_ring runs.  I did some reading on tuples, but 
> it's still a bit unclear to me.

In linux/pf_ring.h there are some comments describing each cluster mode:

  cluster_per_flow = 0,         /* 6-tuple: <src ip, src port, dst ip, dst port, proto, vlan> */
  cluster_round_robin,
  cluster_per_flow_2_tuple,     /* 2-tuple: <src ip,           dst ip                      >  */
  cluster_per_flow_4_tuple,     /* 4-tuple: <src ip, src port, dst ip, dst port            >  */
  cluster_per_flow_5_tuple,     /* 5-tuple: <src ip, src port, dst ip, dst port, proto     >  */
  cluster_per_flow_tcp_5_tuple, /* 5-tuple only with TCP, 2-tuple with all other protos       */

> Should the number that is set for the cluster mode be related to the number 
> of pf_ring / snort instances that I'm running?  

No, the cluster mode determines how the hash is computed on the packet; the packet 
is then delivered to instance hash % num_instances.
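
For example, with 4 instances attached to the same cluster id, a packet whose flow 
hash works out to 1234 is delivered to instance 1234 % 4 = 2; every packet of that 
flow produces the same hash, so the whole flow stays on instance 2. This is also why 
test traffic consisting of a single flow ends up on a single instance.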

> Or is it determined some other way?  Basically I'm trying to understand how 
> to determine what number would be good to set for the clustermode, or if I 
> should just leave it alone.

If you are not sure, leave the default.

Best Regards
Alfredo

> 
> Thanks!
> 
> On Fri, Mar 1, 2013 at 3:43 PM, Alfredo Cardigliano <[email protected]> 
> wrote:
> Please make sure your traffic is well balanced (i.e. that you are not using 
> test traffic with a single flow). By default 2-tuple hashing is used; you can 
> change this setting using the clustermode parameter, as described in 
> PF_RING/userland/snort/pfring-daq-module/README.1st
> 
>    --daq-var clustermode=<mode>
> 
> Best Regards
> Alfredo
> 
> On Mar 2, 2013, at 12:30 AM, J of Core <[email protected]> wrote:
> 
>> Thanks for the reply, Jesse.  I tried running multiple instances with 
>> different pid files, log files, etc., but when I watch the counters I still 
>> only see one instance increasing.
>> 
>> I also did s'more searching online and found this on the metaflows google 
>> group: 
>> https://groups.google.com/forum/?fromgroups=#!topic/metaflows/Tjagd3MPr70
>> 
>> According to that post, I should be able to "run the command twice with the 
>> same exact arguments and they will split the traffic. The pfring kernel 
>> module will automatically detect how many processes are running and split 
>> the traffic accordingly"  -- but that isn't working for me either.  That 
>> post is from 2011 so I'm not sure if things have changed since then or not.
>> 
>> So not sure what I'm missing to make it distribute the traffic between 
>> processes/instances.  I'll keep investigating / testing :)
>> 
>> thx
>> 
>> 
>> On Fri, Mar 1, 2013 at 12:46 PM, Jesse Bowling <[email protected]> 
>> wrote:
>> Hi Kevin,
>> 
>> This is what I get for reading in reverse order. :)
>> 
>> You are correct in what you wrote: it would seem you do have it up and 
>> running. To run more instances, you need to start multiple instances of snort 
>> and make sure that you pass them the same clusterid.
>> 
>> The only tricky part is making sure that each snort instance has its own 
>> PID file, config file, logging directory, etc.; that's usually the hardest 
>> part of getting multiple snort instances up. :)
>> 
>> There are a few strategies for managing the snort instance configs, but the 
>> one I've seen described that I like the most is to create a vanilla config 
>> that expresses the settings you want for every instance, and then create 
>> individual configs for each instance that specify only what differs and 
>> include the vanilla one. For instance:
>> 
>> snort.master.conf:
>> 
>> config interface: eth0
>> include /rules/some_rule_file
>> etc
>> 
>> and then:
>> 
>> inst1.conf:
>> 
>> config logdir: /nsm/snort/inst1
>> include snort.master.conf
>> 
>> That makes it a little easier to maintain your conf files...
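>> 
>> As a concrete example (hypothetical paths, adjust to your setup), two 
>> instances that share the same cluster and therefore split the traffic could 
>> be started like this, assuming inst1.conf and inst2.conf each include the 
>> master config and set their own logdir:
>> 
>>   snort -c /etc/snort/inst1.conf -i eth0 --daq-dir /usr/local/lib/daq --daq pfring \
>>         --daq-var clusterid=10 --daq-mode passive --pid-path /var/run/snort-inst1 -D
>> 
>>   snort -c /etc/snort/inst2.conf -i eth0 --daq-dir /usr/local/lib/daq --daq pfring \
>>         --daq-var clusterid=10 --daq-mode passive --pid-path /var/run/snort-inst2 -D
>> 
>> The important bit is that both pass the same clusterid; pf_ring then hashes 
>> each flow and hands it to one of the two rings.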
>> 
>> GL,
>> 
>> Jesse
>> 
>> On Fri, Mar 1, 2013 at 2:46 PM, Kevin Hanser <[email protected]> wrote:
>> So I appear to have pf_ring installed (via the RPMs) and snort working with 
>> it.  If I start up a snort instance using a command line similar to the 
>> metaflows article (except I'm doing passive instead of inline for the time 
>> being):
>> 
>> snort -c /etc/snort/snort.conf -y -i eth0 --daq-dir /usr/local/lib/daq --daq 
>> pfring --daq-var clusterid=10 --daq-mode passive
>> 
>> I get a status counter "device" created in /proc/net/pf_ring named 
>> <pid>-eth0.1.  If I watch this file with cat while sending some traffic to 
>> the machine, I see the counters increasing, and snort is logging the 
>> information.  So based on this, it seems that snort is working with pf_ring, 
>> which was my "first step" so that's pretty cool.
>> 
>> Now I'm trying to figure out how I distribute the load across multiple snort 
>> / pf_ring instances.  I started up multiple instances of snort, but when I 
>> watch the counters it seems that only the one I started last is getting all 
>> the traffic.  I'm probably missing something in how I start it up, but I'm 
>> unsure what.
>> 
>> What do I need to tell pf_ring / snort so that they distribute the load 
>> across the multiple rings / snorts?  Is that what the clusterid=10 means?  
>> Is that telling each pf_ring that it's part of the same cluster?  I'm still 
>> working on understanding how all this works together, so if anyone has any 
>> thoughts / suggestions that would be great!  I'll keep researching, reading, 
>> and testing on my own as well.
>> 
>> thx!
>> 
>> 
>> -- 
>> Jesse Bowling
>> 
>> 
>> 
>> 
> 
> 
> 
> 

_______________________________________________
Ntop-misc mailing list
[email protected]
http://listgateway.unipi.it/mailman/listinfo/ntop-misc
