Hi Mike, 

> > Just wondering what is the best method to handle the separate 
> > exporters? (Separate flow-tools server for each, then "combine" the 
> > flows, or have them all exporting to the one flow-tools server?).
> > 
> > The routers will be in geographically dispersed locations.
> 
> Since flow PDUs are UDP, we tend to put flow collectors 
> geographically near the routers, then ship the collected files 
> to a processing box via a reliable transport (i.e., TCP). At the 
> collector box, there's a /data/router_ directory for each router, 
> and our analysis scripts do things like flow-cat /data/router*/*$date* 
> to scoop up all the flow data; it's easier to recombine than to sift out.

What do you do with duplicate flows?

Example: traffic destined for client xxx.xxx.xxx.1 comes in via Router B
(Internet feed) and is then routed to the client, who is connected to
Router A. Both Router A and Router B will record a flow for this traffic,
so there is a chance of double billing.

As we have multiple upstream connections (all on different routers),
traffic destined for a given destination can potentially come in via any
of these upstreams (due to BGP). How do we ensure that we sift out these
duplicates?

Regards,
Michael

> 
> The flow-collectors themselves can be very modest servers; 
> flow-capture doesn't take a lot of CPU, at least with the 
> routers we have.
> 
> Mike
> 
_______________________________________________
Flow-tools mailing list
[EMAIL PROTECTED]
http://mailman.splintered.net/mailman/listinfo/flow-tools
