I'm not really sure about CPU usage; I don't notice it, and it's running on a
fairly generic AMD64 box. I capture on a 5-minute interval since that is what
I used in my previous setup.
I am generating about 10 GB/month in data and currently haven't deleted
anything since I started in May. It looks like you can get about
4.5:1 compression on the data using bzip2, though, so I would be looking
at a little over 2 GB/month of compressed data.
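As a rough sanity check (the 10 GB and 4.5:1 figures are from above; the file path is just an assumption about where the nfcapd files might live, not my actual layout), the math works out like this:

```shell
# Rough estimate of compressed flow volume, assuming ~10 GB/month raw
# and the ~4.5:1 bzip2 ratio mentioned above.
monthly_mb=10240                         # ~10 GB/month of raw nfcapd files
compressed_mb=$((monthly_mb * 10 / 45))  # 4.5:1 ratio, integer math
echo "roughly ${compressed_mb} MB/month compressed"

# Hypothetical cleanup pass: bzip2 capture files older than a day.
# (/var/netflow and the nfcapd.2* pattern are assumptions.)
# find /var/netflow -name 'nfcapd.2*' -mtime +1 -exec bzip2 {} \;
```

That comes out to about 2275 MB, which matches the "a little over 2 GB" figure.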
My throughput on average is running about 15 Mbps, and about 10 Mbps of that
is torrents that I seed for various IPTV programs and Linux distributions.
With the current price of hard drives I don't see the need for getting
rid of the flow data from a technical standpoint. I'm not sure if I
want to keep that kind of data on my customers that long though. I have
just been too lazy to get rid of it at this point. Maybe I will after I
spend some more time thinking about what I actually want to save, like port
statistics and possibly generic destination statistics (e.g., they did x%
of their traffic to y number of sites).
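If I get around to it, nfdump's built-in Top-N statistics would probably cover most of that. This is only a sketch (the /var/netflow directory is an assumption), with a small awk pipeline on fake data standing in for the kind of per-key aggregation my scripts do:

```shell
# Port and destination statistics straight from nfdump (sketch only;
# /var/netflow is an assumed capture directory):
#   nfdump -R /var/netflow -s port/bytes -n 20     # top 20 ports by bytes
#   nfdump -R /var/netflow -s dstip/bytes -n 20    # top 20 destinations

# The same per-key aggregation in awk, shown on made-up "ip bytes" lines:
printf '10.0.0.1 500\n10.0.0.2 300\n10.0.0.1 200\n' |
  awk '{b[$1] += $2} END {for (ip in b) print ip, b[ip]}' | sort
```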
Sam Tetherow
Sandhills Wireless
David E. Smith wrote:
Sam Tetherow wrote:
I use nfcapd (part of nfdump) to capture the data, and have been using a
few of my own scripts to process the data. Not doing anything fancy
right now, just extracting data by IP address so I can graph user usage.
Ooh, that tickles my shell scripting fancy. ;)
How much disk space and CPU is that using for you, and how much
throughput are you tracking flows for? I know that NetFlow only has to
keep some basic information about each packet, not the whole packet itself,
but even headers on my 20 Mbps (peak) network could add up. (Also, how
far back do you keep flow data? Obviously, if you're only keeping a
week's worth, that's not as resource-intensive as a month, and so on.)
David Smith
MVN.net
--
WISPA Wireless List: [email protected]
Subscribe/Unsubscribe:
http://lists.wispa.org/mailman/listinfo/wireless
Archives: http://lists.wispa.org/pipermail/wireless/