> > Again, editcap only gives a single segment.  It does not break up
> > the file into many arbitrary-sized or -length chunks.  See 'man
> > split' for a text version of what I'm talking about.
> 
> 1, Loop over the huge capture with editcap

That has been suggested, but it is quite inefficient: pcap files have no index, so each 
editcap pass has to read the capture from the beginning just to reach the records it is 
extracting.
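
To make that concrete, here is the suggested loop, sketched as a small C driver (a shell 
loop would do the same job).  The chunk size, total record count and file names are all 
invented for the sketch:

    /*
     * Split big.cap by looping over editcap, one record range per
     * pass (editcap's -r keeps the listed records).  Every pass
     * re-reads big.cap from the start, hence the inefficiency.
     */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        const long chunk = 100000;    /* packets per output file  */
        const long total = 5000000;   /* guess at capture size    */
        char cmd[512];
        long start;

        for (start = 1; start <= total; start += chunk) {
            snprintf(cmd, sizeof cmd,
                     "editcap -r big.cap chunk-%07ld.cap %ld-%ld",
                     start, start, start + chunk - 1);
            if (system(cmd) != 0) {
                fprintf(stderr, "editcap failed at record %ld\n", start);
                return 1;
            }
        }
        return 0;
    }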

> 2, Don't create such huge captures.

I won't need to if I can raise the number of files in Ethereal's ring buffer.  :-)

> 3, Develop a new version of editcap that can do the kind of split
> you're looking for.

Ah, the perennial answer in the open source world.  Since the maintainers of editcap 
actually answer their e-mail, I may just hork the source and re-write editcap as 
'splitcap', but I'm rusty enough on my C that this could take some time.  Editcap 
itself could be modified, but IMHO this should really be a separate utility.

But once finished, a 'splitcap' program would be much more efficient than looping over 
editcap, because splitcap would only need to read the input file once.
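
Here is a minimal sketch of that single-pass idea, written against libpcap.  To be 
clear, none of this is editcap code; the name, the fixed packets-per-file count and the 
output naming scheme are all just assumptions:

    /*
     * splitcap sketch: read the input capture once, opening a new
     * dump file every PKTS_PER_FILE packets.
     */
    #include <pcap.h>
    #include <stdio.h>

    #define PKTS_PER_FILE 100000   /* arbitrary chunk size */

    int main(int argc, char **argv)
    {
        char errbuf[PCAP_ERRBUF_SIZE], name[256];
        pcap_t *in;
        pcap_dumper_t *out = NULL;
        struct pcap_pkthdr *hdr;
        const u_char *data;
        long pkts = 0, file = 0;

        if (argc != 3) {
            fprintf(stderr, "usage: splitcap <infile> <outprefix>\n");
            return 1;
        }
        if ((in = pcap_open_offline(argv[1], errbuf)) == NULL) {
            fprintf(stderr, "splitcap: %s\n", errbuf);
            return 1;
        }
        while (pcap_next_ex(in, &hdr, &data) == 1) {
            if (pkts % PKTS_PER_FILE == 0) {
                if (out != NULL)
                    pcap_dump_close(out);
                snprintf(name, sizeof name, "%s-%05ld.cap",
                         argv[2], file++);
                if ((out = pcap_dump_open(in, name)) == NULL) {
                    fprintf(stderr, "splitcap: %s\n", pcap_geterr(in));
                    return 1;
                }
            }
            pcap_dump((u_char *)out, hdr, data);
            pkts++;
        }
        if (out != NULL)
            pcap_dump_close(out);
        pcap_close(in);
        return 0;
    }

Run as 'splitcap big.cap chunk' and it writes chunk-00000.cap, chunk-00001.cap and so 
on, in a single sequential read of big.cap.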

> Stopping and restarting the capture is an efficient method to
> control the amount of state buildup.

This was addressed later in the thread.  Since I am not using any filtering at all 
during the capture process - remember that the goal is to grab everything - there 
isn't a lot of state to worry about.

I have had a packet capture going for about 5 days now on a link that averages 52Mbit/s 
(peak 82Mbit/s), and the process is only using 8MB of RAM.

If the process were to grow over time, I'm thinking that restarting it once per month 
wouldn't create too big a problem.  I could simply stop the capture, move the files, 
then restart it, taking care to age out the old files in some sane way.
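
For the record, "some sane way" could be as dumb as a nightly sweep that unlinks 
anything older than a cutoff; the directory and the 30-day figure below are pulled out 
of the air:

    /* Age out capture files older than 30 days. */
    #include <dirent.h>
    #include <limits.h>
    #include <stdio.h>
    #include <sys/stat.h>
    #include <time.h>
    #include <unistd.h>

    int main(void)
    {
        const char *dir = "/captures";   /* assumed capture path */
        const time_t cutoff = time(NULL) - 30*24*60*60;
        char path[PATH_MAX];
        struct dirent *de;
        struct stat st;
        DIR *d;

        if ((d = opendir(dir)) == NULL) {
            perror(dir);
            return 1;
        }
        while ((de = readdir(d)) != NULL) {
            snprintf(path, sizeof path, "%s/%s", dir, de->d_name);
            if (stat(path, &st) == 0 && S_ISREG(st.st_mode) &&
                st.st_mtime < cutoff) {
                printf("aging out %s\n", path);
                unlink(path);
            }
        }
        closedir(d);
        return 0;
    }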

> '&' means, run command in background.  Thus, the script will NOT
> wait until the previous capture is compressed before it starts a
> new one.  There will only be a very short gap between each capture
> where a few packets have been lost.

True, but there's still the turnaround in launching the next capture session.  The 
time to spawn a new shell and a new instance of tethereal, set the NIC to promiscuous 
mode and actually begin the new capture session is not even close to zero.
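
If anyone cares to put a number on that turnaround, a crude timer like this gives an 
upper bound; it measures spawn-to-first-packet for one tethereal run (the interface 
name is an assumption, and it needs the usual capture privileges):

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/time.h>

    int main(void)
    {
        struct timeval t0, t1;
        double secs;

        gettimeofday(&t0, NULL);
        /* spawn a shell, start tethereal, grab one packet, exit */
        if (system("tethereal -i eth0 -c 1 -w /dev/null") != 0) {
            fprintf(stderr, "tethereal failed\n");
            return 1;
        }
        gettimeofday(&t1, NULL);
        secs = (t1.tv_sec - t0.tv_sec)
             + (t1.tv_usec - t0.tv_usec) / 1e6;
        printf("spawn-to-first-packet: %.3f seconds\n", secs);
        return 0;
    }

On a busy link the wait for that first packet is negligible, so most of what this 
prints is pure startup overhead.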

> You may want to compress the captures even for 75Mbit/s links,
> since the uncompressed data from such a link is ~8MByte/s ->
> ~28GByte/hour -> ~690GByte/day -> ~2TByte for the capture to cover
> just over 3 days.

That's not a problem.  We've found through testing that at peak times we're getting 
about 10-12MB/s actually written to disk, and we're ordering a 1.7TB (that's 1700GB) 
disk array to hold the data.  At our average of 52Mbit/s that works out to roughly 
560GB/day, so the array holds about three days of raw traffic.  I think we'll have 
room.  :-)  ReiserFS is good...

> Probably an arbitrary limit that the developer of the ringbuffer
> function picked as good enough for most situations.
>
> You have the source, use it.

I may, but I brought the topic up here because patching every single released version 
of Ethereal means endless work to solve a simple problem.  Even multiplying the maximum 
number of files in the ring buffer by 10 would be a tremendous boon, if the change were 
made in the stock code.
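
For what it's worth, the whole patch is a one-line constant bump, something like the 
line below.  I'm guessing at the identifier; don't take it as a quote from the actual 
ring buffer source:

    /* hypothetical -- the real macro name and value in Ethereal's
     * ring buffer code may differ */
    #define RINGBUFFER_MAX_NUM_FILES 1024   /* raised from the stock limit */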

--J
