On 2007-10-04 08:43, Steve Bertrand <[EMAIL PROTECTED]> wrote:
> Hi all,
> I've got a 28GB tcpdump capture file that I need to (hopefully) break
> down into a series of 100,000k lines or so, hopefully without the need
> of reading the entire file all at once.
> 
> I need to run a few Perl processes on the data in the file, but AFAICT,
> doing so on the entire original file is asking for trouble.
> 
> Is there any way to accomplish this, preferably with the ability to
> incrementally name each newly created file?

Depending on whether you only need specific parts of the dump in the
split output, you may have luck with something like:

        tcpdump -r input.pcap -w output.pcap 'filter rules here'

This reads the file sequentially, which can be slower than having it all
in memory, but with a file this size it is probably a good idea :)
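For the incremental naming part, one sketch (assuming you want the
decoded text lines rather than raw pcap): pipe tcpdump's text output
through split(1), which does the chunking and numbering for you. Note
the -d flag for numeric suffixes is GNU split; the base FreeBSD split(1)
uses alphabetic suffixes (chunk_aa, chunk_ab, ...) instead.

```shell
# Demo on generated stand-in data; for the real capture you would feed
# split from "tcpdump -n -r input.pcap" on stdin instead of seq.
# -l sets lines per chunk; -d (GNU split) gives numeric suffixes so the
# pieces come out incrementally named.  The pipe keeps memory use flat,
# no matter how big the input file is.
seq 250 > lines.txt                # stand-in for the text dump
split -l 100 -d lines.txt chunk_   # writes chunk_00 chunk_01 chunk_02
wc -l chunk_*                      # 100 + 100 + 50 lines
```

Against the real file that would look like
"tcpdump -n -r input.pcap | split -l 100000 -d - chunk_", and each
chunk_NN can then be fed to the Perl processes one at a time.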

_______________________________________________
freebsd-questions@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-questions
To unsubscribe, send any mail to "[EMAIL PROTECTED]"