Re: [Wireshark-users] Filtering a very large capture file

2007-01-29 Thread Stuart MacDonald
From: On Behalf Of ARAMBULO, Norman R.
> So you have captured a large amount of data, 16 GB; is it from a large network?
No, just a saturated link between two machines, over the course of 24 hours, give or take.
> What is the average xx Mb/sec?
No idea. A rough guess puts it at about 50
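(For a rough sense of scale only: 16 GB of captured data spread evenly over 24 hours works out to about 16 x 8 / 86400 ≈ 0.0015 Gbit/s, i.e. roughly 1.5 Mbit/s of captured traffic on average; actual traffic on a saturated link would of course arrive in much higher bursts.)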

Re: [Wireshark-users] Filtering a very large capture file

2007-01-26 Thread Stuart MacDonald
From: On Behalf Of Jeff Morriss
> What about:
> - split the file into 1000 smaller files
> - use a (decent) shell with tshark to process those files
The latter could be achieved in a Korn-style shell with something like: (for f in *.eth do tshark -r $f -w - -R
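[The quoted loop is cut off in the archive. A complete loop along those lines might look like the sketch below, assuming each chunk is filtered into its own output file and the results are then combined with mergecap; the display filter "tcp.port == 2000" and the file names are placeholders, not the ones from the original thread.]

    # Filter each split chunk separately, writing matching packets to a
    # per-chunk output file, then merge all the filtered chunks into one capture.
    for f in *.eth
    do
        tshark -r "$f" -R "tcp.port == 2000" -w "filtered-$f"
    done
    mergecap -w filtered-all.pcap filtered-*.eth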

Re: [Wireshark-users] Filtering a very large capture file

2007-01-26 Thread Stuart MacDonald
From: Stuart MacDonald [mailto:[EMAIL PROTECTED]]
> I don't think the documentation mentions '-' is supported for -w.
Cancel that, I just missed it last night. It was late.
..Stu
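[For reference, writing to '-' sends the filtered packets to standard output, which lets tshark feed another program directly; a minimal illustration, with a made-up filter and file names:]

    # Filter a chunk and pipe the matching packets straight into gzip,
    # avoiding an intermediate temporary file.
    tshark -r chunk-001.eth -R "ip.addr == 192.168.1.1" -w - | gzip > filtered-001.pcap.gz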

Re: [Wireshark-users] Filtering a very large capture file

2007-01-26 Thread Stuart MacDonald
From: On Behalf Of Guy Harris
On Jan 25, 2007, at 8:23 PM, Stuart MacDonald wrote:
> I've read the man pages on the tools that come with Wireshark. I was hoping to find a tool that opens a capture, applies a filter, and outputs matching packets to a new file.
Here's a sample run
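[The sample run itself is cut off in the archive preview. A run of that kind, using tshark as the open/filter/write tool, would presumably look something like the following; the filter expression and file names are illustrative only.]

    # Read an existing capture, keep only packets matching the filter,
    # and write them to a new, much smaller capture file.
    tshark -r big-capture.pcap -R "tcp.port == 2000 && ip.addr == 10.0.0.5" -w matching.pcap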

Re: [Wireshark-users] Filtering a very large capture file

2007-01-26 Thread Stuart MacDonald
From: On Behalf Of Small, James
> I wonder if ngrep would work for you: http://ngrep.sourceforge.net/
Nifty! I bet it would, but the tcpdump solution mentioned earlier has worked for me. Thanks though!
..Stu
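[The "tcpdump solution" referred to is, presumably, reading the capture back through tcpdump with a BPF filter, along these lines; the host/port expression is just an example.]

    # tcpdump can re-read its own capture file, apply a capture (BPF) filter,
    # and write only the matching packets to a new file.
    tcpdump -r huge-capture.pcap -w filtered.pcap 'host 10.0.0.5 and tcp port 2000'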

[Wireshark-users] Filtering a very large capture file

2007-01-25 Thread Stuart MacDonald
I have a very large capture file from tcpdump, 16 GB. Wireshark crashes trying to open it, a known issue. For some of my investigation I used editcap to split it into smaller captures, and that worked okay, but there were 1000 of them and each is still slow to load/filter/etc.; the size ranges
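[For context, editcap can split a capture by packet count; a minimal sketch of that step, with an arbitrary chunk size and made-up file names:]

    # Split the large capture into files of at most 1,000,000 packets each;
    # editcap numbers the output files automatically.
    editcap -c 1000000 huge-capture.pcap chunk.eth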