Hello Everyone,

On Thursday 27 March 2003 07:03, John Hawkinson wrote:
> David Young <[EMAIL PROTECTED]> wrote on Wed, 26 Mar 2003
>
> at 12:57:38 -0600 in <[EMAIL PROTECTED]>:
> > No command-line option is necessary.  Use a pipe: tcpdump -w - | gzip.
>
> As discussed on this list earlier this year,
>
>   tcpdump -w - | ( gzip > foo&)
>
> is necessary to allow ^C-ing of tcpdump without gzip dying, in many
> shells.

Thanks for the useful tips.

I think we are saturating the pipe (on Linux 2.4, that is) while capturing 
>40000 packets/sec of 100-byte packets across 5 network cards (Broadcom gigE 
cards; they are very nice, BTW), and tcpdump reports packet loss. Our 
requirement is >100000 packets/sec per interface, with packet sizes of 
100-1500 bytes.

I am using Python 2.2 to read the stdout of tcpdump (I have tried reading 
anywhere from 8192 to 16000000 bytes in a single stdin.read() call, writing 
the data out with gzip.write() at compression level 6). Since we need to 
read packets continuously, 24 hours a day, 7 days a week, without pausing 
for a moment or dropping even a single packet, I couldn't use the gzip 
utility and start and stop it regularly to achieve file rotation.
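For what it's worth, the reader loop described above can be sketched roughly 
as below. This is only a sketch in Python 3 syntax, not the actual script; 
the function and file names (relay, capture-NNNNNN.gz) and the rotation 
threshold are hypothetical, and in the real pipeline src would be 
sys.stdin.buffer behind `tcpdump -w - | python reader.py`:

```python
# Sketch: pull raw pcap bytes from a stream (tcpdump -w - on stdin in the
# real setup) and write them through gzip, rotating output files by size
# without ever pausing the read. Names/thresholds here are hypothetical.
import gzip
import io
import os
import tempfile

def relay(src, out_dir, chunk_size=65536, rotate_bytes=10 * 1024 * 1024):
    """Copy src into numbered .gz files under out_dir, rotating by size."""
    seq = 0
    written = 0  # uncompressed bytes written to the current file
    out = gzip.open(os.path.join(out_dir, "capture-%06d.gz" % seq),
                    "wb", compresslevel=6)
    try:
        while True:
            chunk = src.read(chunk_size)
            if not chunk:               # EOF: tcpdump exited
                break
            out.write(chunk)
            written += len(chunk)
            if written >= rotate_bytes:  # rotate without stopping the read
                out.close()
                seq += 1
                written = 0
                out = gzip.open(os.path.join(out_dir, "capture-%06d.gz" % seq),
                                "wb", compresslevel=6)
    finally:
        out.close()
    return seq + 1  # number of files produced

# Stand-in for sys.stdin.buffer, just to exercise the loop:
tmp = tempfile.mkdtemp()
n = relay(io.BytesIO(b"x" * 300000), tmp, chunk_size=8192, rotate_bytes=100000)
```

Note the rotation happens inside the same loop that reads, so there is no 
window where the pipe goes unread; whether that is fast enough at these 
packet rates is exactly the open question.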
 
There are no packet drops from the kernel device driver's point of view.

It's highly desirable to write the tcpdump output through gzip/bzip2, as it 
greatly reduces the I/O requirements (it needs a little bit of CPU time, but 
that's fine).

I believe that if tcpdump handled the gzip compression itself, there would 
be no repeated copying of data across pipes, etc., which would help ensure 
that we lose no packets. Please feel free to correct me if I am wrong.

Thanks for your help.
-- 
Hari
[EMAIL PROTECTED]

-
This is the TCPDUMP workers list. It is archived at
http://www.tcpdump.org/lists/workers/index.html
To unsubscribe use mailto:[EMAIL PROTECTED]
