Re: [tcpdump-workers] newbie question

2004-12-30 Thread Navis
Thanks a lot for the explanation; it helps me a lot. And I think I must
now re-read the OS textbook :)

I hope you can answer my questions in the next few days.

Best regards,
Navis Faisal

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Guy Harris
Sent: Friday, December 31, 2004 2:29 AM
To: tcpdump-workers@lists.tcpdump.org
Subject: Re: [tcpdump-workers] newbie question

Navis wrote:

 You mentioned a buffer; could you explain what this buffer is?

Packet capturing with libpcap uses a mechanism in the OS (or, in the
case of Windows and WinPcap, a driver that comes with WinPcap, runs in
the kernel, and uses a mechanism in the OS).

Different mechanisms are used on different OSes, because different OSes 
provide different mechanisms, but most of them (probably all of them) 
put received packets into a memory buffer of some sort in the kernel, 
and support reading from that buffer by user-mode code.

Libpcap is one of the pieces of user-mode code that would read from that
buffer.
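
To make the user-mode side concrete, here is a minimal sketch of a
libpcap consumer (just an illustration, not libpcap's or tcpdump's own
code; the device name "eth0" and the numbers are placeholders):

  #include <pcap.h>
  #include <stdio.h>

  /* Called once per captured packet. */
  static void handler(u_char *user, const struct pcap_pkthdr *h,
                      const u_char *bytes)
  {
      (void)user; (void)bytes;
      printf("got %u bytes (captured %u)\n", h->len, h->caplen);
  }

  int main(void)
  {
      char errbuf[PCAP_ERRBUF_SIZE];

      /* snaplen 65535, promiscuous mode, to_ms = 1000; how to_ms is
         honoured is exactly the OS-dependent behaviour described below. */
      pcap_t *p = pcap_open_live("eth0", 65535, 1, 1000, errbuf);
      if (p == NULL) {
          fprintf(stderr, "pcap_open_live: %s\n", errbuf);
          return 1;
      }

      /* On BPF-style systems one pcap_dispatch() call typically
         corresponds to one read() of a kernel buffer; other systems
         differ, as described below. */
      while (pcap_dispatch(p, -1, handler, NULL) >= 0)
          ;

      pcap_close(p);
      return 0;
  }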

 Does pcap read from this buffer as soon as a packet arrives, or does
 it read the buffer after the buffer is full? Does this have any
 correlation with the to_ms parameter of pcap_open_live()?

It depends on the OS.

Libpcap does, depending on the OS, a read() or a recvfrom() or a 
getmsg() or whatever call is done on Windows to read packets from the 
buffer.  Whether that call returns as soon as a packet arrives, or when 
the buffer fills up or a timeout expires, depends on the OS.

In BSD systems ({Free,Open,Net,Dragonfly}BSD and OS X), the read()
returns when the buffer fills or the timeout expires; the timer starts
as soon as the read is done, and the read will return even if no
packets have arrived.  There are actually *two* buffers; when one
buffer fills or the timeout expires, that buffer is made available to
read, and the other buffer, if it's been emptied by the user-mode code
reading all the packets in it, is made available to fill with packets.
The default buffer size can, on at least some of those systems, be
changed with sysctl.
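
For illustration, here is a rough sketch (not libpcap's own code; the
/dev/bpf0 path and the sizes are made up for the example) of the BPF
knobs involved - the buffer size and the read timeout are set with
ioctls on the BPF device, before it's bound to an interface:

  #include <sys/types.h>
  #include <sys/time.h>
  #include <sys/ioctl.h>
  #include <net/bpf.h>
  #include <fcntl.h>

  int open_bpf(void)
  {
      int fd = open("/dev/bpf0", O_RDONLY);
      if (fd < 0)
          return -1;

      /* Ask for a larger store/hold buffer pair; this has to be done
         before the device is bound to an interface with BIOCSETIF. */
      u_int buflen = 512 * 1024;
      ioctl(fd, BIOCSBLEN, &buflen);

      /* Read timeout: the read() returns when a buffer fills or this
         timer (started when the read is issued) expires. */
      struct timeval to = { 1, 0 };        /* 1 second */
      ioctl(fd, BIOCSRTIMEOUT, &to);

      /* "Immediate mode" would make each read return as soon as a
         packet arrives instead of waiting for the buffer to fill;
         libpcap turns this on for AIX's BPF, as mentioned below. */
      /* u_int on = 1; ioctl(fd, BIOCIMMEDIATE, &on); */

      return fd;
  }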

In Solaris, the getmsg() returns when the number of bytes of packet
information specified by libpcap (64KB) has been received or the
timeout expires; the timer starts as soon as the first packet is seen,
so the getmsg() will *NOT* return unless at least one packet has been
received.  Packet chunks are buffered in STREAMS buffers and stored at
the stream head - the amount of data that can be buffered at the
stream head is controlled by the OS and the STREAMS modules, so I
don't know how it corresponds to the chunk size, although it's
probably larger than the chunk size.
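
As a very rough sketch of where the chunk size and the timeout live on
the kernel side (this is from memory, not libpcap's code, so treat the
details as assumptions): the "bufmod" STREAMS module is pushed onto
the DLPI stream and configured with strioctls:

  #include <sys/types.h>
  #include <sys/time.h>
  #include <stropts.h>       /* I_PUSH, I_STR, struct strioctl */
  #include <sys/bufmod.h>    /* SBIOCSCHUNK, SBIOCSTIME */
  #include <unistd.h>

  /* "fd" is assumed to be an already-opened DLPI device. */
  static int set_bufmod(int fd, u_int chunksize, int to_ms)
  {
      struct strioctl si;
      struct timeval tv;

      /* Push the buffering module onto the stream. */
      if (ioctl(fd, I_PUSH, "bufmod") < 0)
          return -1;

      /* How many bytes bufmod accumulates before passing a chunk up. */
      si.ic_cmd = SBIOCSCHUNK;
      si.ic_timout = -1;
      si.ic_len = sizeof(chunksize);
      si.ic_dp = (char *)&chunksize;
      if (ioctl(fd, I_STR, &si) < 0)
          return -1;

      /* Timeout after which a partial chunk is delivered anyway. */
      tv.tv_sec = to_ms / 1000;
      tv.tv_usec = (to_ms % 1000) * 1000;
      si.ic_cmd = SBIOCSTIME;
      si.ic_len = sizeof(tv);
      si.ic_dp = (char *)&tv;
      return ioctl(fd, I_STR, &si);
  }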

In HP-UX, the getmsg() probably returns as soon as a packet arrives,
and there is no timeout; the getmsg() will *NOT*, as far as I know,
return unless at least one packet has been received.  Packets are
buffered in STREAMS buffers and stored at the stream head - I don't
know how much data can be stored at the stream head.

In Linux, the recvmsg() returns as soon as a packet arrives, and there
is no timeout; the recvmsg() will *NOT* return until at least one
packet has been received.  Packets are buffered in skbuffs and stored
in the buffer for the socket; the amount of data that can be buffered
there is whatever the default is for a PF_PACKET socket (or, on 2.0
kernels, a PF_INET socket), but could be changed, I think, with a
setsockopt() call setting SO_RCVBUF - libpcap doesn't change it.
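
As an illustration (this is not something libpcap does itself; the
1 MB figure is arbitrary), the socket receive buffer on a PF_PACKET
socket can be enlarged with SO_RCVBUF:

  #include <sys/socket.h>
  #include <netinet/in.h>      /* htons() */
  #include <linux/if_ether.h>  /* ETH_P_ALL */
  #include <stdio.h>

  int main(void)
  {
      /* Needs root privileges to open a packet socket. */
      int fd = socket(PF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
      if (fd < 0) {
          perror("socket");
          return 1;
      }

      int rcvbuf = 1024 * 1024;    /* ask the kernel for ~1 MB */
      if (setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &rcvbuf, sizeof(rcvbuf)) < 0)
          perror("setsockopt(SO_RCVBUF)");

      /* ... a recvfrom()/recvmsg() loop reading packets would go here ... */
      return 0;
  }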

In AIX, if BPF is used (the current default in libpcap), the read()
returns, I think, as soon as a packet arrives - the buffering is
similar to what is done in BSD, but, due to problems in AIX's BPF
implementation, libpcap turns on immediate mode so that BPF doesn't
wait until the buffer is full.  I don't know how big the buffer is.
If DLPI is used, it probably behaves similarly to HP-UX.

In SunOS 3.x, I don't know how the buffering works; in 4.x, it's done
using STREAMS, but I don't know how much data can be queued up at the
stream head.  There is a timeout in both cases, but I don't know
whether it starts when the read() is done or when the first packet
arrives; I suspect it starts when the first packet arrives, by analogy
with what's done in 5.x.

In Digital UNIX, I think the buffering is similar to what's done in
BPF; I don't know what the buffer sizes are.

In IRIX, the read() returns as soon as a packet arrives, and there is
no timeout; the read() will *NOT* return until at least one packet has
been received.  A socket is used, so the buffering is probably somewhat
similar to Linux.

On Windows with WinPcap, I think the buffering is somewhat similar to 
what's done in BPF; see documents linked to by

http://winpcap.polito.it/docs/default.htm

for details.  The buffer size can be set with a WinPcap-specific API.
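
For example, a sketch using the WinPcap-specific pcap_setbuff() call
(the device name is a placeholder and the 4 MB figure is arbitrary):

  #include <pcap.h>

  pcap_t *open_with_big_buffer(const char *device, char *errbuf)
  {
      pcap_t *p = pcap_open_live(device, 65535, 1, 1000, errbuf);
      if (p == NULL)
          return NULL;

      /* WinPcap extension: enlarge the kernel buffer to 4 MB. */
      if (pcap_setbuff(p, 4 * 1024 * 1024) != 0) {
          pcap_close(p);
          return NULL;
      }
      return p;
  }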

 And do the packets always come in the right sequence,

I don't know whether any

Re: [tcpdump-workers] newbie question

2004-12-29 Thread Navis
Hi, sorry if I'm asking again, since I'm rather confused by the
explanation.

You mentioned a buffer; could you explain what this buffer is?

Does pcap read from this buffer as soon as a packet arrives, or does
it read the buffer after the buffer is full? Does this have any
correlation with the to_ms parameter of pcap_open_live()?

And do the packets always come in the right sequence, or must we
arrange incoming packets in the right order if we want to read the
whole connection session?

Thank you in advance


-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Guy Harris
Sent: Thursday, December 30, 2004 3:42 AM
To: tcpdump-workers@lists.tcpdump.org
Subject: Re: [tcpdump-workers] newbie question

It'd be caused by the sniffer not being able to read packets fast
enough to keep whatever buffer the OS uses in the capture mechanism
from filling up, so that packets arrive when there's no room left in
the buffer and get dropped.
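
One way to see whether that is happening (a sketch, not part of the
original answer) is to ask libpcap for its statistics; ps_drop counts
packets the kernel dropped for lack of buffer room, on the platforms
that maintain that counter:

  #include <pcap.h>
  #include <stdio.h>

  /* Report kernel drop counts for an already-open pcap_t. */
  void report_drops(pcap_t *p)
  {
      struct pcap_stat st;

      if (pcap_stats(p, &st) == 0)
          printf("received %u, dropped %u (no buffer room)\n",
                 st.ps_recv, st.ps_drop);
      else
          fprintf(stderr, "pcap_stats: %s\n", pcap_geterr(p));
  }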



-
This is the tcpdump-workers list.
Visit https://lists.sandelman.ca/ to unsubscribe.