Re: [tcpdump-workers] Where does libpcap get the incoming network data? From the driver?

2011-03-07 Thread Fabian Schneider
Hi, 

that depends on the OS.

> 1. Does libpcap obtain incoming packet data from the nic's driver or from 
> somewhere else?
> 2. Does libpcap obtain outgoing packet data from the linux IP layer or from 
> somewhere else?

Actually, it is in between. What happens is that libpcap opens a PF_PACKET
socket, which registers itself as a consumer of incoming packets at the same
level as, e.g., the IP stack. Basically there is a centralized queue per NIC
that lives outside the driver context and keeps track of how many consumers
each packet still needs to be delivered to.
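
To illustrate the mechanism (not libpcap's actual source, just a minimal
sketch of what it does under the hood on Linux; binding to a specific
interface and most error handling are omitted):

/* Minimal sketch of the PF_PACKET mechanism libpcap uses on Linux
 * (illustration only, not libpcap's own code). Needs root. */
#include <stdio.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <linux/if_ether.h>   /* ETH_P_ALL */
#include <arpa/inet.h>        /* htons() */

int main(void)
{
    /* Register as a consumer of all frames, at the same level as the IP stack. */
    int fd = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
    if (fd < 0) {
        perror("socket(AF_PACKET)");
        return 1;
    }

    unsigned char frame[65536];
    ssize_t n = recvfrom(fd, frame, sizeof(frame), 0, NULL, NULL);
    if (n >= 0)
        printf("got a frame of %zd bytes\n", n);
    return 0;
}

Packets seen by such a socket are still delivered to the IP stack as well;
the capture path only gets its own reference to them.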

For more information you can check Section 2 of my master's thesis [1] or a
Linux Journal article [2]. Note that by now there is also a memory-mapped
version of libpcap that avoids copying the packets to user space, but that
does not change where the packets are obtained from.

best
Fabian

[1] http://www.net.t-labs.tu-berlin.de/~fabian/papers/da.pdf
[2] http://www.linuxjournal.com/article/4852



Re: [tcpdump-workers] Best OS / Distribution for gigabit capture?

2011-02-07 Thread Fabian Schneider
Hi,

Regarding the OS: we did some testing on this about five years ago. Back then
we found that FreeBSD performed better than Linux. However, improvements have
since been proposed for both Linux (memory mapping, and Luca Deri's work) and
FreeBSD (zero-copy BPF, and Alexandre Fiveg's work). Search for these to get
the details.

Still, experience from operating large-scale packet capturing systems shows
that the biggest challenge usually is having a disk system that is fast
enough to write the stream of packets to disk. You might want to check this
first (e.g. you can run Bonnie++ to see how fast your disk system is).
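
If you do not have Bonnie++ at hand, even a crude timed sequential write
gives a first impression. This is only a rough sketch (file name and sizes
are arbitrary; a real benchmark such as Bonnie++ measures much more):

/* Crude sequential-write throughput check (illustration only). */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <time.h>

int main(void)
{
    const size_t chunk = 1 << 20;   /* 1 MiB per write() */
    const size_t total = 1024;      /* number of chunks -> 1 GiB overall */
    char *buf = malloc(chunk);
    if (buf == NULL)
        return 1;
    memset(buf, 0xA5, chunk);

    int fd = open("diskcheck.bin", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) { perror("open"); return 1; }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (size_t i = 0; i < total; i++)
        if (write(fd, buf, chunk) != (ssize_t)chunk) { perror("write"); return 1; }
    fsync(fd);   /* make sure the data really went to the disk */
    clock_gettime(CLOCK_MONOTONIC, &t1);
    close(fd);

    double sec = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("wrote %zu MiB in %.1f s -> %.1f MiB/s\n", total, sec, total / sec);
    free(buf);
    return 0;
}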

 Best
Fabian

Sent from iPhone -> might be shorter than usual

On 06.02.2011 at 08:20, "M. V." wrote:

> Hi,
> 
> As I mentioned in my previous mail (with the title "HUGE packet-drop"), I'm
> having problems trying to dump gigabit traffic to hard disk with tcpdump on
> Debian 5.0. I tried almost everything but had no success, so I decided to
> start over:
> 
> *) If anyone has experience with successful gigabit capture, what
> combination of "Operating-System / Distribution / Kernel Version / libpcap
> version / ..." do you suggest for maximum zero-packet-loss capture?
> 
> thank you.
> 
> 
> 


Re: [tcpdump-workers] Multiple pcap filters on interface

2008-10-07 Thread Fabian Schneider

Hi Jim,

Since the limitation really is in the kernel, and all your approaches turn
out not to work as you expect, you might want to consider other
possibilities. For example, it might be fairly easy to do something like you
suggest via an in-kernel firewalling solution (although I am not sure whether
those would actively send packets), or it might be worth taking a look at
"click" [1] and writing a small click program/module that can run in the
kernel, at least on Linux boxes.

Since you mentioned an enterprise solution, you might need to deal with a
huge set of connections, so here is some advice regarding the performance of
the proposed approaches:

1) As you already mentioned, this can easily become a bottleneck on commodity
   hardware in Gigabit or faster environments. It might be worth considering
   specialised hardware (e.g. I know that an Endace DAG card would be capable
   of doing what you want, although not out-of-the-box; you would need a
   custom solution, which is possible with Endace).

2) I would not necessarily say that the bpf filter expression is the
   limiting factor, but again and again we experience short periods of no
   packets being delivered to the pcap application, or even drops, when
   changing the bpf expression (a code sketch of such a filter swap follows
   after this list). Maybe this would not be a show stopper for your setting
   if you do not care about a few seconds of out-of-order behaviour.

3) I would definitely not advise this solution, as performance-wise it can
   be more harmful than solution 1. The problem is that, due to the increased
   number of user-space/kernel context switches, performance might drop with
   too many capturing applications running in parallel. If you want to use
   this approach I would recommend Linux, as (Free)BSD sets up a kernel
   buffer for every capturing application, whereas Linux has one big buffer
   for all packets and uses reference counting to delete packets. (At least
   for the task at hand, Linux looks more suitable to me.)
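
To make the remark in 2) concrete: swapping the kernel filter on a live
handle is just a pcap_compile()/pcap_setfilter() pair, and the short delivery
gap happens around exactly this call. A minimal sketch (the handle,
expression and netmask are assumed to be supplied by the caller):

/* Sketch: replace the BPF filter on an already-open pcap handle. */
#include <pcap.h>
#include <stdio.h>

int swap_filter(pcap_t *handle, const char *expr, bpf_u_int32 netmask)
{
    struct bpf_program prog;

    if (pcap_compile(handle, &prog, expr, 1 /* optimize */, netmask) == -1) {
        fprintf(stderr, "pcap_compile: %s\n", pcap_geterr(handle));
        return -1;
    }
    if (pcap_setfilter(handle, &prog) == -1) {
        fprintf(stderr, "pcap_setfilter: %s\n", pcap_geterr(handle));
        pcap_freecode(&prog);
        return -1;
    }
    pcap_freecode(&prog);   /* the kernel keeps its own copy of the filter */
    return 0;
}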

And yes, the 'one filter per interface per process' policy cannot be 
overcome with current OSes.

   best
   Fabian

[1] http://read.cs.ucla.edu/click/

-- 
Fabian Schneider (Dipl. Inf.), An-Institut Deutsche Telekom Laboratories
Technische Universitaet Berlin, Fakultaet IV -- E-Technik und Informatik
address: Sekr. TEL 4, Ernst-Reuter-Platz 7, 10587 Berlin
e-mail: [EMAIL PROTECTED], WWW: http://www.net.in.tum.de/~schneifa
phone: +49 30 8353 - 58513, mobile: +49 179 242 76 71

On Tue, 7 Oct 2008, Jim Mellander wrote:

> Hi:
> 
> I've been working on a TCP connection-killer daemon that will receive
> requests of the following type:
> 
> 'kill all connections between host x & host y'
> 
> and craft response packets based on received packets.
> 
> Of course, it will have a mechanism for removing such requests from its
> active list.
> 
> There are a number of programs (tcpkill, couic) which take pcap
> expressions and send RST's in response to packets which match, but they
> are too limited for my purposes - I'm trying to develop an
> enterprise-capable tool.
> 
> I've thought of several mechanisms to program this:
> 
> 1. A master pcap filter of 'tcp', which would hoist all tcp packets to
> the user-level, then inspect IP's for match, either by direct packet
> inspection or by compiled pcap expressions in userland - maybe zero-copy
> bpf would help here.
> 
> 2. Incrementally build a pcap filter 'tcp and ((host a and host b) or
> (host c and host d))' ... etc. and apply to interface - my problem with
> this approach is the limited number of host pairs this would be able to
> accommodate.
> 
> 3. Have a manager program which forks off as needed separate processes
> to handle the requests individually.
> 
> All of the above are attempts to overcome the 'one filter per interface
> per process' model that I believe libpcap imposes - or am I wrong?  Is
> there something I've overlooked?
> 
> Any advice welcome - thanks in advance.
> 
> -- 
> Jim Mellander
> Incident Response Manager
> Computer Protection Program
> Lawrence Berkeley National Laboratory
> (510) 486-7204
> 
> The reason you are having computer problems is:
> 
> Internet outage


Re: [tcpdump-workers] pcap_setbuf not available in linux

2008-05-26 Thread Fabian Schneider

Hi Ritesh,

> 1. I want to set the kernel buffer for the pcap driver, but the function
> pcap_setbuff is not available, although this same function is available on
> Windows. So how can we set the pcap driver's packet-queue kernel buffer on
> Linux? Is there any way we can include the pcap_setbuff call, and if not,
> what is the workaround?

Please note first that winpcap and libpcap are two independent projects.
Furthermore, winpcap has a fairly easy job supporting kernel-level changes,
since the Windows kernel API does not change dramatically and can be tracked
over several different versions of Windows, whereas libpcap supports almost
all kinds of *NIXes and Linuxes. The problem here is that not all of the
supported OSes have kernel buffers to tune, and, more importantly, every OS
has its own way of dealing with packets in the kernel. Therefore the
procedures to change the in-kernel buffers differ significantly as well.


For changing the kernel buffers in FreeBSD and Linux I want to point you to
our website, which gives some hints on increasing capturing performance:
www.net.t-labs.tu-berlin.de/research/hppc/. If you are furthermore interested
in understanding how the kernel part of capturing in FreeBSD and Linux works,
feel free to read any of the literature cited on that site.
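
As one concrete Linux-only example of such tuning (a workaround, not an
official libpcap API): the PF_PACKET socket that libpcap uses on Linux can
have its kernel receive buffer enlarged with setsockopt() on the descriptor
returned by pcap_fileno(); the effective size is still capped by the
net.core.rmem_max sysctl. A minimal sketch:

/* Sketch: enlarge the kernel receive buffer of libpcap's PF_PACKET socket
 * on Linux. Not an official libpcap API; the value is silently capped by
 * the net.core.rmem_max sysctl. */
#include <pcap.h>
#include <stdio.h>
#include <sys/socket.h>

int enlarge_capture_buffer(pcap_t *handle, int bytes)
{
    int fd = pcap_fileno(handle);   /* the underlying PF_PACKET socket */
    if (fd < 0)
        return -1;
    if (setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &bytes, sizeof(bytes)) < 0) {
        perror("setsockopt(SO_RCVBUF)");
        return -1;
    }
    return 0;
}

/* Usage (hypothetical): enlarge_capture_buffer(handle, 4 * 1024 * 1024); */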


Regarding your second question, I would have to look it up. Maybe someone
else on this list can answer it straight away.



best
Fabian




Re: [tcpdump-workers] pcap performance question

2008-05-20 Thread Fabian Schneider

Hi,

> I would like to capture TCP traffic to/from several groups of hosts; we are
> talking about 20-30 groups at most, with between 1 and 10 hosts in each
> group. All these hosts have individual IPs and ports; there is no chance to
> capture parts of a network or anything like that. I would like to do the
> job with libpcap under Linux and winpcap under Windows.

If I understood that correctly, you would end up with a filter expression
with 20-30 distinct IP ranges concatenated with "or"s.

I can only speak for Linux here:

> If you had to solve this problem, which way would you go?

I would definitely go with the huge expression rather than separate capture
threads. The reason is the following:

The filter expression gets translated into BPF (BPF-like on Linux) code,
which is then executed in kernel context. All the packets that match the
expression end up in a queue from which the user-space application fetches
them, usually via libpcap. If you start several capture threads
simultaneously you end up with multiple queues and multiple processes
requesting data from the kernel. This leads to a huge number of
kernel-to-user-space context switches, which harms capturing performance.
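
For illustration, a minimal sketch of the single-handle approach with one
compound expression (the device name, the groups and the filter string are
made up for the example):

/* Sketch: one capture handle with one big compound filter covering all
 * host groups. Device name and group expressions are just examples. */
#include <pcap.h>
#include <stdio.h>

int main(void)
{
    char errbuf[PCAP_ERRBUF_SIZE];
    /* In a real program this string would be built from the 20-30
     * configured groups: one parenthesised sub-expression per group,
     * joined with "or". */
    const char *expr =
        "tcp and ((host 10.0.1.1 or host 10.0.1.2) or "
        "(host 10.0.2.1 and port 80))";

    pcap_t *handle = pcap_open_live("eth0", 96, 1, 100, errbuf);
    if (handle == NULL) {
        fprintf(stderr, "pcap_open_live: %s\n", errbuf);
        return 1;
    }

    struct bpf_program prog;
    if (pcap_compile(handle, &prog, expr, 1,
                     0 /* netmask: only needed for 'broadcast' filters */) == -1) {
        fprintf(stderr, "pcap_compile: %s\n", pcap_geterr(handle));
        return 1;
    }
    if (pcap_setfilter(handle, &prog) == -1) {
        fprintf(stderr, "pcap_setfilter: %s\n", pcap_geterr(handle));
        return 1;
    }
    pcap_freecode(&prog);

    /* ... pcap_loop(handle, -1, your_callback, NULL); ... */
    pcap_close(handle);
    return 0;
}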

As shown in my master's thesis (Diplomarbeit), Linux in particular cannot
deal well with the load of multiple concurrent capturing processes. But even
for complex filters (way more complex than your setting) the performance is
only slightly affected. See Sections 6.3.2 and 6.3.3 in my thesis:

http://www.net.t-labs.tu-berlin.de/~fabian/papers/da.pdf


   best
   Fabian Schneider

-- 
Fabian Schneider (Dipl. Inf.), An-Institut Deutsche Telekom Laboratories
Technische Universitaet Berlin, Fakultaet IV -- E-Technik und Informatik
address: Sekr. TEL 4, Ernst-Reuter-Platz 7, 10587 Berlin
e-mail: [EMAIL PROTECTED], WWW: http://www.net.in.tum.de/~schneifa
phone: +49 30 8353 - 58513, mobile: +49 179 242 76 71


Re: [tcpdump-workers] NIC / driver performance with libpcap

2008-01-09 Thread Fabian Schneider

Hi Andy,


> The two metrics I am looking at now are:
>
> - What throughput can I get before seeing dropped packets
> - CPU usage
  
Maybe you want to take a look at [1], where I have done exactly this for a
specific system with Intel cards. If you want to read more background on
this, have a look at [2], where the measurement setup is explained in more
detail.
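
For the drop metric specifically, the kernel itself reports how many packets
it had to discard, so polling pcap_stats() on the open handle is usually
enough. A minimal sketch (the handle is assumed to exist already):

/* Sketch: query kernel-reported capture statistics for an open handle.
 * ps_drop counts packets the kernel dropped because the application did
 * not consume them fast enough. */
#include <pcap.h>
#include <stdio.h>

void report_drops(pcap_t *handle)
{
    struct pcap_stat st;

    if (pcap_stats(handle, &st) == -1) {
        fprintf(stderr, "pcap_stats: %s\n", pcap_geterr(handle));
        return;
    }
    printf("received: %u  dropped: %u\n", st.ps_recv, st.ps_drop);
}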
  
We are trying to gather everything that enables high-performance capturing
on the following web page:

http://www.net.t-labs.tu-berlin.de/research/hppc/

Everybody is welcome to supply further and newer hints and tips on this
topic.


   best
   Fabian

[1] http://www.net.t-labs.tu-berlin.de/papers/SWF-PCCH10GEE-07.pdf
[2] http://www.net.t-labs.tu-berlin.de/papers/S-PEPCSHN-05.pdf

-- 
Fabian Schneider (Dipl. Inf.), An-Institut Deutsche Telekom Laboratories
Technische Universitaet Berlin, Fakultaet IV -- E-Technik und Informatik
address: Sekr. TEL 4, Ernst-Reuter-Platz 7, 10587 Berlin
e-mail: [EMAIL PROTECTED], WWW: http://www.net.in.tum.de/~schneifa
phone: +49 30 8353 - 58513, mobile: +49 179 242 76 71


Re: [tcpdump-workers] NIC / driver performance with libpcap

2008-01-08 Thread Fabian Schneider

Hi,

> If such a list does not exist, I'd be happy to collate whatever information
> is available.

As I don't know of such a list, I will just tell you what I know. By the way,
which link-layer technology are we talking about? And I cannot say anything
about Solaris.

For 1-Gigabit Ethernet my experience shows that the Intel cards work quite
well. You do not want to use cheaper ones such as Netgear; those, at least,
were not able to process a fully loaded link, and then the driver crashed.
SysKonnect cards also work OK, though not as fast as the Intels. By the way,
you want to enable NAPI on Linux.

For 10-Gigabit Ethernet I have heard (no first-hand experience) that the
Neterion cards are good. Those are available in a special Solaris version as
well.


   bye
   Fabian Schneider

-- 
Fabian Schneider (Dipl. Inf.), An-Institut Deutsche Telekom Laboratories
Technische Universitaet Berlin, Fakultaet IV -- E-Technik und Informatik
address: Sekr. TEL 4, Ernst-Reuter-Platz 7, 10587 Berlin
e-mail: [EMAIL PROTECTED], WWW: http://www.net.in.tum.de/~schneifa
phone: +49 30 8353 - 58513, mobile: +49 179 242 76 71


Re: [tcpdump-workers] Packet capture performance comparison of

2007-07-02 Thread Fabian Schneider

Hi,

> I've read Fabian Schneider's thesis "Performance evaluation of packet
> capturing systems for high-speed networks", which compares capture
> performance under variable testing and generally finds that dual-core
> Opterons perform somewhat better under heavy capture load than dual-core
> Xeons. But now that quad-core Xeons are available, I'm curious whether
> anyone has measured capture improvement using four cores. 

Unfortunately we do not have such a system yet, but we are trying to get one
soon. I do not expect too much improvement from the step from dual- to
quad-core, because I think the main bottleneck is memory access.
Nevertheless, I think it is necessary by now to examine the performance of
the new Intel Core processor architecture, which could yield major
improvements.

> I should expect four cores to do better, but I'd be interested in any 
> empirical results to that effect. I'm wondering, for example, how close 
> a box with a couple of dual-port PCIe Gb NICs (Endace or nPulse) and 
> dual quad-core processors could come to 4Gb/s aggregate capture speed, 
> while writing some packets to disk. Has anyone out there put together 
> such a box and come up with some performance statistics?

We are hopefully going to do this soon, but I cannot promise a date.

   best
   Fabian

-- 
Fabian Schneider (Dipl. Inf.), An-Institut Deutsche Telekom Laboratories
Technische Universitaet Berlin, Fakultaet IV -- E-Technik und Informatik
address: Sekr. TEL 4, Ernst-Reuter-Platz 7, 10587 Berlin
e-mail: [EMAIL PROTECTED], WWW: http://www.net.in.tum.de/~schneifa
phone: +49 30 8353 - 58513, mobile: +49 179 242 76 71



Re: [tcpdump-workers] Filter complexity and performance

2007-01-15 Thread Fabian Schneider

Hi,

> Will we end up capturing only half as many packets (hopefully not)? I was
> hoping to avoid the need to do this test if anyone has already done some
> sort of evaluation...

Complex filters are cheap in terms of capturing performance. For a detailed
examination, take a look at:

http://www.net.informatik.tu-muenchen.de/~schneifa/papers/da.pdf

(page 40 in the document's own page count, Section 6.3.1)

   bye
   Fabian

-- 
Fabian Schneider (Dipl. Inf.), An-Institut Deutsche Telekom Laboratories
Technische Universitaet Berlin, Fakultaet IV -- E-Technik und Informatik
address: Sekr. TEL 4, Ernst-Reuter-Platz 7, 10587 Berlin
e-mail: [EMAIL PROTECTED], WWW: http://www.net.in.tum.de/~schneifa
phone: +49 30 8353 - 58513, mobile: +49 160 479 4397


Re: [tcpdump-workers] pcap_loop() not returning after pcap_breakloop()

2006-06-27 Thread Fabian Schneider

Hi,

> Expected, yes.  Linux's packet capture mechanism doesn't have the timeouts
> that the WinPcap driver, BPF, etc. do.

I thought (and I have a program running that relies on this) that you can use
the to_ms value in pcap_open_live() to set such a timeout. The value is not
honoured on some OSes such as FreeBSD, or if you are using the libpcap-mmap
patch, which gives you the normal behaviour; but on Linux it works. So I set
the to_ms value to 100, and everything works fine.
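
A minimal sketch of that workaround (device name, snaplen and the 100 ms
timeout are just examples; the usual caveats about what is safe to do in a
signal handler apply):

/* Sketch of the to_ms workaround: with a non-zero read timeout,
 * pcap_loop() wakes up periodically, notices the flag set by
 * pcap_breakloop(), and returns. */
#include <pcap.h>
#include <signal.h>
#include <stdio.h>

static pcap_t *handle;

static void on_sigint(int signo)
{
    (void)signo;
    pcap_breakloop(handle);   /* only sets a flag; the loop exits on its next wakeup */
}

static void on_packet(u_char *user, const struct pcap_pkthdr *h,
                      const u_char *bytes)
{
    (void)user; (void)bytes;
    printf("packet, %u bytes\n", h->len);
}

int main(void)
{
    char errbuf[PCAP_ERRBUF_SIZE];

    handle = pcap_open_live("eth0", 96, 1, 100 /* to_ms */, errbuf);
    if (handle == NULL) {
        fprintf(stderr, "pcap_open_live: %s\n", errbuf);
        return 1;
    }
    signal(SIGINT, on_sigint);

    pcap_loop(handle, -1, on_packet, NULL);   /* returns after Ctrl-C */
    pcap_close(handle);
    return 0;
}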

The problem with this solution is that the to_ms parameter is not meant to be
used like this (excerpt from the man page):

pcap_open_live()
...
to_ms specifies the read timeout in milliseconds.  The read timeout is 
used to arrange that the read not necessarily return immediately when a 
packet is seen, but that it wait for some amount of time to allow more 
packets to arrive and to read  multiple  packets  from  the OS kernel in 
one operation.  Not all platforms support a read timeout; on platforms  
that  don't,  the read timeout  is ignored.  A zero value for to_ms, on 
platforms that support a read timeout, will cause a read to wait forever 
to allow enough packets  to  arrive,  with  no  timeout.
...
-

> >  How can I tell Linux to return from that readfrom() call that it's
> > blocking on?
> 
> You *might* be able to do it with pthread_cancel(), although that will,
> ultimately, terminate the thread (unless a cleanup handler never returns).

And this sounds like a dirty hack, where additional effort is required to
perform the normal cleanup at the end.

   regards
   Fabian Schneider

-- 
Fabian Schneider,  Technische Universität München
address: Boltzmannstr. 3, 85748 Garching b. München
e-mail: [EMAIL PROTECTED], WWW: http://www.net.in.tum.de/~schneifa
phone: +49 89 289-18012, mobile: 0179/2427671


Re: [tcpdump-workers] didnt grab packet

2006-06-09 Thread Fabian Schneider

Hi,

I found the bug in your code:

Please remove the semicolon after the parentheses of the if clause, and
everything should work fine!

> if(packet == NULL);
> {
> printf("Didnt grab packet %s\n",errbuf);
> exit(1);
> }
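
For clarity, the corrected lines (keeping the names from your snippet):

if (packet == NULL) {   /* no semicolon here, otherwise the block always runs */
    printf("Didnt grab packet %s\n", errbuf);
    exit(1);
}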

   regards
   Fabian Schneider

-- 
Fabian Schneider,  Technische Universität München
address: Boltzmannstr. 3, 85748 Garching b. München
e-mail: [EMAIL PROTECTED], WWW: http://www.net.in.tum.de/~schneifa 
phone: +49 89 289-18012, mobile: 0179/2427671


Re: [tcpdump-workers] How to make libpcap work in MMAP mode

2006-05-19 Thread Fabian Schneider

Hi,

I think you are speaking of the patched version of libpcap available from
Phil Wood, right?

> I want to know how to make libpcap (version 0.9.20060417) work in
> MMAP mode. Would somebody give me some help? Thanks in advance!

The trick with that version is that it uses the mmapped ring buffer
automatically. But you have to make sure that the program which uses libpcap
is linked against this libpcap and not against any other (official/standard)
libpcap.

You can check whether you are using the correct version by running your
program with the PCAP_VERBOSE environment variable set, for example:

PCAP_VERBOSE=1 <your program>

That produces two additional lines of output (on stderr, I think, but I am
not sure) indicating that the mmapped version is used. The official version
of libpcap does not support this environment variable and simply ignores it.
For tcpdump the output looks like this, for example:

> PCAP_VERBOSE=1 tcpdump -i eth1 -w /dev/null host 192.168.0.1   
libpcap version: 0.9.20050810b-mmap-net 
Kernel filter, Protocol 0300, MMAP mode (819200 frames, snapshot 96), socket 
type: Raw 
tcpdump: listening on eth1, link-type EN10MB (Ethernet), capture size 96 bytes
...

By the way, you should additionally set PCAP_FRAMES=max with the mmapped
version for maximum efficiency. For more details take a look at:
http://public.lanl.gov/cpw/


   regards
   Fabian Schneider

-- 
Fabian Schneider,  Technische Universität München
address: Boltzmannstr. 3, 85748 Garching b. München
e-mail: [EMAIL PROTECTED], WWW: http://www.net.in.tum.de/~schneifa
phone: +49 89 289-18012, mobile: 0179/2427671


Re: [tcpdump-workers] where does PCAP timestamp before or after the

2006-03-30 Thread Fabian Schneider

Hi,

> We want to know where/when PCAP puts the timestamp (taken from the
> not-so-accurate kernel time) onto the packets. I already know it does so
> when the kernel "sees" the packet. The question is: is it before or after
> the MAC scheduler? I mean, does it take place in the TX or RX buffers, or
> at higher protocol layers?

Under Linux (at least 2.6.x) received packets are timestamped in netif_rx(),
directly after the driver has set up an sk_buff (the struct in which the
Linux kernel stores packets) for the packet. This is before the receive path
distributes the sk_buff to the interested receivers, one of which is the
sniffing "socket", as it is called within the Linux kernel, and another is
the TCP/IP stack (ip_rcv()).

For the sending side I am not sure, but I think this happens directly before
the packet is handed to the driver function.
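
On the libpcap side, that kernel-applied timestamp is what arrives in the
pcap_pkthdr of every callback; a minimal sketch of reading it (only the
callback is shown, the handle setup is omitted):

/* Sketch: the ts field of pcap_pkthdr carries the timestamp the kernel
 * applied, as described above. */
#include <pcap.h>
#include <stdio.h>

static void on_packet(u_char *user, const struct pcap_pkthdr *h,
                      const u_char *bytes)
{
    (void)user; (void)bytes;
    printf("packet at %ld.%06ld, %u bytes on the wire\n",
           (long)h->ts.tv_sec, (long)h->ts.tv_usec, h->len);
}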

But what exactly is the "MAC scheduler"? I have not yet heard of it.

   regards
   Fabian Schneider

-- 
Fabian Schneider,  Technische Universität München
address: Boltzmannstr. 3, 85748 Garching b. München
e-mail: [EMAIL PROTECTED], WWW: http://www.net.in.tum.de/~schneifa
phone: +49 89 289-18012, mobile: 0179/2427671


Re: [tcpdump-workers] How to set snaplen for tcpdump

2006-03-16 Thread Fabian Schneider

Hi,

> The default snaplen value for tcpdump is 96 bytes. I need to change the
> snaplen value. How do I set it? What is the command for that?
> If anyone has any idea, please pass it on.

Did you already look into the manpage?

SYNOPSIS
   tcpdump [ -AdDeflLnNOpqRStuUvxX ] [ -c count ]
   [ -C file_size ] [ -F file ]
   [ -i interface ] [ -m module ] [ -r file ]
   [ -s snaplen ] [ -T type ] [ -w file ]
   [ -E spi@ipaddr algo:secret,...  ]
   [ -y datalinktype ]
   [ expression ]

   -s      Snarf snaplen bytes of data from each packet rather than the
           default of 68 (with SunOS's NIT, the minimum is actually 96).
           68 bytes is adequate for IP, ICMP, TCP and UDP but may truncate
           protocol information from name server and NFS packets (see
           below). Packets truncated because of a limited snapshot are
           indicated in the output with "[|proto]", where proto is the
           name of the protocol level at which the truncation has
           occurred. Note that taking larger snapshots both increases the
           amount of time it takes to process packets and, effectively,
           decreases the amount of packet buffering. This may cause
           packets to be lost. You should limit snaplen to the smallest
           number that will capture the protocol information you're
           interested in. Setting snaplen to 0 means use the required
           length to catch whole packets.

So -s is the command-line option you want to use!
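
If you are writing against libpcap directly rather than running the tcpdump
binary, the same knob is the snaplen argument of pcap_open_live(); a minimal
sketch (the device name is just an example):

/* Sketch: snaplen is the second argument of pcap_open_live();
 * 65535 captures whole packets on Ethernet. */
#include <pcap.h>
#include <stdio.h>

int main(void)
{
    char errbuf[PCAP_ERRBUF_SIZE];
    pcap_t *handle = pcap_open_live("eth0", 65535 /* snaplen */, 1, 1000, errbuf);

    if (handle == NULL) {
        fprintf(stderr, "pcap_open_live: %s\n", errbuf);
        return 1;
    }
    printf("snapshot length in effect: %d\n", pcap_snapshot(handle));
    pcap_close(handle);
    return 0;
}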

   regards  
   Fabian Schneider

-- 
Fabian Schneider,  Technische Universität München
address: Boltzmannstr. 3, 85748 Garching b. München
e-mail: [EMAIL PROTECTED], WWW: http://www.net.in.tum.de/~schneifa
phone: +49 89 289-18012, mobile: 0179/2427671