Fabian Schneider wrote:

I thought (and I have a program running with this) that you can use the to_ms value in pcap_open_live() to set such a timeout. The value won't be interpreted by some OSes such as FreeBSD, or if you are using the libpcap-mmap patch, resulting in normal behaviour. But with Linux everything works. So I set the to_ms value to 100, and everything works fine.

Really?  I think you have it backwards; see below.

The problem with this solution is that this to_ms parameter is not meant to be used like this (excerpt from the man page):

--------------------------------------------------------
pcap_open_live()
...
to_ms specifies the read timeout in milliseconds. The read timeout is used to arrange that the read not necessarily return immediately when a packet is seen, but that it wait for some amount of time to allow more packets to arrive and to read multiple packets from the OS kernel in one operation. Not all platforms support a read timeout; on platforms that don't, the read timeout is ignored. A zero value for to_ms, on platforms that support a read timeout, will cause a read to wait forever to allow enough packets to arrive, with no timeout.
--------------------------------------------------------

Yes, I know - I wrote that section, and did so to discourage people from thinking the timer is guaranteed to

        1) exist

and

        2) be started when you do a read.

The timer first appeared in BPF (so it *IS* interpreted by the BSDs; I think you have it backwards above). It's used to do "batching" of packets, so that one read() call can deliver multiple packets, reducing CPU overhead; the timer is there to keep the read() from waiting forever for enough packets to arrive.

For some unknown reason, the BPF timer is started at the time you do a read, rather than when the first packet arrives. That can lead people to believe that the timer guarantees that a call such as pcap_dispatch(), pcap_next(), or pcap_next_ex() will block for no longer than the specified timeout period.

The timing mechanism in Solaris's bufmod is similar - but the timer starts when the first packet arrives; that means there is *NO* maximum amount of time that pcap_dispatch(), pcap_next(), or pcap_next_ex() will block - if no packets arrive, they'll stay blocked.

Linux PF_PACKET sockets have no buffering or timeout mechanism.
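
To make the parameter in question concrete, here's a rough sketch of the kind of capture loop being discussed; the device name, snaplen, and callback are just placeholders, and error handling is abbreviated:

    #include <pcap.h>
    #include <stdio.h>

    static void handler(u_char *user, const struct pcap_pkthdr *h,
                        const u_char *bytes)
    {
        printf("got a %u-byte packet\n", h->len);
    }

    int main(void)
    {
        char errbuf[PCAP_ERRBUF_SIZE];
        pcap_t *pc;

        /* 100 here is to_ms.  On BPF it bounds how long one read()
           will wait while batching packets, on Solaris's bufmod the
           timer only starts once a packet has arrived, and on Linux
           PF_PACKET sockets it's ignored. */
        pc = pcap_open_live("eth0", 65535, 1, 100, errbuf);
        if (pc == NULL) {
            fprintf(stderr, "pcap_open_live: %s\n", errbuf);
            return 1;
        }

        /* This can block with *no* upper bound if no packets arrive;
           to_ms is not a guarantee that it returns. */
        for (;;) {
            if (pcap_dispatch(pc, -1, handler, NULL) < 0)
                break;
        }

        pcap_close(pc);
        return 0;
    }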

How can I tell Linux to return from that recvfrom() call that it's
blocking on?

You *might* be able to do it with pthread_cancel(), although that will,
ultimately, terminate the thread (unless a cleanup handler never returns).

And this sounds like a dirty hack, where additional effort is required to perform the normal cleanup at the end.

If the application is shutting down, the threads will be terminating anyway; I think that was the case the person who asked about this was talking about.
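
For that shutdown case, the pthread_cancel() route would look roughly like this; the thread and handler names are just for illustration, and it assumes the capture thread is blocked inside pcap_loop(), i.e., on Linux, inside recvfrom(), which is a cancellation point:

    #include <pcap.h>
    #include <pthread.h>

    static void close_handle(void *arg)
    {
        pcap_close((pcap_t *)arg);    /* the normal cleanup still runs */
    }

    static void got_packet(u_char *user, const struct pcap_pkthdr *h,
                           const u_char *bytes)
    {
        /* ... process one packet ... */
    }

    static void *capture_thread(void *arg)
    {
        pcap_t *pc = arg;

        pthread_cleanup_push(close_handle, pc);
        pcap_loop(pc, -1, got_packet, NULL);   /* blocks in recvfrom() */
        pthread_cleanup_pop(1);
        return NULL;
    }

    int main(void)
    {
        char errbuf[PCAP_ERRBUF_SIZE];
        pthread_t tid;
        pcap_t *pc = pcap_open_live("eth0", 65535, 1, 100, errbuf);

        if (pc == NULL)
            return 1;
        pthread_create(&tid, NULL, capture_thread, pc);

        /* ... later, when the application is shutting down ... */
        pthread_cancel(tid);       /* the thread is cancelled at a
                                      cancellation point, e.g. that
                                      recvfrom() */
        pthread_join(tid, NULL);   /* by now close_handle() has run */
        return 0;
    }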

The *ideal* would be if all packet capture mechanisms had a way in which some OS call could cause a blocking read/recvfrom/whatever to terminate prematurely with a "call was terminated early" indication.

If pcap_breakloop() is called in a signal handler, and the signal in question isn't set up to restart system calls, that should let the loop terminate cleanly. If it's not called in a signal handler, i.e. if there's no signal that was delivered to the process, that won't help.
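
In code, that signal-handler approach looks roughly like this; SIGUSR1 and the names here are arbitrary, and the important parts are leaving SA_RESTART out of sa_flags and delivering the signal to the thread that's blocked in the read:

    #include <pcap.h>
    #include <signal.h>
    #include <string.h>

    static pcap_t *pc;                /* handle used by the capture loop */

    static void break_capture(int sig)
    {
        (void)sig;
        pcap_breakloop(pc);           /* just sets a flag in the handle */
    }

    static void install_break_signal(void)
    {
        struct sigaction sa;

        memset(&sa, 0, sizeof sa);
        sa.sa_handler = break_capture;
        sigemptyset(&sa.sa_mask);
        sa.sa_flags = 0;              /* no SA_RESTART, so the blocking
                                         read fails with EINTR instead
                                         of being restarted */
        sigaction(SIGUSR1, &sa, NULL);
    }

    /*
     * With that in place, sending SIGUSR1 to the process (or, with
     * pthread_kill(), to the thread that's blocked in pcap_loop() or
     * pcap_dispatch()) interrupts the read, the break flag is noticed,
     * and the loop returns -2.
     */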
