> If your application is expecting a low rate of packet delivery and needs to 
> see packets as soon as they arrive, it should simply call 
> pcap_set_immediate_mode() if it is available, regardless of what operating 
> system it's running on or what version of libpcap it's using (as long as that 
> version *has* pcap_set_immediate_mode()); "show me packets as soon as they 
> arrive" is *exactly* what "immediate mode" means.  (On systems with BPF, it 
> turns on BPF's "immediate mode" - the name for it in libpcap was taken from 
> BPF! - which disables the buffering and timeout.)

This is where my problem lives:  with an older version of libpcap, such as many 
Linux distributions still ship, pcap_open_live gives you behavior equivalent to 
immediate mode.  The difference seems to come down to TPACKET_V2 versus 
TPACKET_V3 support.  You're right that it is absolutely platform-specific.  
However, I don't see a way to write code that calls pcap_set_immediate_mode 
only if it exists; I can't rely on it being present, since it is new, yet I 
must call it to retain the legacy behavior.  That leaves me in an odd 
dependency loop where I would apparently need two builds of the application, 
one against libpcap 1.5 and one against libpcap 1.2, and accept that the newer 
one fails to compile against the older library.
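One workaround I've been toying with (an untested sketch on my part, assuming 
dynamic linking and the availability of dlsym(), i.e. Linux with -ldl; the 
helper name open_low_latency is my own) is to look the symbol up at run time 
instead of compile time, so a single binary built against an older pcap.h can 
still enable immediate mode when it happens to run against libpcap >= 1.5:

#include <pcap.h>
#include <dlfcn.h>
#include <stdio.h>

/* Declared locally because the pre-1.5 pcap.h has no prototype for it. */
typedef int (*set_immediate_fn)(pcap_t *, int);

static pcap_t *open_low_latency(const char *dev, char *errbuf)
{
    pcap_t *p = pcap_create(dev, errbuf);
    if (p == NULL)
        return NULL;

    pcap_set_snaplen(p, 65535);
    pcap_set_promisc(p, 1);
    pcap_set_timeout(p, 100);   /* still needed for the buffered case */

    /* If the running libpcap exports pcap_set_immediate_mode, use it;
       otherwise assume an older libpcap that delivers immediately anyway. */
    void *self = dlopen(NULL, RTLD_LAZY);
    if (self != NULL) {
        set_immediate_fn set_immediate =
            (set_immediate_fn)dlsym(self, "pcap_set_immediate_mode");
        if (set_immediate != NULL)
            set_immediate(p, 1);
    }

    if (pcap_activate(p) < 0) {
        snprintf(errbuf, PCAP_ERRBUF_SIZE, "%s", pcap_geterr(p));
        pcap_close(p);
        return NULL;
    }
    return p;
}

It's not pretty, and it only dodges the compile-time problem, but it would 
avoid shipping two builds.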

I was hoping there was some sort of API version macro that I could test with a 
#ifdef to compile the call out, rather than resorting to ugly things like 
external version identification in autoconf, but I couldn't find one; there is 
only pcap_lib_version(), which is a runtime call and doesn't help in that 
situation.
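To make it concrete, what I was hoping to write is something like the guard 
below, with the macro supplied by pcap.h itself; as things stand, 
HAVE_PCAP_SET_IMMEDIATE_MODE would have to come from my own external feature 
test (the macro name is mine, not anything libpcap defines):

#ifdef HAVE_PCAP_SET_IMMEDIATE_MODE
    /* libpcap >= 1.5: restore the old "deliver as it arrives" behavior */
    pcap_set_immediate_mode(p, 1);
#endif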

I realize this behavior is not guaranteed cross-platform, and that there are 
really two different expectations for what pcap_loop should do.  In my case, 
I'm looking for pcap_loop to fire without delay, as soon as a packet is 
available - this is the behavior that existed in previous versions (and that I 
get now if I enable immediate mode).  I absolutely understand that many 
applications using libpcap do not want this behavior and instead want the 
throughput efficiency of buffering and timed delivery - it all depends on 
whether an application is after latency or throughput.  The only reason I 
raise it is that it is a non-obvious change from older versions.
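Just so we're talking about the same thing, the pattern I mean is the ordinary 
dispatch loop below (handle_packet and run_capture are placeholder names of 
mine); the only question is when the callback fires:

#include <pcap.h>
#include <stdio.h>

static void handle_packet(u_char *user, const struct pcap_pkthdr *h,
                          const u_char *bytes)
{
    (void)user;
    (void)bytes;
    /* With immediate mode (or an older libpcap on Linux) this runs for each
       packet as it arrives; with the new TPACKET_V3 default it may not run
       until the kernel buffer fills or the to_ms timeout expires. */
    printf("packet, caplen %u\n", h->caplen);
}

static void run_capture(pcap_t *p)
{
    pcap_loop(p, -1, handle_packet, NULL);
}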

I don't suggest this is a bug - it isn't.  It's just a change in default 
behavior from before, and one that has some pretty significant impact.
