[tcpdump-workers] Re: Support for saving pcapng

2024-05-20 Thread Guy Harris
On May 20, 2024, at 9:56 AM, Michael Richardson  wrote:

>> implementation under the APSL license, I wonder if the community is
>> allowed to submit a pull request for it. Are there any restrictions or
>> guidelines we should be aware of in this regard?  Thanks for your time
>> and patience.
> 
> My understanding is that the APSL is not compatible with the BSD 2-clause.

The main problem with the APSL that I know of is the patent clause, which might 
mean that, if somebody uses APSLed code as a result of using libpcap to read or 
write pcapng files, their license to use it would terminate immediately and 
without notice if they file a patent lawsuit against Apple (unless they do so 
in response to an Apple patent lawsuit against them).

Not being a lawyer, I don't know whether the patent clause would apply to users 
of libpcap (or tcpdump, if we pick up any APSL-licensed code), or just to the 
Tcpdump Group (I suspect we're unlikely to apply for any patents).

See my longer email reply to the original message.
___
tcpdump-workers mailing list -- tcpdump-workers@lists.tcpdump.org
To unsubscribe send an email to tcpdump-workers-le...@lists.tcpdump.org


[tcpdump-workers] Re: Support for saving pcapng

2024-05-20 Thread Guy Harris
On May 20, 2024, at 8:31 AM, luoyuxuan.c...@gmail.com wrote:

> I've noticed that the question about libpcap's support for writing files 
> in the pcapng format has been brought up multiple times in the mailing list. 
> Yet, I'm still curious about the current status of this function. Can anyone 
> provide an update: is it currently in progress or still pending 
> implementation?

The current status is that there is a GitHub issue for adding full pcapng 
support to libpcap:

https://github.com/the-tcpdump-group/libpcap/issues/1321

because support for writing pcapng files without full support for reading those 
files is not all that useful.

> Additionally, given Apple's implementation under the APSL license,

...which is a license not compatible with the GPL:

https://www.gnu.org/philosophy/apsl.html

as it contains a patent clause:

12. Termination.
 12.1 Termination. This License and the rights granted hereunder 
will terminate:
  (a) automatically without notice from Apple if You fail to 
comply with any term(s) of this License and fail to cure such breach within 30 
days of becoming aware of such breach;
  (b) immediately in the event of the circumstances described 
in Section 13.5(b); or
  (c) automatically without notice from Apple if You, at any 
time during the term of this License, commence an action for patent 
infringement against Apple; provided that Apple did not first commence an 
action for patent infringement against You in that instance.

I don't know whether that clause would mean that anybody who has written code 
that uses pcapng support APIs in a version of libpcap that implements those 
APIs with APSL-licensed code would, "automatically without notice from Apple", 
lose their permission to use the pcapng support if they file a lawsuit 
accusing Apple of infringing a patent of theirs, but, if so, I would really 
prefer libpcap not to have any such code.  (The question is whether that would 
apply to users of libpcap or tcpdump; I don't expect the Tcpdump Group to file 
for any patents ourselves.)

> I wonder if the community is allowed to submit a pull request for it.

Yes, but, in addition to the patent clause of the APSL, Apple's APIs have some 
issues.

For one thing, the pcap-ng.3 man page in the Darwin libpcap source on GitHub:


https://github.com/apple-oss-distributions/libpcap/blob/main/libpcap/pcap_ng.3

(yes, Apple's public repositories are on a website run by a wholly-owned 
subsidiary of Microsoft) says:

   Opening a pcap-ng file
 To open a handle for a pcap-ng capture file from which to read pcap-ng
 blocks use either pcap_ng_fopen_offline() or pcap_ng_open_offline().  As
 these functions return a NULL value if the file is not in the pcap-ng
 format, one should then try opening the file using
 pcap_fopen_offline(3PCAP) or pcap_open_offline(3PCAP).

 To open a new pcap-ng capture file to save pcap-ng blocks use either
 pcap_ng_dump_open() or pcap_ng_dump_fopen().

 The above functions return a pcap_t that may be used with most of the
 pcap(3PCAP) functions that accept a capture handle.

That is not the API that I would prefer.  When I originally wrote the limited 
pcapng-reading code in libpcap, my intent was to allow code to read both pcap 
*and* pcapng files without any changes to the code, rather than requiring some 
awkward pair of opens.  I would prefer that, for code that fully supports 
pcapng, only one open be necessary, and that, for a pcap file, it get a fake 
SHB and a fake IDB (without any interface name provided, as that's not stored 
in pcap files), followed by the packets as fake EPBs.

In addition, the routine to write a pcapng block returns no value:

 void
 pcap_ng_dump(u_char *user, struct pcap_pkthdr *h, u_char *sp);

which means that it shares pcap_dump()'s inability to report errors, including 
"no more space on the file system".  See, for example,

https://github.com/the-tcpdump-group/libpcap/issues/1047

I would prefer to have new APIs in which the callback returns a value:

0 on success;

a PCAP_ERROR_ value on an error;

a PCAP_WARNING_ value on a warning condition;

with errors and warnings breaking out of the loop, causing the callback return 
value to be returned.  This would, for example, allow out-of-space conditions 
to be reported.


[tcpdump-workers] Re: Dropping support in tcpdump for older versions of libpcap?

2024-05-19 Thread Guy Harris
On Apr 12, 2024, at 6:49 PM, Guy Harris  wrote:

> Is there any reason not to require libpcap 1.0 or later?  If there is, is 
> there any reason not to require libpcap 0.7 or later?

OK, support for libpcaps with only pre-1.0 APIs has been removed in the main branch.  
The 4.99 branch still supports them, although I don't know whether we've tested 
all the way back to libpcap 0.4 (the last LBL release).



[tcpdump-workers] Re: pcap-savefile(5) in libpcap-1.10

2024-05-10 Thread Guy Harris
On May 10, 2024, at 1:39 PM, Denis Ovsienko  wrote:

> I have been looking through commits and the 1.10.5 section of libpcap
> change log, and the recent changes to the link-layer header type field
> structure look like a potential place for things to go wrong.
> 
> Specifically, the new prose says:
> 
>  P  (1  bit):  A bit that, if set, indicates that the
>  Frame Check Sequence (FCS) length value is present and,
>  if  not  set,  indicates that the FCS value is not
>  present.
> 
> ...and:
> 
>  FCS  len  (4  bits): A 4-bit unsigned value giving the
>  number of 16-bit (2-octet) words of FCS that are appended
>  to each  packet, if the P bit is set; if the P bit is not
>  set, and the FCS length is not indicated by the
>  link-layer type value, the FCS length is unknown.   The
>  valid  values of the FCS len field are between 0 and 15;
>  Ethernet, for example, would have an FCS length value of
>  2, corresponding to a 4-octet FCS.

This all began with this thread:

https://seclists.org/tcpdump/2007/q1/83

from 2007-02.  This thread:

https://seclists.org/tcpdump/2007/q1/94

continues it with a repost.  In

https://seclists.org/tcpdump/2007/q1/97

I first proposed "repurpose the upper 16 bits":

> Or perhaps the link type value in the file header should be interpreted as 
> having bitfields, with the lower 16 bits being the link layer type, and an 
> indication of whether there's an FCS present being somewhere in the upper 16 
> bits.
> 
> NetBSD already uses the upper 16 bits for its own purpose - if the upper 16 
> bits are 0x0224, the lower 16 bits are a NetBSD address family value. (Given 
> that AF_INET6, for example, has at least 3 different values on various 
> BSD-flavored OSes, 0x0224 should be treated as NetBSD-specific, with other 
> values used for other OSes.)
> 
> We could, for example, use the uppermost nibble as an FCS length indication, 
> with the bit below it being an indication of whether the FCS length is known 
> or not. That doesn't touch any of the bits in 0x0224.
> 
> For all current DLT_ values, the bit would be clear, so the FCS length isn't 
> known; that's the case for Ethernet, as not only is it not known whether any 
> given DLT_EN10MB capture has FCSes in the packets or not (some do, some 
> don't), it's not even known which *packets* in a capture that does have FCSes 
> actually have them (packets sent by the machine doing the capture don't, but 
> there's not a per-packet way of indicating that).  I think it would be 
> possible to make this work with pcap-NG as well.
> 
> This has the advantage that "what is the link-layer header?" and "do frames 
> have FCSes?" are separate questions, answered in separate bitfields of the 
> link type value.

I think NetBSD never did much with their extension; we never did, either.

Florent Drouin, the person who asked for the "MTP2 with an FCS" DLT_ sent a 
patch to implement that idea in

https://seclists.org/tcpdump/2007/q1/101

The current pcap Editor's Draft:


https://ietf-opsawg-wg.github.io/draft-ietf-opsawg-pcap/draft-ietf-opsawg-pcap.html#name-file-header

says:

LinkType and additional information (32 bits): a 32-bit unsigned value 
that contains the link-layer type of packets in the file and may contain 
additional information.

The LinkType and additional information field is in the form

 1   2   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|FCS len|R|P| Reserved3 |Link-layer type|
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

Figure 2: LinkType and additional information

The field is shown as if it were in the byte order of the host reading 
or writing the file, with bit 0 being the most-significant bit of the field and 
bit 31 being the least-significant bit of the field.

Link-layer type (16 bits): a 16-bit value indicating link-layer type 
for packets in the file; it is a value as defined in the PCAP LinkType list 
registry, as defined in [I-D.ietf-opsawg-pcaplinktype].

Reserved3 (10 bits): not used - MUST be set to zero by pcap writers, 
and MUST NOT be interpreted by pcap readers; a reader SHOULD treat a non-zero 
value as an error.

P (1 bit): a bit that, if set, indicates that the Frame Check Sequence 
(FCS) length value is present and, if not set, indicates that the FCS value is 
not present.

R (1 bit): not used - MUST be set to zero by pcap writers, and MUST NOT 
be interpreted by pcap readers; a reader SHOULD treat a non-zero value as an 
error.

FCS len (4 bits): a 4-bit unsigned value indicating the number of 
16-bit (2-octet) words of FCS that are 

[tcpdump-workers] Re: Question about an uninitialized array in bpf_filter

2024-04-29 Thread Guy Harris
On Apr 29, 2024, at 7:19 AM, Michal Ruprich  wrote:

> I was wondering, whether the mem[BPF_MEMWORDS] array in function 
> pcapint_filter_with_aux_data in bpf_filter.c should be initialized? If the 
> switch hits cases BPF_LD|BPF_MEM or BPF_LDX|BPF_MEM the variables A or X are 
> filled with random uninitialized data from the array. Is it the case that 
> this never happens before mem is filled with relevant data?

Only if an invalid BPF program that does a load from a memory location without 
storing something there first is used as a filter.

The BPF validator in libpcap doesn't check for that.  It would require a 
dataflow analysis, but perhaps it should check for that.


[tcpdump-workers] Re: Dropping support in tcpdump for older versions of libpcap?

2024-04-19 Thread Guy Harris
On Apr 19, 2024, at 5:49 AM, Denis Ovsienko  wrote:

> On Fri, 12 Apr 2024 18:49:05 -0700
> Guy Harris  wrote:

...

> Since tcpdump is the reference implementation of a program that uses
> libpcap, it may be a good occasion to improve the solution space such
> that other software can copy something that works well in tcpdump.  It
> is not entirely obvious the LIBPCAP_HAVE_PCAP_ macros would be worth
> the burden of maintenance, but the version macros should be a
> straightforward improvement, something such as:
> 
> #define PCAP_VERSION_MAJOR 1
> #define PCAP_VERSION_MINOR 11
> #define PCAP_VERSION_PATCHLEVEL 0
> #define PCAP_VERSION_AT_LEAST(a, b, c) ...
> 
> (The GCC and Clang version checks in compiler-tests.h would be examples
> of a good macro structure; Sun C, XL C and HP C version checks look
> unwieldy and error-prone).

Presumably meaning that we should export version information in the way GCC and 
Clang do, rather than in the ways that Sun/Oracle C, XL C, and HP C do; the 
latter are why we have to go through all that extra pain, as they provide a 
single #define with the version number components packed into it - or, in XL 
C's case, two different #defines in different versions - rather than separate 
#defines for major and minor versions, as GCC and Clang do.

> There could be a run-time check as well:
> 
> extern int pcap_version_at_least (unsigned char major, unsigned char
> minor, unsigned char patchlevel);

So how would that be used?

If a program is dynamically linked with libpcap, and includes calls to routines 
that were added in libpcap 1.12 or later, if you try to run it with libpcap 
1.11, the run-time linker will fail to load it, as some symbols requested by 
the executable won't be present in the library. The only OS on which this can 
be made to work is macOS, with its weak linking mechanism:


https://developer.apple.com/library/archive/documentation/DeveloperTools/Conceptual/DynamicLibraries/100-Articles/DynamicLibraryDesignGuidelines.html

although Apple didn't set up the header file to weakly link symbols until Xcode 
15, and Sonoma was the first release built with Xcode 15, so I think the first 
OS in which you can arrange that the run-time linker not fail would be Sonoma.

With the macOS scheme, there are *run-time* checks for the OS version you're 
running on, although Apple have done rather a crap job of documenting the 
mechanism, especially as used in C and C++ code, where it's done with the 
__builtin_available() pseudo-function, e.g.:

    if (__builtin_available(macOS 10.12, *)) {
        struct timespec value;
        if (clock_gettime(CLOCK_REALTIME, &value) == 0) {
            printf("Realtime seconds: %ld\n", value.tv_sec);
        }
    } else {
        // clock_gettime not available!
        return 1;
    }

as per

https://epir.at/2019/10/30/api-availability-and-target-conditionals/

If you *don't* do the check, you'll get, I think, a run-time failure if you try 
to call a routine that's not available; there's a compiler option, 
-Wunguarded-availability, to produce a warning if you make a call to a routine 
that's not available on the minimum-targeted OS version.

16-bit Windows also supported that - in the same way that macOS used to do it, 
with "the pointer to the function is NULL if a weakly-linked symbol isn't 
available" - but they decided that was too ugly, and got rid of it in 32-bit 
and 64-bit Windows:

https://devblogs.microsoft.com/oldnewthing/20160317-00/?p=93173

Apple probably also decided it was too ugly, and added __builtin_available() 
(and Objective-C @available, and something similar for Swift) as well as the 
compiler warning.

The Microsoft blog post indicates how you do this in Windows, namely by loading 
the library at run time with LoadLibrary() and attempting to get pointers to 
individual routines with GetProcAddress() and testing if the result is NULL; 
the same thing can be done on UN*Xes with dlopen() and dlsym().

But *all* of those require either run-time checks for a particular OS version 
in macOS, in cases where you're using the libpcap that comes with macOS, or 
require loading the library at run time, finding particular routines at run 
time, and checking at run time whether the routine was found.

> The latter could be available via a build helper binary, such as (using
> the binary operators from test(1) and version-aware comparison):
> 
> pcap-version -ge 1 # same as 1 0 0
> pcap-version -ge 1 10 # same as 1 10 0
> pcap-version -ne 1 10 4
> pcap-version -eq 1 10 4
> pcap-version -ge 1 9 1 && pcap-version -le 1 9 3

So would this be used in a Makefile/configure script/CMakeLists.txt/etc. to 
check whether the libpcap on the system is sufficiently recent to include the 
routines your program needs, and fail if it isn't?

>> Is there any reason not to requir

[tcpdump-workers] Re: RadioTap Parsing as seperate library

2024-04-15 Thread Guy Harris
On Apr 15, 2024, at 3:47 PM, Ravi chandra  wrote:

> I am planning to create an ieee 802.11 packet RadioTap parsing
> code/library [offlines processing of pcap-ng files. Decoding each and
> every field and write it to a .csv file].

If that's all you're doing, is there some reason why you don't just use TShark 
and do

tshark -T fields -E separator=, -E quote=d -e {radiotap field} -e 
{another radiotap field} ...

> Meanwhile, before asking [did my homework] of going through source
> code and found the following.
> 
> [1] Compared to the Wireshark library, RadioTap library files

By "Radiotap library files" do you mean this library:

https://github.com/radiotap/radiotap-library

> are NOT updated in the radiotap-library.

What do you mean by "NOT updated"?  Do you mean that the recent commits haven't 
significantly changed the library?  If so, maybe there's not much that needs 
changing.

> [2] I see RadioTap headers/files/parsing functions have additional
> arguments [which are specific to wireshark]. In other words, there is
> NO direct way to call RadioTap headers easily to integrate with
> libpcap_open_offline and pcap_next.

Note that tcpdump has its own code to parse radiotap headers, and that code 
doesn't use the Radiotap library.


[tcpdump-workers] Dropping support in tcpdump for older versions of libpcap?

2024-04-12 Thread Guy Harris
A while ago, tcpdump and its configuration script were modified - mainly by 
Bill Fenner, as I remember - so that it didn't require a contemporary version 
of libpcap, and could be built with older versions of libpcap.

The intent, as I remember, was to allow somebody who was using a system that 
provided both libpcap and tcpdump to build a more recent version of tcpdump 
without having to download and build a newer version of libpcap.

Currently, at least in theory, we support versions of libpcap at least as old 
as 0.4, which was the last version released by LBL.

tcpdump, for example, supports versions of libpcap that don't include 
pcap_findalldevs(); that routine first appeared in libpcap 0.7, which was 
released in 2001, almost 23 years ago.

It also supports versions of libpcap that don't include pcap_create() and 
pcap_activate(); those first appeared in libpcap 1.0, which was released in 
2008, almost 16 years ago.

Is there any reason not to require libpcap 1.0 or later?  If there is, is there 
any reason not to require libpcap 0.7 or later?


[tcpdump-workers] Re: openwrt Conclusions from CVE-2024-3094 (libxz disaster)

2024-04-01 Thread Guy Harris
On Apr 1, 2024, at 6:53 AM, Michael Richardson  wrote:

> I wonder if we should nuke our own make tarball system.

I.e., replace:

to get {libpcap,tcpdump,tcpslice} version X.Y.Z, download 
{libpcap,tcpdump,tcpslice}-X.Y.Z.tar.{compression-suffix}

with

to get {libpcap,tcpdump,tcpslice} version X.Y.Z, do

git clone {repository}

and then check out Git tag {libpcap,tcpdump,tcpslice}-X.Y.Z?

If so, do we

1) require people to have autotools installed and run ./autogen.sh

or

2) generate the configure scripts on some standard platform and check 
it in

so that they have a configure script?  Or is there some other way to arrange 
that people can get the configure scripts?


[tcpdump-workers] Re: SIGINFO/SIGUSR1 and SIGUSR2

2024-03-28 Thread Guy Harris
On Mar 28, 2024, at 2:19 PM, Denis Ovsienko  wrote:

> Yes, AIX and Haiku sometimes make portability issues manifest.

And, in this case, Solaris doesn't have SIGINFO, either; SunOS 0.x-4.x didn't 
have it, as BSD hadn't picked it up, and they didn't pass it along to be put 
into SVR4, so it's not in the SVR4-based SunOS 5.x.

As noted, neither does Linux.

I.e., at this point, if it's not named "somethingBSD" or "Mac OS X/OS X/macOS", 
it doesn't have SIGINFO.

> Changing the compiled-in defaults would be one thing, and given how long
> ago the current behaviour was implemented, it would be best to think
> twice before changing it.  There are users with learned keystrokes and
> scripts that work, let's keep it this way when possible.

The only change I'm suggesting to the compiled-in defaults is to change the 
default for SIGUSR1 from the current default of "print_stats if the system 
doesn't have SIGINFO, kill the process if it doesn't" to "print_stats 
regardless of whether the system has SIGINFO"; neither the default for SIGINFO 
(print_status if the system has it) nor the default for SIGUSR2 
(flush_savefile) would be changed.

I don't see a way in which any remotely reasonable learned keystroke or script 
would depend on SIGUSR1 killing the process on *BSD/macOS, so I don't see an 
issue with SIGINFO *and* SIGUSR1 both causing stats to be printed.

> Allowing to override the defaults at run time

Which is what I was talking about there.


[tcpdump-workers] Re: SIGINFO/SIGUSR1 and SIGUSR2

2024-03-28 Thread Guy Harris
On Mar 28, 2024, at 3:01 AM, Denis Ovsienko  wrote:

> There is a rather old pull request at [1], which was supposed to make
> use of the then-unused SIGUSR2, but whilst it was waiting, another pull
> request used the signal for another code path.
> 
> There is a potential way to manage this kind of contention by
> naming the available actions and disassociating them from the available
> signals.  For example, let's call the existing SIGINFO/SIGUSR1 action
> "print_stats", the existing SIGUSR2 action -- "flush_savefile" and the
> action proposed in the pull request -- "rotate_savefile".  Perhaps an
> easy action would be "ignore" to do nothing instead of the default
> something.  Then these command-line options would allow to associate
> the signals with the actions:
> 
> [--siginfo=] (on platforms with SIGINFO only)

SIGUSR1 and SIGUSR2 are required by POSIX, so all UN*Xes should and, as far as 
I know, do support them.  They were introduced in System III (they may have been 
introduced inside AT&T prior to that, but the first release with SIGUSR1 and 
SIGUSR2 that was made generally available was System III).  It looks as if they 
might first have appeared on the BSD side of the fence in 4.3BSD.

SIGINFO was a later addition, not in 4.3BSD.  It appears to have been in 
4.3-Reno, along with SIGUSR1 and SIGUSR2:


https://man.freebsd.org/cgi/man.cgi?query=sigvec&apropos=0&sektion=0&manpath=4.3BSD+Reno&arch=default&format=html

so I suspect few, if any, UN*Xes have, or had, SIGINFO without also having 
SIGUSR1 and SIGUSR2.

As for "not UN*X but tries hard to look like it", Haiku has SIGUSR1 and SIGUSR2 
but not SIGINFO.

SIGINFO is largely a BSDism, not adopted by Linux or System V Release 4 (which 
may have come out before *BSD* added it) or Haiku or AIX:


https://www.ibm.com/docs/en/aix/7.3?topic=s-sigaction-sigvec-signal-subroutine

etc..

So I wouldn't worry about platforms that only have SIGINFO; given that, on the 
platforms that offer it (BSDs, including CupertinoBSD), it's defined to mean 
"give me a status report" - unlike SIGUSR1 and SIGUSR2, which are explicitly 
defined *not* to have a standard meaning, leaving it up to the application to 
choose how to use it - I wouldn't bother with a --siginfo option.

Instead, we could have SIGUSR1 default to "print statistics" even on systems 
that *have* SIGINFO, continue to have SIGUSR2 default to "flush the savefile", 
and allow --sigusr1= and --sigusr2= to reassign either of those to 
"flush_savefile" or "rotate_savefile".  That means you can't, on platforms 
without SIGINFO, have "print_stats", "flush_savefile", and "rotate_savefile" 
signals, but that's because you don't have three signals to reassign.  On 
platforms *with* SIGINFO, you can use the other two for "flush_savefile" and 
"rotate_savefile".


[tcpdump-workers] Re: HP-UX support and portability

2024-03-12 Thread Guy Harris
On Mar 12, 2024, at 2:07 PM, Rick Jones via tcpdump-workers 
 wrote:

> If https://en.wikipedia.org/wiki/HP-UX#Version_history is any indication,
> there are ~21 months left on HP's (er, sorry, HPE's) own support for HP-UX.

As far as I know, now that Itania are no longer being manufactured and shipped, 
and given that HPE haven't, as far as I know, shown any sign of plans to port 
HP-UX to x86-64, the future is something like "no more HP-UX, just the ability 
to run HP-UX Itanium binaries on x86-64 Linux with binary-to-binary translation 
and either HP-UX system call emulation or HP-UX shared library call emulation".

I can't find much to indicate the details of the strategy, except that it 
involves "Linux containers" in some fashion; if one of those particular "Linux 
containers" won't run native Linux/x86-64 applications and emulated 
HP-UX/Itanium apps in parallel, maybe there'd be some demand for the HP-UX 
tcpdump running in a container; otherwise, running a Linux tcpdump using Linux 
libpcap would probably be the future.


[tcpdump-workers] Re: Sharing code between print-icmp.c and print-icmp6.c

2024-02-24 Thread Guy Harris
On Feb 5, 2024, at 9:38 AM, Bill Fenner  wrote:

> Is this a reasonable way to proceed?

Yes.

Perhaps have a file icmp-common.c or print-icmp-common.c with code and data 
structures common to ICMP(v4) and ICMPv6?


[tcpdump-workers] Test

2024-02-24 Thread Guy Harris
Is the list working?


[tcpdump-workers] Re: Link Layer Type Request NETANALYZER_NG

2023-12-30 Thread Guy Harris
On Mar 8, 2021, at 12:07 AM, Jan Adam via tcpdump-workers 
 wrote:

> We have created a public document on our website You can point to for the 
> description.
> 
> Here is the link:  https://kb.hilscher.com/x/brDJBw
> 
> It contains a more detailed description of the fields in the footer structure.
> It also contains a C-like structure definition of the footer.

As of 2023-12-30, that link pops up a login page at 
https://kb.hilscher.com/sslvpn_logon.shtml.

Furthermore, web.archive.org does not have an archived version of that page.

Is there a publicly-available version of that description to which we can 
point, or from which we can make our own copy of the description so that we can 
put it on our website?  (I'd prefer the former, as it allows you to update the 
description if new features are added.)

[tcpdump-workers] Removing untested libpcap support for older platforms

2023-10-05 Thread Guy Harris
The MS-DOS support in libpcap was recently removed in the main branch; the 
comment for the pull request is "MSDOS packet driver interface is no longer 
testable", and the comment for the first commit is "MSDOS packet driver 
interface is no longer testable, anyone needs it can use a previous version".

We've also removed support for older versions of the Linux kernel in recent 
releases.

We now also require C99 support by the compiler and library.

Should we also consider removing support for some older UN*X platforms, such as:

SunOS prior to SunOS 4 - pcap-nit.c; the last such version, SunOS 3.5, 
was released in January 1988

SunOS 4.x - pcap-snit.c; the last such version, SunOS 4.1.5, was 
released in November 1994

HP-UX 9 - some code in pcap-dlpi.c; the last such version, 9.10, was 
released in early-to-mid 1995

HP-UX 10 prior to 10.20 - some code in pcap-dlpi.c; the only such 
version, 10.0, was released in 1995

SINIX - some code in pcap-dlpi.c; the last release, Reliant UNIX 
5.45, was released some time in the early 2000s (?)

IRIX - pcap-snoop.c; the last update was released some time in the 
mid-to-late 2000s (?)

DEC OSF/1^W^WDigital UNIX^WTru64 UNIX - pcap-pf.c; the last release, 
Tru64 UNIX 5.1B-6, was released in October 2010

(release information from that famously reliable site, Wikipedia:


https://www.theonion.com/wikipedia-celebrates-750-years-of-american-independence-1819568571

and some Google searching).

We don't have any buildbots doing tests of those platforms, and I don't know 
when the last test of whether libpcap will compile or work on those platforms 
was done.

However, there may well be people using them - pull request #1203:

https://github.com/the-tcpdump-group/libpcap/pull/1203

was a change to fix libpcap so that it builds on Mac OS X 10.4, the last 
version of which, 10.5.8, was released on August 13, 2009.

So the questions are:

1) Which of these have a significant user base?

2) Which of them have a user base willing to provide a buildbot so that 
we check for code rot?

(Note that users in group 1) but not in group 2) are at risk of code rot either 
rendering libpcap unbuildable or making the result of the compile not work 
correctly.)


[tcpdump-workers] Test 2

2023-10-05 Thread Guy Harris
Another test of subscribing via the Web


[tcpdump-workers] This is a test - ignore

2023-10-05 Thread Guy Harris
Testing to see whether subscribing worked.


[tcpdump-workers] Re: libpcap : An entry in the manual about multithreading

2023-05-11 Thread Guy Harris
On May 7, 2023, at 9:27 AM, Michael Richardson  wrote:

> Frederick Virchanza Gotham  wrote:
>> I think that there should be a page in the libpcap manual that
>> explicitly explains the multithreading capabilities and limitations.
> 
> okay, that sounds reasonable.
> git clone  https://github.com/the-tcpdump-group/tcpdump-htdocs

Either a separate page or something in the main pcap(3PCAP) page.

>> Some libraries have an entry in the manual stating that the library is
>> not threadsafe at all. Nine times out of ten, you're safe to use these
>> libraries from multiple threads so long as you use an exclusive lock.
> 
> I don't think that libpcap has been tested in this way.
> I think it would work, and I don't think we use any thread local storage.

We do now, as of


https://github.com/the-tcpdump-group/libpcap/commit/b10eefd47f979a339aaeb247bf47cc333aa7ba91

which was done in order to fix a case where some libpcap routines weren't 
thread-safe.

>> Some other libraries are thread-safe but the manual states that the
>> handle returned from the 'open' function can only be used by one
>> thread.
> 
> This is where libpcap is.

Yes.  We do not support using a pcap_t handle from multiple threads without the 
caller using a mutex, except for calling pcap_breakloop() from a thread other 
than the thread using the pcap_t.

>> I remember seeing a page somewhere on the internet that read something
>> along the lines of: "The libpcap library is thread-safe, i.e. multiple
>> threads can call libpcap functions concurrently. However once you've
>> obtained a handle from pcap_open_live, that handle can only be used
>> exclusively by one thread -- with the exception of the pcap_breakloop
>> function".
> 
> I don't think we ever said this.

We have never stated that in the documentation, but I may have answered a 
question on a Q site with such an answer.

That's certainly our *policy*, at least as of making the filter compiler 
reentrant; prior to doing so, pcap_compile() was known to be very much not 
thread-safe.  To fix that, I changed it to use a reentrant parser and scanner 
(which is why it now requires Bison or Berkeley YACC, and a sufficiently recent 
version of Flex), so different threads should be able to compile pcap filters 
in parallel.

However, it was later discovered that we'd missed 
pcap_datalink_val_to_description_or_dlt(), pcap_statustostr(), bpf_image(), and 
pcap_next_etherent(), all of which work around C's tragic low level of support 
for strings by formatting into a static buffer and returning a pointer to that, 
so that the caller doesn't have to free the string when it's no longer needed: 

https://github.com/the-tcpdump-group/libpcap/issues/1174

which was fixed by the aforementioned commit.

>> I think the debug build of libpcap should have runtime asserts to
>> ensure that the same thread is always operating on any given pcap_t*
>> handle. For example there could be a global map of pcap_t* handles to
>> thread ID's, something like:
> 
>>struct Mapping { pcap_t *handle; pthread_t thread_id; };
> 
>>Mapping mappings[32u];
> 
> I could tolerate this.

Except, of course, that pcap_breakloop() shouldn't do that check, as noted 
above.

That would certainly test the thread safety of code *using* libpcap, as long as 
they're testing with the debug build of libpcap, and as long as this is a 
platform for which "release" and "debug" builds are provided.  Unfortunately, 
the only platform I know of where that's a common notion is Windows, unless 
I've missed something in the lands of Apple, Linux, *BSD, Solaris, etc. 
developers, and I don't know whether Npcap provides separate release and debug 
builds.

It wouldn't test the thread safety of libpcap *itself*, however.  That would 
require tests of the sort done by the people who submitted the bug mentioned 
above.


[tcpdump-workers] Re: Pcap debug at runtime

2023-02-20 Thread Guy Harris
On Feb 20, 2023, at 12:15 PM, Paschal Chukwuebuk Amusuo  
wrote:

> Please, is there a way to print out debug statements at runtime when using 
> pcap?

Debug statements in your program?  Add printf() or fprintf(stderr, ...) or... 
calls to your program.

Debug statements in libpcap?  Get the libpcap source, add printf() or 
fprintf(stderr, ...) or... calls to it, build it, install it, and compile your 
program with it.


[tcpdump-workers] Re: [tcpdump] About struct in_addr / struct in6_addr

2023-02-20 Thread Guy Harris
On Feb 20, 2023, at 2:20 AM, Guy Harris  wrote:

> So the code is correct, but could easily be misinterpreted.  Perhaps it'd be 
> better if we used the values from af.h rather than using AF_INET and 
> AF_INET6.  

Done in 0dc32a024773968cb1ae00729758e61b7418564a

I'll see whether anything else uses numbers rather than AFNUM_ values.


[tcpdump-workers] Re: [tcpdump] About struct in_addr / struct in6_addr

2023-02-20 Thread Guy Harris
On Feb 20, 2023, at 12:17 AM, Denis Ovsienko  wrote:

> AF_INET6 looks a bit more convoluted.  There is some code that uses
> AF_INET6 to dissect wire encoding, which is usually a wrong idea.  For
> example, pimv2_addr_print() switches on AF_INET and AF_INET6, and the
> PIMv2 header encoding (RFC 4601 Section 4.9.1) clearly says the AF is
> the IANA AF [1]:
> 
> 1: IP
> 2: IP6

And RFC 7761:

https://www.rfc-editor.org/rfc/rfc7761#section-4.9

says the same thing.

> Which is different from most OS definitions of AF_INET and AF_INET6,
> but this function has been implemented this way since 1999, and somehow
> it seems to be able to decode a few PIMv2 packet captures I found on
> the Internet.  So cases like this will require more attention and some
> of the remaining AF_INET6 instances may become wire encoding constants
> rather than the OS AF_INET6 constant.

That's handled by the code at the beginning of pimv2_addr_print():

	if (addr_len == 0) {
		if (len < 2)
			goto trunc;
		switch (GET_U_1(bp)) {
		case 1:
			af = AF_INET;
			addr_len = (u_int)sizeof(nd_ipv4);
			break;
		case 2:
			af = AF_INET6;
			addr_len = (u_int)sizeof(nd_ipv6);
			break;
		default:
			return -1;
		}
		if (GET_U_1(bp + 1) != 0)
			return -1;
		hdrlen = 2;
	} else {
		switch (addr_len) {
		case sizeof(nd_ipv4):
			af = AF_INET;
			break;
		case sizeof(nd_ipv6):
			af = AF_INET6;
			break;
		default:
			return -1;
			break;
		}
		hdrlen = 0;
	}

so, after that code, af is either AF_INET for IPv4 addresses or AF_INET6 for 
IPv6 addresses, and af is what's tested against those two values.

So the code is correct, but could easily be misinterpreted.  Perhaps it'd be 
better if we used the values from af.h rather than using AF_INET and AF_INET6.  
(And perhaps the values from af.h should be renamed AFNUM_IPv4 and AFNUM_IPv6, 
to make them look even less like socket API AF_ values.)


[tcpdump-workers] Re: [tcpdump] About struct in_addr / struct in6_addr

2023-02-18 Thread Guy Harris
On Feb 18, 2023, at 10:27 AM, Denis Ovsienko  wrote:

> OS IPv6 support would be a very reasonable requirement for tcpdump 5.

Which would, among other things, let us remove the tests for various add-on 
IPv6 stacks in configure.ac.


Re: [tcpdump-workers] AC_LBL_FIXINCLUDES does not make it into configure

2023-02-10 Thread Guy Harris via tcpdump-workers
On Jan 27, 2023, at 4:53 AM, Denis Ovsienko  wrote:

> On Fri, 27 Jan 2023 01:40:48 -0800
> Guy Harris  wrote:
> 
>> *don't* support C99 inline?  If not, we could just remove the call
>> from configure.ac and the definition from aclocal.m4.
> 
> In 2002 in commit b1263c6 you wrote it was some HP C compiler that
> Autoconf 2.13 could not drive.  I have never seen HP-UX in the wild, but
> assuming the amount of improvement made in Autoconf during the
> subsequent 10 years (Autoconf 2.69 is from 2012) and the amount of
> improvement made in HP-UX (which had the most recent release in 2022),
> to me it would make the most sense to say the problem AC_LBL_C_INLINE
> solved (HP C specifics) no longer exists unless proven otherwise, and
> AC_LBL_C_INLINE should be removed with a good change log entry.

That commit was

commit b1263c69c58e58e326997ec8b2db81d6e3666bc6
Author: Guy Harris 
Date:   Fri Jun 28 10:45:40 2002 +

Some versions of the HP C compiler can handle inlines, but not if they
return a structure pointer.  Check whether the C compiler can handle
inline functions that return a structure pointer, not whether they can
handle inline functions that return an int, as at least some versions of
autoconf's AC_C_INLINE do.

I presume that, given the increased use of, and thus demand for, inline as a 
keyword in C, HP eventually fixed the problem (I *think* the problem was that 
the compiler rejected code that inlined structure-pointer-returning functions, 
rather than generating bad code for it).  (If not, maybe the autoconf 
developers added a check for that.)

If anybody still has such a problem, they have my sympathy, just as people 
stuck with, say, compilers that don't support function prototypes do, but I'm 
not sure they should have support for building tcpdump (or libpcap or tcpslice) 
on their machine with their current compiler.

As far as I'm concerned, replacing it with AC_C_INLINE, as you did, is the 
right thing to do, at least for now.  We could consider removing it in the 
future, given that we require C99 and C99 has inline as a keyword.
___
tcpdump-workers mailing list
tcpdump-workers@lists.tcpdump.org
https://lists.sandelman.ca/mailman/listinfo/tcpdump-workers


Re: [tcpdump-workers] Pcap delivers packets every 200ms

2023-02-02 Thread Guy Harris via tcpdump-workers
On Feb 2, 2023, at 7:42 AM, Paschal Chukwuebuk Amusuo via tcpdump-workers 
 wrote:

> Please, is there any way to force pcap to deliver packets once it receives 
> the packet?
> Currently, pcap delivers packets to my application at intervals and it 
> batches the packets before delivering them. There are substantial time 
> differences between when the packet is received by pcap and when it is 
> finally delivered by the application.

pcap does not itself buffer packets.  Packet capture mechanisms, such as 
PF_PACKET sockets in memory-mapped mode on Linux, BPF devices on 
macOS/*BSD/AIX/Solaris 11, and NPF for Windows, do the buffering.

This is intentional; it's done to reduce the overhead of per-packet capture by:

doing only one wakeup per batch of packets rather than per packet;

if the mechanism copies from the kernel to user space, doing one copy 
per batch of packets rather than per packet;

packing multiple packets into a single chunk of the buffer.

The buffering has a timeout, so that packets don't have to wait for a buffer to 
fill up before being delivered to userland code such as libpcap.  Libpcap 
allows the application to choose the timeout.

See the "packet buffer timeout" section of the main pcap man page:

https://www.tcpdump.org/manpages/pcap.3pcap.html

> In the screenshot I attached, 6 packets were received within 400ms but all 
> delivered at the same time.

That's probably because your application has requested a 400ms timeout in a 
call to pcap_open_live() or pcap_set_timeout() by passing 400 as the timeout 
value (which is in milliseconds).  You can either 1) choose a shorter timeout 
or 2) use immediate mode, as per Denis's message.


Re: [tcpdump-workers] CPPFLAGS in C-only context

2023-01-28 Thread Guy Harris via tcpdump-workers
On Jan 28, 2023, at 2:01 PM, Denis Ovsienko via tcpdump-workers 
 wrote:

> On Mon, 23 Jan 2023 22:16:24 +
> Denis Ovsienko via tcpdump-workers 
> wrote:
> 
>> It looks like either in a C project context CPPFLAGS works in a
>> non-obvious way, or is a no-op.
> 
> ...or, rather, is the C preprocessor flags variable (just as
> "./configure --help" says it), and C++ compiler flags variable has
> always been CXXFLAGS.

Unfortunately, "+" is not a letter in the Roman alphabet, so C++ causes some 
naming problems.


Re: [tcpdump-workers] AC_LBL_FIXINCLUDES does not make it into configure

2023-01-27 Thread Guy Harris via tcpdump-workers
On Jan 22, 2023, at 9:59 AM, Denis Ovsienko via tcpdump-workers 
 wrote:

> I have also removed AC_LBL_C_INLINE and a conditional substitute for
> Tru64 pfopen() from tcpslice.  Interestingly, tcpslice and tcpdump,
> which don't call pfopen(), used to have this substitute, and libpcap,
> which does call pfopen(), does not have it.

That dates back to tcpdump 3.4.  I don't know why they decided to compile 
pfopen() into tcpdump and tcpdslice if it's not in a system library, rather 
than compiling it into libpcap if it's not in a system library.  Perhaps they 
wanted to be able to build versions of libpcap that would work both on Tru64 
UNIX versions in which pfopen() is in a system library and versions in which 
it's not and all you have is pfopen.c source code under /usr/examples.

I don't know what older versions those might be, and I suspect we have little 
if any reason to continue to make it possible to build tcpdump or tcpslice on 
those older versions - it looks as if Tru64 UNIX 4.x and 5.x have pfopen() in 
system libraries; according to

https://en.wikipedia.org/wiki/Tru64_UNIX

4.0A through 4.0F all date back to the previous millennium.

> In tcpdump it is a bit more entrenched, so I did not touch it yet.

It looks as if you removed the pfopen() stuff from tcpdump's configure script 
in 43670fb635503e69cdbf8055134a0befb94d2e15.

The AC_LBL_C_INLINE stuff is still there, but doesn't look *that* entrenched; 
are there any compilers that we need to support and that *don't* support C99 
inline?  If not, we could just remove the call from configure.ac and the 
definition from aclocal.m4.


Re: [tcpdump-workers] [OPSAWG] I-D Action: draft-ietf-opsawg-pcapng-00.txt

2023-01-24 Thread Guy Harris via tcpdump-workers
On Jan 24, 2023, at 2:02 PM, Michael Richardson  wrote:

> With this document adoption, we finally have all the PCAP related documents
> in the DT.   One thing that was mentioned to me is that the PCAPNG document
> has an IANA Registry for Block Type Codes.
> 
> The document is going through the WG as Informational, and I think that it
> *can* create registries. (Whereas, had it gone via ISE, it could not)
> I'm not sure if it *should* though, and the recommendation was that we move
> this section from pcapng to pcaplink types.

Which section is that?

> Allocation of the current types can stay in pcapng though.

"Stay in pcapng" as in "stay in the pcapng I-D", or something else?

> Either way, the IANA Considerations need to be adjusted, as they recommend a
> PR on github right now.

Would the new recommendation be a email to the opsawg list?

> I still hate the name PCAP *NG*, and I wish we could call it PCAPv2 instead.

pcap is already up to major version 2, with the current version being 2.4.

If we're going to rename it, perhaps we should rename it to something not 
including "pcap", as you can have a useful "pcapng" file with no "p"s - 
"packets" - in it.


Re: [tcpdump-workers] AC_LBL_FIXINCLUDES does not make it into configure

2023-01-19 Thread Guy Harris via tcpdump-workers
On Jan 19, 2023, at 3:20 PM, Denis Ovsienko  wrote:

> * AC_LBL_SSLEAY -- is there anything useful to take from here?

No, it's been replaced by the "Check for OpenSSL/libressl libcrypto" code in 
configure.ac.


Re: [tcpdump-workers] AC_LBL_FIXINCLUDES does not make it into configure

2023-01-18 Thread Guy Harris via tcpdump-workers
On Jan 18, 2023, at 3:32 PM, Denis Ovsienko via tcpdump-workers 
 wrote:

> As it turns out, there is another unused macro (AC_LBL_HAVE_RUN_PATH),
> tcpslice became the first to lose this luggage.

Unused in libpcap back to 0.4 and tcpdump back to 3.4, so it may be another one 
used in some LBL projects but not libpcap or tcpdump.


Re: [tcpdump-workers] AC_LBL_FIXINCLUDES does not make it into configure

2023-01-18 Thread Guy Harris via tcpdump-workers
On Jan 18, 2023, at 1:07 AM, Denis Ovsienko  wrote:

> Thank you for explaining the context Guy, it is very educational.

A significant part of what's in autoconf, and a significant part of what's in 
at least some configure scripts, dates back to old UN*Xes.

ISO C and POSIX have, over time, rendered a lot of old-time tests unnecessary 
except for hobbyists and ancient "if it ain't broke don't fix it" systems:


https://www.theregister.com/2001/04/12/missing_novell_server_discovered_after/

(although that one was Netware rather than UN*X, there may be old UN*X versions 
running on old hardware still out there).

> Is AC_LBL_UNION_WAIT of a similar origin?

Probably.

>  Neither tcpdump nor libpcap use it.

I think BSD's "union wait" has been supplanted by various POSIX-specified 
macros to pull apart an exit status stored in an int, and, in the 3.4/0.4 
timeframe, I don't think tcpdump or libpcap had any code to wait for a child 
process, and thus didn't need AC_LBL_UNION_WAIT.


Re: [tcpdump-workers] AC_LBL_FIXINCLUDES does not make it into configure

2023-01-17 Thread Guy Harris via tcpdump-workers
On Jan 17, 2023, at 3:13 PM, Denis Ovsienko via tcpdump-workers 
 wrote:

> In tcpdump commit cee234c there are three messages changed in
> aclocal.m4, but only two messages changed in the resulting configure
> script.  After a brief look it is clear that it is the third message
> (the one in AC_LBL_FIXINCLUDES) that does not make it to the script,
> but I don't understand whether this means a bug or just some dead code.

AC_LBL_FIXINCLUDES is defined in aclocal.m4 for tcpdump, but isn't used in 
configure.ac for tcpdump.

This appears to date back to tcpdump 3.4 (the last LBL release).

So that code is, for tcpdump, not only merely dead, it's really most sincerely 
dead.

I think aclocal.m4 may have started out as a library of LBL autoconf macros; 
they may have copied them to both the tcpdump and libpcap releases, bringing 
along macros not used by both packages (maybe some not used by either package). 
 They were identical in libpcap 0.4 and tcpdump 3.4.

They've since moved apart; given that it has not been used by tcpdump at least 
since 3.4, we can probably just remove it from tcpdump's aclocal.m4.

And perhaps we could remove it, and the call to it, from libpcap's configure 
script as well, as it may only be needed if you're configuring libpcap to build 
on and for SunOS 3 and SunOS 4.  If you're curious what AC_LBL_FIXINCLUDES is 
for, and why most platforms don't need it, continue reading.

What AC_LBL_FIXINCLUDES does is "if using gcc, make sure we have ANSI ioctl 
definitions".

Back when ANSI C compilers were rare, system header files back in the 
mid-to-late 1980's had to work with pre-C89 compilers.

In V7 UNIX, ioctl codes were of the form (('c' << 8) | code), where 'c' was an 
octet value, normally a printable character, indicating the type of object 
(typically, a device) for which the ioctl was intended, and code was a 
numerical value specifying a particular code.  It was up to the individual 
ioctl call in the kernel to move the argument from or to userland.  V7 was 
originally on a PDP-11, where an int is 16 bits, and an ioctl code was an int, 
so that ate up all 16 bits.

4.2BSD, which ran on VAXes where an int is 32 bits, expanded the ioctl codes to 
include an indication of whether the argument should be copied into the kernel, 
copied out of the kernel, copied into the kernel and copied back out of the 
kernel, or left up to the ioctl handler to process, and the size of the 
argument in bytes.  They introduced some macros:

_IO() - for ioctls where the kernel does no copying;

_IOR() - for ioctls where the kernel copies data out ("R" for "read", 
i.e., the kernel provides data to userland);

_IOW() - for ioctls where the kernel copies data in ("W" for "write", 
i.e., data is provide to the kernel by userland);

_IOWR() - for ioctls where the kernel copies data in and back out.

The 4.2BSD definitions for those are:

#define _IO(x,y)        (IOC_VOID|('x'<<8)|y)
#define _IOR(x,y,t)     (IOC_OUT|((sizeof(t)&IOCPARM_MASK)<<16)|('x'<<8)|y)
#define _IOW(x,y,t)     (IOC_IN|((sizeof(t)&IOCPARM_MASK)<<16)|('x'<<8)|y)
/* this should be _IORW, but stdio got there first */
#define _IOWR(x,y,t)    (IOC_INOUT|((sizeof(t)&IOCPARM_MASK)<<16)|('x'<<8)|y)

This relied on x, within 'x', being expanded by the C preprocessor; you'd write 
an ioctl as something such as

#define TIOCGETD        _IOR(t, 0, int)         /* get line discipline */

As I remember, ANSI C indicated that it should *not* be expanded, which broke 
that; the current definitions, in systems that use BSD-style ioctls, are 
something such as the 4.4BSD definitions:

#define _IOC(inout,group,num,len) \
        (inout | ((len & IOCPARM_MASK) << 16) | ((group) << 8) | (num))
#define _IO(g,n)        _IOC(IOC_VOID,  (g), (n), 0)
#define _IOR(g,n,t)     _IOC(IOC_OUT,   (g), (n), sizeof(t))
#define _IOW(g,n,t)     _IOC(IOC_IN,    (g), (n), sizeof(t))
/* this should be _IORW, but stdio got there first */
#define _IOWR(g,n,t)    _IOC(IOC_INOUT, (g), (n), sizeof(t))

and you'd write an ioctl as something such as

#define TIOCGETD _IOR('t', 26, int) /* get line discipline */ 

(yes, they changed the code).

This meant that an ANSI C compiler for a UN*X with BSD-style ioctls defined in 
a non-ANSI-compatible style couldn't rely on the system include files' 
definitions of ioctl - it would have to have its own header files.

GCC handled this with a script called fixincludes, which tweaked 
non-ANSI-C-compatible things in header files.  The purpose of 
AC_LBL_FIXINCLUDES is to handle pre-ANSI UN*Xes with GCC, to make sure that the 
compile is getting the fixincludes-modified header files.  It tests for _IO not 
handling a first argument that's a C character constant rather than a character 
to be stuffed into a C character constant.

This was not an issue for tcpdump 3.4, as it didn't do any ioctls.


Re: [tcpdump-workers] [tcpdump] About HAVE_NO_PRINTF_Z

2023-01-12 Thread Guy Harris via tcpdump-workers
On Jan 11, 2023, at 11:06 PM, Guy Harris via tcpdump-workers 
 wrote:

> On UN*Xes, the C library is typically the system API library, so it's bundled 
> with the OS rather than the compiler, so I don't know whether this is an 
> issue of Sun C 5.9 or SunOS 5.9 (the core OS part of Solaris 9).

Solaris 9 printf() man page:

https://docs.oracle.com/cd/E19683-01/816-0213/6m6ne387j/index.html

"An optional h specifies that a following d, i, o, u, x, or X conversion 
character applies to a type short int or type unsigned short int argument (the 
argument will be promoted according to the integral promotions, and its value 
converted to type short int or unsigned short int before printing); an optional 
h specifying that a following n conversion character applies to a pointer to a 
type short int argument; an optional l (ell) specifying that a following d, i, 
o, u, x, or X conversion character applies to a type long int or unsigned long 
int argument; an optional l (ell) specifying that a following n conversion 
character applies to a pointer to a type long int argument; an optional ll (ell 
ell) specifying that a following  d, i, o, u, x, or X conversion character 
applies to a type long long or unsigned long long argument; an optional ll (ell 
ell) specifying that a following n conversion character applies to a pointer to 
a long long argument; or an optional L specifying that a following e, E, f, g, 
or G conversion character applies to a type long double argument. If an h, l, 
ll, or L appears with any other conversion character, the behavior is 
undefined."

No mention of z.

Solaris 10 printf() man page:

https://docs.oracle.com/cd/E19253-01/816-5168/6mbb3hrj1/index.html

"Length Modifiers

The length modifiers and their meanings are:

...

z

Specifies that a following d, i, o, u, x, or X conversion 
specifier applies to a size_t or the corresponding signed integer type 
argument; or that a following n conversion specifier applies to a pointer to a 
signed integer type corresponding to size_t argument."

So I suspect it's more like "C on Solaris 9 only supports %z if the compiler 
includes a library with a printf family that supports it and the compiler 
driver causes programs to be linked with that library before -lc; C on Solaris 
10 and later supports %z even if the compiler relies on the system library for 
printf-family functions".

I don't know whether any C99-supporting versions of GCC, when built on Solaris 
9 for Solaris 9, provides their own printf-family functions with %z support.  
The GCC 4.6.4 manual says, in section 2 "Language Standards Supported by GCC":

https://gcc.gnu.org/onlinedocs/gcc-4.6.4/gcc/Standards.html#Standards

in subsection 2.1 "C language":

The ISO C standard defines (in clause 4) two classes of conforming 
implementation. A conforming hosted implementation supports the whole standard 
including all the library facilities; a conforming freestanding implementation 
is only required to provide certain library facilities: those in <float.h>, 
<limits.h>, <stdarg.h>, and <stddef.h>; since AMD1, also those in <iso646.h>; 
and in C99, also those in <stdbool.h> and <stdint.h>. In addition, complex 
types, added in C99, are not required for freestanding implementations. The 
standard also defines two environments for programs, a freestanding 
environment, required of all implementations and which may not have library 
facilities beyond those required of freestanding implementations, where the 
handling of program startup and termination are implementation-defined, and a 
hosted environment, which is not required, in which all the library facilities 
are provided and startup is through a function int main (void) or int main 
(int, char *[]). An OS kernel would be a freestanding environment; a program 
using the facilities of an operating system would normally be in a hosted 
implementation.

GCC aims towards being usable as a conforming freestanding 
implementation, or as the compiler for a conforming hosted implementation. By 
default, it will act as the compiler for a hosted implementation, defining 
__STDC_HOSTED__ as 1 and presuming that when the names of ISO C functions are 
used, they have the semantics defined in the standard. To make it act as a 
conforming freestanding implementation for a freestanding environment, use the 
option -ffreestanding; it will then define __STDC_HOSTED__ to 0 and not make 
assumptions about the meanings of function names from the standard library, 
with exceptions noted below. To build an OS kernel, you may well still need to 
make your own arrangements for linking and startup. See Options Controlling C 
Dialect.

GCC does not provide the library facilities required only of hosted 
implementations, nor yet all the facilities required by C99 of freestanding 
implementations; to use the facilities of a hosted environment, you will need 
to find them elsewhere (for example, in the GNU C library).

Re: [tcpdump-workers] [tcpdump] About HAVE_NO_PRINTF_Z

2023-01-11 Thread Guy Harris via tcpdump-workers
On Jan 11, 2023, at 10:44 PM, Francois-Xavier Le Bail via tcpdump-workers 
 wrote:

> The commit fbd44158e0d5e6bb0c9b05671f702ebcf68cc56d was:
> ---
>Mend "make check" on Solaris 9 (Autoconf only).
> 
>Sun C 5.9 does not support C99. GCC 4.6.4 recognizes -std=gnu99, but
>does not support the z length modifier in printf(3). In either case 18
>tests fail in the following manner:
> 
>< [...]: domain [length 0 < 12] (invalid)
>---
>> [...]: domain [length 0 < zu] (invalid)
> 
>Make these tests conditional and disable them when HAVE_NO_PRINTF_Z is
>defined. Modify the Autoconf leg of the build process to define the
>macro when printf() does not handle %zu as expected. The CMake leg looks
>broken on Solaris 9 with 2.8.9 now, so leave it be for now.
> ---
> 
> I think that if a compiler builds a tcpdump that outputs "zu" when it must 
> output "12", it's an error and this compiler must be tagged "unsupported".

It's probably more the library - %z is interpreted by the library at run time, 
not by the compiler at compile time (except to the extent that the compiler 
does format/argument checking).

On UN*Xes, the C library is typically the system API library, so it's bundled 
with the OS rather than the compiler, so I don't know whether this is an issue 
of Sun C 5.9 or SunOS 5.9 (the core OS part of Solaris 9).

Unfortunately, I'm not sure to what extent either autoconf's "is C99 
supported?" or CMake's "is C99 supported?" can, or does, check for library 
support, and various "I want C99" flags to the compiler may affect which 
version of the language the compiler accepts and supports but it might not 
guarantee that the library with which the program will be linked supports that 
version of the language.

But it might make sense to just say "%z" is required by current versions of 
tcpdump (and libpcap), even if the lack of that support can't be discovered 
until "zu" shows up in output, as long as we don't have to worry about older OS 
versions where printf-formatting routines are part of an OS library rather than 
a C compiler support library.


Re: [tcpdump-workers] Autoconf with Debian patches

2023-01-08 Thread Guy Harris via tcpdump-workers
On Jan 8, 2023, at 5:24 AM, Denis Ovsienko  wrote:

> Thank you for this information.  Let me add that Ubuntu 20.04 defaults
> to 2.69, but Ubuntu 22.04, FreeBSD, NetBSD, OpenBSD and OmniOS all
> currently default to Autoconf 2.71.

...and macOS doesn't ship with autoconf in the first place, so the user would 
have to install a third-party version.

The current GNU release is 2.71, and the current Homebrew release:

https://formulae.brew.sh/formula/autoconf#default

is 2.71, so...

> Would it make the most sense to make 2.71 the nominal version (especially for 
> releases), but to maintain
> backward compatibility with 2.69 for quite a while longer?

...yes.

That means people should be careful when updating the configure script, and 
might call for at least one part of the CI process to involve testing, on a 
machine with 2.69 installed, both "does it work if you just use the packaged 
configure script?" and "does it work if you get rid of configure and 
config.h.in, run autoconf to generate the script with 2.69, and use the 
resulting configure script?", to catch cases where people aren't careful.--- End Message ---


Re: [tcpdump-workers] Autoconf with Debian patches

2023-01-07 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On Jan 7, 2023, at 8:51 AM, Denis Ovsienko  wrote:

> On Fri, 6 Jan 2023 17:13:20 -0800
> Guy Harris  wrote:
> 
>> On Jan 6, 2023, at 3:31 PM, Denis Ovsienko 
>> wrote:
>> 
>>> It is the latter, and a custom Autoconf seems an unreasonable
>>> requirement for contributing.  
>> 
>> Reasonable, or unreasonable?
> 
> Unreasonable, if it is more complicated than installing an Autoconf
> package using the package manager of the OS.

Which it is likely to be.

>> (By the way, have other Linux distributions applied the same changes
>> that Debian and its derivatives have?  If not, then users of those
>> distributions would be in the same situation as macOS and FreeBSD
>> users.)
> 
> I do not remember to what extent these patches have propagated beyond
> Debian and Ubuntu.  Maybe somebody else has other distributions ready to
> check?

Fedora 36 and later appear to ship autoconf 2.71; the Debian sid package for 
autoconf 2.71 applies no patches to it, as, I presume, all of the Debian 
patches have been incorporated upstream (the off_t patch is already 
incorporated in 2.71).  Debian stable is currently shipping 2.69, which 
requires their pile of patches.

Fedora shipped autoconf 2.69, without a patch like the Debian off_t patch but 
with a patch like the Debian "add runstatedir" patch.  I don't know what RHEL 
has.

Looking at the Arch Linux repository, there doesn't appear to be a version of 
the off_t patch from when they shipped 2.69; they're currently shipping 2.71.  
The same applies to Gentoo.

But at least some of them have 2.71 patches, so there's no guarantee that all 
the releases that have 2.71 will generate exactly the same script.--- End Message ---


Re: [tcpdump-workers] Autoconf with Debian patches

2023-01-06 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On Jan 6, 2023, at 3:31 PM, Denis Ovsienko  wrote:

> It is the latter, and a custom Autoconf seems an unreasonable
> requirement for contributing.

Reasonable, or unreasonable?

Whatever version is chosen as the standard autoconf, if the goal is to have the 
version of the configure script in the repository always be generated by the 
standard autoconf, some users will have to build and install that version if 
they will be changing the configure script, and, for other contributions, 
they'll either have to build or install that version or they will have to take 
care not to check in the configure script if they haven't changed configure.ac 
or aclocal.m4.

(By the way, have other Linux distributions applied the same changes that 
Debian and its derivatives have?  If not, then users of those distributions 
would be in the same situation as macOS and FreeBSD users.)

> Or the --runstatedir and LARGE_OFF_T bits between releases could appear
> and disappear at random

Meaning we let users check in the configure script in whatever form it exists 
on their machine?

> until it is a release time, then the standard
> could be enforced as and if necessary.

I.e., part of the process of making a release would be regenerating the 
configuration file using Debian autoconf and checking in the regenerated 
version?--- End Message ---


Re: [tcpdump-workers] Autoconf with Debian patches

2023-01-06 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On Jan 6, 2023, at 2:24 PM, Denis Ovsienko  wrote:

> On Fri, 6 Jan 2023 13:25:14 -0800
> Guy Harris  wrote:
> 
>> If we switch to making Debian Autoconf the new standard and keeping
>> the generated configure script in the repository, would that mean
>> that developers working from the repository would either have to
>> install Debian Autoconf or use "git add -p" instead of "git add"?
> 
> Yes.  Right now it is the other way around (contributors that use
> Debian or its derivatives have to filter their output).  So perhaps
> this switch would not be convenient for macOS and FreeBSD users.

If we go that way, we should document it when addressing developers.

Is there a place where people can download a tarball for Debian autoconf and 
just do ./configure, make, and make install, or will they have to download the 
Debian package and apply the patches?  If the latter, we should, at minimum, 
give documentation on how to do that - or we could just do that ourselves and 
have a "Debian autoconf" source tarball to download.

An alternative would be *not* to keep the generated configure script in the 
repository (that's what Wireshark ended up doing before it ceased to use 
autoconf/automake), and generate it as part of the release-build process, which 
we would do on a machine on which Debian autoconf was installed.

That requires that developers have autoconf installed if they're not going to 
be using CMake, but there are already tools they need installed (a C compiler, 
make, Flex, Bison/Berkeley YACC, ...) so I don't see that as a problem.

It also means that configure.ac and aclocal.m4 would have to work with various 
sufficiently-recent versions of autoconf.--- End Message ---


Re: [tcpdump-workers] Autoconf with Debian patches

2023-01-06 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On Jan 4, 2023, at 2:30 PM, Denis Ovsienko via tcpdump-workers 
 wrote:

> As some have experienced before, attempts to regenerate the configure
> script often result in two groups of unnecessary changes (runstatedir
> and LARGE_OFF_T), both of which come from Debian-specific patches to
> Autoconf because traditionally the configure scripts were generated
> using non-Debian Autoconf.  In practice this means that a regenerated
> revision of a configure script almost always requires "git add -p"
> instead of "git add".
> 
> This has been discussed in some detail in [1], and my understanding is
> that making Debian Autoconf the new standard should make this problem
> smaller (it certainly would in my development environment).  Would
> anybody like to make their point for or against such a switch in one of
> the next releases?

If we switch to making Debian Autoconf the new standard and keeping the 
generated configure script in the repository, would that mean that developers 
working from the repository would either have to install Debian Autoconf or use 
"git add -p" instead of "git add"?--- End Message ---


Re: [tcpdump-workers] Resend: Request for new DLT Value

2022-11-15 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On Nov 15, 2022, at 3:50 PM, Chris Brandson via tcpdump-workers 
 wrote:

> The ITU Recommendation G.9959 document can be found here 
> https://www.itu.int/rec/T-REC-G.9959 . 
> Work is ongoing on a wireshark dissector (hence the request for DLT LINKTYPE) 
> and the TAP encapsulation is still in development and will be published 
> shortly. 

Let us know when you have a draft ready for us to look at.--- End Message ---


Re: [tcpdump-workers] upcoming tcpslice release

2022-10-15 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On Oct 15, 2022, at 8:03 AM, Denis Ovsienko via tcpdump-workers 
 wrote:

> As it turns out, on Linux tcpslice currently fails to build with the
> current master branch of libpcap.  This reproduces in all Linux CI
> builds and also on my Ubuntu 20.04 PC.  The root cause seems to be in
> libpcap via pcap-config:
> 
> /usr/bin/ld: cannot find -lsystemd
> clang: error: linker command failed with exit code 1 (use -v to see 
> invocation)
> 
> LIBS='../libpcap/libpcap.a -lnl-genl-3 -lnl-3  -ldbus-1 -lpthread -lsystemd  '

Fixed in

commit 588f0bb996230a84a8cf10ddf30cc514e3ba5a68 (HEAD -> master, origin/master, 
origin/HEAD)
Author: Guy Harris 
Date:   Sat Oct 15 15:18:13 2022 -0700

configure: use pcap-config --static-pcap-only if available.

If we're linking with a libpcap in ../libpcap*, it's static, but we only
need to link with the libraries on wich it immediately depends, we don't
need to link with the libraries on which those libraries depend, etc..

So, if ../libpcap*/pcap-config supports --static-pcap-only, use that.

Regenerate configure script.

This should only be an issue for programs that link statically with libpcap 
(libpcap.a) but don't link completely statically.  I don't know if anything 
does that other than tcpdump (from which this change was taken) and tcpslice, 
if they're building with a libpcap.a from a libpcap source and build tree in 
the same parent directory as the tcpdump/tcpslice source and build tree.--- End Message ---


Re: [tcpdump-workers] DLT type for Libpcap Library

2022-08-29 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On Aug 29, 2022, at 6:13 AM, Christian  wrote:

>> "Defined" in what sense?
> 
> First of all, I want to define a header, with a magic byte maybe, a time 
> stamp, length of the whole packet and so on. Something which wraps my actual 
> data and which libpcap can recognize or rather expect as data which can be 
> read from my device node.

Unless you will be submitting a pull request to incorporate support for that 
header into the standard libpcap release, none of that involves us.

> Right now, if I try to connect tcpdump with my device node for reading and 
> receiving data, I only get a:
> 
> listening on kpnode0, link-type 147, snapshot length 262144 bytes
> 
> pcap_stats: this operation isn't properly handelst by that device.

*That* has nothing to do with the definition of the header.

Your pcap module must set the "stats_op" member of the pcap_t structure to 
point to a function that will provide the results for pcap_stats().  It is 
currently not doing so.

> My kernel module provides data in packets which is preceded by an header 
> which I deliberately defined for libpcap to recognized as data from MY device.

I mentioned the only places where *libpcap* cares about the header.  If your 
header provides data in big-endian or little-endian fashion, regardless of the 
byte order of the machine on which it's running, and if you have no changes to 
the pcap compiler to add new filter expressions for your packets, then libpcap 
has nowhere that would need to handle your header and has no place anywhere 
that would handle your header.

> My question now is, where should I define my datatype within the libpcap 
> source code?

As per the above, perhaps nowhere.

It's not as if you can make *any* change to libpcap that will, by itself, cause 
tcpdump, or Wireshark, or any other packet analyzer using libpcap to be able to 
understand your packets.  That's not how libpcap is intended to work, it's not 
how libpcap is designed to work, and it's not how libpcap works.  Different 
sniffers have different mechanisms for parsing packets, so it's not as if 
libpcap could even be designed to do that.

> I associate my data type with the free user-defined DLT_USER0, so that is the 
> reason why pcap mentioned link-type 147. I'm not stuck on that user-defined 
> type. Maybe it's better to define a whole new data type like e.g. 
> DLT_USB_LINUX.

If you do *that*, then you will need to make a publicly-available document that 
specifies how your header is structured, or provide enough information to allow 
us to provide such a document, so that we can document it on

https://www.tcpdump.org/linktypes.html

Then we will assign a number to your link-layer header type.

However, once that's done, if you want tcpdump to be able to handle your 
packets, somebody would then have to write code for tcpdump to have it analyze 
those packets, and if they wanted that to be a standard feature of tcpdump, 
they'd have to provide a pull request with that change.  The same applies for 
Wireshark - and the code for tcpdump wouldn't work for Wireshark, as those two 
programs are structured differently internally.

> Anyway it's nothing destined for release. For now I'm just happy if libpcap 
> accepts my header data type to read. Filtering and all this comes later. I 
> guess I have to make changes in my kernel probe, or write a BPF function?

You would have to write *tcpdump* code in order for tcpdump to handle code from 
your pcap module.
--- End Message ---


Re: [tcpdump-workers] DLT type for Libpcap Library

2022-08-29 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On Aug 24, 2022, at 11:31 AM, Christian via tcpdump-workers 
 wrote:

> Hello everyone, another question that I have is which DLT-type I should use 
> for my libpcap-module. Since Im writing a module which acquires data from a 
> kernel module, which in turn has no IP-based packages at all. I have to 
> define my very own data-type from the base onwards. But because this is 
> nothing worth to release (maybe only for documentation of an example) I would 
> rather use a DLT_USERn linktype. But this is only defined on applications 
> which use pcap lib, not libpcap itself?

"Defined" in what sense?

The only ways in which the code in the libpcap library "defines" a 
LINKTYPE_/DLT_ value's format are

1) the code that compiles filter expressions needs to know the format 
of the data in a packet of a given link-layer type;

2) in order to deal with some link-layer header types where data is in 
the byte order of the host that wrote the file, libpcap, when reading a file, 
may have to byte-swap host-byte-order fields from the byte order of the host 
that wrote the file into the byte order of the host that's reading the file if 
the two are different, and the remote-pcap protocol code must do so with packet 
data from a remote server if the byte orders of the two hosts are different.

Code that reads pcap and pcapng files, whether with libpcap or independent code 
for reading pcap and pcapng files, has to provide its *own* code to interpret 
the packets; if a new LINKTYPE_/DLT_ value is added, neither tcpdump nor 
Wireshark nor any other program will acquire the ability to handle that file 
format as a result of any changes to libpcap for that format - new code will 
have to be written for those programs.

I.e., making tcpdump or Wireshark or... handle your data-link type is up to 
you.  You'll have to modify tcpdump or Wireshark, or add a plugin for Wireshark.

(And note that code that processes those files doesn't define the formats; they 
follow the definitions of the formats.  The *definitions* of the formats are 
currently at

https://www.tcpdump.org/linktypes.html

However, those definitions themselves may refer to other specifications.  For 
example, the format of LINKTYPE_ETHERNET/DLT_EN10MB packet data is really 
defined by the LAN/MAN Standards Committee of the IEEE Computer Society, not by 
The Tcpdump Group or the libpcap code.)

> Another question is: how to map the structure(s) in which I define my data 
> types with the symbol in dlt.h?

"Map" in what sense?--- End Message ---


Re: [tcpdump-workers] configure script problem while working on extention

2022-08-16 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On Aug 16, 2022, at 12:49 PM, Christian  wrote:

>>> configure:6075: checking for pcap_loop
>>> configure:6075: gcc -o conftest -g -O2   conftest.c -L/usr/local/lib 
>>> -Wl,-rpath,/usr/local/lib -lpcap  >&5
>>> /usr/bin/ld: /usr/local/lib/libpcap.so: undefined reference to 
>>> `scsimon_create'
>>> /usr/bin/ld: /usr/local/lib/libpcap.so: undefined reference to 
>>> `scsimon_findalldevs'
>> Has the pcap.c in the libpcap that was built and installed in /usr/local/lib 
>> been modified to add a pcap module "scsimon", in addition to your "kpnode" 
>> module?
>> 
>> If so, is there a pcap-scsimon.c, or whatever, that defines them, and was it 
>> also added to Makefile.in when the library was built?
> 
> My dumbness again, scsimon is just a synonym for kpnode. This is the actual 
> latest config.log

I.e., that other log was from some *earlier* attempt to configure tcpdump?

> configure:5389: checking whether to look for a local libpcap
> configure:5410: result: yes

OK, so *this* time you're building with the library from a local build, rather 
than with a library that was installed.

> configure:5415: checking for local pcap library
> configure:5445: result: ../libpcap/libpcap.a
> configure:5908: checking for pcap-config
> configure:5926: found ../libpcap/pcap-config
> configure:5938: result: ../libpcap/pcap-config
> configure:6075: checking for pcap_loop
> configure:6075: gcc -o conftest -g -O2   conftest.c ../libpcap/libpcap.a   >&5
> /usr/bin/ld: ../libpcap/libpcap.a(pcap.o):(.data.rel.ro+0x10): undefined 
> reference to `kpnode_findalldevs'
> /usr/bin/ld: ../libpcap/libpcap.a(pcap.o):(.data.rel.ro+0x18): undefined 
> reference to `kpnode_create'

...

> which shows the actual problem, if I invoke nm libpcap.so.1-11.0-PRE-GIT  | 
> grep kpnode
> 
> I got U kpnode_create
> 
> U kpnode_findalldevs
> 
> So the symbols are known but seems to be undefined. How to fix this?

Make sure that libpcap.a includes pcap-kpnode.o, by making sure that 
pcap-kpnode.c is in the list of source modules to be compiled and included in 
libpcap.

For Makefile.in, that means adding it to

MODULE_C_SRC = @MODULE_C_SRC@

after @MODULE_C_SRC@ so you have

MODULE_C_SRC = @MODULE_C_SRC@ pcap-kpnode.c
--- End Message ---


Re: [tcpdump-workers] configure script problem while working on extention

2022-08-15 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On Aug 15, 2022, at 1:37 PM, Christian  wrote:

> configure:6075: checking for pcap_loop
> configure:6075: gcc -o conftest -g -O2   conftest.c -L/usr/local/lib 
> -Wl,-rpath,/usr/local/lib -lpcap  >&5
> /usr/bin/ld: /usr/local/lib/libpcap.so: undefined reference to 
> `scsimon_create'
> /usr/bin/ld: /usr/local/lib/libpcap.so: undefined reference to 
> `scsimon_findalldevs'

Has the pcap.c in the libpcap that was built and installed in /usr/local/lib 
been modified to add a pcap module "scsimon", in addition to your "kpnode" 
module?

If so, is there a pcap-scsimon.c, or whatever, that defines them, and was it 
also added to Makefile.in when the library was built?--- End Message ---


Re: [tcpdump-workers] configure script problem while working on extention

2022-08-15 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
What are the contents of config.log?
--- End Message ---


Re: [tcpdump-workers] configure script problem while working on extention

2022-08-15 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On Aug 15, 2022, at 12:50 AM, Denis Ovsienko via tcpdump-workers 
 wrote:

> On Sun, 14 Aug 2022 11:49:57 -0700
> Guy Harris via tcpdump-workers 
> wrote:
> 
>> Or is this a ZIP archive provided by somebody other than tcpdump.org?
> 
> github.com -> code -> download ZIP. I vaguely remember there was a
> "download tar.gz" there as well, but not anymore.  Anyway, git clone is
> better order of magnitudes, in that it allows to tell which commit the
> working copy is at.

So they're building from the current main branch, rather than from a release 
tarball (or from the source used to build the versions of libpcap and tcpdump 
in the OS they're using)?

If so, then, yes, they should use git clone, not only for the reason you 
mention, but because it's not a snapshot, it's a repository, so it can be 
updated to the current state of the main branch.--- End Message ---


Re: [tcpdump-workers] configure script problem while working on extention

2022-08-14 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On Aug 12, 2022, at 7:27 AM, Christian via tcpdump-workers 
 wrote:

> I pick up this thread of mine again from 7th march of this year (wireshark 
> extension for a Kernel Module (like Usbmon)​ ) enhanced with a configure 
> issue,

Unless I've missed something, "again" means "again, in a different mailing 
list", as the only previous message I can find about it was to the 
wireshark-dev mailing list...

> which was discussed lot of times ( tcpdump configure script doesn't correctly 
> handle static builds ). But Im not sure, if this is a real issue for github.
> 
> In my case, I was able to build Tcpdump with these steps:

...and my response, in the previous mail thread, to the question

> The functions kpnode_findalldevs and kpnode_create are in my files 
> pcap-kpnode.c and pcap-kpnode.h. They are not finished yet but the subject of 
> this mail is for now, how to connect these functions into libpcap and 
> Wireshark so that they are evoked if a device /dev/kpnode emerges.

was

> You do it in libpcap.

So:

> Get libpcap with git, step into the directory invoke: ./configure 
> --disable-dbus --without-dbus --without-dpdk --disable-rdma
> 
> then make and make install.

OK, so far so good.

> Then I opened the tcpdump.zip archive

(.zip?  Not .tar.gz?  The current releases from

https://www.tcpdump.org/index.html#latest-releases

are provided in .tar.gz form, as are all the other release in

https://www.tcpdump.org/release/

Gzipped tarballs are probably easier to extract on a UN*X, as they're likely to 
have either a version of tar that reads gzipped files or have gzcat and tar; to 
unpack a zip archive requires a command such as unzip or a GUI tool that 
unpacks zip archives.  Perhaps you mean ".zip archive" in a metaphorical sense 
of "some form of archive"?  Or is this a ZIP archive provided by somebody other 
than tcpdump.org?)

> within the libpcap directory. step into the directory, call ./configure and 
> it build. success!

There's no requirement to unpack the tcpdump source in the libpcap source 
directory.  If you *haven't* installed the libpcap that you built from source, 
the best place to unpack it is in the *parent* directory of the libpcap source 
directory, but if you *have* installed that libpcap, the tcpdump configure 
script won't have to look for it in a directory at the same level as the 
tcpdump source directory, so you can unpack the tcpdump repository anywhere.

> Then I took my changes for libpcap from march, a pcap-kpnode.c and 
> pcap-kpnode.h (attached)

No, they're not attached.  Either you forgot to attach them or some mail 
software stripped the attachments.  Michael/Denis/François - do we strip 
attachments at any point before sending messages to the list?

> further I added into pcap.c:
> 
> 100: #include "pcap-kpnode.h"
> 
> 690: {kpnode_findalldevs, kpnode_create }
> 
> and in Makefile.in I added my sourcefiles
> 
> after that, I invoked make clean and the configure call again like that one 
> before with all these switches. Then make and make install. The library was 
> successfully built, also with my changes. Then I unzipped the tcpdump archive 
> again to start from scratch and this time ./configure leads to that error 
> message about no pcap_loop support. I added the config.log as well.

There were no attachments to the copy of your message that I received, so if 
you attached it, something stripped the attachment.--- End Message ---


Re: [tcpdump-workers] [tcpdump] About struct in_addr / struct in6_addr

2022-07-17 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On Jul 17, 2022, at 3:39 PM, Bill Fenner  wrote:

> IMO it is safe to drop support for OSes lacking native IPv6 support.

Yeah.  Back when IPv6 support was added to tcpdump, it was an experimental new 
technology and the configure script had to figure out which of several add-on 
IPv6 packages you had installed.  Now a significant amount of Wikipedia 
vandalism comes from IPv6 addresses rather than IPv4 addresses. :-)--- End Message ---


Re: [tcpdump-workers] [tcpdump] About struct in_addr / struct in6_addr

2022-07-17 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On Jul 17, 2022, at 11:09 AM, Francois-Xavier Le Bail 
 wrote:

> Remain some stuff about 'struct in6_addr'. Any need to keep them?
> 
> $ git grep -l 'struct in6_addr'
> CMakeLists.txt
> cmakeconfig.h.in
> config.h.in
> configure
> configure.ac
> netdissect-stdinc.h

That's there for the benefit of OSes whose APIs don't have standard IPv6 
support; if there are any left that we care about (or if there are old non-IPv6 
versions we care about for any OSes we support), then it might be useful, but 
I'm not sure it would build (we use gethostbyaddr(), so *maybe* it'll compile, 
and maybe gethostbyaddr() will fail when passed AF_INET6 and the code will just 
show the IPv6 address rather than a name).

Should we care about it, or should we just drop support for OSes lacking native 
IPv6 support in 5.0?

--- End Message ---


Re: [tcpdump-workers] [tcpdump] About struct in_addr / struct in6_addr

2022-07-17 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On Jul 17, 2022, at 10:10 AM, Francois-Xavier Le Bail via tcpdump-workers 
 wrote:

> The current nd_ipv4 and nd_ipv6 types were added in 2017 for alignment 
> reasons.
> 
> Since then,
> most of the 'struct in_addr' were replaced by 'nd_ipv4',
> most of the 'struct in6_addr' were replaced by 'nd_ipv6'.
> 
> Remain:
> pflog.h:110:struct in_addr  v4;
> pflog.h:111:struct in6_addr v6;
> 
> Should they be replaced also?

Yes.  Done in 71da7b139eb418ac91f1169c550e8a4dc970a692.--- End Message ---


Re: [tcpdump-workers] NetBSD CI breakage

2022-07-14 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On Jul 10, 2022, at 2:48 PM, Denis Ovsienko via tcpdump-workers 
 wrote:

> The last CI build of the libpcap-1.10 branch failed on netbsd-aarch64
> because the latter now uses GCC 12.  Commit 4e7f6e8 makes a lazy fix
> for that in the master branch; if a more sophisticated solution is not
> required,

I changed it to a slightly different fix.

The problem was that, on platforms without a cloning BPF device, the BPF device 
open code iterates over BPF device names, and the loop index was a signed 
integer, so, in theory, if you have 2^31 BPF devices, from /dev/bpf0 to 
/dev/bpf2147483647 open, the loop index will go from 2147483647 to -2147483648, 
and, while 2147483647 requires 10 characters, -2147483648 requires 11.  Thus, 
the length of the buffer had to be increased.

I changed the index to an unsigned integer, so it goes from 0 to 4294967295, 
all of which require 10 characters.

On most OS versions without a cloning BPF device, you're unlikely to have 2^32 
BPF devices (almost certainly not on an ILP32 platform, for obvious reasons!), 
or even 2^31 BPF devices, so, in practice, this won't be a problem.

The only OS I know of that 1) has no cloning BPF device and 2) auto-creates BPF 
devices if you try to open one that's past the maximum unit number is named 
after a British naturalist and evolutionist whose last name is not "Huxley" 
:-).  It uses "bpf%d" to generate the device names, so it could, in principle, 
create a device named /dev/bpf-2147483648, but the default upper limit on the 
number of BPF devices is 256, so you'd have to sysctl it up to a value above 
2^31 (the sysctl value is unsigned, so you can do it; that also means that 
"bpf%d" should be "bpf%u", so it's time to file a Radar^WFeedback on that).

> a simple cherry-pick into libpcap-1.10 should be sufficient
> to pass CI again.

I've backported a bunch of changes to 1.10, including your change and mine for 
that; the netbsd-aarch64 build now seems to be working for libpcap-1.10.

(Or should that be netbsd-a64, or netbsd-arm64?  Thanks, Arm, for making 
"architecture" names so complicated)
--- End Message ---


Re: [tcpdump-workers] RFC: TLS in rpcaps

2022-07-05 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On Jul 4, 2022, at 4:49 PM, Ryan Castellucci via tcpdump-workers 
 wrote:

> 1) TLS compression support is a foot-bazooka, is exploitable in practice, and 
> should be removed. A modified version of the CRIME[1] attack should be 
> completely feasible. I can't imagine any remotely feasible mitigation. 
> Fortunately, I don't see any reason why removing it (perhaps making the 
> rpcapd option that turns it on do nothing) would cause any compatibility 
> issues.

The only thing that -C appears to do is cause ssl_init_once() to call 
SSL_COMP_get_compression_methods(), which, according to


https://www.openssl.org/docs/man3.0/man3/SSL_COMP_get_compression_methods.html

"returns a stack of all of the available compression methods or NULL on 
error.", so I'm not sure what -C, which is presumably "the rpcapd option that 
turns [TLS compression] on", actually *does*.

> 2) What should the default verification behavior be? I worry about breaking 
> people's installs if suddenly it's enabled in enforcing mode by default, but 
> also most people are never going to bother to set things up properly without 
> incentive. A middle ground could be to have soft failures by default - print 
> a warning to stderr which can be turned of by passing a command line option 
> such as --insecure, with a --tls-verify flag to make it a hard failure.

What does "setting things up properly" involve?  Presumably it's something more 
than just "not having an expired certificate"; if somebody can't be bothered to 
do *that*, my sympathy is limited.

> 3) libpcap seems to lose track of the hostname between establishing the 
> control connection. Path of least resistance seems to be adding `char 
> *rmt_hostname` to `struct pcap_rpcap`, saved via strdup. Is this going to 
> upset anyone?

It's a private data structure, and it consumes very little memory unless you 
have a huge number of pcap_t's open, so I'm not sure how much justification 
there is for being upset.
> 4) What level of control should be exposed for the tls settings within 
> libpcap?

What settings are there that might be exposed, other than "should I check the 
validity of certificates"?

> 5) If control over cipher suites is provided, standard tools don't change 
> TLSv1.3 settings via cipher suite list.

"Standard tools" meaning "programs that use TLS" or something else?

And does "control" mean "disallow cipher suites that are allowed by default", 
"allow cipher suites that are disallowed by default", or something else?

> 6) Would anyone be willing to hand-hold a bit on the "active" mode? It seems 
> a bit weird, and I'm not confident I understand what's going on.

"Active mode":


https://www.winpcap.org/docs/docs_412/html/group__remote.html#RunningModes

is a hack to allow remote capture from interfaces on a firewalled remote 
machine.  To start a capture, a capture program that supports active mode would 
be run on the client machine, and it would open a listening socket for rpcapd.  
rpcapd would then be run in active mode on the machine on whose interface(s) 
capture is to be done, with the host name/address and port number of the 
capturing application provided as arguments to the -a flag, and would attempt 
to connect to that host and port.  Once the connection is made, the capturing 
machine (the machine that *accepted* the connection) would send an 
authentication request message to the machine on whose interface(s) the capture 
is to be done (the machine that *initiated* the connection), and that and all 
messages would work exactly the same way as if the capturing machine had 
initiated a connection to the machine on whose interface(s) the capture is to 
be done.

So the only part of the traffic that changes is the connection initiation.

Given that there are, as far as I know, zero capturing programs that support 
the not-exactly-clean API for active mode (neither tcpdump nor Wireshark does), 
I've never tested that even *without* TLS, much less *with* TLS, so that may 
require work even before any additional work is done.

I'd like to make remote capture work with the create/activate API, which might 
allow a cleaner active mode API, with less hackery necessary for programs to 
use it.
--- End Message ---
___
tcpdump-workers mailing list
tcpdump-workers@lists.tcpdump.org
https://lists.sandelman.ca/mailman/listinfo/tcpdump-workers


Re: [tcpdump-workers] endianness of portable BPF bytecode

2022-06-10 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On Jun 10, 2022, at 1:59 PM, Denis Ovsienko via tcpdump-workers 
 wrote:

> Below is a draft of such a file format.  It addresses the following
> needs:
> * There is a header with a signature string to avoid false positive
>  detection as some other file type that begins exactly with particular
>  bytecode (ran into this during disassembly experiments).
> * There are version fields to address possible future changes to the
>  encoding (either backward-compatible or not).

Is the idea that a change that's backward-compatible (so that code that handles 
the new format needs no changes to handle the old format, but code that handles 
only the old format can't handle the new format) would involve a change to the 
minor version number, but a change that's not backward-compatible (so that to 
handle both versions would require two code paths for the two versions) would 
involve a change to the major version number?
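If that reading is right, a reader's compatibility check reduces to comparing major versions only; a sketch, with the supported major number purely hypothetical since the draft doesn't assign one:

```c
#include <assert.h>

/* Hypothetical supported major version - not taken from the draft. */
#define CBPF_FILE_MAJOR_SUPPORTED 1

/* Minor bumps are backward-compatible, so only the major version gates
 * whether this reader can handle the file at all. */
static int reader_can_handle(unsigned file_major, unsigned file_minor)
{
    (void)file_minor;
    return file_major == CBPF_FILE_MAJOR_SUPPORTED;
}
```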

> File format:
> 
> 0   1   2   3
> 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
> |  'c'  |  'B'  |  'P'  |  'F'  |
> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

Is the 'c' part of the retronym "cBPF" for the "classic BPF" instruction set, 
as opposed to the eBPF instruction set?  (I didn't find any file format for 
saving eBPF programs, so this format could be used for that as well, with the 
magic number 'e' 'B' 'P' 'F'.)
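Either way, the signature gives readers the cheap up-front check the draft is after; a sketch:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Reject non-cBPF input after looking at only the first four bytes -
 * the false-positive guard the signature string provides. */
static int has_cbpf_magic(const unsigned char *buf, size_t len)
{
    return len >= 4 && memcmp(buf, "cBPF", 4) == 0;
}
```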

> Type=0x02 (LINKTYPE_ID)
> Length=4
> Value=

This could be 2 bytes long - pcapng limits link-layer types to 16 bits, and 
pcap now can use the upper 16 bits of the link-layer type field for other 
purposes.

> Type=0x03 (LINKTYPE_NAME)
> Length is variable
> Value=

E.g. either its LINKTYPE_xxx name or its DLT_xxx name?
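Whichever name is stored, the record framing itself (a type, a length, then the value) is straightforward to emit; a sketch that assumes 32-bit big-endian type and length fields, which the quoted excerpt doesn't actually pin down:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Emit one type-length-value record; returns bytes written, or 0 if the
 * buffer is too small. Big-endian field encoding is an assumption. */
static size_t put_tlv(uint8_t *buf, size_t cap,
                      uint32_t type, const void *val, uint32_t len)
{
    if (cap < 8 + (size_t)len)
        return 0;
    buf[0] = (uint8_t)(type >> 24); buf[1] = (uint8_t)(type >> 16);
    buf[2] = (uint8_t)(type >> 8);  buf[3] = (uint8_t)type;
    buf[4] = (uint8_t)(len >> 24);  buf[5] = (uint8_t)(len >> 16);
    buf[6] = (uint8_t)(len >> 8);   buf[7] = (uint8_t)len;
    memcpy(buf + 8, val, len);
    return 8 + (size_t)len;
}
```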

> Type=0x04 (COMMENT)
> Length is variable
> Value=

"Generating software description" as in the code that generated the BPF program?

> Type=0x05 (TIMESTAMP)
> Length=8
> Value=

Is this the time the code was generated?

Is it a 64-bit time_t, or a 32-bit time_t and a 32-bit microseconds/nanoseconds 
value?  I'd recommend the former, unless we expect classic BPF to be dead by 
2038.
--- End Message ---


Re: [tcpdump-workers] What's the correct new API to request pcap_linux to not open an eventfd

2022-05-20 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On May 20, 2022, at 10:56 AM, Bill Fenner via tcpdump-workers 
 wrote:

> I'm helping to debug a system that uses many many pcap handles, and never
> calls pcap_loop - only ever pcap_next.

Both pcap_loop() and pcap_next() ultimately go to the same place.

Note, BTW, that pcap_next() sucks; it's impossible to know whether it returns 
NULL because of an error or because the timeout expired and no packets had 
arrived during that time.  pcap_next_ex() doesn't have that problem.  (On 
Linux, the turbopacket timer doesn't expire if no packets have arrived, so, *on 
Linux*, NULL should, as far as I know, be returned only on errors.)

> We've found that each pcap handle has an associated eventfd, which is used to 
> make sure to wake up when
> pcap_breakloop() is called.  Since this code doesn't call pcap_loop or
> pcap_breakloop, the eventfd is unneeded.

If it called pcap_breakloop(), the eventfd would still be needed; otherwise, a 
call could remain indefinitely stuck in pcap_next() until a packet finally 
arrives.  Only the lack of a pcap_breakloop() call renders the eventfd 
unnecessary.

So how is this system handling those pcap handles?

If it's putting them in non-blocking mode, and using some 
select/poll/epoll/etc. mechanism in a single event loop, then the right name 
for the API is pcap_setnonblock().  There's no need for an eventfd to wake up 
the blocking poll() if there *is* no blocking poll(), so:

if non-blocking mode is on before pcap_activate() is called, no eventfd 
should be opened, and poll_breakloop_fd should be set to -1;

if non-blocking mode is turned on after pcap_activate() is called, the 
eventfd should be closed, and poll_breakloop_fd should be set to -1;

if non-blocking mode is turned *off* afterwards, an eventfd should be 
opened, and poll_breakloop_fd should be set to it;

if poll_breakloop_fd is -1, the poll() should only wait on the socket 
FD;

so this can be handled without API changes.
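In other words, the poll set is built from whatever FDs actually exist; a sketch of the idea (the names mirror pcap-linux internals, but this is not the actual libpcap code):

```c
#include <assert.h>
#include <poll.h>
#include <unistd.h>

/* Wait for the capture socket to become readable; only add the breakloop
 * eventfd to the poll set when one exists (poll_breakloop_fd != -1). */
static int wait_for_frames(int sock_fd, int poll_breakloop_fd, int timeout_ms)
{
    struct pollfd pollinfo[2];
    int nfds = 0;

    pollinfo[nfds].fd = sock_fd;
    pollinfo[nfds].events = POLLIN;
    nfds++;
    if (poll_breakloop_fd != -1) {
        pollinfo[nfds].fd = poll_breakloop_fd;
        pollinfo[nfds].events = POLLIN;
        nfds++;
    }
    return poll(pollinfo, (nfds_t)nfds, timeout_ms);
}
```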

If it's doing something else, e.g. using multiple threads, then:

> I'm willing to write and test the code that skips creating the breakloop_fd
> - but, I wanted to discuss what the API should be.  Should there be a
> pcap.c "pcap_breakloop_not_needed( pcap_t * )" that dispatches to the
> implementation, or should there be a linux-specific
> "pcap_linux_dont_create_eventfd( pcap_t * )"?

...it should be called pcap_breakloop_not_needed() (or something such as that), 
with a per-type implementation, and a *default* implementation that does 
nothing, so only implementations that need to do something different would need 
to do so.
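The dispatch pattern is the same one libpcap uses for its other per-type operations; a self-contained sketch with hypothetical names:

```c
#include <assert.h>
#include <stdbool.h>

/* A pcap_t-like struct with one per-type operation pointer. */
struct cap {
    bool breakloop_possible;
    void (*breakloop_not_needed_op)(struct cap *);
};

/* Default implementation: do nothing. */
static void default_breakloop_not_needed(struct cap *c) { (void)c; }

/* Linux-style override: drop the wakeup machinery (e.g. close the eventfd). */
static void linux_breakloop_not_needed(struct cap *c)
{
    c->breakloop_possible = false;
}

/* The public entry point just dispatches to the per-type implementation. */
static void cap_breakloop_not_needed(struct cap *c)
{
    c->breakloop_not_needed_op(c);
}
```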
--- End Message ---


Re: [tcpdump-workers] Speed specific Link-Layer Header Types for USB 2.0

2022-05-09 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On May 9, 2022, at 10:05 PM, Tomasz Moń  wrote:

> On Tue, May 10, 2022 at 6:57 AM Guy Harris  wrote:
>> On May 9, 2022, at 9:41 PM, Tomasz Moń  wrote:
>>> Also Wireshark would have to show "USB Full/Low speed capture" section with 
>>> only the single byte denoting
>>> full or low speed, followed by "USB Link Layer" (as shown currently for
>>> usbll captures).
>> 
>> No, it wouldn't.  It would just display that as an item in "USB Link Layer".
> 
> If you displayed that in USB Link Layer, without marking it as
> Wireshark generated field (and it shouldn't be marked as Wireshark
> generated because it was in capture file) it would be confusing.

Then show it as "USB physical layer information", similar to what's done for 
"802.11 radio layer information".
--- End Message ---


Re: [tcpdump-workers] Speed specific Link-Layer Header Types for USB 2.0

2022-05-09 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On May 9, 2022, at 9:41 PM, Tomasz Moń  wrote:

> On Mon, 2022-05-09 at 13:19 -0700, Guy Harris wrote:
>> On May 9, 2022, at 1:02 PM, Tomasz Moń  wrote:
>> 
>>> "Why this doesn't match all the documents on USB that I have
>>> read?".
>> 
>> What is the "this" that wouldn't match?
> 
> Packet Bytes as shown by Wireshark.

OK, that suggests that it's time to finally default to *NOT* showing metadata 
in the packet bytes pane of Wireshark and in hex dump data in tcpdump, as the 
only time its raw content is of interest is if you're debugging either 1) 
software that generates those headers or 2) software that dissects those 
headers.

*That* will quite effectively prevent people from asking where that byte is 
defined in a USB spec, as that byte won't be there in the first place.

> Also Wireshark would have to show "USB Full/Low speed capture" section with 
> only the single byte denoting
> full or low speed, followed by "USB Link Layer" (as shown currently for
> usbll captures).

No, it wouldn't.  It would just display that as an item in "USB Link Layer".
--- End Message ---


Re: [tcpdump-workers] Speed specific Link-Layer Header Types for USB 2.0

2022-05-09 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On May 9, 2022, at 1:02 PM, Tomasz Moń  wrote:

> The same as why URB level captures are confusing. Maybe not to the same
> level as that would be just a single byte (and the URB metadata
> contains way more information), but it would still raise the questions
> like "where in USB specification this byte is defined?",

To what extent are people analyzing 802.11 captures raising the question "where 
in the 802.11 specification are the fields of the radiotap header defined?"

If the answer is "to a minimal extent" or "it doesn't happen", what about USB 
would make the answer different?

> "Why this doesn't match all the documents on USB that I have read?".

What is the "this" that wouldn't match?

--- End Message ---


Re: [tcpdump-workers] Speed specific Link-Layer Header Types for USB 2.0

2022-05-09 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On May 9, 2022, at 12:31 PM, Tomasz Moń  wrote:

> There is no such thing as "low-speed bus" because low-speed is only
> allowed for non-hub devices. USB hosts and hubs *must* support at least 
> full and high speed. USB devices are allowed to be low-speed (such
> devices can operate *only* at low speed).

So what is the term used for a cable between a low-speed-only device and either 
a host or a hub?

The USB 2.0 spec appears to use "bus" for an "edge", in the graph-theory sense:

https://en.wikipedia.org/wiki/Glossary_of_graph_theory#edge

rather than for the entire tree.

What *is* the correct term to use for a single cable, the traffic on which one 
might be sniffing?

> It is important that the analysis engine know whether the packets were
> full or low-speed as there are slightly different rules. There is not
> so clear distinction between layers as USB does not really use ISO/OSI
> model.
> 
> So I think it definitely makes sense to have separate link types for
> full-speed and low-speed.

It makes sense to indicate whether packets are full-speed or low-speed; nobody 
is arguing otherwise.

The question is whether the right way to do that is to have separate link 
types, so that you can't have a mix of full-speed and low-speed packets in a 
single pcap capture or on a single interface in a pcapng capture, or to have a 
single link-layer type with a per-packet full-speed/low-speed indicator.
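The single-link-type alternative could be as small as a one-byte prefix; a purely hypothetical sketch of what a per-packet speed pseudo-header might look like (nothing here is a registered format):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical speed values for a one-byte per-packet pseudo-header. */
enum usb20_speed { USB20_LOW = 0, USB20_FULL = 1 };

/* Strip the pseudo-header, returning the speed byte and advancing the
 * packet pointer past it; -1 if the record is too short. */
static int usb20_strip_speed(const unsigned char **pkt, size_t *len)
{
    if (*len < 1)
        return -1;
    int speed = (*pkt)[0];
    (*pkt)++;
    (*len)--;
    return speed;
}
```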
--- End Message ---


Re: [tcpdump-workers] Speed specific Link-Layer Header Types for USB 2.0

2022-05-09 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On May 9, 2022, at 12:40 PM, Tomasz Moń  wrote:

> On Mon, 2022-05-09 at 12:02 -0700, Guy Harris wrote:
>> On May 9, 2022, at 7:41 AM, Tomasz Moń  wrote:
>> 
>>> That would require defining pseudoheader that would have to be
>>> included in every packet.
>> 
>> Is that really a great burden?
> 
> I think it would make it harder to understand the protocol for
> newcomers that use tools like Wireshark to try to make sense of USB.

In what fashion would it do so?
--- End Message ---


Re: [tcpdump-workers] Speed specific Link-Layer Header Types for USB 2.0

2022-05-09 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On May 9, 2022, at 7:41 AM, Tomasz Moń  wrote:

> That would require defining pseudoheader that would have to be included
> in every packet.

Is that really a great burden?

> And it would only solve the corner case that the
> currently available open-source friendly sniffers do not presently
> handle.

The point isn't to just handle what currently available open-source friendly 
sniffers handle.  I'd prefer to leave room for future sniffers that *do* handle 
it.

> I think it is fine to assume that any tool that would create full-speed
> captures that contain both full-speed and low-speed data should be able
> to write pcapng file (or simply create two separate pcap files).

I think that, if you're capturing on a link between a full/low-speed host and a 
full/low-speed hub, with low-speed devices plugged into that hub, it would not 
make sense to treat that link as two interfaces, with one interface handling 
full-speed packets and one interface handling low-speed packets; I see that as 
an ugly workaround.

So I see either

1) a link-layer type for full/low-speed traffic, with a per-packet 
pseudo-header

or

2) don't support full/low-speed traffic capture, just support 
full-speed-only and low-speed-only traffic capture

as the reasonable choices.

(Note that both tcpdump and Wireshark still have their Token Ring dissection 
code; heck, Wireshark has even had 3MB Xerox PARC Ethernet dissection code for 
a while now!)
--- End Message ---


Re: [tcpdump-workers] Speed specific Link-Layer Header Types for USB 2.0

2022-05-09 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On May 9, 2022, at 1:58 AM, Tomasz Moń  wrote:

> On Mon, May 9, 2022 at 9:17 AM Guy Harris  wrote:
>> On May 8, 2022, at 10:47 PM, Tomasz Moń  wrote:
>>> On Sun, May 8, 2022 at 8:53 PM Guy Harris  wrote:
>>>> At least from a quick look at section 5.2.3 "Physical Bus Topology" of the 
>>>> USB 2.0 spec, a given bus can either be a high-speed bus or a 
>>>> full/low-speed bus.
>>> 
>>> The full/low-speed bus applies only to upstream link from full speed hub.
>> 
>> So what happens if you plug a low-speed keyboard or mouse into a host that 
>> supports USB 2.0?  Does that link not run at low speed?
> 
> The link will run at low speed.

So what kind of bus is that link?  High-speed, full/low-speed, or low-speed?

>> "super-speed" is USB 3.0, right?  No LINKTYPE_/DLT_ has been proposed for 
>> the 3.0 link layer, as far as I know.
> 
> Yes, "super-speed" is USB 3.0. I don't know of any open source sniffer
> nor any tools that would really want to export the packets to pcap
> format.

And, if there ever *are* (I see no reason to rule it out), they can ask for 
another link-layer type when they need it.

>> But no full-speed or low-speed will go over that connection, either, so it's 
>> never the case that, in a capture on a USB cable, there will be both 
>> high-speed and full/low-speed traffic, right?
> 
> Yes. You either get solely high-speed traffic or full/low-speed traffic.

OK, so it makes sense to have a separate link-layer type for high-speed 
traffic, rather than a single link-layer type for "USB link-layer with metadata 
header, with the per-packet metadata header indicating the speed".

But, if, as you said earlier:

> If you capture at the connection between low speed device and
> host/hub, there will only ever be low speed packets. It would be a
> LINKTYPE_USB_2_0_LOW_SPEED capture.
> 
> The problematic case (and the reason why full/low-speed bus is
> mentioned) is the LINKTYPE_USB_2_0_FULL_SPEED. It is the case when you
> capture at the connection between full speed hub and the host (and
> possibly full speed device connected to a full speed hub if there are
> low speed devices connected to the full speed hub). If there is low
> speed device connected to downstream hub port, then when the host
> wants to send packets to the low speed device, these will be sent at
> low speed to the hub. However, there will be PRE packet (sent at full
> speed) before every low speed transaction.

can there be separate link-layer types for full-speed and low-speed traffic, or 
does there need to be a single type for full/low-speed traffic, with a 
per-packet metadata header indicating the speed?
--- End Message ---


Re: [tcpdump-workers] Speed specific Link-Layer Header Types for USB 2.0

2022-05-09 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On May 9, 2022, at 1:33 AM, Tomasz Moń  wrote:

> On Mon, May 9, 2022 at 9:21 AM Guy Harris  wrote:
>> On May 8, 2022, at 11:09 PM, Tomasz Moń  wrote:
>> 
>>> Device to device communication is not possible.
>> 
>> Is the idea that the topology of USB is a tree, with the host at the root, 
>> and only the leaf nodes (devices, right?) are end nodes?
> 
> To some degree, yes. Note that the hubs are devices as well.

(So "communication is not possible" in "Device to device communication is not 
possible." presumably refers not to sending USB link layer messages from device 
to device, but refers to higher protocol layers; otherwise, you wouldn't be 
able to plug a disk, network device, keyboard, mouse, etc. into a hub and have 
it communicate with a host also plugged into the hub.)
--- End Message ---


Re: [tcpdump-workers] Speed specific Link-Layer Header Types for USB 2.0

2022-05-09 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On May 8, 2022, at 11:09 PM, Tomasz Moń  wrote:

> Note that end nodes cannot directly communicate with each other. The
> communication is always between host and a device. 

Those two sentences, when combined, imply that either

1) a host is not an end node

or

2) a device is not an end node

or both.  Which is the case?

> Device to device communication is not possible.

Is the idea that the topology of USB is a tree, with the host at the root, and 
only the leaf nodes (devices, right?) are end nodes?

And, given that this means that "end node" is not the correct term for a piece 
of equipment that isn't a hub, what *is* the correct term?
--- End Message ---


Re: [tcpdump-workers] Speed specific Link-Layer Header Types for USB 2.0

2022-05-09 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On May 8, 2022, at 10:47 PM, Tomasz Moń  wrote:

> On Sun, May 8, 2022 at 8:53 PM Guy Harris  wrote:
>> At least from a quick look at section 5.2.3 "Physical Bus Topology" of the 
>> USB 2.0 spec, a given bus can either be a high-speed bus or a full/low-speed 
>> bus.
> 
> The full/low-speed bus applies only to upstream link from full speed hub.

So what happens if you plug a low-speed keyboard or mouse into a host that 
supports USB 2.0?  Does that link not run at low speed?

>> The idea, then, is presumably that a capture tool is capturing on a single 
>> bus (single wire), so it's either capturing on a high-speed bus or a 
>> full/low-speed bus.
> 
> I assume that by single wire you meant "single wire pair"
> (differential pair). USB 2.0 has only single differential pair, formed
> by D+ and D- signal wires, so the high/full/low speed communication
> always occurs on the same wire pair.

Sorry - that's "wire" in the sense of "cable", not in the literal sense.

>> It looks as if a high-speed bus will always run at 480 Mb/s, so that capture 
>> would be a LINKTYPE_USB_2_0_HIGH_SPEED capture.  Is that correct?
> 
> Yes. If you connect high-speed hub to high-speed host (or super-speed
> host, but super-speed host essentially contains high-speed host, aka
> dual-bus) the communication on the connecting wires will be at high
> speed (480 Mb/s). Similarly if high-speed device is connected to
> high-speed host (or hub) then the communication will be at high speed.

"super-speed" is USB 3.0, right?  No LINKTYPE_/DLT_ has been proposed for the 
3.0 link layer, as far as I know.

But no full-speed or low-speed will go over that connection, either, so it's 
never the case that, in a capture on a USB cable, there will be both high-speed 
and full/low-speed traffic, right?

(And presumably this is for captures on a single USB cable; if you're capturing 
on more than one cable, that's with more than one capture interface, so that's 
a job for pcapng, with different interfaces having different link-layer types.)

>> For full/low-speed buses, will those also always run at full peed or low 
>> speed, so that there would never be a mixture of full-speed and low-speed 
>> transactions?
> 
> If you capture at the connection between low speed device and
> host/hub, there will only ever be low speed packets. It would be a
> LINKTYPE_USB_2_0_LOW_SPEED capture.
> 
> The problematic case (and the reason why full/low-speed bus is
> mentioned) is the LINKTYPE_USB_2_0_FULL_SPEED. It is the case when you
> capture at the connection between full speed hub and the host (and
> possibly full speed device connected to a full speed hub if there are
> low speed devices connected to the full speed hub). If there is low
> speed device connected to downstream hub port, then when the host
> wants to send packets to the low speed device, these will be sent at
> low speed to the hub. However, there will be PRE packet (sent at full
> speed) before every low speed transaction.

So, as per a few paragraphs above ("If you connect high-speed hub to high-speed 
host ... the communication on the connecting wires will be at high
speed (480 Mb/s)."), if you have a high-speed hub connected to a high-speed 
host, and the high-speed hub has full-speed or low-speed devices downstream, 
the packets from the host to the hub, ultimately intended for the full-speed or 
low-speed device, are sent as high-speed traffic, and only the downstream 
traffic from the host to the full-speed or low-speed device is full-speed or 
low-speed?

However, if you have a full-speed hub connected to a full-speed or high-speed 
host, and the full-speed hub has low-speed devices downstream, the packets from 
the host to the hub, ultimately intended for the low-speed device, are sent as 
a full-speed PRE packet followed by a transaction sent as low-speed traffic?--- End Message ---


Re: [tcpdump-workers] Speed specific Link-Layer Header Types for USB 2.0

2022-05-08 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On May 8, 2022, at 1:30 PM, Michael Richardson  wrote:

> I guess I would have thought that a physical bus could have a mix of
> different devices which operate at different speeds.  As such, I wondered if
> you really needed pcapng to be able to mix LINKTYPES in the same file, or
> a different bit of meta-data to indicate bus speed for each frame captured.
> 
> But, maybe I'm wrong and that actually requires there to be a USB hub out 
> there.

"Bus" is a bit weird here.

To quote section 4.1.1 "Bus Topology" of the USB 2.0 spec:

The USB connects USB devices with the USB host. The USB physical 
interconnect is a tiered star topology. A hub is at the center of each star. 
Each wire segment is a point-to-point connection between the host and a hub or 
function, or a hub connected to another hub or function. Figure 4-1 illustrates 
the topology of the USB.

and Figure 5-6 "Multiple Full-speed Buses in a High-speed System" seems to use 
the term "bus" to refer to wire segments.

I think a point-to-point connection between the host and another entity may 
always run at a single speed, as well as a connection between a hub and a 
function.

It might also be the case that a hub-to-hub connection also runs at a single 
speed.  Section 11.14 "Transaction Translator" says:

A hub has a special responsibility when it is operating in high-speed 
and has full-/low-speed devices connected on downstream facing ports. In this 
case, the hub must isolate the high-speed signaling environment from the 
full-/low-speed signaling environment. This function is performed by the 
Transaction Translator (TT) portion of the hub.

so if you have a full-speed or low-speed device plugged into a USB 2.0 hub, and 
that hub is connected to a host, the host-to-hub link is high-speed, and the 
hub-to-device link is full-speed or low-speed, and the hub does the 
translation.  That way, you can plug a high-speed device and a full-speed or 
low-speed device into the hub, and the host will be able to talk at high speed 
to the high-speed device.

USB isn't a shared bus like non-switched Ethernet; it's more like switched 
Ethernet or point-to-point Ethernet, with links being point-to-point, either a 
direct connection between end nodes or connections to a switching device that 
handles speed translation if two end nodes of different speed capabilities are 
communicating.
--- End Message ---


Re: [tcpdump-workers] Speed specific Link-Layer Header Types for USB 2.0

2022-05-08 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On May 8, 2022, at 4:48 AM, Tomasz Moń via tcpdump-workers 
 wrote:

> I would like to remedy the situation by requesting additional speed
> specific link layer header types, for example:
>  * LINKTYPE_USB_2_0_LOW_SPEED
>  * LINKTYPE_USB_2_0_FULL_SPEED
>  * LINKTYPE_USB_2_0_HIGH_SPEED
> 
> The description for existing LINKTYPE_USB_2_0 could be updated to
> mention that for new captures, the speed specific link layer header
> types should be used to enable better dissection.

To quote a comment of yours in the Wireshark issue:

> I should have gone for three separate link-layer header types for "USB 
> 1.0/1.1/2.0 packets" each at different capture speed (low/full/high). I think 
> technically we can still add these alongside the current "unknown speed" one. 
> The reason behind having separate link-layer header types is that the capture 
> tool must know the capture link speed (agreed speed does not change during 
> the transmission, and the handshaking is not on packet level) and the capture 
> link speed is useful when analyzing packets.

At least from a quick look at section 5.2.3 "Physical Bus Topology" of the USB 
2.0 spec, a given bus can either be a high-speed bus or a full/low-speed bus.

The idea, then, is presumably that a capture tool is capturing on a single bus 
(single wire), so it's either capturing on a high-speed bus or a full/low-speed 
bus.

It looks as if a high-speed bus will always run at 480 Mb/s, so that capture 
would be a LINKTYPE_USB_2_0_HIGH_SPEED capture.  Is that correct?

For full/low-speed buses, will those also always run at full speed or low speed, 
so that there would never be a mixture of full-speed and low-speed transactions?
--- End Message ---


Re: [tcpdump-workers] wireshark extension for a Kernel Module (like Usbmon)

2022-03-07 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On Mar 7, 2022, at 5:55 AM, Christian via tcpdump-workers 
 wrote:

> hello out there, I created a kernel probe module and I want to watch the
> outputs of that module with pcap/Wireshark or tcpdump... Just like
> usbmon. My prefered tool is dumpcap. So I defined a char device in the
> dev-directory /dev/kpnode from which the pcap interface can read the
> output of that module. In order to enable reading, I started to place a
> handler function into libpcap:
> 
> In pcap.c I put in
> 
> #ifdef PCAP_SUPPORT_KPNODE
> #include "pcap-kpnode.h"
> #endif
>  and later:
> #ifdef PCAP_SUPPORT_KPNODE
> { kpnode_findalldevs, kpnode_create },
> #endif

That's the correct way to add it to the table of libpcap modules.

> further down:
> #ifdef PCAP_SUPPORT_KPNODE
> || strstr(device, "kpnode") != NULL
> #endif

That's presumably in pcap_lookupnet(); if so, that's the correct way to add 
kpnode there.

(I need to change that to use a better mechanism, so that it's the 
responsibility of the module to handle that, rather than hardcoding module 
information in a function.)

> The functions kpnode_findalldevs and kpnode_create are in my files
> pcap-kpnode.c and pcap-kpnode.h. They are not finished yet but the
> subject of this mail is for now, how to connect these functions into
> libpcap and Wireshark so that they are evoked if a device /dev/kpnode
> emerges.
> 
> Further I added an entry to configure.ac: AC_DEFINE(PCAP_SUPPORT_KPNODE,
> 1, [target host supports Linux kpmode])
> 
> Im not sure if editing the autoconf input file is too much, because I
> don't want to commit my changes to other platforms, it's just a small
> project of my own.

If you're just doing it on your own, and you will be using this modified 
libpcap only on systems where kpnode is available, the easiest way to do it 
would be to leave out the #ifdef`s for PCAP_SUPPORT_KPNODE.

If your entry in configure.ac unconditionally sets PCAP_SUPPORT_KPNODE, it's 
not useful, as it's equivalent to just removing the #ifdefs and hardwiring 
kpnode support into your version of libpcap.

If it *doesn't* unconditionally set PCAP_SUPPORT_KPNODE, then you might as well 
leave the #ifdefs in.

> But there are also some entries for USBMON in e.x.
> CMakeList.txt and more.

If you're not planning on committing your changes, and you don't plan to use 
CMake in the build process, there's no need to modify CMakeList.txt and 
anything else CMake-related, such as cmakeconfig.h.in.

> After execution of the configure script I put
> manually my files into the EXTRA_DIST list.

EXTRA_DIST is useful only if you plan to do "make releasetar" to make a source 
tarball - and if you want to do *that*, add it to Makefile.in, not to Makefile, 
so you won't have to fix Makefile manually.

> But so far, when I build the pcap library not even the symbol kpnode
> appears in the binary

Do you mean that a symbol named "kpnode" doesn't appear in the (shared) library 
binary?

Or do you mean that symbols with "kpnode" in their names, such as 
kpnode_findalldevs and kpnode_create, don't appear in the library binary?

If so, are you looking for *exported* symbols or *all* symbols?  On most 
platforms - and Linux is one such platform - we compile libpcap so that *only* 
routines we've designated as being libpcap APIs are exported by the library; 
others are internal-only symbols.  For example, if I do

$ nm libpcap.so.1.11.0-PRE-GIT | egrep usb_
0002f480 t swap_linux_usb_header.isra.0
ee60 t usb_activate
eb00 t usb_cleanup_linux_mmap
f300 t usb_create
f150 t usb_findalldevs
e670 t usb_inject_linux
e6b0 t usb_read_linux_bin
e860 t usb_read_linux_mmap
e660 t usb_setdirection_linux
edc0 t usb_set_ring_size
ed20 t usb_stats_linux_bin

on my Ubuntu 20.04 VM, it shows symbols for the Linux usbmon module, *but* they 
aren't exported symbols - they're shown with 't', not 'T'.  By contrast, if I do

$ nm libpcap.so.1.11.0-PRE-GIT | egrep pcap_open
00012ea0 T pcap_open
0001bdc0 T pcap_open_dead
0001bce0 T pcap_open_dead_with_tstamp_precision
0001b9a0 T pcap_open_live
0002cf20 T pcap_open_offline
0001ab10 t pcap_open_offline_common
0002cde0 T pcap_open_offline_with_tstamp_precision
00015b70 t pcap_open_rpcap

symbols such as pcap_open(), pcap_open_live(), pcap_open_offline(), etc. *are* 
exported symbols - they're shown with 'T'.

So, to check for symbols, you should do "nm" and pipe the result to "egrep 
kpnode_".  Those symbols should show up with 't', not 'T', as they aren't part 
of the API - kpnode_findalldevs() should automatically get called if a program 
calls pcap_findalldevs() (e.g., if tcpdump is compiled with this library, 
"tcpdump -D" should cause kpnode_findalldevs() to be called, and should show 
the kpnode device(s)), and kpnode_create() should automatically get called if 
a program calls pcap_create() with a kpnode device name.
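That check can be scripted; a sketch follows, using fabricated nm output for
illustration (the addresses and the kpnode symbol names are assumptions; in a
real build you would pipe actual `nm libpcap.so...` output into the grep):

```shell
# Simulated "nm" output; in a real build you would instead run something like:
#   nm libpcap.so.1.11.0-PRE-GIT | egrep 'kpnode_'
# 't' marks library-internal symbols, 'T' marks exported API symbols.
nm_output='0000f300 t kpnode_create
0000f150 t kpnode_findalldevs
00012ea0 T pcap_open'

# The module symbols should be present, but with lowercase 't' (not exported):
internal=$(printf '%s\n' "$nm_output" | grep -Ec ' t kpnode_')
echo "internal kpnode symbols: $internal"
```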

Re: [tcpdump-workers] Selectively suppressing CI on some sites for a commit?

2022-01-06 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On Jan 6, 2022, at 3:22 PM, Guy Harris via tcpdump-workers 
 wrote:

> On Jan 6, 2022, at 3:00 PM, Denis Ovsienko via tcpdump-workers 
>  wrote:
> 
>> Do you think https://www.tcpdump.org/ci.html should document [skip cirrus] 
>> and [skip appveyor]?
> 
> [skip appveyor], possibly.

Cirrus documents that any of [skip ci], [ci skip], or [skip cirrus] in the 
first line of the commit message will suppress a CI build:

https://cirrus-ci.org/guide/writing-tasks/

AppVeyor documents that any of [skip ci], [ci skip], or [skip appveyor] in the 
commit message title (first line, presumably) will suppress a CI build:

https://www.appveyor.com/docs/how-to/filtering-commits/

It appears that a "GitHub skip hook" may have been first introduced in Buildbot 
0.9.11:

https://docs.buildbot.net/0.9.11/relnotes/index.html

with the hook being configurable by a regex match.  The 0.9.11 documentation of 
the "skips" parameter of the GitHub hook:

https://docs.buildbot.net/0.9.11/manual/cfg-wwwhooks.html#chsrc-GitHub

does not say anything about the skip item having to be on the first line of the 
commit message; it does say that the default parameter is

[r'\[ *skip *ci *\]', r'\[ *ci *skip *\]']

so either [skip ci] or [ci skip] (with arbitrary numbers of blanks thrown in 
after [, between the words, or before ]) should work.

OpenCSW's buildbot:

https://buildfarm.opencsw.org/buildbot/

claims to be running Buildbot 0.8.14; from the tests I ran, it skips the build 
if [skip ci] is on the first line of the message, but not if it's after that 
line.  I don't know whether there was a "skip ci" feature in older versions, or 
if the OpenCSW people implemented it themselves, checking only the first line.

All the Buildbot instances we've set up appear to be running Buildbot 3.4.0, 
which appears to handle [skip ci] anywhere in the commit message.

With a test I did by doing commits adding or removing blank lines from 
CMakeLists.txt, and with various commit messages, it appears that:

if the first line of the commit message ends with [skip ci], *all* CI 
builds are being suppressed (Cirrus, AppVeyor, OpenCSW, the buildbots we set 
up);

if some *other* line of the commit message is [skip ci], our buildbots 
skip the build, but Cirrus CI, AppVeyor, and OpenCSW don't skip it;

which appears to agree with what's documented above plus the hypothesis that 
OpenCSW's buildbot supports [skip ci] on the first line only.

So:

to suppress *all* builds, put [skip ci] on the first line;

to suppress only AppVeyor builds (which currently means "do only UN*X 
builds"), put [skip appveyor] on the first line;

to suppress only Cirrus builds (which means "skip x86-64 Linux, x86-64 
macOS, and x86-64 FreeBSD", but that doesn't suppress ARM64 FreeBSD or 
non-x86-64 Linux, so I'm not sure how useful it is), put [skip cirrus] on the 
first line;

to suppress only our buildbot builds, put [skip ci] somewhere *other* 
than the first line;

to suppress any set of builders that's the union of the three cases 
above, apply the markers for the builders in question.

There does not seem to be a way to do *only* Windows builds.  Putting [skip 
cirrus] on the first line and [skip ci] elsewhere in the commit message is the 
closest to that, but it won't suppress the OpenCSW builds, meaning "only 
Windows and Solaris".
--- End Message ---
___
tcpdump-workers mailing list
tcpdump-workers@lists.tcpdump.org
https://lists.sandelman.ca/mailman/listinfo/tcpdump-workers


Re: [tcpdump-workers] Selectively suppressing CI on some sites for a commit?

2022-01-06 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On Jan 6, 2022, at 3:00 PM, Denis Ovsienko via tcpdump-workers 
 wrote:

> On Thu, 6 Jan 2022 14:11:54 -0800 Guy Harris via tcpdump-workers 
>  wrote:
> 
>> I've just updated the libpcap .appveyor.yml to get Npcap from
>> npcap.com (the Npcap site has been moved there); I added [skip
>> cirrus] to skip Cirrus CI for that change, and it appears to work.
> 
> That's nice to know.  Either this is a relatively recent skip pattern in
> Cirrus CI, or I didn't notice it before (see my message to the list
> from 21 August 2020).

...or it doesn't work, even though the CI page on tcpdump.org didn't show the 
builds as being in progress.  It looks as if the libpcap builds *did* occur, 
and a tcpdump build (with the equivalent .appveyor.yml update) is in progress.

> Do you think https://www.tcpdump.org/ci.html should document [skip cirrus] 
> and [skip appveyor]?

[skip appveyor], possibly.  [skip cirrus], no, as my inference that it worked 
appears to be wrong.

>> Are there other comments to add to suppress OpenCSW CI and to
>> suppress the other CI sites that have been set up?  The only one I
>> want *not* suppressed is AppVeyor.
> 
> Not immediately, or not at all.  However, there are only two Buildbot
> places where all skip patterns are processed (or not).
> 
> ci.tcpdump.org recognizes [skip ci] because that's the default
> behaviour in that version of Buildbot.  Following the documentation,
> several months and Buildbot versions ago I tried adding [skip buildbot]
> to the list of skip patterns, but for some reason it had no effect
> (could be a user error or a bug). Would it help to try again?

I tried it with the tcpdump build, and it *appears* to work with the Tcpdump 
Group buildbots (the RISC-V one is running, but it's still working on a build 
from a change François submitted 3 hours ago, so it hasn't even started my 
change; that buildbot appears not to be the fastest computer in existence, 
shall we say).

> I am not familiar with OpenCSW Buildbot setup, but from the build
> history it is obvious it disregards [skip ci], so it looks likely it
> would disregard [skip buildbot] too.

It appears to disregard it.
--- End Message ---


[tcpdump-workers] Selectively suppressing CI on some sites for a commit?

2022-01-06 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
I've just updated the libpcap .appveyor.yml to get Npcap from npcap.com (the 
Npcap site has been moved there); I added [skip cirrus] to skip Cirrus CI for 
that change, and it appears to work.

Are there other comments to add to suppress OpenCSW CI and to suppress the 
other CI sites that have been set up?  The only one I want *not* suppressed is 
AppVeyor.
--- End Message ---


Re: [tcpdump-workers] New DLT_ type request

2022-01-06 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On Jan 5, 2022, at 6:53 PM, Timotej Ecimovic  
wrote:

> No. Like the document describes: tooling that deals with deframing is 
> expected to remove the starting `[`, the ending `]` and the 2 byte length 
> right after the `[`.
> In case of creating a PCAPNG file out of this stream, the payload of the 
> packet blocks will NOT contain the framing. So the "packet" starts with the 
> debug message.

I.e., in LINKTYPE_SILABS_DEBUG_CHANNEL files, the packet doesn't include the 
'[', the length value, or the ']'?

>> What do the bits in the "Flags" field of the 3.0 debug message mean?  Does 
>> "few bytes of future-proofing flags" mean that there are currently no flag 
>> bits defined, so that the field should always be zero, but there might be 
>> flag bits defined in the future?
> They mean "Reserved for future use". The value currently can be arbitrary 
> and until someone defines values for them, they have no meaning. I'll make 
> this more specific in the doc.

So is there something in the debug message to indicate whether the field has no 
meaning and should be ignored, or has a meaning and should be interpreted?
--- End Message ---


Re: [tcpdump-workers] New DLT_ type request

2022-01-05 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On Jan 5, 2022, at 9:38 AM, Timotej Ecimovic via tcpdump-workers 
 wrote:

> I'm requesting an addition of the new DLT type. I'd call it: 
> DLT_SILABS_DEBUG_CHANNEL.
> The description of the protocol is here:
> https://github.com/SiliconLabs/java_packet_trace_library/blob/master/doc/debug-channel.md

...

> In case of errors (such as the ] not being present after the length bytes) 
> the recovery is typically accomplished by the deframing state engine reading 
> forward until a next [ is found, and then attempting to resume the deframing. 
> This case can be detected, because the payload of individual message contains 
> the sequence number.

So, presumably:

1) all packets in a LINKTYPE_SILABS_DEBUG_CHANNEL capture begin with a 
'[';

2) all bytes after the '[' and the payload bytes specified by the 
length should be ignored as being from a framing error, unless there's just one 
byte equal to ']'?

I.e., code reading the capture file does *not* have to do any deframing?

What do the bits in the "Flags" field of the 3.0 debug message mean?  Does "few 
bytes of future-proofing flags" mean that there are currently no flag bits 
defined, so that the field should always be zero, but there might be flag bits 
defined in the future?

> The types supported are listed in this file.

The file in question:


https://github.com/SiliconLabs/java_packet_trace_library/blob/master/silabs-pti/src/main/java/com/silabs/pti/debugchannel/DebugMessageType.java

lists a bunch of message types; is there a document that describes the format 
of messages with each of those types?


--- End Message ---


Re: [tcpdump-workers] [libpcap] Keep Win32/Prj/* files ?

2021-12-06 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On Dec 6, 2021, at 10:55 AM, Denis Ovsienko via tcpdump-workers 
 wrote:

> On Mon, 29 Nov 2021 19:20:32 +0100 Francois-Xavier Le Bail via 
> tcpdump-workers  wrote:
> 
>> Does anyone use these files?
>> Win32/Prj/wpcap.sln
>> Win32/Prj/wpcap.vcxproj
>> Win32/Prj/wpcap.vcxproj.filters
> 
> It looks like CMake has superseded these files, as far as it is
> possible to tell without Windows.

They are not used by the CMake build process on Windows, so they would be used 
only by people trying to build *without* CMake.

The CMake files are likely to be better maintained than the "use Visual Studio 
directly" files, as you don't need Visual Studio, and don't need to know how 
Visual Studio solution or project files work internally, in order to modify the 
CMake files.
--- End Message ---


Re: [tcpdump-workers] NetBSD breakage

2021-08-11 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On Aug 11, 2021, at 3:09 PM, Denis Ovsienko via tcpdump-workers 
 wrote:

> The other matter is that the gencode.h/grammar.h pair works best when
> it is included early.

Perhaps the gencode.h/grammar.h pair works best when it doesn't include 
grammar.h. :-)

I've checked in a change to remove the include of grammar.h from gencode.c; it 
builds without problems on macOS, and I suspect it will build without problems 
everywhere, as what grammar.h defines are:

1) the names for tokens (which may be done with an enum in a fashion 
that causes large amounts of pain if another header you include helpfully - but 
uselessly, for our purposes - defines names for the machine's registers, and 
you are unlucky enough to be compiling for a machine that has a register named 
"esp", causing a collision with the "esp" token in the pcap filter language for 
ESP; fortunately, such machines are rare :-) :-) :-) :-) :-) :-();

2) a union of value types for all symbols in the grammar.

As far as I can tell, neither the token names and values nor the value type 
union is passed to any of the gencode.c routines called from grammar.y.  We *do* pass 
values for symbols, but we select the particular union member, rather than just 
blindly passing the union as a whole.

So far, all the libpcap builds on www.tcpdump.org are green except for the 
Windows build, which is listed as pending; it's about 2/3 of the way through 
the build matrix.
--- End Message ---

Re: [tcpdump-workers] build failures on Solaris

2021-08-08 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On Aug 8, 2021, at 2:26 AM, Denis Ovsienko  wrote:

> GCC+CMake fails early now (see attached).

Good!  That reveals the *underlying* problem:

1) CMake, by default, checks for both a C *and* a C++ compiler;

2) if it's checking for both compilers, the way CMake determines 
CMAKE_SIZEOF_VOID_P is to:

check for a C compiler;

set CMAKE_C_SIZEOF_DATA_PTR to the size of data pointers in that C 
compiler with whatever C flags are being used;

set CMAKE_SIZEOF_VOID_P to CMAKE_C_SIZEOF_DATA_PTR;

check for a C++ compiler;

set CMAKE_CXX_SIZEOF_DATA_PTR to the size of data pointers in that C++ 
compiler with whatever C++ flags are being used;

set CMAKE_SIZEOF_VOID_P to CMAKE_CXX_SIZEOF_DATA_PTR;

3) Sun/Oracle's C and C++ compilers default to building *32-bit* code;

4) the version of GCC installed on the Solaris 11 builder appears to default to 
building 64-bit code;

5) there does not appear to be a version of G++ installed, so CMake finds 
"/usr/bin/CC", which is the Sun/Oracle C++ compiler;

6) as a result of the above, CMake ends up setting CMAKE_SIZEOF_VOID_P to 4, 
which can affect the process of finding libraries;

7) nevertheless, the C code (which is *all* the code - ain't no C++ in tcpdump) 
is compiled 64-bit;

8) hilarity ensues.

I've checked in a change to explicitly tell CMake "this is a C-only project, 
don't check for a C++ compiler", so it should now think it's building 64-bit 
when building with GCC.

See whether that fixes things.
--- End Message ---

Re: [tcpdump-workers] build failures on Solaris

2021-08-08 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On Jul 31, 2021, at 3:37 AM, Denis Ovsienko via tcpdump-workers 
 wrote:

> # Solaris 11 with GCC #
> This is the opposite: the pre-compile libpcap feature test programs
> fail to link so all libpcap feature tests fail. However, libpcap is
> detected as available and the build process resorts to missing/ and
> produces a binary of tcpdump that is mostly functional:
> 
> $ /tmp/tcpdump_build_matrix.XX06MD.a/bin/tcpdump -D
> /tmp/tcpdump_build_matrix.XX06MD.a/bin/tcpdump: illegal option -- D
> 
> The problem seems to be that the feature test linking instead of using
> the flags returned by pcap-config points exactly to the 32-bit version
> of libpcap and fails:

I've checked in changes to:

check the bit-width of the build in autotools;

on Solaris, use the results of the bit-width checks for autotools and 
CMake to figure out which version of pcap-config to run.

See if that clears up the Solaris 11 with GCC build.
--- End Message ---

Re: [tcpdump-workers] build failures on Solaris

2021-08-03 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On Aug 3, 2021, at 12:07 AM, Dagobert Michelsen  wrote:

> The /64 suffix in bin/ and lib/ is a symlink to the respective architecture
> and simplifies cross-platform build between Sparc and x86.

For whatever reason, /usr/bin/64 isn't present on my Solaris 11.3 (x86-64) VM:

solaris11$ ls /usr/bin/64
/usr/bin/64: No such file or directory
solaris11$ uname -a
SunOS solaris11 5.11 11.3 i86pc i386 i86pc

The same is true of the directory containing the installed-from-IPS gcc:

solaris11$ which gcc
/usr/ccs/bin/gcc
solaris11$ ls /usr/ccs/bin/64
/usr/ccs/bin/64: No such file or directory

and Sun/Oracle C:

solaris11$ which cc
/opt/developerstudio12.5/bin/cc
solaris11$ ls /opt/developerstudio12.5/bin/64/cc
/opt/developerstudio12.5/bin/64/cc: No such file or directory

Sun/Oracle don't appear to have made as vigorous an effort to make this work as 
OpenCSW have.
--- End Message ---

Re: [tcpdump-workers] build failures on Solaris

2021-08-02 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On Jul 31, 2021, at 3:37 AM, Denis Ovsienko via tcpdump-workers 
 wrote:

> # Solaris 11 with GCC #
> This is the opposite: the pre-compile libpcap feature test programs
> fail to link so all libpcap feature tests fail. However, libpcap is
> detected as available and the build process resorts to missing/ and
> produces a binary of tcpdump that is mostly functional:
> 
> $ /tmp/tcpdump_build_matrix.XX06MD.a/bin/tcpdump -D
> /tmp/tcpdump_build_matrix.XX06MD.a/bin/tcpdump: illegal option -- D
> 
> The problem seems to be that the feature test linking instead of using
> the flags returned by pcap-config points exactly to the 32-bit version
> of libpcap and fails:
> 
> $ pcap-config --libs
> -L/usr/lib  -lpcap

solaris11$ /usr/bin/pcap-config --libs
-L/usr/lib  -lpcap
solaris11$ /usr/bin/amd64/pcap-config --libs
-L/usr/lib/amd64 -R/usr/lib/amd64 -lpcap

on my x86-64 Solaris 11 VM.

From the Solaris 64-bit Developer's Guide:

http://math-atlas.sourceforge.net/devel/assembly/816-5138.pdf

the equivalent of "amd64" on SPARC is probably "sparcv9".

So tcpdump (and anything else using libpcap) should, on Solaris, determine the 
target architecture and run the appropriate version of pcap-config.

I'll look at that.
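A sketch of that selection logic on Solaris follows (the paths mirror the
pcap-config output quoted above; BITS stands in for the result of a build-time
pointer-width check, e.g. of sizeof(void *), and is an assumption here):

```shell
# Pick the pcap-config matching the target pointer width (Solaris x86 layout;
# on SPARC the 64-bit subdirectory would be "sparcv9" rather than "amd64").
BITS=64   # assumed result of the build system's bit-width check

case "$BITS" in
  64) PCAP_CONFIG=/usr/bin/amd64/pcap-config ;;
  *)  PCAP_CONFIG=/usr/bin/pcap-config ;;
esac

echo "would run: $PCAP_CONFIG --libs"
```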

(Apropos of nothing, that Sun document also says of the 64-bit SPARC ABI:

Structure passing and return are accomplished differently. Small data 
structures and some floating point arguments are now passed directly in 
registers.

I'm curious which, if any, ABIs pass data structures *and unions* that would 
fit in a single register in a register.)
--- End Message ---

Re: [tcpdump-workers] build failures on Solaris

2021-08-01 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On Aug 1, 2021, at 6:08 PM, Denis Ovsienko  wrote:

> On Sun, 1 Aug 2021 15:45:39 -0700
> Guy Harris  wrote:
> 
>> Probably some annoying combination of one or more of "different
>> compilers", "later version of CMake", "at least some versions of cc
>> and gcc build 32-bit binaries by default even on Solaris 11 on a
>> 64-bit machine(!)", and so on.
>> 
>> This is going to take a fair bit of cleanup, not the least of which
>> includes forcing build with both autotools *and* CMake to default to
>> 64-bit builds on 64-bit Solaris.
> 
> For clarity, there is no rush to fix every obscure issue in this
> problem space, but it is useful to have the problem space mapped.

At this point, I'm seeing two problems:

1) The pcap-config and libpcap.pc that we generate always include a -L flag, 
even if the directory is a system library directory, which means that it could 
be wrong on a system with 32-bit and 64-bit libraries in separate directories.  
Debian removes that from pcap-config to avoid that problem.  We shouldn't add 
-L in that case.

2) Tcpdump needs to work around that when configuring.

The first is definitely our bug, given that Debian is working around it.

The second would be helpful; we already work around Apple screwing up 
pcap-config by having the one they ship with macOS include -L/usr/local/lib for 
no good reason.
--- End Message ---

Re: [tcpdump-workers] build failures on Solaris

2021-08-01 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On Jul 31, 2021, at 4:35 PM, Denis Ovsienko  wrote:

> On Sat, 31 Jul 2021 14:55:32 -0700
> Guy Harris  wrote:
> 
> [...]
>> What version of CMake is being used, and how was it installed?
>> 
>> My Solaris 11 x86-64 virtual machine has CMake 2.8.6 in
>> /usr/ccs/bin/cmake, installed from Sun^WOracle's Image Packaging
>> System repositories, and I'm not seeing that behavior - the test
>> programs are linked with -lpcap, as is tcpdump.
> 
> This issue reproduces on OpenCSW host unstable11s:

So where do the Solaris 11 hosts show up on the buildbot site?

> # CMake 3.14.3 (OpenCSW package)
> # GCC 7.3.0
> 
> MATRIX_CC=gcc \
> MATRIX_CMAKE=yes \
> MATRIX_BUILD_LIBPCAP=no \
> ./build_matrix.sh 
> [...]
> $ /tmp/tcpdump_build_matrix.XXVrYyid/bin/tcpdump -D
> /tmp/tcpdump_build_matrix.XXVrYyid/bin/tcpdump: illegal option -- D
> tcpdump version 5.0.0-PRE-GIT
> libpcap version unknown
> 
> As I have discovered just now, it does not reproduce on OpenCSW host
> gcc211:

Probably some annoying combination of one or more of "different compilers", 
"later version of CMake", "at least some versions of cc and gcc build 32-bit 
binaries by default even on Solaris 11 on a 64-bit machine(!)", and so on.

This is going to take a fair bit of cleanup, not the least of which includes 
forcing build with both autotools *and* CMake to default to 64-bit builds on 
64-bit Solaris.
--- End Message ---

Re: [tcpdump-workers] build failures on Solaris

2021-07-31 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On Jul 31, 2021, at 3:37 AM, Denis Ovsienko via tcpdump-workers 
 wrote:

> # Solaris 11 with GCC #
> This is the opposite: the pre-compile libpcap feature test programs
> fail to link so all libpcap feature tests fail. However, libpcap is
> detected as available and the build process resorts to missing/ and
> produces a binary of tcpdump that is mostly functional:
> 
> $ /tmp/tcpdump_build_matrix.XX06MD.a/bin/tcpdump -D
> /tmp/tcpdump_build_matrix.XX06MD.a/bin/tcpdump: illegal option -- D

What version of CMake is being used, and how was it installed?

My Solaris 11 x86-64 virtual machine has CMake 2.8.6 in /usr/ccs/bin/cmake, 
installed from Sun^WOracle's Image Packaging System repositories, and I'm not 
seeing that behavior - the test programs are linked with -lpcap, as is tcpdump.
--- End Message ---

Re: [tcpdump-workers] compiler warnings on AIX and Solaris

2021-07-24 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On Jul 23, 2021, at 4:11 PM, Denis Ovsienko via tcpdump-workers 
 wrote:

> As it turns out, on Solaris 9 it is impossible to compile current
> tcpdump with CFLAGS=-Werror because missing/getopt_long.c yields a few
> warnings (attached). As far as the current revisions of this file go in
> FreeBSD, NetBSD and OpenBSD, FreeBSD seems to be the closest and just a
> bit newer than the current tcpdump copy (OpenBSD revision 1.22 -> 1.26).
> However, it seems unlikely that porting the newer revision would make
> the warnings go away, because, for example, permute_args() has not
> changed at all.

At least when it comes to not violating the promises made by the API 
definition, the BSD implementations of getopt_long(), the GNU libc 
implementation of getopt_long(), and the Solaris implementation of 
getopt_long() are all broken by design.

The declaration is

int getopt_long(int argc, char * const *argv, const char *optstring, 
const struct option *longopts, int *longindex);

where "char * const *argv" means, to quote cdecl.org, "declare argv as pointer 
to const pointer to char", which means that the pointer(s) to which argv points 
cannot be modified.  What the pointers point *to* - i.e., the argument strings 
- can be modified, but the pointers in the argv array will not be modified.

All three implementations could shuffle the arguments in argv[] (as per the 
name "permute_args" in the BSD implementations) unless either 1) the option 
string begins with a "+" or 2) the POSIXLY_CORRECT environment variable is set.

This isn't an issue for us on systems that provide getopt_long() - it's an 
issue for whoever compiles the standard library if they turn on "warn about 
casting away constness", but it's not an issue for *us*, as somebody else 
compiled the standard library.  Thus, it doesn't show up on Linux (GNU libc), 
*BSD/macOS (BSD), or newer versions of Solaris (they added getopt_long() to the 
library).

It is, however, an issue for us if 1) the platform doesn't provide 
getopt_long() (presumably it was added to Solaris after Solaris 9), so it has 
to be compiled as part of the tcpdump build process and 2) the compiler issues 
that warning.

It's not currently an issue on Windows when compiling with MSVC, because either 
1) MSVC never issues that warning or 2) it can but we're not enabling it.

So the only way to fix this is to turn off the warnings; change 
39f09d68ce7ebe9e229c9bf5209bfc30a8f51064 adds macros to disable and re-enable 
-Wcast-qual and wraps the offending code in getopt_long.c with those macros, so 
the problem should be fixed on Solaris 9.

> The same problem stands on AIX 7,

AIX also doesn't appear to provide getopt_long(), at least as of AIX 7.2:


https://www.ibm.com/docs/en/aix/7.2?topic=reference-base-operating-system-bos-runtime-services

so the same problem occurs; the change should fix that as well.

> and in addition there is an issue
> specific to XL C compiler, in that ./configure detects that the
> compiler does not support -W, but then proceeds to flex every -W
> option anyway, which the compiler treats as a soft error,

"The compiler treats [that] as a soft error" is the problem - the configure 
script checks currently require that unknown -W flags be a *hard* error, so 
that attempting to compile a small test program with that option fails.

If there's a way to force XL C to treat it as a hard error, we need to update 
the AC_LBL_CHECK_UNKNOWN_WARNING_OPTION_ERROR autoconf macro to set the 
compiler up to use it when testing whether compiler options are supported.

If there *isn't* a way to do that, the configure-script test also needs to scan 
the standard error of the compilation and look for the warning, and treat that 
as an indication of lack of support as well.  (I think the equivalent test 
provided as part of CMake may already do that.)
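That stderr-scanning fallback could look roughly like the sketch below (this
is not the actual configure test; the probe flag is deliberately bogus and the
exact flag name is an assumption for illustration):

```shell
# Probe whether the compiler rejects an unknown -W option: compile an empty
# program and require both a zero exit status and an empty stderr, so
# compilers that merely warn ("soft error") count as lacking the flag.
cat > conftest.c <<'EOF'
int main(void) { return 0; }
EOF

if ${CC:-cc} -Wsurely-not-a-real-option -c conftest.c -o conftest.o \
     2> conftest.err && ! [ -s conftest.err ]; then
  result="flag supported"
else
  result="flag not supported"
fi
echo "$result"
rm -f conftest.c conftest.o conftest.err
```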
--- End Message ---

[tcpdump-workers] Rough consensus and quiet humming

2021-04-22 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
https://twitter.com/MeghanEMorris/status/1382109954224521216/photo/1
--- End Message ---

Re: [tcpdump-workers] Link Layer Type Request NETANALYZER_NG

2021-03-24 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On Mar 24, 2021, at 12:32 AM, Jan Adam  wrote:

>> So, with incl_len equal to {PayloadSize,VarSize} + 54, orig_len would be 
>> equal to {original PayloadSize} + 54, so the original payload size would be 
>> orig_len - 54.
>> 
>> That would allow the original size and the sliced size of the payload to be 
>> calculated, so that should work.
> 
> Yes it should work.
> 
> I have the feeling this is more about the design then the implementation.

It's about either 1) saying "slicing is forbidden" or 2) saying "here's how you 
do slicing".  In either case, there would be implementation changes to tcpdump 
and Wireshark's editcap tool, as both of them can do packet slicing when 
reading a file and writing another file from the contents (although I just 
discovered that tcpdump doesn't appear to correctly set the snapshot length in 
the header of the output capture file, which I need to fix).

> I will try to explain our design decision of the footer. We have observed 
> that customers using Wireshark don't think about the header when counting the 
> bytes in the hex dump and expect the frame to start at the first byte and as 
> a result read out wrong values.

Perhaps that's an indication that Wireshark needs to do a better job of 
distinguishing between metadata headers and packet data, then.  (I already 
think so, as 1) counting metadata headers as data means, for example, that you 
get bogus bytes/second values and 2) separating them may make it more 
straightforward to implement transformation from, for example, 

> Therefore our idea was to put the additional info at the end in form of a 
> footer.
> 
> Maybe you can help me understand more of the general concept, how is this 
> slicing handled for a DLT with a header or footer in general?
> If you take for example another DLT: 
> https://www.tcpdump.org/linktypes/LINKTYPE_LINUX_SLL.html it has 16 byte 
> header size, how does editcap or tcpdump take that into account? Is it 
> possible to slice without taking the header size into account?

For headers, it currently will do what would be done when doing a live capture 
and slicing it - the snaplen is the maximum size of the data in the packet 
record, *including* metadata headers.

Changing that might be considered an incompatible change, but the ability to 
say "write packets out with no more than N bytes of *on-the-network packet 
data*" (rather than "no more than N bytes of *total* packet data, including 
metadata headers"), as a separate option, might be useful.

That would be fairly easy to do for *ex post facto* slicing of an existing 
capture file.  It would involve code that knows the size of the metadata header 
for all link-layer types, so that would be a bit of an architectural change to 
the code, but not a painful one.
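A minimal sketch of what that separate option might look like, assuming a hypothetical per-linktype table of metadata-header sizes (the 16-byte LINUX_SLL header is the one mentioned later in the thread; everything else here is illustrative, not actual tcpdump or editcap code):

```python
# Sketch: ex-post-facto slicing that limits *on-the-network* packet data
# rather than total record data.  The header-size table is hypothetical;
# only LINKTYPE_LINUX_SLL's fixed 16-byte pseudo-header comes from the
# published link-layer type description.
LINKTYPE_ETHERNET = 1
LINKTYPE_LINUX_SLL = 113

METADATA_HEADER_SIZE = {
    LINKTYPE_ETHERNET: 0,    # no metadata pseudo-header before the frame
    LINKTYPE_LINUX_SLL: 16,  # fixed-size SLL pseudo-header
}

def slice_record(record: bytes, linktype: int, max_wire_bytes: int) -> bytes:
    """Keep the metadata header plus at most max_wire_bytes of packet data."""
    hdr = METADATA_HEADER_SIZE[linktype]
    return record[:hdr + max_wire_bytes]

# A fake 16-byte SLL pseudo-header followed by 100 bytes of packet data,
# sliced to 64 bytes of on-the-wire data: 80 bytes total survive.
rec = bytes(16) + bytes(range(100))
assert len(slice_record(rec, LINKTYPE_LINUX_SLL, 64)) == 16 + 64
```

This is exactly the "code that knows the size of the metadata header for all link-layer types" the paragraph above describes; a real implementation would need an entry for every LINKTYPE_ value, including the variable-length ones.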

It's trickier for live captures, but, if the slicing is done by a BPF program, 
where the return value of the BPF filter indicates the number of bytes of total 
packet data to write, that could be done even if the metadata header is 
variable-length.  That's the case for *BSD/macOS, Linux, Solaris, AIX, and, as 
far as I know, Windows with Npcap.

I'm not sure there currently *are* any cases where a given LINKTYPE_ value 
specifies a metadata trailer.  There are some network devices that append 
metadata trailers to Ethernet packets and route them to a host for capturing, 
with Wireshark having heuristics for trying to guess whether there's a metadata 
trailer on the frame or not and which type of metadata trailer it is; slicing, 
whether done at capture time or *ex post facto*, will just slice the metadata 
trailer in two or slice it off completely.
--- End Message ---

Re: [tcpdump-workers] ARM build slaves (tcpdump mirror in Germany)

2021-03-23 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On Mar 22, 2021, at 5:35 PM, Denis Ovsienko via tcpdump-workers 
 wrote:

> On Mon, 22 Mar 2021 19:00:31 +0100
> Harald Welte  wrote:

...

>> btw: I'm not sure if qemu full system emulation of e.g. ppc on a
>> x86_64 hardware would be an option, though.  I think
>> openbuildservice.org is doing that a lot for building packages on
>> less popular architectures.
> 
> QEMU was very useful for the NetBSD setup. NetBSD for some reason did
> not provide binary packages for 9.1/aarch64, and heavy non-default
> packages (LLVM, Clang, GCC 10) just do not compile on 1GB RAM of RPI3B
> (NetBSD release does not run on RPI4B), so the only way to compile
> these was in a QEMU VM with more RAM.
> 
> That said, on a Linux host with i7-3770 CPU the QEMU guest measured at
> 64% core-to-core CPU performance of an RPI3B. So after the initial
> setup a hardware Pi does a better job.

The main PowerPC/Power ISA buildbot we'd want would probably be ppc64le, as the 
ppc64le implementation of some crypto library routines, as used by tcpdump, 
require strict adherence to the API documentation, e.g. 1) don't use the same 
buffer for encrypted and decrypted data and 2) provide all the necessary 
padding in the input buffer and leave enough room in the output buffer, as per

https://github.com/the-tcpdump-group/tcpdump/issues/814

64% isn't perfect, but it's a lot better than 10%, so if QEMU's PPC64/64-bit 
Power ISA emulation supports both big-endian and little-endian mode, and runs 
with acceptable performance (anything in the range of 50% is probably good 
enough), and the emulation is faithful enough (which being able to boot ppc64le 
Linux would probably imply), that would probably be sufficient.

Having *some* big-endian machine would be useful primarily for tcpdump testing, 
to make sure there's no code that implicitly assumes it's running on a 
little-endian machine (which most developers probably have); any of SPARC, 
ppcbe, or s390/s390x would suffice for that.
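As an illustration of the kind of bug a big-endian buildbot catches (a sketch, not code from tcpdump): extracting a network-order field with the host's native byte order happens to work on big-endian machines and silently breaks on little-endian ones.

```python
import struct

# Network protocols define multi-byte fields in big-endian order.  An
# explicit ">" format (or int.from_bytes) is portable; a native-order
# read is only correct on big-endian hosts.
raw = b"\xc0\x00\x02\x01"  # a big-endian 32-bit field from a packet

portable = struct.unpack(">I", raw)[0]        # explicit big-endian
also_portable = int.from_bytes(raw, "big")
native = struct.unpack("=I", raw)[0]          # host byte order: a bug

assert portable == also_portable == 0xC0000201
# On a little-endian developer machine the buggy read yields 0x010200C0
# instead, which is exactly why a big-endian buildbot is useful: the
# same code passes its tests on the developer's x86 box.
```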

SPARC has the additional advantage of trapping on unaligned accesses, so it'll 
also detect code that implicitly assumes that unaligned accesses work.  S/3x0 
hasn't required alignment since S/370 came out (unaligned accesses were an 
optional feature of S/360, but were made a standard feature in S/370), and I'm 
not sure PPC requires it.  We already have SPARC/Solaris 10 testing with 
OpenCSW, so that will fail on unaligned accesses; the only thing additional 
buildbots would do would be to give us Solaris 11 and Linux.
--- End Message ---

Re: [tcpdump-workers] Link Layer Type Request NETANALYZER_NG

2021-03-22 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On Mar 22, 2021, at 7:33 AM, Jan Adam  wrote:

>> Are they aligned on natural boundaries?
> 
> No, it is not aligned but packet.  We use #pragma pack(1) for the footer 
> structure.

You should probably add that to the page with the structure definition.

>> What do the four fields of the SrcID indicate for the various values of 
>> Representation?
> 
> For Representation 0x01 to 0x05 their meaning is defined as following:
> tSrcId.ulPart1   netANALYZER device number
> tSrcId.ulPart2   netANALYZER serial number
> tSrcId.bPart4netANALYZER port number
> 
> For Representation 0x02 to 0x05
> tSrcId.bPart3netANALYZER TAP name (as character, e.g. 'A' = 0x41 or 'B')
> 
> For Representation 0x01
> tSrcId.bPart3netANALYZER TAP number

That should also be noted in the specification.

>> What other possible values of PayloadType are there?
> 
> The PayloadType has the following possible values but they are not useful 
> for capturing network traffic. So the only value in the context of packet 
> data will be 0x0A which represents DATATYPE_OCTET_STRING.
> 
> #define VAR_DATATYPE_BOOLEAN0x01

...

> #define VAR_DATATYPE_NONE   0xff

It should also note that the other values are reserved and will not appear in 
pcap or pcapng files.

>>> Slicing a captured packet is not supported by our capturing device.
> 
>> But some software can slice packets afterwards.  Either that would have to 
>> be forbidden (meaning editcap and, I think, tcpdump would have to check for 
>> LINKTYPE_NETANALYZER_NG/DLT_NETANALYZER_NG and refuse to do slicing), or they 
>> would have to 1) ensure that the slice size is >= the footer size and 2) do 
>> the slicing specially, removing bytes *before* the footer, so that if 
>> incl_len < VarSize + footer_size, (VarSize + footer_size) - incl_len bytes 
>> have been sliced off.
> 
> Both might be possible paths to take for slicing. In any case the PayloadSize 
> should also be adjusted when the payload length is changed in my opinion. Is 
> this a problem?

So, with incl_len equal to {PayloadSize,VarSize} + 54, orig_len would be equal 
to {original PayloadSize} + 54, so the original payload size would be orig_len 
- 54.

That would allow the original size and the sliced size of the payload to be 
calculated, so that should work.
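That arithmetic can be sketched as follows, assuming the 54-byte packed footer size used above (`payload_sizes` is a hypothetical helper, not proposed API):

```python
FOOTER_SIZE = 54  # packed NETANALYZER_NG footer size used in this thread

def payload_sizes(incl_len: int, orig_len: int) -> tuple[int, int]:
    """Return (sliced payload size, original payload size) for one record.

    incl_len is the captured length of the record, orig_len the original
    length; in both, the last FOOTER_SIZE bytes are the footer.
    """
    return incl_len - FOOTER_SIZE, orig_len - FOOTER_SIZE

# A 300-byte payload sliced down to 200 bytes:
sliced, original = payload_sizes(200 + 54, 300 + 54)
assert (sliced, original) == (200, 300)
```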

--- End Message ---

Re: [tcpdump-workers] Link Layer Type Request NETANALYZER_NG

2021-03-18 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On Mar 15, 2021, at 9:04 AM, Jan Adam  wrote:

>> Can the variable be anything *other* than a packet of some sort?
> 
> There are only the mentioned 5 representations planned for pcap files since 
> this is what our capture device may capture into a pcap file. The 
> representation gives at least the ability to extend in the future. Do you 
> have anything specific in mind?

No.

>> It also appears that the boundary between the payload and the trailer would 
>> be determined by fetching the VarSize field at the end of the trailer.  The 
>> first VarSize bytes of the data would be the payload, and the remaining 
>> sizeof(footer) bytes would be the trailer.  Is that the case?
> 
> This is also correct. The remaining bytes of incl_len - VarSize is the footer 
> size.

If the fields of the footer are aligned on natural boundaries, the footer will 
be 72 bytes long; if they are *not* aligned, the footer will be 54 bytes long.

Are they aligned on natural boundaries?

Presumably VarSize is the same thing as PayloadSize?  If so, then presumably 
incl_len must be equal to VarSize + {either 54 or 72}.

> Some fields of the footer (like the ID) may seem to be redundant and not of 
> much purpose in the wireshark or tcpdump context but we use the footer 
> structure everywhere in our software stack. This way we eliminated converting 
> structures between different parts of our software when dealing with captured 
> data.

So what do the two time stamps indicate for the various values of 
Representation?

What do the four fields of the SrcID indicate for the various values of 
Representation?

What do the values of PayloadState indicate for the various values of 
Representation?

What other possible values of PayloadType are there?

>> This also means that NETANALYZER_NG data must *not* be cut off at the end by 
>> any "slicing" process, such as capturing with a "slice length"/"snapshot 
>> length".  Is it possible that the frame in the payload is "sliced" in that 
>> fashion?
> 
> Slicing a captured packet is not supported by our capturing device.

But some software can slice packets afterwards.  Either that would have to be 
forbidden (meaning editcap and, I think, tcpdump would have to check for 
LINKTYPE_NETANALYZER_NG/DLT_NETANALYZER_NG and refuse to do slicing), or they 
would have to 1) ensure that the slice size is >= the footer size and 2) do the 
slicing specially, removing bytes *before* the footer, so that if incl_len < 
VarSize + footer_size, (VarSize + footer_size) - incl_len bytes have been 
sliced off.
--- End Message ---

Re: [tcpdump-workers] Link Layer Type Request NETANALYZER_NG

2021-03-12 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On Mar 12, 2021, at 4:35 AM, Jan Adam  wrote:

>> So is "the variable" the same thing as "the payload"?
> 
> That is correct. To be more specific the payload is the value/content of the 
> variable.

Can the variable be anything *other* than a packet of some sort?  The current 
set of values for the variable listed in https://kb.hilscher.com/x/brDJBw:

0x01:   netANALYZER legacy frame
0x02:   Ethernet (may also be a re-assembled mpacket)
0x03:   mpacket
0x04:   PROFIBUS frame
0x05:   IO-Link frame

lists only packets of various types, but I was reading "variable" in the 
programming language sense, rather than in the sense that the total content has 
a "fixed part", that being the trailer, and a "variable part", that being the 
packet preceding the trailer.  Is the latter the sense in which the word 
"variable" should be understood?

It also appears that the boundary between the payload and the trailer would be 
determined by fetching the VarSize field at the end of the trailer.  The first 
VarSize bytes of the data would be the payload, and the remaining 
sizeof(footer) bytes would be the trailer.  Is that the case?

That would also indicate that the "captured length" value for a pcap record or 
a pcapng block containing NETANALYZER_NG data must be >= sizeof(footer), so 
that the entire footer is present.

This also means that NETANALYZER_NG data must *not* be cut off at the end by 
any "slicing" process, such as capturing with a "slice length"/"snapshot 
length".  Is it possible that the frame in the payload is "sliced" in that 
fashion?
--- End Message ---

Re: [tcpdump-workers] Link Layer Type Request NETANALYZER_NG

2021-03-12 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On Mar 8, 2021, at 12:07 AM, Jan Adam via tcpdump-workers 
 wrote:

> We have created a public document on our website You can point to for the 
> description.
> 
> Here is the link:  https://kb.hilscher.com/x/brDJBw
> 
> It contains a more detailed description of the fields in the footer structure.
> It also contains a C – like structure definition of the footer.

So is "the variable" the same thing as "the payload"?--- End Message ---

Re: [tcpdump-workers] continuous integration status update

2021-03-04 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On Mar 3, 2021, at 2:30 PM, Denis Ovsienko via tcpdump-workers 
 wrote:

> A partial replacement for that service is ci.tcpdump.org, which is a
> buildbot instance doing Linux AArch64 builds for the github.com
> repositories.

So where is that hosted?  Are you hosting it yourself or hosting it on some 
cloud service?
--- End Message ---

Re: [tcpdump-workers] Link Layer Type Request NETANALYZER_NG

2021-03-03 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On Mar 3, 2021, at 8:58 AM, Jan Adam via tcpdump-workers 
 wrote:

> for our new analysis product netANALYZER NG I would like to request a new 
> link-layer type value.
> 
> NETANALYZER_NG
> 
> The new Link-Layer-Type format is described as following:
> 
> Next-generation packet structure:
> +---+
> |   Payload |
> .   .
> .   .
> |   |
> +---+
> |   Footer  |
> |   |
> +---+
> 
> Next-gen footer description:
> 
> [16 bit]  Versionrepresents current structure version
> [64 bit]  Timestamp1 first timestamp in ns, UNIX time since 1.1.1970
> [64 bit]  Timestamp2 second timestamp in ns, UNIX time since 1.1.1970
> [32 bit]  TimestampAccuracy  actual accuracy of Timestamp1 and Timestamp2 in 
> ns. 0: actual accuracy is unknown

What do these two time stamps represent?  They presumably don't represent the 
packet arrival time, as both pcap and pcapng already provide that for all 
packets.

> [8 bit]   Representation identification of the following content

What are the possible values of this field, and what do those values signify?

> [32 bit]  SrcIdPart1 source identifier part 1
> [32 bit]  SrcIdPart2 source identifier part 2
> [8 bit]   SrcIdPart3 source identifier part 3
> [8 bit]   SrcIdPart4 source identifier part 4

So there's an 80-bit source identifier; what does that value signify?

> [64 bit]  VarId  variable identifier
> [64 bit]  VarState   variable error states, depending on 
> representation
> [8 bit]   VarTypevariable data type

What do those signify?

> [32 bit]  VarSizesize of raw frame payload

Presumably everything beyond that size is the footer; what are the contents of 
the footer?
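For what it's worth, summing the field widths quoted above gives a packed footer size of 54 bytes; that can be checked quickly with Python's struct module (the little-endian byte order in the format string is an assumption, not something the quoted description specifies):

```python
import struct

# Field widths quoted above: Version(2) Timestamp1(8) Timestamp2(8)
# TimestampAccuracy(4) Representation(1) SrcIdPart1(4) SrcIdPart2(4)
# SrcIdPart3(1) SrcIdPart4(1) VarId(8) VarState(8) VarType(1) VarSize(4).
# "<" means packed with standard sizes and no padding, i.e. the
# equivalent of #pragma pack(1); the byte order itself is an assumption.
FOOTER_FMT = "<HQQIBIIBBQQBI"

assert struct.calcsize(FOOTER_FMT) == 54
# With natural alignment the same fields occupy 72 bytes on typical
# 64-bit ABIs, which is why the on-disk packing must be specified.
```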
--- End Message ---

Re: [tcpdump-workers] Request for new LINKTYPE_* code LINKTYPE_AUERSWALD_LOG

2021-02-04 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On Feb 4, 2021, at 3:41 AM, developer--- via tcpdump-workers 
 wrote:

> We currently use this code in our lua dissector to display (decoded) SIP 
> messages.
> 
> -- offsets will change with the new LINKTYPE
>if (buf(148,2):uint() == MSG_TYPE_SIP) then
>sadd("src_ip",0,16)
>sadd("src_port",16,2,"uint")
>sadd("dst_ip", 18,16)
>sadd("dst_port",34,2,"uint")
>Dissector.get("sip"):call(buf(msg_start, msg_len):tvb(), pinfo, 
> subtree)
>return
>end

In other words, the format of packets is:

IPv6 source address - 16 octets
source port - 2 octets
IPv6 destination address - 16 octets
destination port - 2 octets
SIP packet
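A sketch of parsing that layout (offsets come from the quoted Lua dissector; big-endian ports are an assumption based on tvb:uint(), and `parse_header` is a hypothetical helper, not part of any published dissector):

```python
import ipaddress
import struct

# Offsets from the quoted Lua code: src_ip@0 (16 bytes), src_port@16
# (2 bytes), dst_ip@18 (16 bytes), dst_port@34 (2 bytes), then the SIP
# message.  Port byte order is assumed big-endian, as tvb:uint() implies.
def parse_header(buf: bytes):
    src_ip = ipaddress.IPv6Address(buf[0:16])
    (src_port,) = struct.unpack_from(">H", buf, 16)
    dst_ip = ipaddress.IPv6Address(buf[18:34])
    (dst_port,) = struct.unpack_from(">H", buf, 34)
    return src_ip, src_port, dst_ip, dst_port

# Build a fake 36-byte header: ::1 port 5060 -> ::2 port 5061.
hdr = (bytes(15) + b"\x01" + struct.pack(">H", 5060)
       + bytes(15) + b"\x02" + struct.pack(">H", 5061))
src, sp, dst, dp = parse_header(hdr)
assert (str(src), sp, str(dst), dp) == ("::1", 5060, "::2", 5061)
```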
--- End Message ---

Re: [tcpdump-workers] Request for new LINKTYPE_* code LINKTYPE_AUERSWALD_LOG

2021-02-03 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On Feb 3, 2021, at 6:54 AM, developer--- via tcpdump-workers 
 wrote:

> We would like to request a dedicated LINKTYPE_* / DLT_* code.
> Auerswald is a major German telecommunications equipment manufacturer.
> We have implemented the option to capture (combined) network traffic and 
> logging information as pcap/pcapng in our soon to be released new product 
> line.
> 
> For development, we so far have used LINKTYPE_USER0 and would like to change 
> this to a proper code before the commercial release.
> 
> We also plan to publicly release the dissector and would like to make sure 
> both can be released with a proper code from the get go.
> The dissector we currently use is however only in lua.
> 
> Our preferred name would be
> LINKTYPE_AUERSWALD_LOG
> 
> If anyone is interested we can provide further information.

Please provide a detailed description of the packet format, sufficient to allow 
somebody to make a program such as tcpdump, or Wireshark, or anything else that 
reads pcap or pcapng files.
--- End Message ---

Re: [tcpdump-workers] Request to add MCTP and PCI_DOE to PCAP link type

2021-01-27 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On Dec 16, 2020, at 8:09 PM, Yao, Jiewen via tcpdump-workers 
 wrote:

> We did a prototype for the SpdmDump tool 
> (https://github.com/jyao1/openspdm/blob/master/Doc/SpdmDump.md). We can 
> generate a PCAP file and parse it offline.
> In our prototype, we use below definition:
> #define LINKTYPE_MCTP  290  // 0x0122
> #define LINKTYPE_PCI_DOE   291  // 0x0123
> If you can assign same number, it will be great.
> If different number is assigned, we will change our implementation 
> accordingly.

Different numbers will definitely be assigned, as 290 is already in use (in 
Wireshark, for example).  (Not everything was updated to reflect that; I've 
fixed that.)

You will probably be assigned 291 for LINKTYPE_MCTP and 292 for 
LINKTYPE_PCI_DOE; you should update your prototype for that for now.
--- End Message ---

Re: [tcpdump-workers] Request to add MCTP and PCI_DOE to PCAP link type

2021-01-24 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On Dec 16, 2020, at 8:09 PM, Yao, Jiewen via tcpdump-workers 
 wrote:

> I write this email to request to below 2 link types.
> 
> 
>  1.  MCTP

...

> MCTP packet is defined in DMTF PMCI working group Management Component 
> Transport Protocol (MCTP) Base 
> Specification(https://www.dmtf.org/sites/default/files/standards/documents/DSP0236_1.3.1.pdf)
>  8.1 MCTP packet fields. It starts with MCTP transport header in Figure 4 - 
> Generic message fields.

So this is for MCTP messages, independent of the physical layer?

Presumably the not-a-multiple-of-8-bits fields in Table 1 go from the 
high-order bits to the low-order bits, so that the upper 4 bits of the first 
byte are the RSVD field and the lower 4 bits of the first byte are the Hdr 
version?
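If the fields are indeed ordered from high-order to low-order bits, splitting that first byte is straightforward (a sketch; `split_first_byte` is a hypothetical name, not from the MCTP specification):

```python
# Split the first MCTP byte as described above: upper four bits are the
# RSVD field, lower four bits are the header version.
def split_first_byte(b: int) -> tuple[int, int]:
    rsvd = (b >> 4) & 0x0F
    hdr_version = b & 0x0F
    return rsvd, hdr_version

assert split_first_byte(0x01) == (0, 1)    # RSVD=0, Hdr version=1
assert split_first_byte(0xF1) == (0xF, 1)  # reserved bits set
```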

>  1.  PCI_DOE
> 
> PCI Data Object Exchange (DOE) is an industry standard defined by PCI-SIG 
> (https://pcisig.com/) Data Object Exchange (DOE) 
> ECN 
> (https://members.pcisig.com/wg/PCI-SIG/document/14143).

...

> PCI Data Object Exchange (DOE) is defined in PCI-SIG Data Object Exchange 
> (DOE) ECN (https://members.pcisig.com/wg/PCI-SIG/document/14143) 6.xx.1 Data 
> Objects. It starts with DOE Data Object Header 1 in Figure 6-x1: DOE Data 
> Object Format.

Unfortunately, I'm not a member of the PCI SIG, so I don't have an account to 
log in to in order to read that document.
--- End Message ---

Re: [tcpdump-workers] libpcap detection and linking in tcpdump

2021-01-23 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On Jan 22, 2021, at 7:11 PM, Guy Harris via tcpdump-workers 
 wrote:

> I'll try experimenting with one of my Ubuntu VMs.

Welcome to Shared Library Search Hell.

Most UN*Xes have a notion of RPATH (with, of course, different compiler 
command-line flags to set it).

pcap-config provides one if the shared library isn't going to be installed in 
/usr/lib.

The pkg-config file doesn't provide one, however, and some searching indicates 
that the pkg-config maintainers recommend *against* doing so.  They recommend 
using libtool when linking, instead.  Part of the problem here may be that 
setting the RPATH in an executable affects how it searches for *all* libraries, 
so it could affect which version of an unrelated library is found.

(The existence of libtool is an indication that shared libraries have gotten 
messy on UN*X.)

Perhaps for this particular case the right thing to do is to set 
LD_LIBRARY_PATH when running the temporarily-installed tcpdump.

The macOS linker appears to put absolute paths for shared libraries into the 
executable by default:

$ otool -L /bin/cat
/bin/cat:
/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, 
current version 1281.100.1)

so this may not be an issue there.

(Also, the existence of the term "DLL hell" is an indication that shared 
libraries have gotten messy on Windows, but I digress :-))
--- End Message ---

Re: [tcpdump-workers] Any way to filter ether address when type is LINUX_SLL?

2021-01-22 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On Jan 21, 2021, at 8:41 AM, Bill Fenner via tcpdump-workers 
 wrote:

> It would be perfectly reasonable (and fairly straightforward) to update
> libpcap to be able to filter on the Ethernet address in DLT_LINUX_SLL or
> DLT_LINUX_SLL2 mode.

Link-layer address, to be more accurate.

The good news is that, for Ethernet, that address appears to be the source 
address for all packets, incoming and outgoing, at least with the 5.6.7 kernel; 
I haven't checked the kernel code paths for other kernel versions.

That might also be the case for 802.11.

However, for FDDI, for example, it appears not to be set (it's marked as 
zero-length).

> There are already filters that match other offsets in
> the SLL or SLL2 header.  However, I don't think it could be done on live
> captures, only against a savefile.

At least as of 5.6.7, I don't see an SKF_ #define that would correspond to a 
link-layer address, so it appears that it's not possible to easily filter on 
the address in a live capture, at least not with an in-kernel filter.  As we're 
using cooked sockets (PF_PACKET/SOCK_DGRAM), the link-layer header isn't 
supplied to us, so we can't look at it ourselves.

I've been thinking about a world in which we have more pcapng-style APIs.  With 
a capture API that can deliver, for each packet, something similar to a pcapng 
Enhanced Packet Block, with an interface number from which the capturing program 
can determine a link-layer header type, so that not all captured packets have to 
have the same link-layer header type, it might be possible to generate a filter 
program that:

could use one of the SKF_ magic offsets to fetch the "next protocol 
type" value for the protocol after the link-layer protocol, so 
link-layer-type-independent code could be used to check for common "next 
protocol type" values such as IPv4, IPv6, and ARP;

could use one of the SKF_ magic offsets to fetch the offset, relative 
to the beginning of the raw packet data, of the first byte past the link-layer 
header, so that link-layer-type-independent code could be used to check for 
anything at the next protocol layer (IP address, etc.);

could use one of the SKF_ magic offsets to fetch the ARPHRD_ type 
giving the link-layer header type, and, based on that run different code to 
check fields in the link-layer header.

This would be done by using a raw socket (PF_PACKET/SOCK_RAW) rather than a 
cooked socket.

With all of that, we could do live-capture filtering of MAC addresses (source 
*or* destination).

That's a lot of work, though.
--- End Message ---

Re: [tcpdump-workers] libpcap detection and linking in tcpdump

2021-01-22 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On Jan 22, 2021, at 2:54 PM, Denis Ovsienko via tcpdump-workers 
 wrote:

> I have tested it again with the current master branches of libpcap and
> tcpdump. Both builds (with and without libpcap0.8-dev) now complete
> without errors.
> 
> However, in both cases the installed tcpdump fails to run because it
> is linked with libpcap.so.1. Which, as far as I can remember,
> previously somehow managed to resolve to the
> existing /tmp/libpcap/lib/libpcap.so.1, but not amymore:
> 
> $ /tmp/libpcap/bin/tcpdump --version
> /tmp/libpcap/bin/tcpdump: error while loading shared libraries:
> libpcap.so.1: cannot open shared object file: No such file or directory
> 
> $ ldd /tmp/libpcap/bin/tcpdump
>   linux-vdso.so.1 (0x7ffdc7ffe000)
>   libpcap.so.1 => not found
>   libcrypto.so.1.1 => /usr/lib/x86_64-linux-gnu/libcrypto.so.1.1
> (0x7f34522ac000)
>   libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6
> (0x7f3451ebb000)
>   libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2
> (0x7f3451cb7000)
>   libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0
> (0x7f3451a98000)
>   /lib64/ld-linux-x86-64.so.2 (0x7f3452c6f000)
> 
> $ /tmp/libpcap/bin/pcap-config --libs
> -L/tmp/libpcap/lib -Wl,-rpath,/tmp/libpcap/lib -lpcap

So that *should* cause /tmp/libpcap/lib to be added to the executable's path, 
which *should* cause it to look in /tmp/libpcap/lib for shared libraries.

So, if there's a /tmp/libpcap/lib/libpcap.so.1 file, that's not happening, 
somehow.

I'll try experimenting with one of my Ubuntu VMs.

In the meantime, for some fun head-exploding reading, take a look at

https://en.wikipedia.org/wiki/Rpath

and perhaps some other documents found by a search for

lpath rpath linux
--- End Message ---

[tcpdump-workers] Stick with Travis for continuous integration, or switch?

2021-01-18 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
Travis CI is announcing on the travis-ci.org site that "... travis-ci.org will 
be shutting down in several weeks, with all accounts migrating to 
travis-ci.com. Please stay tuned here for more information."

They don't provide any information there.  However, at


https://travis-ci.community/t/build-delays-for-open-source-project/10272/26

they say

As was pointed out in "Builds hang in queued state", linked to earlier 
in this topic, Travis is moving workers from travis-ci.org to travis-ci.com 
in preparation to fully close .org (or rather, make it read-only) around the 
New Year.

...

So you need to migrate to .com to stop experiencing delays. Note the 
caveats:

...

They claim that they'll still offer free service for free software:

Q. Will Travis CI be getting rid of free users?

A. Travis CI will continue to offer a free tier for public or 
open-source repositories on travis-ci.com and will not be affected by the 
migration.

They also say here:

https://blog.travis-ci.com/2020-11-02-travis-ci-new-billing

that

The upcoming pricing change will not affect those of you who are:

* Building on the Travis CI 1, 2 and 5 concurrency job plans 
who are building on Linux, Windows and experimental FreeBSD environments.
* GitHub Marketplace plans
* Grouped Accounts
* Enterprise customers (not building in our cloud environments)
* Builders on our premium or manual plans. Contact the Travis 
CI support team for more information.

but they also say that

The upcoming pricing change will affect those of you who are:

Building on the macOS environment

macOS builds need special care and attention. We want to make sure that 
builders on Mac have the highest quality experience at the fastest possible 
speeds. Therefore, we are separating out macOS usage from other build usage and 
offering a distinct add-on plan that will correlate directly to your macOS 
usage. Purchase only the credits you need and use them until you run out.

* $15 will buy you 25 000 credits (1 minute of mac build time 
costs 50 credits)
* Use your credits for macOS builds only when you need to run 
these
* Replenish your credits as you need them
* More special build environments that fall into this category 
will be available soon

which may mean that their "free tier" doesn't include macOS.

They also say:

Building on a public repositories only

We love our OSS teams who choose to build and test using TravisCI and 
we fully want to support that community. However, in recent months we have 
encountered significant abuse of the intention of this offering (increased 
activity of cryptocurrency miners, TOR nodes operators etc.). Abusers have been 
tying up our build queues and causing performance reductions for everyone. In 
order to bring the rules back to fair playing grounds, we are implementing some 
changes for our public build repositories.

* For those of you who have been building on public 
repositories (on travis-ci.com, with no paid subscription), we will upgrade you 
to our trial (free) plan with a 10K credit allotment (which allows around 1000 
minutes in a Linux environment).
* You will not need to change your build definitions when you 
are pointed to the new plan
* When your credit allotment runs out - we’d love for you to 
consider which of our plans will meet your needs.
* We will be offering an allotment of OSS minutes that will be 
reviewed and allocated on a case by case basis. Should you want to apply for 
these credits please open a request with Travis CI support stating that you’d 
like to be considered for the OSS allotment. Please include:
* Your account name and VCS provider (like 
travis-ci.com/github/[your account name] )
* How many credits (build minutes) you’d like to 
request (should you run out of credits again you can repeat the process to 
request more or discuss a renewable amount)
* Usage will be tracked under your account information so that 
you can better understand how many credits/minutes are being used

We haven't been building on travis-ci.com, so presumably the first item in the 
list doesn't apply.  If the "We will be offering an allotment..." part applies, 
the "should your run out of credits again you can repeat the process to request 
more or discuss a renewable amount" seems like a pain.

See also this comment:


https://travis-ci.community/t/org-com-migration-unexpectedly-comes-with-a-plan-change-for-oss-what-exactly-is-the-new-deal/10567/15

where the commenter says:

When I emailed support for credits, they gave this list of requirements 
for the so-called 
