Re: [tcpdump-workers] [tcpdump] About struct in_addr / struct in6_addr

2022-07-17 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On Jul 17, 2022, at 3:39 PM, Bill Fenner  wrote:

> IMO it is safe to drop support for OSes lacking native IPv6 support.

Yeah.  Back when IPv6 support was added to tcpdump, it was an experimental new 
technology and the configure script had to figure out which of several add-on 
IPv6 packages you had installed.  Now a significant amount of Wikipedia 
vandalism comes from IPv6 addresses rather than IPv4 addresses. :-)
--- End Message ---
___
tcpdump-workers mailing list
tcpdump-workers@lists.tcpdump.org
https://lists.sandelman.ca/mailman/listinfo/tcpdump-workers


Re: [tcpdump-workers] [tcpdump] About struct in_addr / struct in6_addr

2022-07-17 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On Jul 17, 2022, at 11:09 AM, Francois-Xavier Le Bail 
 wrote:

> Remain some stuff about 'struct in6_addr'. Any need to keep them?
> 
> $ git grep -l 'struct in6_addr'
> CMakeLists.txt
> cmakeconfig.h.in
> config.h.in
> configure
> configure.ac
> netdissect-stdinc.h

That's there for the benefit of OSes whose APIs don't have standard IPv6 
support; if there are any left that we care about (or if there are old non-IPv6 
versions we care about for any OSes we support), then it might be useful, but 
I'm not sure it would build (we use gethostbyaddr(), so *maybe* it'll compile, 
and maybe gethostbyaddr() will fail when passed AF_INET6 and the code will just 
show the IPv6 address rather than a name).

Should we care about it, or should we just drop support for OSes lacking native 
IPv6 support in 5.0?

--- End Message ---


Re: [tcpdump-workers] [tcpdump] About struct in_addr / struct in6_addr

2022-07-17 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On Jul 17, 2022, at 10:10 AM, Francois-Xavier Le Bail via tcpdump-workers 
 wrote:

> The current nd_ipv4 and nd_ipv6 types were added in 2017 for alignment 
> reasons.
> 
> Since then,
> most of the 'struct in_addr' were replaced by 'nd_ipv4',
> most of the 'struct in6_addr' were replaced by 'nd_ipv6'.
> 
> Remain:
> pflog.h:110:struct in_addr  v4;
> pflog.h:111:struct in6_addr v6;
> 
> Should they be replaced also?

Yes.  Done in 71da7b139eb418ac91f1169c550e8a4dc970a692.
--- End Message ---


Re: [tcpdump-workers] NetBSD CI breakage

2022-07-14 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On Jul 10, 2022, at 2:48 PM, Denis Ovsienko via tcpdump-workers 
 wrote:

> The last CI build of the libpcap-1.10 branch failed on netbsd-aarch64
> because the latter now uses GCC 12.  Commit 4e7f6e8 makes a lazy fix
> for that in the master branch; if a more sophisticated solution is not
> required,

I changed it to a slightly different fix.

The problem was that, on platforms without a cloning BPF device, the BPF device 
open code iterates over BPF device names, and the loop index was a signed 
integer, so, in theory, if you have 2^31 BPF devices, from /dev/bpf0 to 
/dev/bpf2147483647 open, the loop index will go from 2147483647 to -2147483648, 
and, while 2147483647 requires 10 characters, -2147483648 requires 11.  Thus, 
the length of the buffer had to be increased.

I changed the index to an unsigned integer, so it goes from 0 to 4294967295, 
none of which requires more than 10 characters.

On most OS versions without a cloning BPF device, you're unlikely to have 2^32 
BPF devices (almost certainly not on an ILP32 platform, for obvious reasons!), 
or even 2^31 BPF devices, so, in practice, this won't be a problem.

The only OS I know of that 1) has no cloning BPF device and 2) auto-creates BPF 
devices if you try to open one that's past the maximum unit number is named 
after a British naturalist and evolutionist whose last name is not "Huxley" 
:-).  It uses "bpf%d" to generate the device names, so it could, in principle, 
create a device named /dev/bpf-2147483648, but the default upper limit on the 
number of BPF devices is 256, so you'd have to sysctl it up to a value above 
2^31 (the sysctl value is unsigned, so you can do it; that also means that 
"bpf%d" should be "bpf%u", so it's time to file a Radar^WFeedback on that).

> a simple cherry-pick into libpcap-1.10 should be sufficient
> to pass CI again.

I've backported a bunch of changes to 1.10, including your change and mine for 
that; the netbsd-aarch64 build now seems to be working for libpcap-1.10.

(Or should that be netbsd-a64, or netbsd-arm64?  Thanks, Arm, for making 
"architecture" names so complicated)
--- End Message ---


Re: [tcpdump-workers] RFC: TLS in rpcaps

2022-07-05 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On Jul 4, 2022, at 4:49 PM, Ryan Castellucci via tcpdump-workers 
 wrote:

> 1) TLS compression support is a foot-bazooka, is exploitable in practice, and 
> should be removed. A modified version of the CRIME[1] attack should be 
> completely feasible. I can't imagine any remotely feasible mitigation. 
> Fortunately, I don't see any reason why removing it (perhaps making the 
> rpcapd option that turns it on do nothing) would cause any compatibility 
> issues.

The only thing that -C appears to do is cause ssl_init_once() to call 
SSL_COMP_get_compression_methods(), which, according to


https://www.openssl.org/docs/man3.0/man3/SSL_COMP_get_compression_methods.html

"returns a stack of all of the available compression methods or NULL on 
error.", so I'm not sure what -C, which is presumably "the rpcapd option that 
turns [TLS compression] on", actually *does*.

> 2) What should the default verification behavior be? I worry about breaking 
> people's installs if suddenly it's enabled in enforcing mode by default, but 
> also most people are never going to bother to set things up properly without 
> incentive. A middle ground could be to have soft failures by default - print 
> a warning to stderr which can be turned of by passing a command line option 
> such as --insecure, with a --tls-verify flag to make it a hard failure.

What does "setting things up properly" involve?  Presumably it's something more 
than just "not having an expired certificate"; if somebody can't be bothered to 
do *that*, my sympathy is limited.

> 3) libpcap seems to lose track of the hostname between establishing the 
> control connection. Path of least resistance seems to be adding `char 
> *rmt_hostname` to `struct pcap_rpcap`, saved via strdup. Is this going to 
> upset anyone?

It's a private data structure, and it consumes very little memory unless you 
have a huge number of pcap_t's open, so I'm not sure how much justification 
there is for being upset.
> 4) What level of control should be exposed for the tls settings within 
> libpcap?

What settings are there that might be exposed, other than "should I check the 
validity of certificates"?

> 5) If control over cipher suites is provided, standard tools don't change 
> TLSv1.3 settings via cipher suite list.

"Standard tools" meaning "programs that use TLS" or something else?

And does "control" mean "disallow cipher suites that are allowed by default", 
"allow cipher suites that are disallowed by default", or something else?

> 6) Would anyone be willing to hand-hold a bit on the "active" mode? It seems 
> a bit weird, and I'm not confident I understand what's going on.

"Active mode":


https://www.winpcap.org/docs/docs_412/html/group__remote.html#RunningModes

is a hack to allow remote capture from interfaces on a firewalled remote 
machine.  To start a capture, a capture program that supports active mode would 
be run on the client machine, and it would open a listening socket for rpcapd.  
rpcapd would then be run in active mode on the machine on whose interface(s) 
capture is to be done, with the host name/address and port number of the 
capturing application provided as arguments to the -a flag, and would attempt 
to connect to that host and port.  Once the connection is made, the capturing 
machine (the machine that *accepted* the connection) would send an 
authentication request message to the machine on whose interface(s) the capture 
is to be done (the machine that *initiated* the connection), and that and all 
messages would work exactly the same way as if the capturing machine had 
initiated a connection to the machine on whose interface(s) the capture is to 
be done.

So the only part of the traffic that changes is the connection initiation.

Given that there are, as far as I know, zero capturing programs that support 
the not-exactly-clean API for active mode (neither tcpdump nor Wireshark do), 
I've never tested that even *without* TLS, much less *with* TLS, so that may 
require work even before any additional work is done.

I'd like to make remote capture work with the create/activate API, which might 
allow a cleaner active mode API, with less hackery necessary for programs to 
use it.
--- End Message ---


Re: [tcpdump-workers] endianness of portable BPF bytecode

2022-06-10 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On Jun 10, 2022, at 1:59 PM, Denis Ovsienko via tcpdump-workers 
 wrote:

> Below is a draft of such a file format.  It addresses the following
> needs:
> * There is a header with a signature string to avoid false positive
>  detection as some other file type that begins exactly with particular
>  bytecode (ran into this during disassembly experiments).
> * There are version fields to address possible future changes to the
>  encoding (either backward-compatible or not).

Is the idea that a change that's backward-compatible (so that code that handles 
the new format needs no changes to handle the old format, but code that handles 
only the old format can't handle the new format) would involve a change to the 
minor version number, but a change that's not backward-compatible (so that to 
handle both versions would require two code paths for the two versions) would 
involve a change to the major version number?

> File format:
> 
> 0   1   2   3
> 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
> |  'c'  |  'B'  |  'P'  |  'F'  |
> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+

Is the 'c' part of the retronym "cBPF" for the "classic BPF" instruction set, 
as opposed to the eBPF instruction set?  (I didn't find any file format for 
saving eBPF programs, so this format could be used for that as well, with the 
magic number 'e' 'B' 'P' 'F'.)

> Type=0x02 (LINKTYPE_ID)
> Length=4
> Value=

This could be 2 bytes long - pcapng limits link-layer types to 16 bits, and 
pcap now can use the upper 16 bits of the link-layer type field for other 
purposes.

> Type=0x03 (LINKTYPE_NAME)
> Length is variable
> Value=

E.g. either its LINKTYPE_xxx name or its DLT_xxx name?

> Type=0x04 (COMMENT)
> Length is variable
> Value=

"Generating software description" as in the code that generated the BPF program?

> Type=0x05 (TIMESTAMP)
> Length=8
> Value=

Is this the time the code was generated?

Is it a 64-bit time_t, or a 32-bit time_t and a 32-bit microseconds/nanoseconds 
value?  I'd recommend the former, unless we expect classic BPF to be dead by 
2038.
--- End Message ---


Re: [tcpdump-workers] What's the correct new API to request pcap_linux to not open an eventfd

2022-05-20 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On May 20, 2022, at 10:56 AM, Bill Fenner via tcpdump-workers 
 wrote:

> I'm helping to debug a system that uses many many pcap handles, and never
> calls pcap_loop - only ever pcap_next.

Both pcap_loop() and pcap_next() ultimately go to the same place.

Note, BTW, that pcap_next() sucks; it's impossible to know whether it returns 
NULL because of an error or because the timeout expired and no packets had 
arrived during that time.  pcap_next_ex() doesn't have that problem.  (On 
Linux, the turbopacket timer doesn't expire if no packets have arrived, so, *on 
Linux*, NULL should, as far as I know, be returned only on errors.)

> We've found that each pcap handle has an associated eventfd, which is used to 
> make sure to wake up when
> pcap_breakloop() is called.  Since this code doesn't call pcap_loop or
> pcap_breakloop, the eventfd is unneeded.

If it called pcap_breakloop(), the eventfd would still be needed; otherwise, a 
call could remain indefinitely stuck in pcap_next() until a packet finally 
arrives.  Only the lack of a pcap_breakloop() call renders the eventfd 
unnecessary.

So how is this system handling those pcap handles?

If it's putting them in non-blocking mode, and using some 
select/poll/epoll/etc. mechanism in a single event loop, then the right name 
for the API is pcap_setnonblock().  There's no need for an eventfd to wake up 
the blocking poll() if there *is* no blocking poll(), so:

if non-blocking mode is on before pcap_activate() is called, no eventfd 
should be opened, and poll_breakloop_fd should be set to -1;

if non-blocking mode is turned on after pcap_activate() is called, the 
eventfd should be closed, and poll_breakloop_fd should be set to -1;

if non-blocking mode is turned *off* afterwards, an eventfd should be 
opened, and poll_breakloop_fd should be set to it;

if poll_breakloop_fd is -1, the poll() should only wait on the socket 
FD;

so this can be handled without API changes.

If it's doing something else, e.g. using multiple threads, then:

> I'm willing to write and test the code that skips creating the breakloop_fd
> - but, I wanted to discuss what the API should be.  Should there be a
> pcap.c "pcap_breakloop_not_needed( pcap_t * )" that dispatches to the
> implementation, or should there be a linux-specific
> "pcap_linux_dont_create_eventfd( pcap_t * )"?

...it should be called pcap_breakloop_not_needed() (or something such as that), 
with a per-type implementation, and a *default* implementation that does 
nothing, so only implementations that need to do something different would need 
to do so.
--- End Message ---


Re: [tcpdump-workers] Speed specific Link-Layer Header Types for USB 2.0

2022-05-09 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On May 9, 2022, at 10:05 PM, Tomasz Moń  wrote:

> On Tue, May 10, 2022 at 6:57 AM Guy Harris  wrote:
>> On May 9, 2022, at 9:41 PM, Tomasz Moń  wrote:
>>> Also Wireshark would have to show "USB Full/Low speed capture" section with 
>>> only the single byte denoting
>>> full or low speed, followed by "USB Link Layer" (as shown currently for
>>> usbll captures).
>> 
>> No, it wouldn't.  It would just display that as an item in "USB Link Layer".
> 
> If you displayed that in USB Link Layer, without marking it as
> Wireshark generated field (and it shouldn't be marked as Wireshark
> generated because it was in capture file) it would be confusing.

Then show it as "USB physical layer information", similar to what's done for 
"802.11 radio layer information".
--- End Message ---


Re: [tcpdump-workers] Speed specific Link-Layer Header Types for USB 2.0

2022-05-09 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On May 9, 2022, at 9:41 PM, Tomasz Moń  wrote:

> On Mon, 2022-05-09 at 13:19 -0700, Guy Harris wrote:
>> On May 9, 2022, at 1:02 PM, Tomasz Moń  wrote:
>> 
>>> "Why this doesn't match all the documents on USB that I have
>>> read?".
>> 
>> What is the "this" that wouldn't match?
> 
> Packet Bytes as shown by Wireshark.

OK, that suggests that it's time to finally default to *NOT* showing metadata 
in the packet bytes pane of Wireshark and in hex dump data in tcpdump, as the 
only time its raw content is of interest is if you're debugging either 1) 
software that generates those headers or 2) software that dissects those 
headers.

*That* will quite effectively prevent people from asking where that byte is 
defined in a USB spec, as that byte won't be there in the first place.

> Also Wireshark would have to show "USB Full/Low speed capture" section with 
> only the single byte denoting
> full or low speed, followed by "USB Link Layer" (as shown currently for
> usbll captures).

No, it wouldn't.  It would just display that as an item in "USB Link Layer".
--- End Message ---


Re: [tcpdump-workers] Speed specific Link-Layer Header Types for USB 2.0

2022-05-09 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On May 9, 2022, at 1:02 PM, Tomasz Moń  wrote:

> The same as why URB level captures are confusing. Maybe not to the same
> level as that would be just a single byte (and the URB metadata
> contains way more information), but it would still raise the questions
> like "where in USB specification this byte is defined?",

To what extent are people analyzing 802.11 captures raising the question "where 
in the 802.11 specification are the fields of the radiotap header defined?"

If the answer is "to a minimal extent" or "it doesn't happen", what about USB 
would make the answer different?

> "Why this doesn't match all the documents on USB that I have read?".

What is the "this" that wouldn't match?

--- End Message ---


Re: [tcpdump-workers] Speed specific Link-Layer Header Types for USB 2.0

2022-05-09 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On May 9, 2022, at 12:31 PM, Tomasz Moń  wrote:

> There is no such thing as "low-speed bus" because low-speed is only
> allowed for non-hub devices. USB hosts and hubs *must* support atleast
> full and high speed. USB devices are allowed to be low-speed (such
> devices can operate *only* at low speed).

So what is the term used for a cable between a low-speed-only device and either 
a host or a hub?

The USB 2.0 spec appears to use "bus" for an "edge", in the graph-theory sense:

https://en.wikipedia.org/wiki/Glossary_of_graph_theory#edge

rather than for the entire tree.

What *is* the correct term to use for a single cable, the traffic on which one 
might be sniffing?

> It is important that the analysis engine know whether the packets were
> full or low-speed as there are slightly different rules. There is not
> so clear distinction between layers as USB does not really use ISO/OSI
> model.
> 
> So I think it definitely makes sense to have separate link types for
> full-speed and low-speed.

It makes sense to indicate whether packets are full-speed or low-speed; nobody 
is arguing otherwise.

The question is whether the right way to do that is to have separate link 
types, so that you can't have a mix of full-speed and low-speed packets in a 
single pcap capture or on a single interface in a pcapng capture, or to have a 
single link-layer type with a per-packet full-speed/low-speed indicator.
--- End Message ---


Re: [tcpdump-workers] Speed specific Link-Layer Header Types for USB 2.0

2022-05-09 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On May 9, 2022, at 12:40 PM, Tomasz Moń  wrote:

> On Mon, 2022-05-09 at 12:02 -0700, Guy Harris wrote:
>> On May 9, 2022, at 7:41 AM, Tomasz Moń  wrote:
>> 
>>> That would require defining pseudoheader that would have to be
>>> included in every packet.
>> 
>> Is that really a great burden?
> 
> I think it would make it harder to understand the protocol for
> newcomers that use tools like Wireshark to try to make sense of USB.

In what fashion would it do so?
--- End Message ---


Re: [tcpdump-workers] Speed specific Link-Layer Header Types for USB 2.0

2022-05-09 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On May 9, 2022, at 7:41 AM, Tomasz Moń  wrote:

> That would require defining pseudoheader that would have to be included
> in every packet.

Is that really a great burden?

> And it would only solve the corner case that the
> currently available open-source friendly sniffers do not presently
> handle.

The point isn't to just handle what currently available open-source friendly 
sniffers handle.  I'd prefer to leave room for future sniffers that *do* handle 
it.

> I think it is fine to assume that any tool that would create full-speed
> captures that contain both full-speed and low-speed data should be able
> to write pcapng file (or simply create two separate pcap files).

I think that, if you're capturing on a link between a full/low-speed host and a 
full/low-speed hub, with low-speed devices plugged into that hub, it would not 
make sense to treat that link as two interfaces, with one interface handling 
full-speed packets and one interface handling low-speed packets; I see that as 
an ugly workaround.

So I see either

1) a link-layer type for full/low-speed traffic, with a per-packet 
pseudo-header

or

2) don't support full/low-speed traffic capture, just support 
full-speed-only and low-speed-only traffic capture

as the reasonable choices.

(Note that both tcpdump and Wireshark still have their Token Ring dissection 
code; heck, Wireshark has even had 3MB Xerox PARC Ethernet dissection code for 
a while now!)
--- End Message ---


Re: [tcpdump-workers] Speed specific Link-Layer Header Types for USB 2.0

2022-05-09 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On May 9, 2022, at 1:58 AM, Tomasz Moń  wrote:

> On Mon, May 9, 2022 at 9:17 AM Guy Harris  wrote:
>> On May 8, 2022, at 10:47 PM, Tomasz Moń  wrote:
>>> On Sun, May 8, 2022 at 8:53 PM Guy Harris  wrote:
>>>> At least from a quick look at section 5.2.3 "Physical Bus Topology" of the 
>>>> USB 2.0 spec, a given bus can either be a high-speed bus or a 
>>>> full/low-speed bus.
>>> 
>>> The full/low-speed bus applies only to upstream link from full speed hub.
>> 
>> So what happens if you plug a low-speed keyboard or mouse into a host that 
>> supports USB 2.0?  Does that link not run at low speed?
> 
> The link will run at low speed.

So what kind of bus is that link?  High-speed, full/low-speed, or low-speed?

>> "super-speed" is USB 3.0, right?  No LINKTYPE_/DLT_ has been proposed for 
>> the 3.0 link layer, as far as I know.
> 
> Yes, "super-speed" is USB 3.0. I don't know of any open source sniffer
> nor any tools that would really want to export the packets to pcap
> format.

And, if there ever *are* (I see no reason to rule it out), they can ask for 
another link-layer type when they need it.

>> But no full-speed or low-speed will go over that connection, either, so it's 
>> never the case that, in a capture on a USB cable, there will be both 
>> high-speed and full/low-speed traffic, right?
> 
> Yes. You either get solely high-speed traffic or full/low-speed traffic.

OK, so it makes sense to have a separate link-layer type for high-speed 
traffic, rather than a single link-layer type for "USB link-layer with metadata 
header, with the per-packet metadata header indicating the speed".

But, if, as you said earlier:

> If you capture at the connection between low speed device and
> host/hub, there will only ever be low speed packets. It would be a
> LINKTYPE_USB_2_0_LOW_SPEED capture.
> 
> The problematic case (and the reason why full/low-speed bus is
> mentioned) is the LINKTYPE_USB_2_0_FULL_SPEED. It is the case when you
> capture at the connection between full speed hub and the host (and
> possibly full speed device connected to a full speed hub if there are
> low speed devices connected to the full speed hub). If there is low
> speed device connected to downstream hub port, then when the host
> wants to send packets to the low speed device, these will be sent at
> low speed to the hub. However, there will be PRE packet (sent at full
> speed) before every low speed transaction.

can there be separate link-layer types for full-speed and low-speed traffic, or 
does there need to be a single type for full/low-speed traffic, with a 
per-packet metadata header indicating the speed?
--- End Message ---


Re: [tcpdump-workers] Speed specific Link-Layer Header Types for USB 2.0

2022-05-09 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On May 9, 2022, at 1:33 AM, Tomasz Moń  wrote:

> On Mon, May 9, 2022 at 9:21 AM Guy Harris  wrote:
>> On May 8, 2022, at 11:09 PM, Tomasz Moń  wrote:
>> 
>>> Device to device communication is not possible.
>> 
>> Is the idea that the topology of USB is a tree, with the host at the root, 
>> and only the leaf nodes (devices, right?) are end nodes?
> 
> To some degree, yes. Note that the hubs are devices as well.

(So "communication is not possible" in "Device to device communication is not 
possible." preferably refers not to sending USB link layer messages from device 
to device, but refers to higher protocol layers; otherwise, you wouldn't be 
able to plug a disk, network device, keyboard, mouse, etc. into a hub and have 
it communicate with a host also plugged into the hub.)
--- End Message ---


Re: [tcpdump-workers] Speed specific Link-Layer Header Types for USB 2.0

2022-05-09 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On May 8, 2022, at 11:09 PM, Tomasz Moń  wrote:

> Note that end nodes cannot directly communicate with each other. The
> communication is always between host and a device. 

Those two sentences, when combined, imply that either

1) a host is not an end node

or

2) a device is not an end node

or both.  Which is the case?

> Device to device communication is not possible.

Is the idea that the topology of USB is a tree, with the host at the root, and 
only the leaf nodes (devices, right?) are end nodes?

And, given that this means that "end node" is not the correct term for a piece 
of equipment that isn't a hub, what *is* the correct term?
--- End Message ---


Re: [tcpdump-workers] Speed specific Link-Layer Header Types for USB 2.0

2022-05-09 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On May 8, 2022, at 10:47 PM, Tomasz Moń  wrote:

> On Sun, May 8, 2022 at 8:53 PM Guy Harris  wrote:
>> At least from a quick look at section 5.2.3 "Physical Bus Topology" of the 
>> USB 2.0 spec, a given bus can either be a high-speed bus or a full/low-speed 
>> bus.
> 
> The full/low-speed bus applies only to upstream link from full speed hub.

So what happens if you plug a low-speed keyboard or mouse into a host that 
supports USB 2.0?  Does that link not run at low speed?

>> The idea, then, is presumably that a capture tool is capturing on a single 
>> bus (single wire), so it's either capturing on a high-speed bus or a 
>> full/low-speed bus.
> 
> I assume that by single wire you meant "single wire pair"
> (differential pair). USB 2.0 has only single differential pair, formed
> by D+ and D- signal wires, so the high/full/low speed communication
> always occurs on the same wire pair.

Sorry - that's "wire" in the sense of "cable", not in the literal sense.

>> It looks as if a high-speed bus will always run at 480 Mb/s, so that capture 
>> would be a LINKTYPE_USB_2_0_HIGH_SPEED capture.  Is that correct?
> 
> Yes. If you connect high-speed hub to high-speed host (or super-speed
> host, but super-speed host essentially contains high-speed host, aka
> dual-bus) the communication on the connecting wires will be at high
> speed (480 Mb/s). Similarly if high-speed device is connected to
> high-speed host (or hub) then the communication will be at high speed.

"super-speed" is USB 3.0, right?  No LINKTYPE_/DLT_ has been proposed for the 
3.0 link layer, as far as I know.

But no full-speed or low-speed will go over that connection, either, so it's 
never the case that, in a capture on a USB cable, there will be both high-speed 
and full/low-speed traffic, right?

(And presumably this is for captures on a single USB cable; if you're capturing 
on more than one cable, that's with more than one capture interface, so that's 
a job for pcapng, with different interfaces having different link-layer types.)

>> For full/low-speed buses, will those also always run at full speed or low 
>> speed, so that there would never be a mixture of full-speed and low-speed 
>> transactions?
> 
> If you capture at the connection between low speed device and
> host/hub, there will only ever be low speed packets. It would be a
> LINKTYPE_USB_2_0_LOW_SPEED capture.
> 
> The problematic case (and the reason why full/low-speed bus is
> mentioned) is the LINKTYPE_USB_2_0_FULL_SPEED. It is the case when you
> capture at the connection between full speed hub and the host (and
> possibly full speed device connected to a full speed hub if there are
> low speed devices connected to the full speed hub). If there is low
> speed device connected to downstream hub port, then when the host
> wants to send packets to the low speed device, these will be sent at
> low speed to the hub. However, there will be PRE packet (sent at full
> speed) before every low speed transaction.

So, as per a few paragraphs above ("If you connect high-speed hub to high-speed 
host ... the communication on the connecting wires will be at high
speed (480 Mb/s)."), if you have a high-speed hub connected to a high-speed 
host, and the high-speed hub has full-speed or low-speed devices downstream, 
the packets from the host to the hub, ultimately intended for the full-speed or 
low-speed device, are sent as high-speed traffic, and only the downstream 
traffic from the host to the full-speed or low-speed device is full-speed or 
low-speed?

However, if you have a full-speed hub connected to a full-speed or high-speed 
host, and the full-speed hub has low-speed devices downstream, the packets from 
the host to the hub, ultimately intended for the low-speed device, are sent as 
a full-speed PRE packet followed by a transaction sent as low-speed traffic?
--- End Message ---


Re: [tcpdump-workers] Speed specific Link-Layer Header Types for USB 2.0

2022-05-08 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On May 8, 2022, at 1:30 PM, Michael Richardson  wrote:

> I guess I would have thought that a physical bus could have a mix of
> different devices which operate at different speeds.  As such, I wondered if
> you really needed pcapng to be able to mix LINKTYPES in the same file, or
> a different bit of meta-data to indicate bus speed for each frame captured.
> 
> But, maybe I'm wrong and that actually requires there to be a USB hub out 
> there.

"Bus" is a bit weird here.

To quote section 4.1.1 "Bus Topology" of the USB 2.0 spec:

The USB connects USB devices with the USB host. The USB physical 
interconnect is a tiered star topology. A hub is at the center of each star. 
Each wire segment is a point-to-point connection between the host and a hub or 
function, or a hub connected to another hub or function. Figure 4-1 illustrates 
the topology of the USB.

and Figure 5-6 "Multiple Full-speed Buses in a High-speed System" seems to use 
the term "bus" to refer to wire segments.

I think a point-to-point connection between the host and another entity may 
always run at a single speed, as well as a connection between a hub and a 
function.

It might also be the case that a hub-to-hub connection also runs at a single 
speed.  Section 11.14 "Transaction Translator" says:

A hub has a special responsibility when it is operating in high-speed 
and has full-/low-speed devices connected on downstream facing ports. In this 
case, the hub must isolate the high-speed signaling environment from the 
full-/low-speed signaling environment. This function is performed by the 
Transaction Translator (TT) portion of the hub.

so if you have a full-speed or low-speed device plugged into a USB 2.0 hub, and 
that hub is connected to a host, the host-to-hub link is high-speed, and the 
hub-to-device link is full-speed or low-speed, and the hub does the 
translation.  That way, you can plug a high-speed device and a full-speed or 
low-speed device into the hub, and the host will be able to talk at high speed 
to the high-speed device.

USB isn't a shared bus like non-switched Ethernet; it's more like switched 
Ethernet or point-to-point Ethernet, with links being point-to-point, either a 
direct connection between end nodes or connections to a switching device that 
handles speed translation if two end nodes of different speed capabilities are 
communicating.
--- End Message ---


Re: [tcpdump-workers] Speed specific Link-Layer Header Types for USB 2.0

2022-05-08 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On May 8, 2022, at 4:48 AM, Tomasz Moń via tcpdump-workers 
 wrote:

> I would like to remedy the situation by requesting additional speed
> specific link layer header types, for example:
>  * LINKTYPE_USB_2_0_LOW_SPEED
>  * LINKTYPE_USB_2_0_FULL_SPEED
>  * LINKTYPE_USB_2_0_HIGH_SPEED
> 
> The description for existing LINKTYPE_USB_2_0 could be updated to
> mention that for new captures, the speed specific link layer header
> types should be used to enable better dissection.

To quote a comment of yours in the Wireshark issue:

> I should have gone for three separate link-layer header types for "USB 
> 1.0/1.1/2.0 packets" each at different capture speed (low/full/high). I think 
> technically we can still add these alongside the current "unknown speed" one. 
> The reason behind having separate link-layer header types is that the capture 
> tool must know the capture link speed (agreed speed does not change during 
> the transmission, and the handshaking is not on packet level) and the capture 
> link speed is useful when analyzing packets.

At least from a quick look at section 5.2.3 "Physical Bus Topology" of the USB 
2.0 spec, a given bus can either be a high-speed bus or a full/low-speed bus.

The idea, then, is presumably that a capture tool is capturing on a single bus 
(single wire), so it's either capturing on a high-speed bus or a full/low-speed 
bus.

It looks as if a high-speed bus will always run at 480 Mb/s, so that capture 
would be a LINKTYPE_USB_2_0_HIGH_SPEED capture.  Is that correct?

For full/low-speed buses, will those also always run at full speed or low speed, 
so that there would never be a mixture of full-speed and low-speed transactions?
--- End Message ---


Re: [tcpdump-workers] wireshark extension for a Kernel Module (like Usbmon)

2022-03-07 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On Mar 7, 2022, at 5:55 AM, Christian via tcpdump-workers 
 wrote:

> hello out there, I created a kernel probe module and I want to watch the
> outputs of that module with pcap/Wireshark or tcpdump... Just like
> usbmon. My prefered tool is dumpcap. So I defined a char device in the
> dev-directory /dev/kpnode from which the pcap interface can read the
> output of that module. In order to enable reading, I started to place a
> handler function into libpcap:
> 
> In pcap.c I put in
> 
> #ifdef PCAP_SUPPORT_KPNODE
> #include "pcap-kpnode.h"
> #endif
>  and later:
> #ifdef PCAP_SUPPORT_KPNODE
> { kpnode_findalldevs, kpnode_create },
> #endif

That's the correct way to add it to the table of libpcap modules.

> further down:
> #ifdef PCAP_SUPPORT_KPNODE
> || strstr(device, "kpnode") != NULL
> #endif

That's presumably in pcap_lookupnet(); if so, that's the correct way to add 
kpnode there.

(I need to change that to use a better mechanism, so that it's the 
responsibility of the module to handle that, rather than hardcoding module 
information in a function.)

> The functions kpnode_findalldevs and kpnode_create are in my files
> pcap-kpnode.c and pcap-kpnode.h. They are not finished yet but the
> subject of this mail is for now, how to connect these functions into
> libpcap and Wireshark so that they are evoked if a device /dev/kpnode
> emerges.
> 
> Further I added an entry to configure.ac: AC_DEFINE(PCAP_SUPPORT_KPNODE,
> 1, [target host supports Linux kpmode])
> 
> Im not sure if editing the autoconf input file is too much, because I
> don't want to commit my changes to other platforms, it's just a small
> project of my own.

If you're just doing it on your own, and you will be using this modified 
libpcap only on systems where kpnode is available, the easiest way to do it 
would be to leave out the #ifdefs for PCAP_SUPPORT_KPNODE.

If your entry in configure.ac unconditionally sets PCAP_SUPPORT_KPNODE, it's 
not useful, as it's equivalent to just removing the #ifdefs and hardwiring 
kpnode support into your version of libpcap.

If it *doesn't* unconditionally set PCAP_SUPPORT_KPNODE, then you might as well 
leave the #ifdefs in.

> But there are also some entries for USBMON in e.x.
> CMakeList.txt and more.

If you're not planning on committing your changes, and you don't plan to use 
CMake in the build process, there's no need to modify CMakeList.txt and 
anything else CMake-related, such as cmakeconfig.h.in.

> After execution of the configure script I put
> manually my files into the EXTRA_DIST list.

EXTRA_DIST is useful only if you plan to do "make releasetar" to make a source 
tarball - and if you want to do *that*, add it to Makefile.in, not to Makefile, 
so you won't have to fix Makefile manually.

> But so far, when I build the pcap library not even the symbol kpnode
> appears in the binary

Do you mean that a symbol named "kpnode" doesn't appear in the (shared) library 
binary?

Or do you mean that symbols with "kpnode" in their names, such as 
kpnode_findalldevs and kpnode_create, don't appear in the library binary?

If so, are you looking for *exported* symbols or *all* symbols?  On most 
platforms - and Linux is one such platform - we compile libpcap so that *only* 
routines we've designated as being libpcap APIs are exported by the library; 
others are internal-only symbols.  For example, if I do

$ nm libpcap.so.1.11.0-PRE-GIT | egrep usb_
0002f480 t swap_linux_usb_header.isra.0
ee60 t usb_activate
eb00 t usb_cleanup_linux_mmap
f300 t usb_create
f150 t usb_findalldevs
e670 t usb_inject_linux
e6b0 t usb_read_linux_bin
e860 t usb_read_linux_mmap
e660 t usb_setdirection_linux
edc0 t usb_set_ring_size
ed20 t usb_stats_linux_bin

on my Ubuntu 20.04 VM, it shows symbols for the Linux usbmon module, *but* they 
aren't exported symbols - they're shown with 't', not 'T'.  By contrast, if I do

$ nm libpcap.so.1.11.0-PRE-GIT | egrep pcap_open
00012ea0 T pcap_open
0001bdc0 T pcap_open_dead
0001bce0 T pcap_open_dead_with_tstamp_precision
0001b9a0 T pcap_open_live
0002cf20 T pcap_open_offline
0001ab10 t pcap_open_offline_common
0002cde0 T pcap_open_offline_with_tstamp_precision
00015b70 t pcap_open_rpcap

symbols such as pcap_open(), pcap_open_live(), pcap_open_offline(), etc. *are* 
exported symbols - they're shown with 'T'.

So, to check for symbols, you should do "nm" and pipe the result to "egrep 
kpnode_".  Those symbols should show up with 't', not 'T', as they aren't part 
of the API - kpnode_findalldevs() should automatically get called if a program 
calls pcap_findalldevs() (e.g., if tcpdump is compile with this library, 
"tcpdump -D" should cause kpnode_findalldevs() to be called, and should show 
the kpnode device(s)), and kpnode_create() should automatically get called if a 
program calls pcap_create() with the name of a kpnode device.

Re: [tcpdump-workers] Selectively suppressing CI on some sites for a commit?

2022-01-06 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On Jan 6, 2022, at 3:22 PM, Guy Harris via tcpdump-workers 
 wrote:

> On Jan 6, 2022, at 3:00 PM, Denis Ovsienko via tcpdump-workers 
>  wrote:
> 
>> Do you think https://www.tcpdump.org/ci.html should document [skip cirrus] 
>> and [skip appveyor]?
> 
> [skip appveyor], possibly.

Cirrus documents that any of [skip ci], [ci skip], or [skip cirrus] in the 
first line of the commit message will suppress a CI build:

https://cirrus-ci.org/guide/writing-tasks/

AppVeyor documents that any of [skip ci], [ci skip], or [skip appveyor] in the 
commit message title (first line, presumably) will suppress a CI build:

https://www.appveyor.com/docs/how-to/filtering-commits/

It appears that a "GitHub skip hook" may have been first introduced in Buildbot 
0.9.11:

https://docs.buildbot.net/0.9.11/relnotes/index.html

with the hook being configurable by a regex match.  The 0.9.11 documentation of 
the "skips" parameter of the GitHub hook:

https://docs.buildbot.net/0.9.11/manual/cfg-wwwhooks.html#chsrc-GitHub

does not say anything about the skip item having to be on the first line of the 
commit message; it does say that the default parameter is

[r'\[ *skip *ci *\]', r'\[ *ci *skip *\]']

so either [skip ci] or [ci skip] (with arbitrary numbers of blanks thrown in 
after [, between the words, or before ]) should work.

OpenCSW's buildbot:

https://buildfarm.opencsw.org/buildbot/

claims to be running Buildbot 0.8.14; from the tests I ran, it skips the build 
if [skip ci] is on the first line of the message, but not if it's after that 
line.  I don't know whether there was a "skip ci" feature in older versions, or 
if the OpenCSW people implemented it themselves, checking only the first line.

All the Buildbot instances we've set up appear to be running Buildbot 3.4.0, 
which appears to handle [skip ci] anywhere in the commit message.

With a test I did by doing commits adding or removing blank lines from 
CMakeLists.txt, and with various commit messages, it appears that:

if the first line of the commit message ends with [skip ci], *all* CI 
builds are being suppressed (Cirrus, AppVeyor, OpenCSW, the buildbots we set 
up);

if some *other* line of the commit message is [skip ci], our buildbots 
skip the build, but Cirrus CI, AppVeyor, and OpenCSW don't skip it;

which appears to agree with what's documented above plus the hypothesis that 
OpenCSW's buildbot supports [skip ci] on the first line only.

So:

to suppress *all* builds, put [skip ci] on the first line;

to suppress only AppVeyor builds (which currently means "do only UN*X 
builds"), put [skip appveyor] on the first line;

to suppress only Cirrus builds (which means "skip x86-64 Linux, x86-64 
macOS, and x86-64 FreeBSD", but that doesn't suppress ARM64 FreeBSD or 
non-x86-64 Linux, so I'm not sure how useful it is), put [skip cirrus] on the 
first line;

to suppress only our buildbot builds, put [skip ci] somewhere *other* 
than the first line;

to suppress any set of builders that's the union of the three 
items above, do the items for the builders in question.

There does not seem to be a way to do *only* Windows builds.  Putting [skip 
cirrus] on the first line and [skip ci] elsewhere in the commit message is the 
closest to that, but it won't suppress the OpenCSW builds, meaning "only 
Windows and Solaris".
--- End Message ---


Re: [tcpdump-workers] Selectively suppressing CI on some sites for a commit?

2022-01-06 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On Jan 6, 2022, at 3:00 PM, Denis Ovsienko via tcpdump-workers 
 wrote:

> On Thu, 6 Jan 2022 14:11:54 -0800 Guy Harris via tcpdump-workers 
>  wrote:
> 
>> I've just updated the libpcap .appveyor.yml to get Npcap from
>> npcap.com (the Npcap site has been moved there); I added [skip
>> cirrus] to skip Cirrus CI for that change, and it appears to work.
> 
> That's nice to know.  Either this is a relatively recent skip pattern in
> Cirrus CI, or I didn't notice it before (see my message to the list
> from 21 August 2020).

...or it doesn't work, even though the CI page on tcpdump.org didn't show the 
builds as being in progress.  It looks as if the libpcap builds *did* occur, 
and a tcpdump build (with the equivalent .appveyor.yml update) is in progress.

> Do you think https://www.tcpdump.org/ci.html should document [skip cirrus] 
> and [skip appveyor]?

[skip appveyor], possibly.  [skip cirrus], no, as my inference that it worked 
appears to be wrong.

>> Are there other comments to add to suppress OpenCSW CI and to
>> suppress the other CI sites that have been set up?  The only one I
>> want *not* suppressed is AppVeyor.
> 
> Not immediately, or not at all.  However, there are only two Buildbot
> places where all skip patterns are processed (or not).
> 
> ci.tcpdump.org recognizes [skip ci] because that's the default
> behaviour in that version of Buildbot.  Following the documentation,
> several months and Buildbot versions ago I tried adding [skip buildbot]
> to the list of skip patterns, but for some reason it had no effect
> (could be a user error or a bug). Would it help to try again?

I tried it with the tcpdump build, and it *appears* to work with the Tcpdump 
Group buildbots (the RISC-V one is running, but it's still working on a build 
from a change François submitted 3 hours ago, so it hasn't even started my 
change; that buildbot appears not to be the fastest computer in existence, 
shall we say).

> I am not familiar with OpenCSW Buildbot setup, but from the build
> history it is obvious it disregards [skip ci], so it looks likely it
> would disregard [skip buildbot] too.

It appears to disregard it.
--- End Message ---


[tcpdump-workers] Selectively suppressing CI on some sites for a commit?

2022-01-06 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
I've just updated the libpcap .appveyor.yml to get Npcap from npcap.com (the 
Npcap site has been moved there); I added [skip cirrus] to skip Cirrus CI for 
that change, and it appears to work.

Are there other comments to add to suppress OpenCSW CI and to suppress the 
other CI sites that have been set up?  The only one I want *not* suppressed is 
AppVeyor.
--- End Message ---


Re: [tcpdump-workers] New DLT_ type request

2022-01-06 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On Jan 5, 2022, at 6:53 PM, Timotej Ecimovic  
wrote:

> No. Like the document describes: tooling that deals with deframing is 
> expected to remove the starting `[`, the ending `]` and the 2 byte length 
> right after the `[`.
> In case of creating a PCAPNG file out of this stream, the payload of the 
> packet blocks will NOT contain the framing. So the "packet" starts with the 
> debug message.

I.e., in LINKTYPE_SILABS_DEBUG_CHANNEL files, the packet doesn't include the 
'[', the length value, or the ']'?

>> What do the bits in the "Flags" field of the 3.0 debug message mean?  Does 
>> "few bytes of future-proofing flags" mean that there are currently no flag 
>> bits defined, so that the field should always be zero, but there might be 
>> flag bits defined in the future?
> They mean. "Reserved for future use". The value currently can be arbitrary 
> and until someone defines values for them, they have no meaning. I'll make 
> this more specific in the doc.

So is there something in the debug message to indicate whether the field has no 
meaning and should be ignored, or has a meaning and should be interpreted?
--- End Message ---


Re: [tcpdump-workers] New DLT_ type request

2022-01-05 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On Jan 5, 2022, at 9:38 AM, Timotej Ecimovic via tcpdump-workers 
 wrote:

> I'm requesting an addition of the new DLT type. I'd call it: 
> DLT_SILABS_DEBUG_CHANNEL.
> The description of the protocol is here:
> https://github.com/SiliconLabs/java_packet_trace_library/blob/master/doc/debug-channel.md

...

> In case of errors (such as the ] not being present after the length bytes) 
> the recovery is typically accomplished by the deframing state engine reading 
> forward until a next [ is found, and then attempting to resume the deframing. 
> This case can be detected, because the payload of individual message contains 
> the sequence number.

So, presumably:

1) all packets in a LINKTYPE_SILABS_DEBUG_CHANNEL capture begin with a 
'[';

2) all bytes after the '[' and the payload bytes specified by the 
length should be ignored as being from a framing error, unless there's just one 
byte equal to ']'?

I.e., code reading the capture file does *not* have to do any deframing?

What do the bits in the "Flags" field of the 3.0 debug message mean?  Does "few 
bytes of future-proofing flags" mean that there are currently no flag bits 
defined, so that the field should always be zero, but there might be flag bits 
defined in the future?

> The types supported are listed in this file.

The file in question:


https://github.com/SiliconLabs/java_packet_trace_library/blob/master/silabs-pti/src/main/java/com/silabs/pti/debugchannel/DebugMessageType.java

lists a bunch of message types; is there a document that describes the format 
of messages with each of those types?


--- End Message ---


Re: [tcpdump-workers] [libpcap] Keep Win32/Prj/* files ?

2021-12-06 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On Dec 6, 2021, at 10:55 AM, Denis Ovsienko via tcpdump-workers 
 wrote:

> On Mon, 29 Nov 2021 19:20:32 +0100 Francois-Xavier Le Bail via 
> tcpdump-workers  wrote:
> 
>> Does anyone use these files?
>> Win32/Prj/wpcap.sln
>> Win32/Prj/wpcap.vcxproj
>> Win32/Prj/wpcap.vcxproj.filters
> 
> It looks like CMake has superseded these files, as far as it is
> possible to tell without Windows.

They are not used by the CMake build process on Windows, so they would be used 
only by people trying to build *without* CMake.

The CMake files are likely to be better maintained than the "use Visual Studio 
directly" files, as you don't need Visual Studio, and don't need to know how 
Visual Studio solution or project files work internally, in order to modify the 
CMake files.
--- End Message ---


Re: [tcpdump-workers] NetBSD breakage

2021-08-11 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On Aug 11, 2021, at 3:09 PM, Denis Ovsienko via tcpdump-workers 
 wrote:

> The other matter is that the gencode.h/grammar.h pair works best when
> it is included early.

Perhaps the gencode.h/grammar.h pair works best when it doesn't include 
grammar.h. :-)

I've checked in a change to remove the include of grammar.h from gencode.c; it 
builds without problems on macOS, and I suspect it will build without problems 
everywhere, as what grammar.h defines are:

1) the names for tokens (which may be done with an enum in a fashion 
that causes large amounts of pain if another header you include helpfully - but 
uselessly, for our purposes - names for the machine's registers, and you are 
unlucky enough to be compiling for a machine that has a register named "esp", 
causing a collision with the "esp" token in pcap filter language for ESP; 
fortunately, such machines are rare :-) :-) :-) :-) :-) :-();

2) a union of value types for all symbols in the grammar.

As far as I can tell, neither token names nor values nor a value type union are 
passed to any of the gencode.c routines called from grammar.y.  We *do* pass 
values for symbols, but we select the particular union member, rather than just 
blindly passing the union as a whole.

So far, all the libpcap builds on www.tcpdump.org are green except for the 
Windows build, which is listed as pending; it's about 2/3 of the way through 
the build matrix.
--- End Message ---

Re: [tcpdump-workers] build failures on Solaris

2021-08-08 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On Aug 8, 2021, at 2:26 AM, Denis Ovsienko  wrote:

> GCC+CMake fails early now (see attached).

Good!  That reveals the *underlying* problem:

1) CMake, by default, checks for both a C *and* a C++ compiler;

2) if it's checking for both compilers, the way CMake determines 
CMAKE_SIZEOF_VOID_P is to:

check for a C compiler;

set CMAKE_C_SIZEOF_DATA_PTR to the size of data pointers in that C 
compiler with whatever C flags are being used;

set CMAKE_SIZEOF_VOID_P to CMAKE_C_SIZEOF_DATA_PTR;

check for a C++ compiler;

set CMAKE_CXX_SIZEOF_DATA_PTR to the size of data pointers in that C++ 
compiler with whatever C++ flags are being used;

set CMAKE_SIZEOF_VOID_P to CMAKE_CXX_SIZEOF_DATA_PTR;

3) Sun/Oracle's C and C++ compilers default to building *32-bit* code;

4) the version of GCC installed on the Solaris 11 builder appears to default to 
building 64-bit code;

5) there does not appear to be a version of G++ installed, so CMake finds 
"/usr/bin/CC", which is the Sun/Oracle C++ compiler;

6) as a result of the above, CMake ends up setting CMAKE_SIZEOF_VOID_P to 4, 
which can affect the process of finding libraries;

7) nevertheless, the C code (which is *all* the code - ain't no C++ in tcpdump) 
is compiled 64-bit;

8) hilarity ensues.

I've checked in a change to explicitly tell CMake "this is a C-only project, 
don't check for a C++ compiler", so it should now think it's building 64-bit 
when building with GCC.

See whether that fixes things.
--- End Message ---

Re: [tcpdump-workers] build failures on Solaris

2021-08-08 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On Jul 31, 2021, at 3:37 AM, Denis Ovsienko via tcpdump-workers 
 wrote:

> # Solaris 11 with GCC #
> This is the opposite: the pre-compile libpcap feature test programs
> fail to link so all libpcap feature tests fail. However, libpcap is
> detected as available and the build process resorts to missing/ and
> produces a binary of tcpdump that is mostly functional:
> 
> $ /tmp/tcpdump_build_matrix.XX06MD.a/bin/tcpdump -D
> /tmp/tcpdump_build_matrix.XX06MD.a/bin/tcpdump: illegal option -- D
> 
> The problem seems to be that the feature test linking instead of using
> the flags returned by pcap-config points exactly to the 32-bit version
> of libpcap and fails:

I've checked in changes to:

check the bit-width of the build in autotools;

on Solaris, use the results of the bit-width checks for autotools and 
CMake to figure out which version of pcap-config to run.

See if that clears up the Solaris 11 with GCC build.
--- End Message ---

Re: [tcpdump-workers] build failures on Solaris

2021-08-03 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On Aug 3, 2021, at 12:07 AM, Dagobert Michelsen  wrote:

> The /64 suffix in bin/ and lib/ is a symlink to the respective architecture
> and simplifies cross-platform build between Sparc and x86.

For whatever reason, /usr/bin/64 isn't present on my Solaris 11.3 (x86-64) VM:

solaris11$ ls /usr/bin/64
/usr/bin/64: No such file or directory
solaris11$ uname -a
SunOS solaris11 5.11 11.3 i86pc i386 i86pc

The same is true of the directory containing the installed-from-IPS gcc:

solaris11$ which gcc
/usr/ccs/bin/gcc
solaris11$ ls /usr/ccs/bin/64
/usr/ccs/bin/64: No such file or directory

and Sun/Oracle C:

solaris11$ which cc
/opt/developerstudio12.5/bin/cc
solaris11$ ls /opt/developerstudio12.5/bin/64/cc
/opt/developerstudio12.5/bin/64/cc: No such file or directory

Sun/Oracle don't appear to have made as vigorous an effort to make this work as 
OpenCSW have.
--- End Message ---

Re: [tcpdump-workers] build failures on Solaris

2021-08-02 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On Jul 31, 2021, at 3:37 AM, Denis Ovsienko via tcpdump-workers 
 wrote:

> # Solaris 11 with GCC #
> This is the opposite: the pre-compile libpcap feature test programs
> fail to link so all libpcap feature tests fail. However, libpcap is
> detected as available and the build process resorts to missing/ and
> produces a binary of tcpdump that is mostly functional:
> 
> $ /tmp/tcpdump_build_matrix.XX06MD.a/bin/tcpdump -D
> /tmp/tcpdump_build_matrix.XX06MD.a/bin/tcpdump: illegal option -- D
> 
> The problem seems to be that the feature test linking instead of using
> the flags returned by pcap-config points exactly to the 32-bit version
> of libpcap and fails:
> 
> $ pcap-config --libs
> -L/usr/lib  -lpcap

solaris11$ /usr/bin/pcap-config --libs
-L/usr/lib  -lpcap
solaris11$ /usr/bin/amd64/pcap-config --libs
-L/usr/lib/amd64 -R/usr/lib/amd64 -lpcap

on my x86-64 Solaris 11 VM.

From the Solaris 64-bit Developer's Guide:

http://math-atlas.sourceforge.net/devel/assembly/816-5138.pdf

the equivalent of "amd64" on SPARC is probably "sparcv9".

So tcpdump (and anything else using libpcap) should, on Solaris, determine the 
target architecture and run the appropriate version of pcap-config.

I'll look at that.

(Apropos of nothing, that Sun document also says of the 64-bit SPARC ABI:

Structure passing and return are accomplished differently. Small data 
structures and some floating point arguments are now passed directly in 
registers.

I'm curious which, if any, ABIs pass data structures *and unions* that would 
fit in a single register in a register.)
--- End Message ---

Re: [tcpdump-workers] build failures on Solaris

2021-08-01 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On Aug 1, 2021, at 6:08 PM, Denis Ovsienko  wrote:

> On Sun, 1 Aug 2021 15:45:39 -0700
> Guy Harris  wrote:
> 
>> Probably some annoying combination of one or more of "different
>> compilers", "later version of CMake", "at least some versions of cc
>> and gcc build 32-bit binaries by default even on Solaris 11 on a
>> 64-bit machine(!)", and so on.
>> 
>> This is going to take a fair bit of cleanup, not the least of which
>> includes forcing build with both autotools *and* CMake to default to
>> 64-bit builds on 64-bit Solaris.
> 
> For clarity, there is no rush to fix every obscure issue in this
> problem space, but it is useful to have the problem space mapped.

At this point, I'm seeing two problems:

1) The pcap-config and libpcap.pc that we generate always include a -L flag, 
even if the directory is a system library directory, which means that it could 
be wrong in a system with 32-bit and 64-bit libraries in separate directories.  
Debian removes that from pcap-config to avoid that problem.  We shouldn't add 
-L in that case.

2) Tcpdump needs to work around that when configuring.

The first is definitely our bug, given that Debian is working around it.

The second would be helpful; we already work around Apple screwing up 
pcap-config by having the one they ship with macOS include -L/usr/local/lib for 
no good reason.
--- End Message ---

Re: [tcpdump-workers] build failures on Solaris

2021-08-01 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On Jul 31, 2021, at 4:35 PM, Denis Ovsienko  wrote:

> On Sat, 31 Jul 2021 14:55:32 -0700
> Guy Harris  wrote:
> 
> [...]
>> What version of CMake is being used, and how was it installed?
>> 
>> My Solaris 11 x86-64 virtual machine has CMake 2.8.6 in
>> /usr/ccs/bin/cmake, installed from Sun^WOracle's Image Packaging
>> System repositories, and I'm not seeing that behavior - the test
>> programs are linked with -lpcap, as is tcpdump.
> 
> This issue reproduces on OpenCSW host unstable11s:

So where do the Solaris 11 hosts show up on the buildbot site?

> # CMake 3.14.3 (OpenCSW package)
> # GCC 7.3.0
> 
> MATRIX_CC=gcc \
> MATRIX_CMAKE=yes \
> MATRIX_BUILD_LIBPCAP=no \
> ./build_matrix.sh 
> [...]
> $ /tmp/tcpdump_build_matrix.XXVrYyid/bin/tcpdump -D
> /tmp/tcpdump_build_matrix.XXVrYyid/bin/tcpdump: illegal option -- D
> tcpdump version 5.0.0-PRE-GIT
> libpcap version unknown
> 
> As I have discovered just now, it does not reproduce on OpenCSW host
> gcc211:

Probably some annoying combination of one or more of "different compilers", 
"later version of CMake", "at least some versions of cc and gcc build 32-bit 
binaries by default even on Solaris 11 on a 64-bit machine(!)", and so on.

This is going to take a fair bit of cleanup, not the least of which includes 
forcing build with both autotools *and* CMake to default to 64-bit builds on 
64-bit Solaris.
--- End Message ---

Re: [tcpdump-workers] build failures on Solaris

2021-07-31 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On Jul 31, 2021, at 3:37 AM, Denis Ovsienko via tcpdump-workers 
 wrote:

> # Solaris 11 with GCC #
> This is the opposite: the pre-compile libpcap feature test programs
> fail to link so all libpcap feature tests fail. However, libpcap is
> detected as available and the build process resorts to missing/ and
> produces a binary of tcpdump that is mostly functional:
> 
> $ /tmp/tcpdump_build_matrix.XX06MD.a/bin/tcpdump -D
> /tmp/tcpdump_build_matrix.XX06MD.a/bin/tcpdump: illegal option -- D

What version of CMake is being used, and how was it installed?

My Solaris 11 x86-64 virtual machine has CMake 2.8.6 in /usr/ccs/bin/cmake, 
installed from Sun^WOracle's Image Packaging System repositories, and I'm not 
seeing that behavior - the test programs are linked with -lpcap, as is tcpdump.
--- End Message ---

Re: [tcpdump-workers] compiler warnings on AIX and Solaris

2021-07-24 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On Jul 23, 2021, at 4:11 PM, Denis Ovsienko via tcpdump-workers 
 wrote:

> As it turns out, on Solaris 9 it is impossible to compile current
> tcpdump with CFLAGS=-Werror because missing/getopt_long.c yields a few
> warnings (attached). As far as the current revisions of this file go in
> FreeBSD, NetBSD and OpenBSD, FreeBSD seems to be the closest and just a
> bit newer than the current tcpdump copy (OpenBSD revision 1.22 -> 1.26).
> However, it seems unlikely that porting the newer revision would make
> the warnings go away, because, for example, permute_args() has not
> changed at all.

At least when it comes to not violating the promises made by the API 
definition, the BSD implementations of getopt_long(), the GNU libc 
implementation of getopt_long(), and the Solaris implementation of 
getopt_long() are all broken by design.

The declaration is

int getopt_long(int argc, char * const *argv, const char *optstring, 
const struct option *longopts, int *longindex);

where "char * const *argv" means, to quote cdecl.org, "declare argv as pointer 
to const pointer to char", which means that the pointer(s) to which argv points 
cannot be modified.  What the pointers point *to* - i.e., the argument strings 
- can be modified, but the pointers in the argv array will not be modified.

All three implementations could shuffle the arguments in argv[] (as per the 
name "permute_args" in the BSD implementations) unless either 1) the option 
string begins with a "+" or 2) the POSIXLY_CORRECT environment variable is set.
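The constness clash is easy to see in isolation: any implementation that permutes arguments has to cast the promised "char * const *" back to "char **", which is exactly the cast that -Wcast-qual flags.  A minimal sketch (illustrative code, not the actual getopt_long internals):

```c
/* The promise made by the prototype (shown for reference):
 *   int getopt_long(int argc, char * const *argv, const char *optstring,
 *                   const struct option *longopts, int *longindex);
 * "char * const *argv" means the pointers in argv[] may not be modified.
 *
 * Permuting arguments, as the BSD/GNU/Solaris implementations do, requires
 * casting that const away -- the exact cast -Wcast-qual warns about. */
static void swap_args(char *const *argv, int i, int j)
{
    char **nargv = (char **)argv;  /* casts away const: -Wcast-qual warns */
    char *tmp = nargv[i];
    nargv[i] = nargv[j];
    nargv[j] = tmp;
}
```

Compiling this with -Wcast-qual reproduces the warning regardless of which getopt_long revision is imported, which is why updating the file would not help.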

This isn't an issue for us on systems that provide getopt_long() - it's an 
issue for whoever compiles the standard library if they turn on "warn about 
casting away constness", but it's not an issue for *us*, as somebody else 
compiled the standard library.  Thus, it doesn't show up on Linux (GNU libc), 
*BSD/macOS (BSD), or newer versions of Solaris (they added getopt_long() to the 
library).

It is, however, an issue for us if 1) the platform doesn't provide 
getopt_long() (presumably it was added to Solaris after Solaris 9), so it has 
to be compiled as part of the tcpdump build process and 2) the compiler issues 
that warning.

It's not currently an issue on Windows when compiling with MSVC, because either 
1) MSVC never issues that warning or 2) it can but we're not enabling it.

So the only way to fix this is to turn off the warnings; change 
39f09d68ce7ebe9e229c9bf5209bfc30a8f51064 adds macros to disable and re-enable 
-Wcast-qual and wraps the offending code in getopt_long.c with those macros, so 
the problem should be fixed on Solaris 9.
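Suppression macros of that kind are conventionally built on the GCC/Clang diagnostic pragmas; the sketch below shows the idea (the macro names are illustrative, not necessarily the ones that change introduces):

```c
/* Disable and re-enable -Wcast-qual around code that must cast away
 * constness.  _Pragma() lets the pragmas be generated from macros;
 * GCC has supported diagnostic push/pop since 4.6, and Clang supports
 * it as well.  Other compilers get harmless no-ops. */
#if defined(__clang__) || \
    (defined(__GNUC__) && __GNUC__ * 100 + __GNUC_MINOR__ >= 406)
  #define DIAG_OFF_CAST_QUAL \
      _Pragma("GCC diagnostic push") \
      _Pragma("GCC diagnostic ignored \"-Wcast-qual\"")
  #define DIAG_ON_CAST_QUAL \
      _Pragma("GCC diagnostic pop")
#else
  #define DIAG_OFF_CAST_QUAL  /* no-op on other compilers */
  #define DIAG_ON_CAST_QUAL
#endif

/* Example use: the offending cast compiles cleanly even with -Wcast-qual. */
DIAG_OFF_CAST_QUAL
static char **deconst(char *const *p) { return (char **)p; }
DIAG_ON_CAST_QUAL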

> The same problem stands on AIX 7,

AIX also doesn't appear to provide getopt_long(), at least as of AIX 7.2:


https://www.ibm.com/docs/en/aix/7.2?topic=reference-base-operating-system-bos-runtime-services

so the same problem occurs; the change should fix that as well.

> and in addition there is an issue
> specific to XL C compiler, in that ./configure detects that the
> compiler does not support -W, but then proceeds to flex every -W
> option anyway, which the compiler treats as a soft error,

"The compiler treats [that] as a s soft error" is the problem - the configure 
script checks currently require that unknown -W flags be a *hard* error, so 
that attempting to compile a small test program with that option fails.

If there's a way to force XL C to treat it as a hard error, we need to update 
the AC_LBL_CHECK_UNKNOWN_WARNING_OPTION_ERROR autoconf macro to set the 
compiler up to use it when testing whether compiler options are supported.

If there *isn't* a way to do that, the configure-script test also needs to scan 
the standard error of the compilation and look for the warning, and treat that 
as an indication of lack of support as well.  (I think the equivalent test 
provided as part of CMake may already do that.)
--- End Message ---

[tcpdump-workers] Rough consensus and quiet humming

2021-04-22 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
https://twitter.com/MeghanEMorris/status/1382109954224521216/photo/1
--- End Message ---

Re: [tcpdump-workers] Link Layer Type Request NETANALYZER_NG

2021-03-24 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On Mar 24, 2021, at 12:32 AM, Jan Adam  wrote:

>> So, with incl_len equal to {PayloadSize,VarSize} + 54, orig_len would be 
>> equal to {original PayloadSize} + 54, so the original payload size would be 
>> orig_len - 54.
>> 
>> That would allow the original size and the sliced size of the payload to be 
>> calculated, so that should work.
> 
> Yes it should work.
> 
> I have the feeling this is more about the design than the implementation.

It's about either 1) saying "slicing is forbidden" or 2) saying "here's how you 
do slicing".  In either case, there would be implementation changes to tcpdump 
and Wireshark's editcap tool, as both of them can do packet slicing when 
reading a file and writing another file from the contents (although I just 
discovered that tcpdump doesn't appear to correctly set the snapshot length in 
the header of the output capture file, which I need to fix).

> I will try to explain our design decision of the footer. We have observed 
> that customers using Wireshark don't think about the header when counting the 
> bytes in the hex dump and expect the frame to start at the first byte and as 
> a result read out wrong values.

Perhaps that's an indication that Wireshark needs to do a better job of 
distinguishing between metadata headers and packet data, then.  (I already 
think so, as 1) counting metadata headers as data means, for example, that you 
get bogus bytes/second values and 2) separating them may make it more 
straightforward to implement transformation from, for example, 

> Therefore our idea was to put the additional info at the end in form of a 
> footer.
> 
> Maybe you can help me understand more of the general concept, how is this 
> slicing handled for a DLT with a header or footer in general?
> If you take for example another DLT: 
> https://www.tcpdump.org/linktypes/LINKTYPE_LINUX_SLL.html it has 16 byte 
> header size, how does editcap or tcpdump take that into account? Is it 
> possible to slice without taking the header size into account?

For headers, it currently will do what would be done when doing a live capture 
and slicing it - the snaplen is the maximum size of the data in the packet 
record, *including* metadata headers.

Changing that might be considered an incompatible change, but the ability to 
say "write packets out with no more than N bytes of *on-the-network packet 
data*" (rather than "no more than N bytes of *total* packet data, including 
metadata headers"), as a separate option, might be useful.

That would be fairly easy to do for *ex post facto* slicing of an existing 
capture file.  It would involve code that knows the size of the metadata header 
for all link-layer types, so that would be a bit of an architectural change to 
the code, but not a painful one.

It's trickier for live captures, but, if the slicing is done by a BPF program, 
where the return value of the BPF filter indicates the number of bytes of total 
packet data to write, that could be done even if the metadata header is 
variable-length.  That's the case for *BSD/macOS, Linux, Solaris, AIX, and, as 
far as I know, Windows with Npcap.

I'm not sure there *are* any currently cases where a given LINKTYPE_ value 
specifies a metadata trailer.  There are some network devices that append 
metadata trailers to Ethernet packets and route them to a host for capturing, 
with Wireshark having heuristics for trying to guess whether there's a metadata 
trailer on the frame or not and which type of metadata trailer it is; slicing, 
whether done at capture time or *ex post facto*, will just slice the metadata 
trailer in two or slice it off completely.
--- End Message ---

Re: [tcpdump-workers] ARM build slaves (tcpdump mirror in Germany)

2021-03-23 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On Mar 22, 2021, at 5:35 PM, Denis Ovsienko via tcpdump-workers 
 wrote:

> On Mon, 22 Mar 2021 19:00:31 +0100
> Harald Welte  wrote:

...

>> btw: I'm not sure if qemu full system emulation of e.g. ppc on a
>> x86_64 hardware would be an option, though.  I think
>> openbuildservice.org is doing that a lot for building packages on
>> less popular architectures.
> 
> QEMU was very useful for the NetBSD setup. NetBSD for some reason did
> not provide binary packages for 9.1/aarch64, and heavy non-default
> packages (LLVM, Clang, GCC 10) just do not compile on 1GB RAM of RPI3B
> (NetBSD release does not run on RPI4B), so the only way to compile
> these was in a QEMU VM with more RAM.
> 
> That said, on a Linux host with i7-3770 CPU the QEMU guest measured at
> 64% core-to-core CPU performance of an RPI3B. So after the initial
> setup a hardware Pi does a better job.

The main PowerPC/Power ISA buildbot we'd want would probably be ppcle64, as the 
ppcle64 implementation of some crypto library routines, as used by tcpdump, 
require strict adherence to the API documentation, e.g. 1) don't use the same 
buffer for encrypted and decrypted data and 2) provide all the necessary 
padding in the input buffer and leave enough room in the output buffer, as per

https://github.com/the-tcpdump-group/tcpdump/issues/814

64% isn't perfect, but it's a lot better than 10%, so if QEMUs' PPC64/64-bit 
Power ISA emulation supports both big-endian and little-endian mode, and runs 
with acceptable performance (anything in the range of 50% is probably good 
enough), and the emulation is faithful enough (which being able to boot ppc64le 
Linux would probably imply), that would probably be sufficient.

Having *some* big-endian machine would be useful primarily for tcpdump testing, 
to make sure there's no code that implicitly assumes it's running on a 
little-endian machine (which most developers probably have); any of SPARC, 
ppcbe, or s390/s390x would suffice for that.

SPARC has the additional advantage of trapping on unaligned accesses, so it'll 
also detect code that implicitly assumes that unaligned accesses work.  S/3x0 
hasn't required alignment since S/370 came out (unaligned accesses were an 
optional feature of S/360, but were made a standard feature in S/370), and I'm 
not sure PPC requires it.  We already have SPARC/Solaris 10 testing with 
OpenCSW, so that will fail on unaligned accesses; the only thing additional 
buildbots would do would be to give us Solaris 11 and Linux.
--- End Message ---

Re: [tcpdump-workers] Link Layer Type Request NETANALYZER_NG

2021-03-22 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On Mar 22, 2021, at 7:33 AM, Jan Adam  wrote:

>> Are they aligned on natural boundaries?
> 
> No, it is not aligned but packed.  We use #pragma pack(1) for the footer 
> structure.

You should probably add that to the page with the structure definition.

>> What do the four fields of the SrcID indicate for the various values of 
>> Representation?
> 
> For Representation 0x01 to 0x05 their meaning is defined as following:
> tSrcId.ulPart1   netANALYZER device number
> tSrcId.ulPart2   netANALYZER serial number
> tSrcId.bPart4    netANALYZER port number
> 
> For Representation 0x02 to 0x05
> tSrcId.bPart3    netANALYZER TAP name (as character, e.g. 'A' = 0x41 or 'B')
> 
> For Representation 0x01
> tSrcId.bPart3    netANALYZER TAP number

That should also be noted in the specification.

>> What other possible values of PayloadType are there?
> 
> The PayloadType has the following possible values but they are not useful 
> for capturing network traffic. So the only value in the context of packet 
> data will be 0x0A which represents DATATYPE_OCTET_STRING.
> 
> #define VAR_DATATYPE_BOOLEAN   0x01

...

> #define VAR_DATATYPE_NONE   0xff

It should also note that the other values are reserved and will not appear in 
pcap or pcapng files.

>>> Slicing a captured packet is not supported by our capturing device.
> 
>> But some software can slice packets afterwards.  Either that would have to 
>> be forbidden (meaning editcap and, I think, tcpdump would have to check for 
> LINKTYPE_NETANALYZER_NG/DLT_NETANALYZER_NG and refuse to do slicing), or they 
>> would have to 1) ensure that the slice size is >= the footer size and 2) do 
>> the slicing specially, removing bytes *before* the footer, so that if 
>> incl_len < VarSize + footer_size, (VarSize + footer_size) - incl_len bytes 
>> have been sliced off.
> 
> Both might be possible paths to take for slicing. In any case the PayloadSize 
> should also be adjusted when the payload length is changed in my opinion. Is 
> this a Problem?

So, with incl_len equal to {PayloadSize,VarSize} + 54, orig_len would be equal 
to {original PayloadSize} + 54, so the original payload size would be orig_len 
- 54.

That would allow the original size and the sliced size of the payload to be 
calculated, so that should work.
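Under those assumptions the recovery is simple arithmetic; a sketch (NG_FOOTER_LEN and the function names are illustrative, not from any actual tool):

```c
#include <stdint.h>

#define NG_FOOTER_LEN 54u  /* packed footer size used in the example above */

/* incl_len and orig_len are the standard pcap per-record length fields. */
static uint32_t sliced_payload_len(uint32_t incl_len)
{
    return incl_len - NG_FOOTER_LEN;   /* payload bytes actually saved */
}

static uint32_t original_payload_len(uint32_t orig_len)
{
    return orig_len - NG_FOOTER_LEN;   /* payload length before slicing */
}
```

For example, a record with orig_len = 154 and incl_len = 114 would have had a 100-byte payload of which 60 bytes survive, with the footer intact at the end.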

--- End Message ---

Re: [tcpdump-workers] Link Layer Type Request NETANALYZER_NG

2021-03-18 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On Mar 15, 2021, at 9:04 AM, Jan Adam  wrote:

>> Can the variable be anything *other* than a packet of some sort?
> 
> There are only the mentioned 5 representations planned for pcap files since 
> this is what our capture device may capture into a pcap file. The 
> representation gives at least the ability to extend in the future. Do you 
> have anything specific in mind?

No.

>> It also appears that the boundary between the payload and the trailer would 
>> be determined by fetching the VarSize field at the end of the trailer.  The 
>> first VarSize bytes of the data would be the payload, and the remaining 
>> sizeof(footer) bytes would be the trailer.  Is that the case?
> 
> This is also correct. The remaining bytes of incl_len - VarSize is the footer 
> size.

If the fields of the footer are aligned on natural boundaries, the footer will 
be 72 bytes long; if they are *not* aligned, the footer will be 54 bytes long.

Are they aligned on natural boundaries?

Presumably VarSize is the same thing as PayloadSize?  If so, then presumably 
incl_len must be equal to VarSize + {either 54 or 72}.
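The two candidate sizes can be checked mechanically.  The structure below uses the field widths given in the footer description quoted elsewhere in this thread (2+8+8+4+1+4+4+1+1+8+8+1+4 = 54 bytes packed); the member names are illustrative, not the vendor's declaration:

```c
#include <stdint.h>

/* Footer fields, in order, with the widths given in the specification. */
#define NG_FOOTER_FIELDS          \
    uint16_t version;             \
    uint64_t timestamp1;          \
    uint64_t timestamp2;          \
    uint32_t timestamp_accuracy;  \
    uint8_t  representation;      \
    uint32_t src_id_part1;        \
    uint32_t src_id_part2;        \
    uint8_t  src_id_part3;        \
    uint8_t  src_id_part4;        \
    uint64_t var_id;              \
    uint64_t var_state;           \
    uint8_t  var_type;            \
    uint32_t var_size;

struct ng_footer_aligned { NG_FOOTER_FIELDS };  /* natural padding: 72 bytes
                                                   on typical LP64 ABIs */

#pragma pack(push, 1)
struct ng_footer_packed { NG_FOOTER_FIELDS };   /* no padding: 54 bytes */
#pragma pack(pop)
```

Since the vendor states that #pragma pack(1) is used, the 54-byte layout is the one a dissector should assume.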

> Some fields of the footer (like the ID) may seem to be redundant and not of 
> much purpose in the wireshark or tcpdump context but we use the footer 
> structure everywhere in our software stack. This way we eliminated converting 
> structures between different parts of our software when dealing with captured 
> data.

So what do the two time stamps indicate for the various values of 
Representation?

What do the four fields of the SrcID indicate for the various values of 
Representation?

What do the values of PayloadState indicate for the various values of 
Representation?

What other possible values of PayloadType are there?

>> This also means that NETANALYZER_NG data must *not* be cut off at the end by 
>> any "slicing" process, such as capturing with a "slice length"/"snapshot 
>> length".  Is it possible that the frame in the payload is "sliced" in that 
>> fashion?
> 
> Slicing a captured packet is not supported by our capturing device.

But some software can slice packets afterwards.  Either that would have to be 
forbidden (meaning editcap and, I think, tcpdump would have to check for 
LINKTYPE_NETANALYZER_NG/DLT_NETANALYZER_NG and refuse to do slicing), or they 
would have to 1) ensure that the slice size is >= the footer size and 2) do the 
slicing specially, removing bytes *before* the footer, so that if incl_len < 
VarSize + footer_size, (VarSize + footer_size) - incl_len bytes have been 
sliced off.
--- End Message ---

Re: [tcpdump-workers] Link Layer Type Request NETANALYZER_NG

2021-03-12 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On Mar 12, 2021, at 4:35 AM, Jan Adam  wrote:

>> So is "the variable" the same thing as "the payload"?
> 
> That is correct. To be more specific the payload is the value/content of the 
> variable.

Can the variable be anything *other* than a packet of some sort?  The current 
set of values for the variable listed in https://kb.hilscher.com/x/brDJBw:

0x01:   netANALYZER legacy frame
0x02:   Ethernet (may also be a re-assembled mpacket)
0x03:   mpacket
0x04:   PROFIBUS frame
0x05:   IO-Link frame

lists only packets of various types, but I was reading "variable" in the 
programming language sense, rather than in the sense that the total content has 
a "fixed part", that being the trailer, and a "variable part", that being the 
packet preceding the trailer.  Is the latter the sense in which the word 
"variable" should be understood?

It also appears that the boundary between the payload and the trailer would be 
determined by fetching the VarSize field at the end of the trailer.  The first 
VarSize bytes of the data would be the payload, and the remaining 
sizeof(footer) bytes would be the trailer.  Is that the case?

That would also indicate that the "captured length" value for a pcap record or 
a pcapng block containing NETANALYZER_NG data must be >= sizeof(footer), so 
that the entire footer is present.

This also means that NETANALYZER_NG data must *not* be cut off at the end by 
any "slicing" process, such as capturing with a "slice length"/"snapshot 
length".  Is it possible that the frame in the payload is "sliced" in that 
fashion?
--- End Message ---

Re: [tcpdump-workers] Link Layer Type Request NETANALYZER_NG

2021-03-12 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On Mar 8, 2021, at 12:07 AM, Jan Adam via tcpdump-workers 
 wrote:

> We have created a public document on our website You can point to for the 
> description.
> 
> Here is the link:  https://kb.hilscher.com/x/brDJBw
> 
> It contains a more detailed description of the fields in the footer structure.
> It also contains a C – like structure definition of the footer.

So is "the variable" the same thing as "the payload"?--- End Message ---

Re: [tcpdump-workers] continuous integration status update

2021-03-04 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On Mar 3, 2021, at 2:30 PM, Denis Ovsienko via tcpdump-workers 
 wrote:

> A partial replacement for that service is ci.tcpdump.org, which is a
> buildbot instance doing Linux AArch64 builds for the github.com
> repositories.

So where is that hosted?  Are you hosting it yourself or hosting it on some 
cloud service?
--- End Message ---

Re: [tcpdump-workers] Link Layer Type Request NETANALYZER_NG

2021-03-03 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On Mar 3, 2021, at 8:58 AM, Jan Adam via tcpdump-workers 
 wrote:

> for our new analysis product netANALYZER NG I would like to request a new 
> link-layer type value.
> 
> NETANALYZER_NG
> 
> The new Link-Layer-Type format is described as following:
> 
> Next-generation packet structure:
> +---+
> |   Payload |
> .   .
> .   .
> |   |
> +---+
> |   Footer  |
> |   |
> +---+
> 
> Next-gen footer description:
> 
> [16 bit]  Version  represents current structure version
> [64 bit]  Timestamp1 first timestamp in ns, UNIX time since 1.1.1970
> [64 bit]  Timestamp2 second timestamp in ns, UNIX time since 1.1.1970
> [32 bit]  TimestampAccuracy  actual accuracy of Timestamp1 and Timestamp2 in 
> ns. 0: actual accuracy is unknown

What do these two time stamps represent?  They presumably don't represent the 
packet arrival time, as both pcap and pcapng already provide that for all 
packets.

> [8 bit]   Representation identification of the following content

What are the possible values of this field, and what do those values signify?

> [32 bit]  SrcIdPart1 source identifier part 1
> [32 bit]  SrcIdPart2 source identifier part 2
> [8 bit]   SrcIdPart3 source identifier part 3
> [8 bit]   SrcIdPart4 source identifier part 4

So there's an 80-bit source identifier; what does that value signify?

> [64 bit]  VarId  variable identifier
> [64 bit]  VarState   variable error states, depending on 
> representation
> [8 bit]   VarTypevariable data type

What do those signify?

> [32 bit]  VarSize  size of raw frame payload

Presumably everything beyond that size is the footer; what are the contents of 
the footer?
--- End Message ---

Re: [tcpdump-workers] Request for new LINKTYPE_* code LINKTYPE_AUERSWALD_LOG

2021-02-04 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On Feb 4, 2021, at 3:41 AM, developer--- via tcpdump-workers 
 wrote:

> We currently use this code in our lua dissector to display (decoded) SIP 
> messages.
> 
> -- offsets will change with the new LINKTYPE
>if (buf(148,2):uint() == MSG_TYPE_SIP) then
>sadd("src_ip",0,16)
>sadd("src_port",16,2,"uint")
>sadd("dst_ip", 18,16)
>sadd("dst_port",34,2,"uint")
>Dissector.get("sip"):call(buf(msg_start, msg_len):tvb(), pinfo, 
> subtree)
>return
>end

In other words, the format of packets is:

IPv6 source address - 16 octets
source port - 2 octets
IPv6 destination address - 16 octets
destination port - 2 octets
SIP packet
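Read as a C declaration, the offsets in the Lua dissector imply this packed 36-byte prefix (a sketch with illustrative member names; the rest of the header, up to the message type at offset 148, is not described in this thread):

```c
#include <stdint.h>
#include <stddef.h>

#pragma pack(push, 1)
/* Address/port prefix implied by the Lua offsets above (bytes 0..35). */
struct auerswald_log_prefix {
    uint8_t  src_ip[16];   /* IPv6 source address,      offset  0 */
    uint16_t src_port;     /* source port,              offset 16 */
    uint8_t  dst_ip[16];   /* IPv6 destination address, offset 18 */
    uint16_t dst_port;     /* destination port,         offset 34 */
};
#pragma pack(pop)
```

Packing is required here: with natural alignment, dst_ip would land at offset 20 rather than the 18 the dissector reads.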
--- End Message ---

Re: [tcpdump-workers] Request for new LINKTYPE_* code LINKTYPE_AUERSWALD_LOG

2021-02-03 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On Feb 3, 2021, at 6:54 AM, developer--- via tcpdump-workers 
 wrote:

> We would like to request a dedicated LINKTYPE_* / DLT_* code.
> Auerswald is a major German telecommunications equipment manufacturer.
> We have implemented the option to capture (combined) network traffic and 
> logging information as pcap/pcapng in our soon to be released new product 
> line.
> 
> For development, we so far have used LINKTYPE_USER0 and would like to change 
> this to a proper code before the commercial release.
> 
> We also plan to publicly release the dissector and would like to make sure 
> both can be released with a proper code from the get go.
> The dissector we currently use is however only in lua.
> 
> Our preferred name would be
> LINKTYPE_AUERSWALD_LOG
> 
> If anyone is interested we can provide further information.

Please provide a detailed description of the packet format, sufficient to allow 
somebody to make a program such as tcpdump, or Wireshark, or anything else that 
reads pcap or pcapng files.
--- End Message ---

Re: [tcpdump-workers] Request to add MCTP and PCI_DOE to PCAP link type

2021-01-27 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On Dec 16, 2020, at 8:09 PM, Yao, Jiewen via tcpdump-workers 
 wrote:

> We did a prototype for the SpdmDump tool 
> (https://github.com/jyao1/openspdm/blob/master/Doc/SpdmDump.md). We can 
> generate a PCAP file and parse it offline.
> In our prototype, we use below definition:
> #define LINKTYPE_MCTP  290  // 0x0122
> #define LINKTYPE_PCI_DOE   291  // 0x0123
> If you can assign same number, it will be great.
> If different number is assigned, we will change our implementation 
> accordingly.

Different numbers will definitely be assigned, as 290 is already in use (in 
Wireshark, for example).  (Not everything was updated to reflect that; I've 
fixed that.)

You will probably be assigned 291 for LINKTYPE_MCTP and 292 for 
LINKTYPE_PCI_DOE; you should update your prototype for that for now.
--- End Message ---

Re: [tcpdump-workers] Request to add MCTP and PCI_DOE to PCAP link type

2021-01-24 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On Dec 16, 2020, at 8:09 PM, Yao, Jiewen via tcpdump-workers 
 wrote:

> I write this email to request to below 2 link types.
> 
> 
>  1.  MCTP

...

> MCTP packet is defined in DMTF PMCI working group Management Component 
> Transport Protocol (MCTP) Base 
> Specification(https://www.dmtf.org/sites/default/files/standards/documents/DSP0236_1.3.1.pdf)
>  8.1 MCTP packet fields. It starts with MCTP transport header in Figure 4 - 
> Generic message fields.

So this is for MCTP messages, independent of the physical layer?

Presumably the not-a-multiple-of-8-bits fields in Table 1 go from the 
high-order bits to the low-order bits, so that the upper 4 bits of the first 
byte are the RSVD field and the lower 4 bits of the first byte are the Hdr 
version?
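Assuming that bit ordering, the two fields of the first MCTP header byte would be extracted as follows (a sketch pending confirmation against DSP0236, not a confirmed decoder):

```c
#include <stdint.h>

/* First byte of the MCTP transport header, assuming fields are laid out
 * high-order bits first: RSVD in the upper nibble, header version in the
 * lower nibble. */
static uint8_t mctp_rsvd(uint8_t b)        { return (uint8_t)(b >> 4); }
static uint8_t mctp_hdr_version(uint8_t b) { return (uint8_t)(b & 0x0f); }
```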

>  1.  PCI_DOE
> 
> PCI Data Object Exchange (DOE) is an industry standard defined by PCI-SIG 
> (https://pcisig.com/) Data Object Exchange (DOE) 
> ECN 
> (https://members.pcisig.com/wg/PCI-SIG/document/14143).

...

> PCI Data Object Exchange (DOE) is defined in PCI-SIG Data Object Exchange 
> (DOE) ECN (https://members.pcisig.com/wg/PCI-SIG/document/14143) 6.xx.1 Data 
> Objects. It starts with DOE Data Object Header 1 in Figure 6-x1: DOE Data 
> Object Format.

Unfortunately, I'm not a member of the PCI SIG, so I don't have an account to 
log in to in order to read that document.
--- End Message ---

Re: [tcpdump-workers] libpcap detection and linking in tcpdump

2021-01-23 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On Jan 22, 2021, at 7:11 PM, Guy Harris via tcpdump-workers 
 wrote:

> I'll try experimenting with one of my Ubuntu VMs.

Welcome to Shared Library Search Hell.

Most UN*Xes have a notion of RPATH (with, of course, different compiler 
command-line flags to set it).

pcap-config provides one if the shared library isn't going to be installed in 
/usr/lib.

The pkg-config file doesn't provide one, however, and some searching indicates 
that the pkg-config maintainers recommend *against* doing so.  They recommend 
using libtool when linking, instead.  Part of the problem here may be that 
setting the RPATH in an executable affects how it searches for *all* libraries, 
so it could affect which version of an unrelated library is found.

(The existence of libtool is an indication that shared libraries have gotten 
messy on UN*X.)

Perhaps for this particular case the right thing to do is to set 
LD_LIBRARY_PATH when running the temporarily-installed tcpdump.

The macOS linker appears to put absolute paths for shared libraries into the 
executable by default:

$ otool -L /bin/cat
/bin/cat:
/usr/lib/libSystem.B.dylib (compatibility version 1.0.0, 
current version 1281.100.1)

so this may not be an issue there.

(Also, the existence of the term "DLL hell" is an indication that shared 
libraries have gotten messy on Windows, but I digress :-))
--- End Message ---

Re: [tcpdump-workers] Any way to filter ether address when type is LINUX_SLL?

2021-01-22 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On Jan 21, 2021, at 8:41 AM, Bill Fenner via tcpdump-workers 
 wrote:

> It would be perfectly reasonable (and fairly straightforward) to update
> libpcap to be able to filter on the Ethernet address in DLT_LINUX_SLL or
> DLT_LINUX_SLL2 mode.

Link-layer address, to be more accurate.

The good news is that, for Ethernet, that address appears to be the source 
address for all packets, incoming and outgoing, at least with the 5.6.7 kernel; 
I haven't checked the kernel code paths for other kernel versions.

That might also be the case for 802.11.

However, for FDDI, for example, it appears not to be set (it's marked as 
zero-length).

> There are already filters that match other offsets in
> the SLL or SLL2 header.  However, I don't think it could be done on live
> captures, only against a savefile.

At least as of 5.6.7, I don't see an SKF_ #define that would correspond to a 
link-layer address, so it appears that it's not possible to easily filter on 
the address in a live capture, at least not with an in-kernel filter.  As we're 
using cooked sockets (PF_PACKET/SOCK_DGRAM), the link-layer header isn't 
supplied to us, so we can't look at it ourselves.

I've been thinking about a world in which we have more pcapng-style APIs.  With 
a capture API that can deliver, for each packet, something similar to a pcapng 
Enhanced Packet Block, with an interface number from which the capturing 
program can determine a link-layer header type, so that not all captured 
packets have to 
have the same link-layer header type, it might be possible to generate a filter 
program that:

could use one of the SKF_ magic offsets to fetch the "next protocol 
type" value for the protocol after the link-layer protocol, so 
link-layer-type-independent code could be used to check for common "next 
protocol type" values such as IPv4, IPv6, and ARP;

could use one of the SKF_ magic offsets to fetch the offset, relative 
to the beginning of the raw packet data, of the first byte past the link-layer 
header, so that link-layer-type-independent code could be used to check for 
anything at the next protocol layer (IP address, etc.);

could use one of the SKF_ magic offsets to fetch the ARPHRD_ type 
giving the link-layer header type, and, based on that run different code to 
check fields in the link-layer header.

This would be done by using a raw socket (PF_PACKET/SOCK_RAW) rather than a 
cooked socket.

With all of that, we could do live-capture filtering of MAC addresses (source 
*or* destination).

That's a lot of work, though.
--- End Message ---

Re: [tcpdump-workers] libpcap detection and linking in tcpdump

2021-01-22 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On Jan 22, 2021, at 2:54 PM, Denis Ovsienko via tcpdump-workers 
 wrote:

> I have tested it again with the current master branches of libpcap and
> tcpdump. Both builds (with and without libpcap0.8-dev) now complete
> without errors.
> 
> However, in both cases the installed tcpdump fails to run because it
> is linked with libpcap.so.1. Which, as far as I can remember,
> previously somehow managed to resolve to the
> existing /tmp/libpcap/lib/libpcap.so.1, but not amymore:
> 
> $ /tmp/libpcap/bin/tcpdump --version
> /tmp/libpcap/bin/tcpdump: error while loading shared libraries:
> libpcap.so.1: cannot open shared object file: No such file or directory
> 
> $ ldd /tmp/libpcap/bin/tcpdump
>   linux-vdso.so.1 (0x7ffdc7ffe000)
>   libpcap.so.1 => not found
>   libcrypto.so.1.1 => /usr/lib/x86_64-linux-gnu/libcrypto.so.1.1
> (0x7f34522ac000)
>   libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6
> (0x7f3451ebb000)
>   libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2
> (0x7f3451cb7000)
>   libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0
> (0x7f3451a98000)
>   /lib64/ld-linux-x86-64.so.2 (0x7f3452c6f000)
> 
> $ /tmp/libpcap/bin/pcap-config --libs
> -L/tmp/libpcap/lib -Wl,-rpath,/tmp/libpcap/lib -lpcap

So that *should* cause /tmp/libpcap/lib to be added to the executable's path, 
which *should* cause it to look in /tmp/libpcap/lib for shared libraries.

So, if there's a /tmp/libpcap/lib/libpcap.so.1 file, that's not happening, 
somehow.

I'll try experimenting with one of my Ubuntu VMs.

In the meantime, for some fun head-exploding reading, take a look at

https://en.wikipedia.org/wiki/Rpath

and perhaps some other documents found by a search for

lpath rpath linux
--- End Message ---

[tcpdump-workers] Stick with Travis for continuous integration, or switch?

2021-01-18 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
Travis CI is announcing on the travis-ci.org site that "... travis-ci.org will 
be shutting down in several weeks, with all accounts migrating to 
travis-ci.com. Please stay tuned here for more information."

They don't provide any information there.  However, at


https://travis-ci.community/t/build-delays-for-open-source-project/10272/26

they say

As was pointed out in "Builds hang in queued state" linked to earlier 
in this topic, Travis is moving workers from travis-ci.org to travis-ci.com 
in preparation to fully close .org (or rather, make it read-only) around the 
New Year.

...

So you need to migrate to .com to stop experiencing delays. Note the 
caveats:

...

They claim that they'll still offer free service for free software:

Q. Will Travis CI be getting rid of free users?

A. Travis CI will continue to offer a free tier for public or 
open-source repositories on travis-ci.com and will not be affected by the 
migration.

They also say here:

https://blog.travis-ci.com/2020-11-02-travis-ci-new-billing

that

The upcoming pricing change will not affect those of you who are:

* Building on the Travis CI 1, 2 and 5 concurrency job plans 
who are building on Linux, Windows and experimental FreeBSD environments.
* GitHub Marketplace plans
* Grouped Accounts
* Enterprise customers (not building in our cloud environments)
* Builders on our premium or manual plans. Contact the Travis 
CI support team for more information.

but they also say that

The upcoming pricing change will affect those of you who are:

Building on the macOS environment

macOS builds need special care and attention. We want to make sure that 
builders on Mac have the highest quality experience at the fastest possible 
speeds. Therefore, we are separating out macOS usage from other build usage and 
offering a distinct add-on plan that will correlate directly to your macOS 
usage. Purchase only the credits you need and use them until you run out.

* $15 will buy you 25 000 credits (1 minute of mac build time 
costs 50 credits)
* Use your credits for macOS builds only when you need to run 
these
* Replenish your credits as you need them
* More special build environments that fall into this category 
will be available soon

which may mean that their "free tier" doesn't include macOS.
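Doing the arithmetic on the quoted macOS pricing (using only the numbers Travis gives above):

```python
# Travis's quoted numbers: $15 buys 25,000 credits; 1 macOS minute costs 50.
credits_per_15_dollars = 25_000
credits_per_mac_minute = 50

mac_minutes = credits_per_15_dollars // credits_per_mac_minute
assert mac_minutes == 500          # $15 buys about 500 macOS build minutes
cost_per_minute = 15 / mac_minutes
assert cost_per_minute == 0.03     # i.e. 3 cents per macOS build minute
```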

They also say:

Building on a public repositories only

We love our OSS teams who choose to build and test using TravisCI and 
we fully want to support that community. However, in recent months we have 
encountered significant abuse of the intention of this offering (increased 
activity of cryptocurrency miners, TOR nodes operators etc.). Abusers have been 
tying up our build queues and causing performance reductions for everyone. In 
order to bring the rules back to fair playing grounds, we are implementing some 
changes for our public build repositories.

* For those of you who have been building on public 
repositories (on travis-ci.com, with no paid subscription), we will upgrade you 
to our trial (free) plan with a 10K credit allotment (which allows around 1000 
minutes in a Linux environment).
* You will not need to change your build definitions when you 
are pointed to the new plan
* When your credit allotment runs out - we’d love for you to 
consider which of our plans will meet your needs.
* We will be offering an allotment of OSS minutes that will be 
reviewed and allocated on a case by case basis. Should you want to apply for 
these credits please open a request with Travis CI support stating that you’d 
like to be considered for the OSS allotment. Please include:
* Your account name and VCS provider (like 
travis-ci.com/github/[your account name] )
* How many credits (build minutes) you’d like to 
request (should your run out of credits again you can repeat the process to 
request more or discuss a renewable amount)
* Usage will be tracked under your account information so that 
you can better understand how many credits/minutes are being used

We haven't been building on travis-ci.com, so presumably the first item in the 
list doesn't apply.  If the "We will be offering an allotment..." part applies, 
the "should your run out of credits again you can repeat the process to request 
more or discuss a renewable amount" seems like a pain.

See also this comment:


https://travis-ci.community/t/org-com-migration-unexpectedly-comes-with-a-plan-change-for-oss-what-exactly-is-the-new-deal/10567/15

where the commenter says:

When I emailed support for credits, they gave this list of requirements 
for the so-called 

Re: [tcpdump-workers] libpcap detection and linking in tcpdump

2021-01-08 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On Jan 7, 2021, at 5:41 PM, Denis Ovsienko via tcpdump-workers 
 wrote:

> 5 years of backward compatibility might be OK'ish, although from time
> to time I run into such "long-term support" systems that in practice
> mean someone keeps paying good money for "support" for 5-10 years, but
> they don't get bugs fixed or new software versions backported. In my
> own experience this tends to have something to do with RedHat Linux
> distributions.

Yeah, a lot of people are running old RHEL or CentOS - a lot of them keep 
asking about newer versions of Wireshark, because the older RHEL/CentOS 
versions provide Wireshark 2.x packages when Wireshark's already up to 3.4.2.

They probably provide old tcpdump, too, so people might want to build newer 
versions.

I've checked in a change that picks up some code from a newer version of the 
pkg-config CMake module to make it work with older versions.
--- End Message ---

Re: [tcpdump-workers] libpcap detection and linking in tcpdump

2021-01-07 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On Jan 7, 2021, at 3:21 PM, Guy Harris via tcpdump-workers 
 wrote:

> So we should either 1) require CMake 3.1 or later or 2) forcibly set 
> PKG_CONFIG_USE_CMAKE_PREFIX_PATH to YES.  For now, my inclination is to do 
> the latter.

That won't work - PKG_CONFIG_USE_CMAKE_PREFIX_PATH *isn't supported* prior to 
3.1.

3.1 dates back to 2015.  That might be sufficient to treat as a minimum.
--- End Message ---

Re: [tcpdump-workers] libpcap detection and linking in tcpdump

2021-01-07 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On Sep 9, 2020, at 9:07 AM, Denis Ovsienko via tcpdump-workers 
 wrote:

> The "Found pcap-config" message means that FindPCAP.cmake has not found
> libpcap by means of pkg-config, and the lack of the message means the
> pkg-config method worked. A few weeks ago Ubuntu 18.04 systems started
> to have the libpcap.pc file in the libpcap0.8-dev package, so on such a
> system "pkg-config --libs libpcap" now prints "-lpcap" and "pkg-config
> --cflags libpcap" prints an empty string, which makes sense.

It makes sense if you want to build with the *system* libpcap.

It does *not* make sense if you want to build with the libpcap that was built, 
and installed under /tmp, in the Travis build.  For *that*, you'd want 
"pkg-config --libs libpcap" to print "-L/tmp/lib -lpcap" and you'd want 
pkg-config --cflags libpcap" to print "-I /tmp/include".

That's because it's finding the libpcap.pc file in the libpcap0.8-dev package, 
not the one in /tmp/lib/pkgconfig; the latter one should do what we want.

I changed .travis.yml to run CMake with PKG_CONFIG_PATH=/tmp/lib/pkgconfig, 
which appears to make it find the right .pc file and thus to find the right 
libpcap.

> What does not make sense is that CMake seems to use non-pkg-config
> flags to tell if a specific feature is available,

It *should* be using the pkg-config flags - the code that tests for features 
just does

#
# libpcap/WinPcap/Npcap.
# First, find it.
#
find_package(PCAP REQUIRED)
include_directories(${PCAP_INCLUDE_DIRS})

cmake_push_check_state()

#
# Now check headers.
#
set(CMAKE_REQUIRED_INCLUDES ${PCAP_INCLUDE_DIRS})

#
# Check whether we have pcap/pcap-inttypes.h.
# If we do, we use that to get the C99 types defined.
#
check_include_file(pcap/pcap-inttypes.h HAVE_PCAP_PCAP_INTTYPES_H)

#
# Check for various functions in libpcap/WinPcap/Npcap.
#
cmake_push_check_state()
set(CMAKE_REQUIRED_LIBRARIES ${PCAP_LIBRARIES})

#
# Check for "pcap_list_datalinks()" and use a substitute version if
# it's not present.  If it is present, check for 
"pcap_free_datalinks()";
# if it's not present, we don't replace it for now.  (We could do so
# on UN*X, but not on Windows, where hilarity ensues if a program
# built with one version of the MSVC support library tries to free
# something allocated by a library built with another version of
# the MSVC support library.)
#
check_function_exists(pcap_list_datalinks HAVE_PCAP_LIST_DATALINKS)

...

cmake_pop_check_state()

which doesn't care whether PCAP_INCLUDE_DIRS and PCAP_LIBRARIES were set from 
pkg-config or pcap-config or manually poking the system.

> but uses pkg-config
> flags to compile and link the source,

*However*, the CMake documentation says about CMAKE_PREFIX_PATH:

Semicolon-separated list of directories specifying installation 
prefixes to be searched by the find_package(), find_program(), find_library(), 
find_file(), and find_path() commands.

and we're setting CMAKE_PREFIX_PATH, so if any of the include or library checks 
use CMAKE_PREFIX_PATH, then they might find headers for the libpcap installed 
in /tmp/libpcap, even though the build itself will use flags from the system 
libpcap.pc.

The CMake 3.19 documentation for FindPkgConfig, which is the module for using 
pkg-config:

https://cmake.org/cmake/help/v3.19/module/FindPkgConfig.html

says:

PKG_CONFIG_USE_CMAKE_PREFIX_PATH
Specifies whether pkg_check_modules() and pkg_search_module() 
should add the paths in the CMAKE_PREFIX_PATH, CMAKE_FRAMEWORK_PATH and 
CMAKE_APPBUNDLE_PATH cache and environment variables to the pkg-config search 
path.

If this variable is not set, this behavior is enabled by 
default if CMAKE_MINIMUM_REQUIRED_VERSION is 3.1 or later, disabled otherwise.

So we should either 1) require CMake 3.1 or later or 2) forcibly set 
PKG_CONFIG_USE_CMAKE_PREFIX_PATH to YES.  For now, my inclination is to do the 
latter.

Once all the other stuff I've checked in passes Travis, I'll try that instead 
of explicitly setting PKG_CONFIG_PATH, and see if that works.

> and when the system has one
> libpcap version installed as a package and another version that the
> user wants to build with, that very easily breaks (and even if it does
> not, the end result is not what the user was expecting).
> 
> Here are my steps to reproduce:
> 
> libpcap$ ./configure --enable-remote --prefix=/tmp/libpcap
> libpcap$ make
> libpcap$ make install
> tcpdumpbuild$ cmake -DWITH_CRYPTO="yes"
> -DCMAKE_PREFIX_PATH=/tmp/libpcap -DCMAKE_INSTALL_PREFIX=/tmp/libpcap
> /path/to/tcpdump_git_clone

Try that with

PKG_CONFIG_PATH=/tmp/libpcap/lib/pkgconfig cmake -DWITH_CRYPTO="yes" 
-DCMAKE_PREFIX_PATH=/tmp/libpcap 

Re: [tcpdump-workers] libpcap detection and linking in tcpdump

2021-01-07 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On Jan 7, 2021, at 9:35 AM, Bill Fenner via tcpdump-workers 
 wrote:

> These jobs are still failing, but now for a different reason.

Part of the problem is that pkg-config wasn't finding the locally-installed 
libpcap under /tmp, because PKG_CONFIG_PATH wasn't set to point to 
/tmp/lib/pkgconfig.

We're now doing that, and I re-enabled those jobs; so far, the GCC builds on 
Linux seem to be working for BUILD_LIBPCAP=yes CMAKE=yes.
--- End Message ---

[tcpdump-workers] So which is cooler - tcpdump on your wrist or tcpdump on your Mac's Touch Bar?

2021-01-05 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
$ curl -s 
https://opensource.apple.com/source/tcpdump/tcpdump-100/tcpdump.xcodeproj/project.pbxproj.auto.html
 | egrep SUPPORTED_PLATFORMS
SUPPORTED_PLATFORMS = macosx iphoneos 
appletvos watchos bridgeos;
SUPPORTED_PLATFORMS = macosx iphoneos 
appletvos watchos bridgeos;

(bridgeOS is the apparently-watchOS-derived OS that runs on the T-series chips 
that run the Touch Bar on Touch Bar Macs and that handle secure booting and 
possibly some other security stuff.)

I don't know whether it ships on iOS/iPadOS/tvOS/watchOS/bridgeOS or is just 
built to be used in-house.
--- End Message ---

Re: [tcpdump-workers] tcpdump build doc for Windows

2021-01-03 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On Jan 3, 2021, at 12:15 PM, Denis Ovsienko via tcpdump-workers 
 wrote:

> tcpdump source tree has a short file named "Readme.Win32", which was
> mostly updated on 8 Aug 2019, and a longer file named
> "doc/README.Win32.md", which was mostly updated on 5 Feb 2020. These
> seem to provide somewhat different instructions, perhaps it would be a
> good time to review that.

They're both reasonably up-to-date (they both mention Npcap and the use of 
CMake), but the latter is more detailed.

I'll check whether the first document says anything that's not mentioned in the 
second, and try to merge that into the second one.  Then we can probably get 
rid of the first one.

The top level README.md should probably point to doc/README.Win32.md (or 
README.Windows.md, given that 1) Windows can also be 64-bit and 2) 16-bit 
Windows is pretty much dead, so people are unlikely to get confused and say 
"OK, how do I build this 16-bit?", the answer to which is "we don't even 
support that on UN*X...").
--- End Message ---

Re: [tcpdump-workers] Performance impact with multiple pcap handlers on Linux

2020-12-22 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On Dec 22, 2020, at 3:31 PM, Linus Lüssing  wrote:

> Basically we want to do live measurements of the overhead of the mesh
> routing protocol and measure and dissect the layer 2 broadcast traffic.
> To measure how much ARP, DHCP, ICMPv6 NS/NA/RS/RA, MDNS, LLDP overhead
> etc. we have.

OK, so I'm not a member of the bpf mailing list, so this message won't get to 
that list, but:

Given how general (e)BPF is in Linux, and given the number of places where you 
can add an eBPF program, and given the extensions added by the "(e)" part, it 
might be possible to:

construct a single eBPF program that matches all of those packet types;

provides, in some fashion, an indication of *which* of the packet types 
matched;

provides the packet length as well.

If you *only* care about the packet counts and packet byte counts, that might 
be sufficient if the eBPF program can be put into the right place in the 
networking stack - it would also mean that the Linux kernel wouldn't have to 
copy the packets (as it does for each PF_PACKET socket being used for 
capturing, and there's one of those for every pcap_t), and your program 
wouldn't have to read those packets.

libpcap won't help you there, as it doesn't even know about eBPF, much less 
about its added capabilities, but it sounds as if this is a Linux-specific 
program, so that doesn't matter.  There may be a compiler allowing you to write 
a program to do what's described above and get it compiled into eBPF.

I don't know whether there's a place in the networking stack to which you can 
attach an eBPF probe to do this, but I wouldn't be surprised to find out that 
there is one.
--- End Message ---

Re: [tcpdump-workers] Performance impact with multiple pcap handlers on Linux

2020-12-22 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On Dec 22, 2020, at 2:05 PM, Linus Lüssing via tcpdump-workers 
 wrote:

> I was experimenting a bit with migrating from the use of
> pcap_offline_filter() to pcap_setfilter().
> 
> I was a bit surprised that installing for instance 500 pcap
> handlers

What is a "pcap handler" in this context?  An open live-capture pcap_t?

> with a BPF rule "arp" via pcap_setfilter() reduced
> the TCP performance of iperf3 over veth interfaces from 73.8 Gbits/sec
> to 5.39 Gbits/sec. Using only one or even five handlers seemed
> fine (71.7 Gbits/sec and 70.3 Gbits/sec).
> 
> Is that expected?
> 
> Full test setup description and more detailed results can be found
> here: https://github.com/lemoer/bpfcountd/pull/8

That talks about numbers of "rules" rather than "handlers".  It does speak of 
"pcap *handles*"; did you mean "handles", rather than "handlers"?

Do those "rules" correspond to items in the filter expression that's compiled 
into BPF code, or do they correspond to open `pcap_t`s?  If a "rule" 
corresponds to a "handle", then does it correspond to an open pcap_t?

Or do they correspond to an entire filter expression?

Does this change involve replacing a *single* pcap_t, on which you use 
pcap_offline_filter() with multiple different filter expressions, with 
*multiple* pcap_t's, with each one having a separate filter, set with 
pcap_setfilter()?  If so, note that this involves replacing a single file 
descriptor with multiple file descriptors, and replacing a single ring buffer 
into which the kernel puts captured packets with multiple ring buffers into 
*each* of which the kernel puts captured packets, which increases the amount of 
work the kernel does.
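One way to keep a single pcap_t (one file descriptor, one ring buffer) while still counting many rules, sketched below: install the logical OR of all the per-rule expressions as the kernel filter, so each matching packet is copied once, and then classify each captured packet against the individual compiled filters in user space (e.g. with pcap_offline_filter()). The rule names and expressions here are invented examples; only the string-combining step is shown, since opening a live handle needs capture privileges:

```python
# Hypothetical per-rule filter expressions (made-up examples).
rules = {
    "arp":   "arp",
    "dhcp":  "udp port 67 or udp port 68",
    "icmp6": "icmp6",
}

# One kernel filter that matches anything any individual rule matches;
# parenthesize each expression so the "or" binds correctly.
combined = " or ".join(f"({expr})" for expr in rules.values())
assert combined == "(arp) or (udp port 67 or udp port 68) or (icmp6)"
```

The combined string would be handed to pcap_compile()/pcap_setfilter() once; the per-rule expressions would each be compiled separately for the user-space classification pass.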

> PS: And I was also surprised that there seems to be a limit of
> only 510 pcap handlers on Linux.

"handlers" or "handles"?

If it's "handles", as in "pcap_t's open for live capture", and if you're 
switching from a single pcap_t to multiple pcap_t's, that means using more file 
descriptors (so that you may eventually run out) and more ring buffers (so that 
the kernel may eventually say "you're tying up too much wired memory for all 
those ring buffers").

In either of those cases, the attempt to open a pcap_t will eventually get an 
error; what is the error that's reported?
--- End Message ---

Re: [tcpdump-workers] [OPSAWG] [pcap-ng-format] draft-gharris-opsawg-pcap.txt --- IANA considerations

2020-12-22 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On Dec 22, 2020, at 8:36 AM, Michael Richardson  wrote:

> Guy Harris  wrote:
> 
>> And, as per my idea of using 65535 to mean "custom linktype", similar
>> to pcapng custom blocks and options, with either:
> 
> I'm happy with this proposal, but isn't it pcapng specific?

No - it's *cleaner* to implement in pcapng, as you can use Interface 
Description Block (IDB) options to provide the Private Enterprise Number (PEN) 
and an enterprise-specific encapsulation type, but:

if we go with the PEN and and enterprise-specific encapsulation type 
with IDB options, for pcap we can steal the former time stamp offset 
(Reserved1) and time stamp accuracy (Reserved2) fields, interpreting them as 
the PEN and enterprise-specific encapsulation type, respectively, if the link 
type is 65535;

if we go with the PEN as an IDB option, and say that if an enterprise 
wants more than one encapsulation type, they'd have to put a encapsulation type 
at the beginning of the payload, so, for pcap, we'd steal the former time stamp 
offset (Reserved1), interpreting it as the PEN if the link type is 65535;

if we go with putting the PEN and encapsulation type at the beginning 
of the payload, that would work the same way for pcap as it does for pcapng.
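A sketch of what the first (pcap) variant would look like on disk, under the proposal's reading: link type 65535 in the file header, with the old Reserved1/Reserved2 fields reinterpreted as the PEN and the enterprise-specific encapsulation type. Nothing here is standardized; 32473 is the IANA-reserved example enterprise number (RFC 5612) and the encapsulation code is invented:

```python
import struct

LINKTYPE_CUSTOM = 65535        # proposed "custom linktype" value
PEN = 32473                    # example enterprise number reserved by IANA
VENDOR_ENCAP = 7               # hypothetical enterprise-specific encapsulation

# pcap global header: magic, major, minor, Reserved1 (was thiszone),
# Reserved2 (was sigfigs), snaplen, link type.
header = struct.pack("<IHHIIII", 0xA1B2C3D4, 2, 4, PEN, VENDOR_ENCAP,
                     262144, LINKTYPE_CUSTOM)
assert len(header) == 24
```

A reader seeing link type 65535 would then look up the PEN before deciding how to dissect the per-packet payload.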

--- End Message ---

Re: [tcpdump-workers] [pcap-ng-format] draft-gharris-opsawg-pcap.txt --- FCS length description

2020-12-22 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On Dec 22, 2020, at 1:01 AM, Guy Harris  wrote:

> They were originally intended for use with some stuff NetBSD was doing (I'd 
> have to look into the history of the NetBSD code), but I think NetBSD stopped 
> doing that.

The commit message for the change that added the macros was:

commit afbb1ce7227dc5edb291f242ed8d95cd3762fc51
Author: Guy Harris 
Date:   Sat Sep 29 19:33:29 2007 +

Based on work from Florent Drouin, split the 32-bit link-layer type
field in a capture file into:

a 16-bit link-layer type field (it's 16 bits in pcap-NG, and
that'll probably be enough for the foreseeable future);

a 10-bit "class" field, indicating the group of link-layer type
values to which the link-layer type belongs - class 0 is for
regular DLT_ values, and class 0x224 grandfathers in the NetBSD
"raw address family" link-layer types;

a 6-bit "extension" field, storing information about the
capture, such an indication of whether the packets include an
FCS and, if so, how many bytes of FCS are present.

So what NetBSD had was a convention where a capture file could have a 
link-layer type that combined an AF_ value with some additional bits to 
distinguish the value from a regular LINKTYPE_ value; I don't know what AF_ 
values they supported for that, or where that code was, or whether it's still 
supported.
--- End Message ---

Re: [tcpdump-workers] [pcap-ng-format] draft-gharris-opsawg-pcap.txt --- FCS length description

2020-12-22 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On Dec 21, 2020, at 4:31 PM, Michael Richardson  wrote:

> Hi, I have reworked the document that Guy put into XML describing the *PCAP*
> (not NG) format.   I found the text for LinkType to be confusing, and
> frankly, I think wrong.
> 
>   *  LinkType (32 bits): an unsigned value that defines, in the lower
>  16 bits, the link layer type of packets in the file, and
>  optionally indicates the length of the Frame Check Sequence (FCS)
>  of packets in the upper 16 bits.  The list of Standardized Link
>  Layer Type codes is available in [LINKTYPES].  If bit 5 is set,
>  bits 0 through 3 contain the length of the FCS field at the end of
>  all packets; if bit 5 is not set, the length of the FCS field at
>  the end of all packets is unknown.  Bit 4, and bits 6 through 15,
>  SHOULD be filled with 0 by pcap file writers, and MUST be ignored
>  by pcap file readers.

Perhaps that field should be called "LinkTypeandInfo", or something such as 
that, to indicate that only the lower 16 bits are the link type.  (Link-layer 
header types are shared by pcap and pcapng, and the link-layer header type in a 
pcapng Interface Description Block is 16 bits.)

> Looking at libpcap's pcap/pcap.h:
>   https://github.com/the-tcpdump-group/libpcap/blob/master/pcap/pcap.h#L217
> 
> we see:
> /*
> * Macros for the value returned by pcap_datalink_ext().
> *
> * If LT_FCS_LENGTH_PRESENT(x) is true, the LT_FCS_LENGTH(x) macro
> * gives the FCS length of packets in the capture.
> */
> #define LT_FCS_LENGTH_PRESENT(x)  ((x) & 0x04000000)
> #define LT_FCS_LENGTH(x)  (((x) & 0xF0000000) >> 28)
> #define LT_FCS_DATALINK_EXT(x)  ((((x) & 0xF) << 28) | 0x04000000)
> 
> this suggests that the FCS length is really only 3 bits (maximum FCS size of
> 7 bytes?  Or does 0 indicate 8 bytes?  Ethernet is 4...).

0 indicates "no FCS present".

And, yes, the spec should indicate that.

> I see, however:
>   pcap-dag.c:
>p->linktype_ext = LT_FCS_DATALINK_EXT(pd->dag_fcs_bits/16);
> 
> I can find no other references.  So I guess Ethernet gets a value of 2 (*16 
> bits).

Yes, the length of the FCS is in 16-bit units.

And, yes, the spec should indicate that.
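Putting the two answers together — the length is in 16-bit units, and 0 means "no FCS present" — the LT_FCS_* macros from pcap/pcap.h (with their full 32-bit constants) behave like this Python transcription:

```python
def lt_fcs_length_present(x): return bool(x & 0x04000000)
def lt_fcs_length(x):         return (x & 0xF0000000) >> 28
def lt_fcs_datalink_ext(u):   return ((u & 0xF) << 28) | 0x04000000

# Ethernet: a 4-byte FCS is 32 bits = 2 sixteen-bit units (dag_fcs_bits/16).
ext = lt_fcs_datalink_ext(32 // 16)
assert lt_fcs_length_present(ext)
assert lt_fcs_length(ext) == 2                       # i.e. a 4-byte FCS
assert lt_fcs_length(lt_fcs_datalink_ext(0)) == 0    # 0 = packets carry no FCS
```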

> I can't find any other uses.
> pcap_datalink_ext() is in pcap.c, but no the man page.
> 
> The code does not ignore bits 28:16 of the linktype field (the bits numbered
> 6:15 in the diagram).

They were originally intended for use with some stuff NetBSD was doing (I'd 
have to look into the history of the NetBSD code), but I think NetBSD stopped 
doing that.

> Since we are nowhere close to 64K link types, from looking at the pcap
> document only, we could make it 28-bits:
> BUT: pcapng has a 16-bit LinkType only, so we really need to limit 
> this to
> 16-bits OOPS.  I'll fix this in -01.
> 
> What I'm looking for in this email is:
> 1) confirmation that Linktype is 16-bits.

Yes.

> 2) some explanation of valid FCS values. Seems to be a multiple of 16-bits.
>   Is 0 valid?

Yes - it means "packets do not contain an FCS".

>  Or would that be indicated by LENGTH_PRESENT(x)==0?

*That* means "the FCS length, or whether there is an FCS, is unknown"; 
Wireshark does some heuristics to try to guess whether Ethernet packets have an 
FCS (I added those because, a long time ago, in a galaxy far far away, some 
Macs delivered Ethernet FCSes when capturing over BPF, and that messed up 
packet dissection in some cases).
--- End Message ---

Re: [tcpdump-workers] [pcap-ng-format] draft-gharris-opsawg-pcap.txt --- IANA considerations

2020-12-21 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
(Resent, from the correct address this time.)

On Dec 21, 2020, at 5:51 PM, Michael Richardson  wrote:

> The short of it is:
> 
> 1) reserve bits 16:28 of linktype as zero.

In pcap files, presumably; you have only bits 0:15 in pcapng IDBs.

Note that the registry is for both pcap and pcapng, so the specs should say 
that.

> 2) lower 32K Specification Required (any document),
>  upper 32K First Come First Served
> 
> Details:
> The Registry has three sections according to {{RFC8126}}:
> * values from 0 to 32767 are marked as Specification Required.
> *   except that values 147 to 162 are reserved for Private Use
> * values from 32768 to 65000 are marked as First-Come First-Served.
> * values from 65000 to 65536 are marked as Private Use.

Presumably "to 65535" - 65536 doesn't fit in the 16-bit pcapng field.
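As a sanity check on the proposed split, here is one reading of the quoted ranges as a small classifier. This is an interpretation of the proposal, not an adopted registry: the overlapping boundaries in the quote (32768 appearing in both the Specification Required and FCFS wording, 65000 in both FCFS and Private Use) are resolved here as 0-32767, 32768-64999, and 65000-65535:

```python
def registry_policy(linktype):
    """Registration policy for a link type under the proposed ranges
    (hypothetical helper; boundary overlaps resolved as noted above)."""
    if not 0 <= linktype <= 65535:
        raise ValueError("link types are 16-bit values")
    if 147 <= linktype <= 162:
        return "Private Use (reserved)"
    if linktype <= 32767:
        return "Specification Required"
    if linktype <= 64999:
        return "First-Come First-Served"
    return "Private Use"

assert registry_policy(1) == "Specification Required"      # LINKTYPE_ETHERNET
assert registry_policy(150) == "Private Use (reserved)"
assert registry_policy(40000) == "First-Come First-Served"
assert registry_policy(65535) == "Private Use"
```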

So, for FCFS, does that mean anybody who wants a linktype can just grab one?

And, as per my idea of using 65535 to mean "custom linktype", similar to pcapng 
custom blocks and options, with either:

the pcap file header/pcapng IDB option containing a Private Enterprise 
Number and private linktype number;

the pcap file header/pcapng IDB option containing a Private Enterprise 
Number, with any linktype specifier being in the link-level header;

the Private Enterprise Number and anything else being in the link-level 
header;

should we reserve 65535?

> I did some editing of the description field to shorten in a lot, but I got
> tired about 30% through the list, not sure if we should even include that
> column.
> There are many entries like:
>  LINKTYPE_PPP_ETHER  |   51   |PPPoE; per RFC 2516

That one's there for NetBSD; I *think* the packet contains just a PPPoE header 
and payload.  I may have to dig into the NetBSD code to see what they do.
--- End Message ---

Re: [tcpdump-workers] pcap_lookupdev returning NULL

2020-11-05 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On Nov 5, 2020, at 1:04 AM, Vaughan Wickham  wrote:

> Appreciate all the info that you have provided.
> 
> Although it probably doesn't look like it from my questions; I did actually 
> read some tutorials prior to posting my initial question; and none made 
> reference to the need for:
> sudo setcap cap_net_raw,cap_net_admin+eip {your program} 
> 
> So I'm wondering if you can suggest some reading that I should review to 
> understand the basics of using libpcap.

I suspect most, if not all, tutorials spend little if any time discussing the 
platform-dependent permission issues with capturing traffic with libpcap; they 
probably focus on "how to write code using libpcap", not "how to arrange that 
your program have enough privileges to do something useful with libpcap".

The only discussions I can offer for the "permissions" issue are:

1) the "capture privileges" page of the Wireshark Wiki:


https://gitlab.com/wireshark/wireshark/-/wikis/CaptureSetup/CapturePrivileges

   and, for your case, this particular subsection of that page:


https://gitlab.com/wireshark/wireshark/-/wikis/CaptureSetup/CapturePrivileges#other-linux-based-systems-or-other-installation-methods

2) the main pcap man page:

https://www.tcpdump.org/manpages/pcap.3pcap.html

   in the subsection that begins with "Reading packets from a network 
interface may require that you have special privileges:".

> Also, where can I find an overview of the key differences between version 
> 1.5.3 and the current release?

There isn't one.  In this *particular* case, the difference (which may have 
been introduced before the current 1.9 version) is that pcap_findalldevs() 
(atop which pcap_lookupdev() is built) checks for operability in older releases 
and doesn't do so for newer releases.  However, as noted, the permissions 
required to open a device for capture does *not* differ (and *can't* differ - 
it's a requirement imposed by the OS kernel) between older and newer versions.
--- End Message ---


Re: [tcpdump-workers] pcap_lookupdev returning NULL

2020-11-04 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On Nov 4, 2020, at 10:26 PM, Vaughan Wickham  wrote:

> In regards to your latest comments regarding
> 
> sudo setcap cap_net_raw,cap_net_admin+eip {your program}
> 
> Are you saying that I need to compile my program and then start the compiled 
> version with these arguments, from a terminal?

No.

You need to compile your program (within the IDE or on the command line), 
execute, on the command line, the command

sudo setcap cap_net_raw,cap_net_admin+eip {your program}

where {your program} is the path to the executable that was built, and then you 
can run the program from the command line or from the IDE.

> Alternatively, while I've been happy using CentOS as a development 
> environment up until now. As I'm planning on doing some work with pcap; if 
> there is a "better" distro for doing pcap development I'm more than happy to 
> build another development system using whatever flavour is easiest to develop 
> with.

Note that, as I said, getting a newer version of libpcap will *not* remove the 
requirement that you run your program with special privileges; all it means is 
that pcap_lookupdev() will not require the special privileges, but if you plan 
to *open* the device that it returns, your program will have to run with, at 
minimum, the cap_net_raw privileges.

And all that choosing a distribution other than CentOS will do is perhaps 
change the libpcap version.

> Basically I would like to be able build and execute within the IDE.

Unless you can arrange that the IDE run a special command, *as root*, as part 
of the build process, you won't be able to do everything within the IDE.

The command in question is "setcap cap_net_raw,cap_net_admin+eip {the program 
that was built}".  It will have to ask you for root privileges, which means 
that, if you want to avoid the command line, the IDE will have to run some GUI 
program that asks for your password, or the password of somebody with rights to 
run a program as root (that's what sudo, on the command line, does, but I don't 
know whether any version of sudo can do a GUI prompt when not run on the 
command line) and then run a command as root.

You will also have to have whatever privileges sudo, or the GUI program, 
requires you to have in order for it to allow you to run a program as root.--- End Message ---


Re: [tcpdump-workers] pcap_lookupdev returning NULL

2020-11-04 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On Nov 4, 2020, at 9:18 PM, Vaughan Wickham  wrote:

> Version: libpcap version 1.5.3

That's an older version (CentOS, proudly trailing-edge!), and only returns 
interfaces that the program can open.

Capturing on Linux generally requires, at minimum, the CAP_NET_RAW privilege, 
and finding devices might also require CAP_NET_ADMIN; root privilege will also 
work.  As such, your program will, by default, not be able to open *any* capture 
device, so:

1) if you were using a sufficiently more recent version of libpcap, which 
returns interfaces that the program doesn't have sufficient privileges to open 
(so that the user gets a "permission denied" error when trying to capture, 
which is somewhat clear about the underlying problem, rather than just not 
seeing any devices), you'd get "eth0" but then you'd get an error trying to 
open it (presumably that's why you're calling pcap_lookupdev());

2) you need to give your program sufficient privileges.

So try doing

sudo setcap cap_net_raw,cap_net_admin+eip {your program}

and then running the program.  ("cap_net_admin" might not be necessary with 
1.5.1.)--- End Message ---


Re: [tcpdump-workers] pcap_lookupdev returning NULL

2020-11-04 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
What happens if you put

printf("Version: %s\n", pcap_lib_version());

before the pcap_lookupdev() call?

It won't fix the pcap_lookupdev() call not to return NULL, but it'll indicate 
what version of libpcap your program is using, which might help determine what 
the problem is.  Let us know what the "Version:" output is.--- End Message ---


Re: [tcpdump-workers] [RFC] Addition of link-layer header types for PCI, PCI-X, and PCI-Express

2020-10-25 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On Oct 21, 2020, at 1:56 PM, Aki Van Ness via tcpdump-workers 
 wrote:

> I'm working on a project that plans to store PCI and PCI-Express
> packets in the pcapng format as that's the most appropriate storage
> format and I really rather not roll something custom.
> 
> As such what are thoughts on adding Link-Layer types for PCI, PCI-X,
> and PCI-Express?

It seems reasonable, given that we have USB, Infiniband, and the DisplayPort 
AUX channel.

> And would you want to group all versions of PCI, PCI-X, and
> PCI-Express together or have them be their own values?

Would each version need its own LINKTYPE_ value, or would a single metadata 
header and payload suffice for all versions of PCI, all versions of PCI-X, and 
all versions of PCIe?  From a quick look at the Wikipedia pages for those, for 
what that's worth, the changes for each seem to be at the physical layer, with 
full or at least significant backwards compatibility, so, other than additional 
bits of metadata, would LINKTYPE_PCI, LINKTYPE_PCI_X, and LINKTYPE_PCI_EXPRESS 
suffice?

I'm assuming that the metadata would be different between PCI, PCI-X, and PCIe.
--- End Message ---


Re: [tcpdump-workers] [pcap-ng-format] New Version Notification for draft-tuexen-opsawg-pcapng-02.txt

2020-10-18 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On Oct 18, 2020, at 1:32 AM, Michael Tuexen  wrote:

> Just a note. I'm using the .xml format and put a link in the README.md, which 
> shows the .txt or .html file based on the current .xml.

Yes, that's what we were doing for the pcapng draft before switching to 
kramdown-rfc2629, and what we're still doing for the pcap draft, because I 
haven't yet switched it to kramdown-rfc2629.

That works for xml2rfc XML; it doesn't work for kramdown-rfc2629, because 
xml2rfc.tools.ietf.org doesn't have a converter that goes from kramdown-rfc2629 
directly to HTML/PDF/text.--- End Message ---


Re: [tcpdump-workers] [pcap-ng-format] New Version Notification for draft-tuexen-opsawg-pcapng-02.txt

2020-10-17 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On Oct 17, 2020, at 6:01 PM, Michael Richardson  wrote:

> Guy Harris via tcpdump-workers  wrote:
> 
>> So is there anything we do to arrange that the "Current committed
>> version as ..." links on the GitHub repository home page work again?
> 
> Yes, there is a travis-ci process that generated the gh-pages.
> I haven't done that yet.  It was slightly involved before, but it has gotten
> significantly easier I'm told.  Travis-CI does the work.
> Oops, it is now "Circle" (CI) that is doing it:
> 
>   https://github.com/martinthomson/i-d-template/blob/main/doc/REPO.md
>   _Automatic Update for Editor's Copy with Circle CI_

So, once that's done, there will be .xml files in the repository, built from 
the .md files?  (One file, currently, but at some point I'll convert the pcap spec 
to kramdown-rfc2629 as well.)

>> Or is there some site that will run kramdown-rfc2629 on a Markdown file
>> and run xml2rfc on the result, along the lines of what
>> xml2rfc.tools.ietf.org does?  I haven't gotten
> 
>> https://xml2rfc.tools.ietf.org/experimental.html#kramdown
> 
>> to work - I tried pasting
>> https://raw.githubusercontent.com/pcapng/pcapng/master/draft-tuexen-opsawg-pcapng.md
>> into the URL box, selecting "Window", and hitting Submit, but it didn't
>> seem to work, it just popped up a blank
> 
> generally, "gem install kramdown-rfc2629" is all you need to do.

Yes, that worked for me on macOS Catalina.  (I'm not sure whether Big Sur ships 
with Ruby or not; Apple wants to stop shipping the scripting language 
interpreters - or, at least, the ones not required by Single UNIX Spec 
conformance - probably because they don't want to be responsible for making it 
work *and* for keeping it up to date.  That's the tradeoff with "OS vendor 
provides third-party software" - the good news is it's there without having to 
download it, the bad news is that you may not be getting the latest version.)

But that means that I can generate the .xml files on my machine, for checking 
purposes.  I was trying to find a Web server that could be handed a URL for a 
kramdown-rfc2629 document and that would return an HTML/PDF/txt version of the 
document, for display in the browser, similar to what the xml2rfc server does 
for URLs pointing to an xml2rfc XML document, so that we could add links to 
README.md.--- End Message ---


Re: [tcpdump-workers] [pcap-ng-format] New Version Notification for draft-tuexen-opsawg-pcapng-02.txt

2020-10-17 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On Oct 17, 2020, at 4:19 PM, Guy Harris  wrote:

> Or is there some site that will run kramdown-rfc2629 on a Markdown file and 
> run xml2rfc on the result, along the lines of what xml2rfc.tools.ietf.org 
> does?  I haven't gotten
> 
>   https://xml2rfc.tools.ietf.org/experimental.html#kramdown
> 
> to work - I tried pasting 
> https://raw.githubusercontent.com/pcapng/pcapng/master/draft-tuexen-opsawg-pcapng.md
>  into the URL box, selecting "Window", and hitting Submit, but it didn't seem 
> to work, it just popped up a blank window.

OK, that appears to go from kramdown-rfc2629 to xml2rfc XML format, but doesn't 
go the rest of the way; they don't appear to have a converter that takes 
kramdown-rfc2629 as input and gives you HTML/text/PDF as output.--- End Message ---


Re: [tcpdump-workers] [pcap-ng-format] New Version Notification for draft-tuexen-opsawg-pcapng-02.txt

2020-10-17 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On Sep 28, 2020, at 4:28 PM, Michael Richardson  wrote:

> Guy Harris  wrote:
>> For 2), I note that
> 
>>  
>> https://github.com/pcapng/pcapng/blob/master/draft-tuexen-opsawg-pcapng.md
> 
>> has a bunch of stuff that GitHub isn't treating as markup, such as the
>> stuff prior to the "Introduction" heading, and the tags such as
>> "{::boilerplate bcp14}".  Is that an extension of Markdown not
>> supported by GitHub's Markdown renderer but supported by some
>> Markdown-to-RFC XML converter
> 
> Yes, kramdown-rfc2629. The MT Makefile does all the magic.

(Presumably "the MT Makefile" is the currently checked-in Martin Thomson 
Makefile in GitHub.)

So is there anything we do to arrange that the "Current committed version as 
..." links on the GitHub repository home page work again?

Should we, for now, put the XML documents back into the repository, and either

1) arrange, somehow, that GitHub automatically regenerate the XML 
documents if the Markdown documents are updated, if that's possible

or

2) have everybody who modifies the Markdown documents update the XML 
documents and check them in with the updated Markdown documents?

Or is there some site that will run kramdown-rfc2629 on a Markdown file and run 
xml2rfc on the result, along the lines of what xml2rfc.tools.ietf.org does?  I 
haven't gotten

https://xml2rfc.tools.ietf.org/experimental.html#kramdown

to work - I tried pasting 
https://raw.githubusercontent.com/pcapng/pcapng/master/draft-tuexen-opsawg-pcapng.md
 into the URL box, selecting "Window", and hitting Submit, but it didn't seem 
to work, it just popped up a blank window.--- End Message ---


Re: [tcpdump-workers] libpcap error codes?

2020-10-07 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On Oct 7, 2020, at 3:16 PM, Denis Ovsienko via tcpdump-workers 
 wrote:

> Do you mean to introduce a function like pcap_error(), which the
> developers would be able to interrogate if they need in use cases like
> this? Then existing functions could be slowly updated as needed to store
> the fault details somewhere for that function.

I was thinking of a new API for injecting packets, which would directly return 
a PCAP_ERROR_ value.  A pcap_last_error() routine would also handle that case, 
but if a new routine would return a success vs. failure value, it might as well 
return an error code, so pcap_last_error() would be useful only for existing 
routines.--- End Message ---


Re: [tcpdump-workers] libpcap error codes?

2020-10-07 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On Oct 7, 2020, at 1:30 PM, Fernando Gont via tcpdump-workers 
 wrote:

> WHile using pcap_inject() in Linux, it is failing with "pcap_inject(): send: 
> Resource temporarily unavailable". In principle, one would expect that for 
> temporary problems (such as this one), one may one to wait a bit and retry.  
> So it would make sense to somehow be able to process the error 
> code/condition, and act differently depending on the error type.
> 
> Is there a way to get an error code (say, int value), as opposed to a text 
> describing it?

There isn't, other than looking at errno.

A new API could be added that returns a PCAP_ERROR_ value rather than -1 on 
error (so as not to break source or binary compatibility with code using the 
existing APIs).--- End Message ---


Re: [tcpdump-workers] [OPSAWG] New Version Notification for draft-tuexen-opsawg-pcapng-02.txt

2020-09-30 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On Sep 29, 2020, at 7:14 PM, Qin Wu  wrote:

> Can you clarify what functionality is missing for more modern applications? 
> Since it is an enhancement to libpcap, do you expect all future packet 
> capture tools support the format defined in this draft?

pcapng is a file format that's a replacement for pcap.

The current version of libpcap can read some pcapng files, but it only shows 
what can be shown through the existing pcap API, so most of the enhancements 
don't make a difference to programs using libpcap.  That version of libpcap 
cannot *write* pcapng files.

macOS's version of libpcap has undocumented APIs that allow macOS's tcpdump to 
read and write pcapng files.

Wireshark doesn't use libpcap to read capture files; it fully supports reading 
and writing pcapng files.

In the future, we would like to add new APIs to libpcap that support reading 
and writing pcapng files (and pcap files as well); the new APIs will make all 
of the added capabilities of pcapng available.  However, programs that use 
libpcap will have to be changed to use the new APIs in order to use those added 
capabilities.  tcpdump will probably be the first program updated to use them.--- End Message ---


Re: [tcpdump-workers] New Version Notification for draft-tuexen-opsawg-pcapng-02.txt

2020-09-28 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On Sep 28, 2020, at 2:00 PM, Michael Tuexen  wrote:

> On 28. Sep 2020, at 22:48, Guy Harris  wrote:
> 
>> On Sep 28, 2020, at 1:42 PM, Michael Tuexen  wrote:
>> 
>>> Shouldn't we write up (I can work on an initial version) of
>>> a specification for .pcap.
>> 
>>  
>> https://github.com/pcapng/pcapng/blob/master/draft-gharris-opsawg-pcap.xml
> 
> Cool. Do you want to publish it as an RFC?

At some point.

Currently, I view it as "up for review by the community", and there have been 
pull requests from the community applied.

Should its publication coincide with the introduction of an IANA registry of 
link-layer data types (replacing the tcpdump.org one)?

Should we publish one RFC for the pcap format and one RFC that includes the 
current content of the registry?  (The latter would probably be much bigger 
than the former.)--- End Message ---


Re: [tcpdump-workers] New Version Notification for draft-tuexen-opsawg-pcapng-02.txt

2020-09-28 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On Sep 28, 2020, at 1:42 PM, Michael Tuexen  wrote:

> Shouldn't we write up (I can work on an initial version) of
> a specification for .pcap.


https://github.com/pcapng/pcapng/blob/master/draft-gharris-opsawg-pcap.xml


http://xml2rfc.tools.ietf.org/cgi-bin/xml2rfc.cgi?url=https://raw.githubusercontent.com/pcapng/pcapng/master/draft-gharris-opsawg-pcap.xml=html/ascii=ascii
--- End Message ---


Re: [tcpdump-workers] New Version Notification for draft-tuexen-opsawg-pcapng-02.txt

2020-09-28 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On Sep 28, 2020, at 12:06 PM, Michael Tuexen  wrote:

> On 28. Sep 2020, at 20:26, Michael Richardson  wrote:
> 
>> internet-dra...@ietf.org wrote:
>>> Diff:
>>> https://www.ietf.org/rfcdiff?url2=draft-tuexen-opsawg-pcapng-02
>> 
>> Hi, I have converted the xml to markdown.
> 
> Why? If we want to publish this, it will be published in xmlv3. So
> better to use that format earlier...

There are tools to convert Markdown to v2 or v3 RFC XML:

https://www.rfc-editor.org/pubprocess/tools/

so:

1) is it easier to edit Markdown or RFC XML?

2) is Markdown rich enough to do everything we want to do?

For 2), I note that


https://github.com/pcapng/pcapng/blob/master/draft-tuexen-opsawg-pcapng.md

has a bunch of stuff that GitHub isn't treating as markup, such as the stuff 
prior to the "Introduction" heading, and the tags such as "{::boilerplate 
bcp14}".  Is that an extension of Markdown not supported by GitHub's Markdown 
renderer but supported by some Markdown-to-RFC XML converter, or incomplete 
parts of the RFC XML to Markdown conversion?

In addition, the XML version at


https://github.com/pcapng/pcapng/blob/master/reference-draft-tuexen-opsawg-pcapng.xml

has some additional Decryption Secrets Block secret formats.  Those have data 
formats that *themselves* call for figures, and I'd been trying, at one point, 
to determine how to do that in RFC XML v2 format - it might require v3 format.  
Can that be handled with Markdown?--- End Message ---


Re: [tcpdump-workers] New Version Notification for draft-tuexen-opsawg-pcapng-02.txt

2020-09-28 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On Sep 28, 2020, at 12:06 PM, Michael Tuexen  wrote:

> Do we want to finally publish that? Up to now, I think the point was to
> find a home where it is substantially discussed and improved...

For example, unlike pcap, which is not easily changeable (you *can* change it, 
but that involves adding new magic numbers), pcapng can have new block types 
and option types.

There are extensible protocols with RFCs; that's handled with protocol 
registries:

https://www.iana.org/protocols

and with new I-Ds -> RFCs for extensions.  We'd have to set up registries for 
block and option types if we publish an RFC for pcapng.  We would *also* want a 
registry for link-layer header types, for both pcap and pcapng.

See, for example, RFC 1761

https://tools.ietf.org/html/rfc1761

which specifies the Sun snoop file format, and RFC 3827:

https://tools.ietf.org/html/rfc3827

which sets up a registry for snoop link-layer header types:


https://www.iana.org/assignments/snoop-datalink-types/snoop-datalink-types.xhtml#snoop-datalink-types-2

and adds some new entries to it.--- End Message ---


Re: [tcpdump-workers] tcpdump ack why become more 6 bytes

2020-09-14 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
This is not a security issue; questions about tcpdump should be sent to 
tcpdump-workers@lists.tcpdump.org, which is where I'm sending this question.

On Sep 14, 2020, at 8:22 PM, Accepted <532876...@qq.com> wrote:

> hi, in this picture, I try to use tcpdump to capture packets when a new 
> connection is made.
> but in the three-way handshake, why does the last ACK have 6 more bytes?

If that's Ethernet traffic, it's Ethernet padding.

An ACK-only TCP-over-IPv4 packet with no IP or TCP options has 20 bytes of IP 
header (the "45" at the beginning of the IP header says "IPv4, with a 20-byte 
header"), 20 bytes of TCP header, and no TCP payload, for a total of 40 bytes.  
The Ethernet header is an additional 14 bytes, for a total of 54 bytes.

An Ethernet packet has a minimum size of 64 bytes, including the 4-byte CRC at 
the end of the packet; the CRC is normally not captured, so it doesn't show up 
in tcpdump.  The ACK-only packet must therefore have 6 bytes of padding before 
the 4-byte CRC, to be 64 bytes long.--- End Message ---


Re: [tcpdump-workers] backward compatibility in pcap_loop(3PCAP)?

2020-08-21 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On Aug 21, 2020, at 2:48 PM, Denis Ovsienko via tcpdump-workers 
 wrote:

> The man page says:
> 
>   (In  older  versions  of libpcap, the behavior when cnt was 0
>   was undefined; different platforms and devices  behaved
>   differently,  so  code that  must work with older versions of
>   libpcap should use -1, not 0, as the value of cnt.)
> 
> Would it make sense to move this paragraph to a BACKWARD COMPATIBILITY
> section and to tell which specific version started to recognise 0 as a
> valid value?

That's where other "some of what this manual page says doesn't apply to older 
versions of libpcap" items go, so it'd make sense.

The PACKET_COUNT_IS_UNLIMITED() macro, which is what pcap modules should now be 
using, was introduced in libpcap 1.5, so the first version where either 0 or -1 
should work is 1.5.--- End Message ---


Re: [tcpdump-workers] Using libnetdissect in other code, outside tcpdump source tree

2020-08-12 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On Aug 12, 2020, at 1:31 PM, Guy Harris via tcpdump-workers 
 wrote:

> We should probably have an include/libnetdissect directory in which we 
> install netdissect.h and the headers it requires.

Or include/netdissect.

> However, API-declaring headers should *NEVER* require config.h (there was a 
> particularly horrible case with OpenBSD's version of libz, forcing a painful 
> workaround in Wireshark:

...

> so if anything in netdissect.h depends on config.h definitions, we should try 
> to fix that.

It looks like it's just declaring replacements for strlcat(), strlcpy(), 
strdup(), and strsep() if the platform doesn't provide them.  That should be 
done in a non-public header.

> That leaves ip.h and ip6.h; I'd have to check to see whether they should be 
> considered part of the API or not.

The comments are:

#include "ip.h" /* struct ip for nextproto4_cksum() */
#include "ip6.h" /* struct ip6 for nextproto6_cksum() */

so what should probably be done is have a header for *users* of libnetdissect 
and a separate header for *components* of libnetdissect; the latter can define 
more things.  (The latter would be a non-public header, unless we decide to 
support third-party dissector plugins; that would also mean we'd probably want 
to have something like Wireshark's dissector tables to which those plugins 
would add themselves.)--- End Message ---


Re: [tcpdump-workers] Using libnetdissect in other code, outside tcpdump source tree

2020-08-12 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On Aug 11, 2020, at 4:55 AM, Bill Fenner via tcpdump-workers 
 wrote:

> Is there a plan for a public face for libnetdissect?

At some point we should probably do that.

(Back in the late '90's, I discovered a program called tcpview, which was a 
Motif(!)-based GUI network analyzer based on modified tcpdump code, so people 
*have* used tcpdump's dissection code in their own programs.)

> I've tried teasing it
> out, and I ended up having to install:
> funcattrs.h print.h config.h netdissect.h ip.h ip6.h compiler-tests.h
> status-exit-codes.h
> in /usr/include/tcpdump/ in order to compile a libnetdissect-using program
> outside of the tcpdump source tree.

netdissect.h is the library's main API-declaration header.  print.h also 
declares functions that I'd consider part of libnetdissect's API; 
status-exit-codes.h is also part of that API.

For funcattrs.h and compiler-tests.h, libpcap installs equivalents in the 
include/pcap directory, for use by pcap.h.

We should probably have an include/libnetdissect directory in which we install 
netdissect.h and the headers it requires.

However, API-declaring headers should *NEVER* require config.h (there was a 
particularly horrible case with OpenBSD's version of libz, forcing a painful 
workaround in Wireshark:

/*
 * OK, now this is tricky.
 *
 * At least on FreeBSD 3.2, "/usr/include/zlib.h" includes
 * "/usr/include/zconf.h", which, if HAVE_UNISTD_H is defined,
 * #defines "z_off_t" to be "off_t", and if HAVE_UNISTD_H is
 * not defined, #defines "z_off_t" to be "long" if it's not
 * already #defined.
 *
 * In 4.4-Lite-derived systems such as FreeBSD, "off_t" is
 * "long long int", not "long int", so the definition of "z_off_t" -
 * and therefore the types of the arguments to routines such as
 * "gzseek()", as declared, with prototypes, in "zlib.h" - depends
 * on whether HAVE_UNISTD_H is defined prior to including "zlib.h"!
 *
 * It's not defined in the FreeBSD 3.2 "zlib", so if we include "zlib.h"
 * after defining HAVE_UNISTD_H, we get a misdeclaration of "gzseek()",
 * and, if we're building with "zlib" support, anything that seeks
 * on a file may not work.
 *
 * Other BSDs may have the same problem, if they haven't done something
 * such as defining HAVE_UNISTD_H in "zconf.h".
 *
 * "config.h" defines HAVE_UNISTD_H, on all systems that have it, and all
 * 4.4-Lite-derived BSDs have it.  Therefore, given that "zlib.h" is included
 * by "file_wrappers.h", that means that unless we include "zlib.h" before
 * we include "config.h", we get a misdeclaration of "gzseek()".
 *
 * Unfortunately, it's "config.h" that tells us whether we have "zlib"
 * in the first place, so we don't know whether to include "zlib.h"
 * until we include "config.h"
 *
 * A similar problem appears to occur with "gztell()", at least on
 * NetBSD.
 *
 * To add further complication, on recent versions, at least, of OpenBSD,
 * the Makefile for zlib defines HAVE_UNISTD_H.
 *
 * So what we do is, on all OSes other than OpenBSD, *undefine* HAVE_UNISTD_H
 * before including "wtap-int.h" (it handles including "zlib.h" if HAVE_ZLIB
 * is defined, and it includes "wtap.h", which we include to get the
 * WTAP_ERR_ZLIB values), and, if we have zlib, make "file_seek()" and
 * "file_tell()" subroutines, so that the only calls to "gzseek()" and
 * "gztell()" are in this file, which, by dint of the hackery described
 * above, manages to correctly declare "gzseek()" and "gztell()".
 *
 * On OpenBSD, we forcibly *define* HAVE_UNISTD_H if it's not defined.
 *
 * Hopefully, the BSDs will, over time, remove the test for HAVE_UNISTD_H
 * from "zconf.h", so that "gzseek()" and "gztell()" will be declared
 * with the correct signature regardless of whether HAVE_UNISTD_H is
 * defined, so that if they change the signature we don't have to worry
 * about making sure it's defined or not defined.
 *
 * DO NOT, UNDER ANY CIRCUMSTANCES, REMOVE THE FOLLOWING LINES, OR MOVE
 * THEM AFTER THE INCLUDE OF "wtap-int.h"!  Doing so will cause any program
 * using Wiretap to read capture files to fail miserably on a FreeBSD
 * 3.2 or 3.3 system - and possibly some other BSD systems - if zlib is
 * installed.  If you *must* have HAVE_UNISTD_H defined before including
 * "wtap-int.h", put "file_error()" into a file by itself, which can
 * cheerfully include "wtap.h" and get "gzseek()" misdeclared, and include
 * just "zlib.h" in this file - *after* undefining HAVE_UNISTD_H.
 */

Furthermore, the result of config.h may *also* reflect:

the compiler being used when it was generated, which means that it may 
not be appropriate on platforms with multiple compilers that would produce 
different config.h results, if you're compiling with a compiler other than the 
one used to generate config.h;

the instruction set used as the target when config.h was generated, 
which means that it may not be appropriate on platforms that support fat 
binaries, such as macOS (Apple now only support 

Re: [tcpdump-workers] pcap_compile_nopcap() not in man pages

2020-08-12 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
(Your first attempt seems to have worked - finally.  Perhaps Michael cleared 
the backlog?)

On Aug 10, 2020, at 4:24 AM, Denis Ovsienko via tcpdump-workers 
 wrote:

> It turns out, pcap_compile_nopcap() has been a part of the libpcap API
> since version 0.5 (June 2000), but it is not even mentioned anywhere in
> the man pages. The existing pcap_compile.3pcap man page seems to be the
> best place to add this information, since the two functions are
> similar. Would it be the right thing to do?

The problem with pcap_compile_nopcap() is that it provides no way to return an 
error message if it fails, unlike pcap_open_dead() combined with 
pcap_compile(), where you can use pcap_geterr() before closing the pcap_t.

An additional problem is that NetBSD fixed this by adding an error-buffer 
pointer argument, but that meant that NetBSD's pcap_compile_nopcap() was 
unfixably incompatible with the one in other OSes.  They've shifted to the 
compatible API, at the cost of not being able to get an error string.

So, for now, my inclination is to 1) deprecate pcap_compile_nopcap() (complete 
with marking it as deprecated, so code that uses it gets a compile-time warning 
on compilers where the deprecation macro is supported) and 2) not document it.--- End Message ---


[tcpdump-workers] About DLT_LANE8023 and lane_if_print()

2020-08-12 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On Aug 4, 2020, at 1:28 PM, Francois-Xavier Le Bail  
wrote:

> lane_if_print() in print-lane.c
> (Added by 77b2a4405561467f66a3dfb0f8ce2b0eaa5ebaf9 in Sun Nov 21 1999 "print 
> of ATM LanEmulation")
> is called for DLT_LANE8023:
> 
> print.c-56-#ifdef DLT_LANE8023
> print.c:57: { lane_if_print,DLT_LANE8023 },
> print.c-58-#endif
> (Added by 777892a591065d32fb8744675574f9214398283a in Sun Nov 21 1999 "add 
> lane and cip printing")
> 
> But DLT_LANE8023 was never defined in libpcap nor tcpdump.

A comment in pcap/dlt.h says:

/*
* 17 was used for DLT_PFLOG in OpenBSD; it no longer is.
*
* It was DLT_LANE8023 in SuSE 6.3, so we defined LINKTYPE_PFLOG
* as 117 so that pflog captures would use a link-layer header type
* value that didn't collide with any other values.  On all
* platforms other than OpenBSD, we defined DLT_PFLOG as 117,
* and we mapped between LINKTYPE_PFLOG and DLT_PFLOG.
*
* OpenBSD eventually switched to using 117 for DLT_PFLOG as well.
*
* Don't use 17 for anything else.
*/

However, I downloaded ISO disk 6 from

https://archive.org/download/SuSE6.3-full

mounted it (macOS diskimages-helper for the win!), copied libpcap-0.4a6.spm, 
turned it into a cpio file with rpm2cpio, and extracted the contents; I can't 
see DLT_LANE8023 in either the source (which may be a vanilla version of 
libpcap 0.4a6, often mistakenly thought to be the last libpcap from LBL - 0.4 
non-alpha was the last) or in the SuSE patch, so either

1) there was no DLT_LANE8023 in SuSE 6.3;

2) there was, but it wasn't in libpcap;

3) there was, but it wasn't in *that* libpcap, it was in some *other* 
libpcap (but I couldn't find any other libpcap);

4) that's not an image of SuSE 6.3.

So I checked my mailbox, and found a message from 2000(!) to the ethereal-dev 
mailing list:

https://www.wireshark.org/lists/ethereal-users/28/msg00159.html

in which, among other things, I said:

> So I downloaded an RPM from SuSE's Web site, and the "bpf.h" in it says:
> 
>   /* Warning: not binary compatible with ANK libpcap !!! */
>   #define DLT_LANE8023 17  /* LANE 802.3(Ethernet) */
>   #define DLT_CIP 18  /* ATM Classical IP */

and

> And then, in Linuxland:
> 
>   We have Alexey's patches - which may just have picked stuff up
>   from elsewhere - which add
> 
>   #define DLT_LANE8023 15  /* LANE 802.3(Ethernet) */
>   #define DLT_CIP 16  /* ATM Classical IP */
> 
>   We have the ISDN4Linux patches, which add
> 
>   #define DLT_I4L_RAWIP   15  /* isdn4linux: rawip */
>   #define DLT_I4L_IP  16  /* isdn4linux: ip */
> 
>   And now we have SuSE's, which add the ISDN4Linux stuff, and then
>   add the stuff from Alexey's patches *but with different
>   numbers from the ones in his patches*.

I'm not sure what RPM that was, but the idea was, presumably, that *if* you 
built tcpdump on a system that *did* define DLT_LANE8023 in *its* libpcap, and 
used *its* libpcap, it could print packets that used DLT_LANE8023.

("Alexey"/"ANK" is Alexey Kuznetsov who, among other things, created the 
"turbopacket" patch to the Linux PF_PACKET socket code; that eventually got 
into the mainline kernel - the "T" in "TPACKET_V[123]" stands, I think, for 
"turbo".)

> What to do with this?

As far as I know, neither DLT_LANE8023 nor DLT_CIP are still around in any 
Linux distribution, so nuking support for that's OK with me.  I'm not seeing 
any support for either of them in Wireshark.

Current openSUSE Leap 15.2 does not have DLT_LANE8023 or DLT_CIP.

Is there any reason to keep the code to handle those DLT_ values around?
--- End Message ---


Re: [tcpdump-workers] tcpslice licence

2020-08-03 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On Aug 3, 2020, at 12:33 PM, Denis Ovsienko via tcpdump-workers 
 wrote:
> 
> Whilst updating the description of files in tcpslice (the little
> relative of tcpdump) repository, it came to my attention that it does
> not have the customary LICENSE file. I have looked through the .c
> and .h files and they contain the following boilerplates:
> 
> * a 4-clause BSD-style licence seemingly derived from the so-called
>  LBNL 3-clause BSD: https://opensource.org/BSD-3-Clause-LBNL
> * a 3-clause BSD licence with the same text as above and two clauses
>  merged together
> * GPL2+
> 
> Would it be difficult to tell which licence is the right one for the
> program, and to say it in a LICENSE file for clarity?

The first step I'd take would be to get rid of the GPLed headers in favor of 
BSD-licensed headers, e.g. taking the ip.h, tcp.h, and udp.h headers from 
tcpdump and changing the code to work with them.

What remains are:

1) files such as tcpslice.c, which have a 3-clause variation of the original 
4-clause BSD license:

https://spdx.org/licenses/BSD-4-Clause.html

that puts the fourth clause ("don't use our name to endorse or promote products 
derived from this software without specific prior written permission") in a 
separate sentence, with no number, after the third clause ("give us credit by 
name");

2) files such as sessions.c, which have a 3-clause BSD license:

https://spdx.org/licenses/BSD-3-Clause.html

(with a slight wording tweak - just "The name of the author" rather than 
"Neither the name of the copyright holder nor the names of its contributors", 
probably because the copyright holder is the only contributor).

The 3-clause variation of the original 4-clause BSD license has the 
"advertising clause" ("All advertising materials mentioning features or use of 
this software must display the following acknowledgement: This product includes 
software developed by {XXX}.").

However, the 3-clause LBNL license you mention above is different - it's the 
LBNL version of the 3-clause BSD license, that has 3 numbered clauses because 
it doesn't have the advertising clause, not because it doesn't give the fourth 
clause of the original 4-clause BSD license a number.

A while ago, I tried contacting people at LBNL to see whether the big BSD "we 
hereby drop the advertising clause" letter applied to code licensed by LBNL.  I 
seem to remember not getting a definitive answer; I can't find *any* answer in 
my mail any more.  (Time to run find | xargs egrep on my mail directory?)

However, the 3-clause LBNL license *does* remove the clause - *and* the page 
you cite gives

> License Steward: 
> Sebastian Ainslie
> Principal Commercialization & Licensing Lead
> Computing Sciences Area & Energy Geosciences Division
> Intellectual Property Office, Lawrence Berkeley National Laboratory

so I'll try contacting Mr. Ainslie to see whether we can replace the 
3-clause-plus-one-unnumbered-clause LBL license with the 3-clause LB(N)L 
license in libpcap, tcpdump, and tcpslice.

--- End Message ---


Re: [tcpdump-workers] Proposed update to DLT_BLUETOOTH_LE_LL_WITH_PHDR

2020-07-13 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On Jul 13, 2020, at 8:09 AM, Sultan Khan  wrote:

> Hmm. Chris Kilgour (whiterocker) originally created the spec, and the version 
> on tcpdump.org was just a backup copy. Now, Chris has said that he is no 
> longer active in the Bluetooth LE sniffing space, and he doesn’t want to be 
> in charge of the spec any more.

Does this also apply to the LINKTYPE_BLUETOOTH_BREDR_BB specification?
--- End Message ---


Re: [tcpdump-workers] Proposed update to DLT_BLUETOOTH_LE_LL_WITH_PHDR

2020-07-13 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On Jul 13, 2020, at 9:02 AM, Sultan Khan  wrote:

> Thanks Chris. I’ll make a pull request to tcpdump-htdocs later today, and 
> I’ll include a link to the previous version of the spec as an archive.org 
> link to the old one on whiterocker.com.

The new version is a superset of the old version, so that any header that 
conforms to the old version also conforms to the new version, right?

If so, I don't see any need for an archive.org link to the old version.
--- End Message ---


Re: [tcpdump-workers] Proposed update to DLT_BLUETOOTH_LE_LL_WITH_PHDR

2020-07-13 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On Jul 10, 2020, at 2:57 PM, Sultan Khan  wrote:

> Link to the updated version of the spec with the latest changes: 
> https://gistcdn.githack.com/sultanqasim/8b6561309f5934f084a0d938ae733b7a/raw/199fb1867642c927f768fe7d67dae2a639acb48e/LINKTYPE_BLUETOOTH_LE_LL_WITH_PHDR.html

So

https://www.tcpdump.org/linktypes.html

currently links to

http://www.whiterocker.com/bt/LINKTYPE_BLUETOOTH_LE_LL_WITH_PHDR.html

for LINKTYPE_BLUETOOTH_LE_LL_WITH_PHDR.  What should it link to now?





--- End Message ---


Re: [tcpdump-workers] Proposed update to DLT_BLUETOOTH_LE_LL_WITH_PHDR

2020-07-10 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
For an advertising physical channel PDU, it appears that the PDU type is in the 
least-significant 4 bits of the PDU header.

Is that not present in an auxiliary advertising packet?
--- End Message ---


Re: [tcpdump-workers] Proposed update to DLT_BLUETOOTH_LE_LL_WITH_PHDR

2020-07-10 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
A couple more editorial comments:

In the description of the bits in the Flags field, I'd describe the 0x3000 bits 
as "PDU type dependent" and, after they're listed, indicate that:

For PDU types other than type 1 (auxiliary advertising), the PDU type 
dependent field indicates the checked status of the MIC portion of the 
decrypted packet:

* 0x1000 indicates the MIC portion of the decrypted LE Packet 
was checked
* 0x2000 indicates the MIC portion of the decrypted LE Packet 
passed its check

For PDU type 1 (auxiliary advertising), the PDU type dependent field 
indicates the auxiliary advertisement type:

* 0x0000: AUX_ADV_IND
* 0x1000: AUX_CHAIN_IND
* 0x2000: AUX_SYNC_IND
* 0x3000: AUX_SCAN_RSP

I'd redo the last two paragraphs as:

The LE Packet field follows the previous fields. All multi-octet values 
in the LE Packet are always expressed in little-endian format, as is the normal 
Bluetooth practice.

For packets using the LE Uncoded PHYs (LE 1M PHY and LE 2M PHY) as 
defined in the Bluetooth Core Specification v5.2, Volume 6, Part B, Section 
2.1, it is represented as the four-octet access address, immediately followed 
by the PDU and CRC; it does not include the preamble.

For packets using the LE Coded PHY as defined in the Bluetooth Core 
Specification v5.2, Volume 6, Part B, Section 2.2, the LE Packet is represented 
as the four-octet access address, followed by the Coding Indicator (CI), stored 
in a one-octet field with the lower 2 bits containing the CI value, immediately 
followed by the PDU and the CRC; it does not include the preamble. Packets 
using the LE Coded PHY are represented in an uncoded form, so the TERM1 and 
TERM2 coding terminators are not included in the LE packet field.
--- End Message ---


Re: [tcpdump-workers] Proposed update to DLT_BLUETOOTH_LE_LL_WITH_PHDR

2020-07-09 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On Jul 9, 2020, at 1:46 PM, Sultan Khan  wrote:

> Through discussions with Joakim Anderson (of Nordic) and Mike Ryan (Ubertooth 
> developer), and going through several iterations of proposed protocol 
> updates, I/we came up with this: 
> https://gistcdn.githack.com/sultanqasim/8b6561309f5934f084a0d938ae733b7a/raw/LINKTYPE_BLUETOOTH_LE_LL_WITH_PHDR.html

In the last paragraph, it says:

For packets using the LE Coded PHY as defined in the Bluetooth Core 
Specification v5.2, Volume 6, Part B, Section 2.2, the Coding Indicator (CI) is 
represented by the two least significant bits of a dedicated coding indicator 
byte between the Access Address and PDU. Packets received using the LE Coded 
PHY are represented in an uncoded form, so the TERM1 and TERM2 coding 
terminators are not included in the LE packet field.

Perhaps that's a bit clearer if stated as

For packets using the LE Coded PHY as defined in the Bluetooth Core 
Specification v5.2, Volume 6, Part B, Section 2.2, the LE Packet is represented 
as the Coding Indicator (CI), stored in a one-octet field with the lower 2 bits 
containing the CI value, immediately followed by the PDU and the CRC.  Packets 
received using the LE Coded PHY are represented in an uncoded form, so the 
TERM1 and TERM2 coding terminators are not included in the LE packet field.
--- End Message ---


[tcpdump-workers] Reading capture files with an unknown link-layer header type

2020-06-11 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
François checked in a change to tcpdump so that, if it's handed a capture file 
with a link-layer header type for which it has no dissector, it just dumps the 
packet data in hex, rather than failing with an indication that the header type 
isn't supported.

However, pcap_compile(), in *libpcap*, will fail with an unknown header type - 
and tcpdump always hands a filter to pcap_compile(), even if it's a null string 
(which means "accept every packet").

It doesn't fail with *known* link-layer header types for which most filter 
expressions are unsupported; it just rejects most of them (other than "link[M:N]").

Is there any reason *not* to handle link-layer types unknown to libpcap in 
pcap_compile()?
--- End Message ---


Re: [tcpdump-workers] [AiG-CERT #104737] DLT value

2020-06-11 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On Jun 2, 2020, at 12:58 AM, Airbus CERT via tcpdump-workers 
 wrote:

> The layout is 
> https://docs.microsoft.com/en-us/windows/win32/api/evntcons/ns-evntcons-event_header

So each packet's data starts with, in order:

a 2-octet event record size;
a 2-octet header type;
a 2-octet flag word;
a 2-octet indication of the format of the event data;
a 4-octet thread ID;
a 4-octet process ID;
an 8-octet time stamp;
a 16-octet UUID for the event provider;
a sequence of:
a 2-octet event identifier;
a 1-octet event version;
a 1-octet event channel;
a 1-octet event level;
a 1-octet event opcode;
a 2-octet task identifier;
an 8-octet keyword bitmask;
either:
a 4-octet elapsed kernel CPU time followed by a 4-octet elapsed 
user CPU time;
or:
an 8-octet elapsed user-mode CPU time;
a 16-octet UUID for an activity.

What byte order are the numerical values in?  Little-endian?

> followed by one or more 
> https://docs.microsoft.com/en-us/windows/win32/api/evntcons/ns-evntcons-event_header_extended_data_item
>  depending on the flag _EVENT_HEADER.Flags.

So that's one or more of, in order:

2 reserved octets;
a 2-octet extended data type value;
2 reserved octets;
a 2-octet extended data size value;

presumably immediately followed by the octets of the extended data.

What byte order are the numerical values in?  Little-endian?

If the number of octets of extended data isn't a multiple of 8, is there any 
padding after it?

And do the same rules used to generate those data layouts - and the same choice 
of byte order - apply for the structures in the extended data?
--- End Message ---


Re: [tcpdump-workers] [tcpdump] Keep win32/prj/WinDump.dsp ?

2020-06-08 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On Jun 8, 2020, at 12:24 PM, Francois-Xavier Le Bail 
 wrote:

> Thus all the files in win32/prj/ could be removed?
> (WinDump.dsp  WinDump.dsw  WinDump.sln  WinDump.vcproj)

I have no problem removing them and requiring Windows users to use CMake, 
especially given that newer versions of Visual Studio have CMake as an 
installable component.

If nobody else has said "no, I need them!" - and volunteered to take 
responsibility for maintaining them! - I'd say we should get rid of them, as 
we're not maintaining them.  (CMake files 1) can handle multiple versions of 
Visual Studio and 2) are intended to be maintainable by people using a text 
editor rather than generated by a big IDE that runs primarily, if not 
exclusively, on Windows.)
--- End Message ---


Re: [tcpdump-workers] [AiG-CERT #104737] DLT value

2020-06-02 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On Jun 2, 2020, at 12:22 AM, Airbus CERT via tcpdump-workers 
 wrote:

> Yes exactly each packet is an event. The layout of the event is 
> https://docs.microsoft.com/en-us/windows/win32/api/evntcons/ns-evntcons-event_header
>  and 
> https://docs.microsoft.com/en-us/windows/win32/api/evntcons/ns-evntcons-event_header_extended_data_item.
>  But we aligned this format with the ETL (the serialization format used by Microsoft) 
> which is not well documented.

Is it documented at all?

The description of a given LINKTYPE_/DLT_ value on

https://www.tcpdump.org/linktypes.html

and the pages linked to by that description must be sufficient to allow 
somebody to write code to, at minimum, parse the link-layer headers, without 
ever looking at Wireshark or tcpdump code.
--- End Message ---


Re: [tcpdump-workers] [AiG-CERT #104737] DLT value

2020-05-29 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On May 29, 2020, at 3:23 AM, Airbus CERT via tcpdump-workers 
 wrote:

> I would like to request a DLT value for the PR 
> https://github.com/the-tcpdump-group/libpcap/pull/934. 
> This PR intends to add ETW capture support for libpcap.

So is each packet an Event Tracing for Windows:

https://docs.microsoft.com/en-us/windows/win32/etw/event-tracing-portal

record of some sort?  If so, where is the format of that record defined?
--- End Message ---


Re: [tcpdump-workers] [tcpdump] Keep win32/prj/WinDump.dsp ?

2020-05-24 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
On May 24, 2020, at 4:37 AM, Francois-Xavier Le Bail via tcpdump-workers 
 wrote:

> 15 printers are missing in win32/prj/WinDump.dsp.
> Does anyone use it? Any reason to keep it ?

Note that the *supported* way to build tcpdump (and libpcap) on Windows is with 
CMake (which can more easily be kept up-to-date by UN*X users adding a new 
dissector than can Windows project files, and which are tested by the 
Travis/AppVeyor CI builds).  CMake can generate project files for several 
versions of Visual Studio, as well as build files for various other build 
systems.

See


https://github.com/the-tcpdump-group/tcpdump/blob/master/doc/README.Win32.md

for information on building tcpdump on Windows with Visual Studio (and


https://github.com/the-tcpdump-group/libpcap/blob/master/doc/README.Win32.md

for information on building libpcap on Windows with Visual Studio).
--- End Message ---


Re: [tcpdump-workers] Compile libpcap with DLT_LINUX_SLL2

2020-05-09 Thread Guy Harris via tcpdump-workers
--- Begin Message ---
BTW, having just implemented SLL2 support in Wireshark, the layout of the 
header really doesn't work as well as I'd like with ARPHRD_NETLINK packets.

I'd prefer something like

struct header {
	uint16_t hatype;            /* link-layer address type */
	uint8_t  pkttype;           /* packet type */
	uint8_t  halen;             /* link-layer address length */
	uint8_t  addr[SLL_ADDRLEN]; /* link-layer address */
	int32_t  if_index;          /* 1-based interface index */
	uint16_t hatype_specific;   /* dependent on sll3_hatype */
	uint16_t protocol;          /* protocol */
};

because

1) It puts the protocol field *after* the hatype field, and right before the 
payload, so that, for packets with an hatype of ARPHRD_NETLINK, we can treat 
everything up to the if_index field as the cooked-mode header, and have the 
dissector for ARPHRD_NETLINK-over-SLL treat the hatype_specific and protocol 
fields as fields for *it* to dissect.  For that ARPHRD_ type, the protocol is a 
Netlink protocol type, so it really should be analyzed by the code that 
understands Netlink messages.

2) It provides a field to handle various annoyances in the way that packets are 
provided to PF_PACKET sockets.  In particular, Netlink messages are in the host 
byte order of the machine doing the capturing, so, for ARPHRD_NETLINK, we can 
have libpcap put the value 0x0123 in that field, in *host* byte order, so the 
code that processes the packets can just see whether that field's value is 
0x0123 or 0x2301 (0x0123 byte-swapped) and, based on that, determine whether the messages need to be 
byte-swapped.  (Remember, somebody might capture Netlink traffic on a machine 
with one byte order but read the capture on a machine with the opposite byte 
order.)

Is SLL2 sufficiently established that we'd have to introduce an SLL3 type, or 
can we just change SLL2 at this point?
--- End Message ---

