Re: tcpdump - ifname in filter expression

2022-03-27 Thread David Gwynne
On Wed, Mar 23, 2022 at 02:34:54PM -0400, Aner Perez wrote:
> On 3/22/22 00:37, David Gwynne wrote:
> > On Mon, Mar 21, 2022 at 04:37:59PM -0400, Aner Perez wrote:
> > > I noticed that if I put an "ifname" (or "on") in a filter expression for
> > > tcpdump, it will show all traffic that has an ifname that *starts with*
> > > the name I provided, e.g.
> > > 
> > > # tcpdump -n -l -e -ttt -i pflog0 ifname vlan1
> > > 
> > > Will show packets for vlan1 but also for vlan110, vlan140, etc. (but
> > > not for em0).
> > > 
> > > It's not clear from the man page if that is the intended behavior.
> > > 
> > > https://man.openbsd.org/tcpdump.8#ifname
> > > 
> > > ifname interface
> > >     True if the packet was logged as coming from the specified
> > >     interface (applies only to packets logged by pf(4)).
> > > 
> > > While testing I also tried using "ifname vlan" as the filter but it fails
> > > with a syntax error.  I'm thinking that is probably an unintended
> > > interaction with the "vlan" primitive since "ifname em" or "ifname bnx"
> > > seem to work with no error.
> > > 
> > > This is all tested on 6.7 so apologies if this is not the current 
> > > behavior.
> > i think this behaviour with ifname is unintended. the diff below tries
> > to fix it by having the ifname comparison include the terminating nul
> > when comparing the supplied interface name with the one in the pflog
> > header.
> > 
> > the consequence is that it will no longer do string prefix matches,
> > only whole name matches.
> > 
> > the vlan thing is different because there's a "vlan" keyword in our
> > pcap filter language that lets you do things like "tcpdump vlan
> > 123" when sniffing on a vlan parent interface to limit the packets
> > to those with tag 123. the parser is saying it didnt expect you to
> > talk about vlan when it's supposed to be a string (ie, not a keyword)
> > at that point.
> > 
> > Index: gencode.c
> > ===================================================================
> > RCS file: /cvs/src/lib/libpcap/gencode.c,v
> > retrieving revision 1.60
> > diff -u -p -r1.60 gencode.c
> > --- gencode.c   13 Feb 2022 20:02:30 -0000      1.60
> > +++ gencode.c   22 Mar 2022 04:29:40 -0000
> > @@ -3230,7 +3246,7 @@ gen_pf_ifname(char *ifname)
> > len - 1);
> > /* NOTREACHED */
> > }
> > -   b0 = gen_bcmp(off, strlen(ifname), ifname);
> > +   b0 = gen_bcmp(off, strlen(ifname) + 1, ifname);
> > return (b0);
> >   }
> > 
> That certainly seems like it would do the trick.  Would your diff make it
> into the official source tree for a future release or is this something that
> needs to be discussed by the powers that be?

i thought i was the relevant power :'(

deraadt@ said ok too, so i've put it in. should be in snaps soon and the
next release.
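
For illustration, here is a minimal standalone sketch, not the libpcap
code itself, of why including the terminating nul turns the prefix match
into a whole-name match.  The fixed-size, nul-padded "hdr" buffer stands
in for the interface name field of the pflog header:

    #include <stdio.h>
    #include <string.h>

    int
    main(void)
    {
            char hdr[16] = "vlan110";    /* interface name as logged by pf */
            const char *want = "vlan1";  /* name given to the filter */

            /* old behaviour: compare strlen(want) bytes, so any name
             * that merely starts with "vlan1" matches */
            printf("prefix match: %d\n",
                memcmp(hdr, want, strlen(want)) == 0);

            /* fixed behaviour: compare strlen(want) + 1 bytes, taking
             * the terminating nul along, so "vlan110" no longer matches */
            printf("whole match:  %d\n",
                memcmp(hdr, want, strlen(want) + 1) == 0);

            return 0;
    }

With the extra byte the comparison only succeeds when the logged name
ends exactly where the supplied one does.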



Re: Question about /etc/resolvd.conf and local resolver

2022-03-27 Thread Stuart Henderson
On 2022-03-27, Peter J. Philipp  wrote:
> Some fun facts about DNS.  A DNS packet can be 0xffff hex (or 65535 bytes
> dec) maximally.  This is true for TCP DNS packets, which carry an unsigned
> short length indicator before the packet segment.  With UDP it's a bit
> different: a UDP packet can be maximally 65535 bytes long, but often the
> MTU of the interface doesn't allow this much room, so it fragments at the
> IP layer if the MTU is below that value.  There is a constraint in UDP DNS
> keeping it to 512 bytes without EDNS set; it can be increased with an EDNS
> header.  Usually the value for this is 4096, but over time it has been
> reduced to 1232, which was settled on at a dns flag day, a community event
> held by the dns community.

TL;DR: with the current OpenBSD resolver settings, I suggest leaving it alone.

The reason for this general change to 1232 is to avoid fragmentation
and MTU blackholes - e.g. if the internet connection goes over a 1492
MTU pppoe connection and a restrictive firewall somewhere drops the
frag-needed message, the lookup can fail.
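
(For reference, the 1232 figure comes from the IPv6 minimum MTU, which any
path is required to carry without fragmentation:

    1280 (IPv6 minimum MTU) - 40 (IPv6 header) - 8 (UDP header) = 1232

so a UDP answer that fits in 1232 bytes of DNS payload should never need
to fragment.)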

This generally doesn't apply to TCP as often because most typical
connections with restricted MTU are behind routers that adjust MSS in
TCP SYN packets to avoid fragmentation.
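
On OpenBSD that adjustment is typically done in pf; for example (the
interface name and value here are illustrative, 1452 being 1492 minus 40
bytes of IPv4 and TCP headers):

    match out on pppoe0 scrub (max-mss 1452)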

OpenBSD's system resolver still uses 4096 though (MAXPACKETSZ in
libc/asr/asr_private.h). For queries against localhost that's not
going to be an issue, as the default MTU on loopback on OpenBSD is
32768 bytes. On the other hand, latency to localhost is so low that
the TCP 3-way handshake is going to be very quick anyway, so there's
little point in changing anything.
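
You can confirm the loopback MTU with ifconfig; on a default install the
first line of output looks something like:

    $ ifconfig lo0
    lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> mtu 32768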

If you're querying a resolver on the internet over an MTU smaller
than the DNS server's (as is the case with many standard internet
connections), doing a query with the edns0 buffer size set to 4096
could easily cause problems with some large responses. But you won't
notice anything wrong unless you actually do such a query, probably
long after you touched the setting.
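
If you want to see how big such answers get, a DNSSEC query for a signed
zone's DNSKEY RRset is an easy way to trigger a large one (dig ships in
the OpenBSD base system; the zone here is just an example):

    $ dig +bufsize=4096 +dnssec DNSKEY org

The "MSG SIZE  rcvd" line at the end of the output shows how large the
answer actually was.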


-- 
Please keep replies on the mailing list.



Re: Question about /etc/resolvd.conf and local resolver

2022-03-27 Thread Peter J. Philipp
Hello J Doe/general,

Some comments inline...

On Sat, Mar 26, 2022 at 09:58:01PM -0400, J Doe wrote:
> Hi,
> 
> I had a question regarding configuring: /etc/resolvd.conf for use with a
> local caching resolver (using BIND), on the loopback address on OpenBSD 7.0.
> 
> This server is a mail server and I make use of DNSBL's such as SpamHaus,
> which is why I require a local caching resolver.
> 
> I see in: /etc/resolvd.conf that there are two options for the: options
> directive:

Do you mean /etc/resolv.conf?  I don't see a resolvd.conf but there is a
resolvd manpage.

> 
> edns0
> tcp
> 
> Because I have: /etc/resolvd.conf configured to use BIND on localhost, can I
> add either: edns0 or: tcp since I know the resolver on localhost supports
> this and it's the only resolver I ever contact ?  I am thinking that this is
> perhaps more efficient than using just UDP without edns0 or tcp.

Some fun facts about DNS.  A DNS packet can be 0xffff hex (or 65535 bytes
dec) maximally.  This is true for TCP DNS packets, which carry an unsigned
short length indicator before the packet segment.  With UDP it's a bit
different: a UDP packet can be maximally 65535 bytes long, but often the
MTU of the interface doesn't allow this much room, so it fragments at the
IP layer if the MTU is below that value.  There is a constraint in UDP DNS
keeping it to 512 bytes without EDNS set; it can be increased with an EDNS
header.  Usually the value for this is 4096, but over time it has been
reduced to 1232, which was settled on at a dns flag day, a community event
held by the dns community.

Usually what happens is that if the server would truncate on UDP, it
sends back a TC indicator in the DNS packet, which pretty well requires
the query to be redone over TCP in order to get the full packet.  So it's
a hybrid solution that makes use of the maximal 0xffff dns lengths.
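
On the wire the TC bit sits in the flags word of the fixed 12-byte DNS
header.  A minimal check, assuming "msg" holds a reply just read from a
UDP socket (a sketch, not taken from any particular resolver):

    #include <stddef.h>
    #include <stdint.h>

    int
    reply_is_truncated(const unsigned char *msg, size_t len)
    {
            uint16_t flags;

            if (len < 12)            /* shorter than a DNS header */
                    return 0;
            flags = (msg[2] << 8) | msg[3];  /* second 16-bit word */
            return (flags & 0x0200) != 0;    /* TC set: retry over TCP */
    }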

If you use TCP transport right from the start, the drawback is the 3-way
handshake overhead, and the computer will generate more packets due to
TCP ACKs.  I don't know whether the resolver connects anew for each query
or stays connected for the next answer.  I also don't know if BIND
supports keeping the connection open, but I do know some DNS servers
allow this (whether right or wrong); there may also be a timeout after
which the session is closed.
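
The unsigned short length indicator mentioned above means a client
speaking DNS over TCP writes two extra bytes before every message.  A
sketch, assuming "s" is an already-connected TCP socket:

    #include <sys/types.h>
    #include <arpa/inet.h>
    #include <stdint.h>
    #include <unistd.h>

    ssize_t
    dns_tcp_send(int s, const unsigned char *msg, uint16_t msglen)
    {
            uint16_t len = htons(msglen);   /* length prefix, network order */

            if (write(s, &len, sizeof(len)) != sizeof(len))
                    return -1;
            return write(s, msg, msglen);
    }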

Since DNS uses compression in most answers, you can cram a lot of data
into even 512 bytes, and if not, then into 1232 bytes when using UDP.  The
compression mechanism is unique to DNS and isn't zlib or any of those
kinds of compression.  Having UDP mode on and letting it occasionally
fall back to TCP for large answers seems rather correct to me.
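
For the curious: the scheme (RFC 1035, section 4.1.4) replaces a repeated
domain-name suffix with a two-byte pointer.  A byte whose top two bits are
both set marks a pointer, and the remaining 14 bits are an offset from the
start of the message where the name continues.  A sketch of the decoding:

    #include <stdint.h>

    /* a label byte with the top two bits set is a compression pointer */
    int
    is_compression_pointer(uint8_t b)
    {
            return (b & 0xc0) == 0xc0;
    }

    /* the low 14 bits are an offset from the start of the DNS message */
    uint16_t
    pointer_offset(uint8_t hi, uint8_t lo)
    {
            return (uint16_t)(((hi & 0x3f) << 8) | lo);
    }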

> The other thing I'm wondering is whether this is superfluous ... does BIND
> "know" when being queried over localhost to use better network settings
> than UDP ?
> 
> Thanks,
> 
> - J

I don't know BIND all that well, so I can't help you with this.  To sum up
what you're asking: I would use the edns0 option but leave the tcp option
be.
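
Concretely, that would look something like this in /etc/resolv.conf (the
nameserver line assumes your BIND is listening on localhost):

    nameserver 127.0.0.1
    options edns0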

Best Regards,
-peter