Looking at the longer term, some form of transport/presentation layer
authentication without encryption is likely to become necessary as
revisions are made at the link and lower layers.

Maximum packet sizes of 1500 bytes are ridiculous at this point, and 9000
bytes is only slightly less so. The pressure to move to 64 KB packets and
beyond will increase as bandwidth demands grow. Already, the main reason I
can't saturate my 940 Mb/s FIOS drop from a single machine is the
per-packet processing overhead. As streaming 8K video becomes a
normal thing, the logic for telling the IEEE, 'OK, if you won't deliver the
spec we need, we will go to someone who will' will become very strong.
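To put rough numbers on that per-packet cost, here is a back-of-the-envelope
sketch (the 940 Mb/s figure is the drop mentioned above; the packet sizes are
the ones under discussion, and the calculation ignores framing overhead):

```python
# Back-of-the-envelope: packets per second required to fill a 940 Mb/s
# link at various maximum packet sizes. Per-packet CPU work scales with
# this rate, which is why small MTUs become the bottleneck.
LINK_BPS = 940_000_000  # 940 Mb/s

def packets_per_second(link_bps: int, packet_bytes: int) -> int:
    """Packets/sec needed to saturate the link at a given packet size."""
    return link_bps // (packet_bytes * 8)

for size in (1500, 9000, 65536):
    print(f"{size:>6} B packets -> {packets_per_second(LINK_BPS, size):>7,} pkt/s")
```

Going from 1500-byte to 64 KB packets cuts the per-packet work by more than
a factor of 40, which is the whole argument for bigger frames.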

The big roadblock here is that the checksum on Ethernet frames is only 32
bits, and a CRC's error-detection guarantees weaken as the frame grows, so
a 64 KB frame really wants a stronger check that you don't want to have to
push into hardware. But you definitely need that integrity check on the
routing header or else really bad things will ensue. So the obvious fix is
to have a jumbo packet with a checksum that covers only the first part of
the packet and rely on the upper layers of the protocol to provide
end-to-end integrity protection for the payload.
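As a minimal sketch of that split (the 64-byte header size and the frame
layout are hypothetical, not from any spec), the link-layer CRC would be
computed over the routing header only, leaving payload integrity to the
transport:

```python
import zlib

HEADER_LEN = 64  # hypothetical fixed-size routing header for a jumbo frame

def frame_with_header_crc(header: bytes, payload: bytes) -> bytes:
    """Build a jumbo frame whose CRC-32 covers only the routing header.

    Payload integrity is deliberately left to the upper layers
    (e.g. TLS/QUIC providing end-to-end authentication).
    """
    assert len(header) == HEADER_LEN
    crc = zlib.crc32(header).to_bytes(4, "big")
    return header + crc + payload

def header_crc_ok(frame: bytes) -> bool:
    """Validate only the header CRC; payload corruption is invisible here."""
    header = frame[:HEADER_LEN]
    crc = int.from_bytes(frame[HEADER_LEN:HEADER_LEN + 4], "big")
    return zlib.crc32(header) == crc
```

Flipping a bit in the payload leaves `header_crc_ok` true, which is exactly
the point: the link layer validates the routing information cheaply, and
catching payload corruption is the transport's job end-to-end.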

If you look at long-term technology trends, modularity and flexibility win
early on but are eventually displaced as integrated solutions take over.
Back in the 1980s, nobody would buy a PC they couldn't upgrade, hence
ISA. That has already collapsed in the notebook market and is starting to
fail in the desktop category. We will see similar trends in networking as
support for legacy protocols like Novell IPX and AppleTalk is finally
exorcised. I do find it rather odd that people who talk about IP end-to-end
seem to have an allergic reaction to 'quite right, time to get rid of MACs
and ARPing and use IP for routing inside the network'.

QUIC/HTTP represents one part of that trend. It is a good trend that I
support.


On Wed, Oct 5, 2022 at 3:12 PM Eliot Lear <[email protected]> wrote:

> Hi,
>
> Just on this:
> On 05.10.22 19:32, Lucas Pardue wrote:
>
> RFC 7258 / BCP 188 [1] was published in 2014. It describes how "Pervasive
> monitoring is a technical attack that should be mitigated in the design of
> IETF protocols, where possible."
>
> Yes, we said that.  However, we also said the following in the same
> document:
>
>    Those developing IETF specifications need to be able to describe how
>    they have considered PM, and, if the attack is relevant to the work
>    to be published, be able to justify related design decisions.
>
> Application developers need to consider their particular circumstances and
> make decisions for themselves.  The OPC world makes heavy use of ISA99
> model / IEC 62443, which has a very formal segmentation scheme that may
> mitigate the need for encryption.  However, some caution is advised:
> services that have in the past been considered local often transition to
> use the Internet.  I'm not close enough to OPC to have a fine-tuned crystal
> ball in that regard.
>
> This doesn't answer the question of whether QUIC should be changed for
> OPC's use case.  That's not an easy call, but I still don't think we fully
> understand the requirements.  The existing QUIC may be perfectly fine for
> certain industrial uses where live key distribution from one party either
> is easy or unnecessary.
>
> Eliot
>
>
>
