It seems we cannot find a common basis here. See below:
On 14/12/2016 01:23, Joe Touch wrote:
On 12/13/2016 5:34 AM, Gorry Fairhurst wrote:
...
(1) I think we need a parameter returned to the App that is
equivalent to Maximum Packet Size, MPS, in DCCP (RFC4340). It is
useful to know how many bytes the app can send with a reasonable
chance of unfragmented delivery.
All we can know is whether it is unfragmented at the next layer down.
I disagree. The stack can tell the App an MPS value based on PMTUD (when
it implements this, or understanding of headers). That's already
specified for SCTP & DCCP. Sure, the path may change, but at least the
App can access a recent result.
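For reference, Linux already exposes exactly this for DCCP via
getsockopt(). A minimal sketch, assuming a connected socket and
<linux/dccp.h> (error handling elided):

    #include <sys/socket.h>
    #include <linux/dccp.h>

    #ifndef SOL_DCCP
    #define SOL_DCCP 269   /* from linux/socket.h */
    #endif

    /* Ask the stack for its current MPS on a connected DCCP socket;
     * the value reflects the most recent PMTUD result. */
    int get_dccp_mps(int fd)
    {
        int mps = 0;
        socklen_t len = sizeof(mps);

        if (getsockopt(fd, SOL_DCCP, DCCP_SOCKOPT_GET_CUR_MPS,
                       &mps, &len) < 0)
            return -1;
        return mps;
    }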
(2) It's helpful for Apps to be able to retrieve the upper size
allowed with potential fragmentation - that could be useful in
determining probe sizes for an application. Apps should know the
hard limit. In DCCP this is called the current congestion control
maximum packet size (CCMPS), the largest permitted by the stack
using the current congestion control method. That's bound to be less
than or equal to what is permitted for the local Interface MTU. This
limit lets the App also take into consideration other size
constraints in the stack below the API.
Again, next layer down only. We're generally talking about existing
transports that try to pass the link MTU up through network and
transport transparently, but that need not be the case. Keep in mind
that the link MTU for ATM AAL5 isn't 48B, it's 9K - i.e., it is the
message that the link will deliver intact, not the native link size.
...
In this case, I think you are wrong, sorry. Apps can be told the largest
message they can send over a transport. And some transports do in fact
limit this.
I don't see the relevance of the ATM example. Datagram protocols work at
the transport layer.
(3) Apps need to be able to ask the stack to try hard to send
datagrams larger than the current MPS -
I disagree.
We don't agree. Apps can send probe messages.
The app should see two values from transport:
A) how big a message can you deliver at all?
- Wasn't that the thing I originally cited as CCMPS?
B) how big a message can you deliver "natively"?
- Wasn't that MPS?
Any probing happens between those two values.
That's true!
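As a concrete illustration of probing between those two values, a
sketch only: send_probe() and probe_acked() are hypothetical
application helpers (send_probe() returning 0 on a successful send),
not any real API.

    /* Hypothetical app helpers; not a real API. */
    int send_probe(int size);
    int probe_acked(int size);

    /* Binary-search the largest deliverable size in [MPS, CCMPS]. */
    int search_probe_size(int mps, int ccmps)
    {
        int lo = mps;        /* known to be deliverable natively */
        int hi = ccmps;      /* hard upper bound from the stack */

        while (lo < hi) {
            int probe = lo + (hi - lo + 1) / 2;  /* round up */
            if (send_probe(probe) == 0 && probe_acked(probe))
                lo = probe;      /* delivered: raise the floor */
            else
                hi = probe - 1;  /* lost: lower the ceiling */
        }
        return lo;               /* largest confirmed size */
    }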
This is not expected as the default; the stack needs to be told to
enable this use of source/router fragmentation and to send IPv4
datagrams with DF=0. (For some IPv4 paths the PMTU, and hence the MPS,
can be very small.)
I disagree.
DF=0 is a network flag that should never be exposed to the app. Even if
it is, this wouldn't be the control the app really wants. The app would
want to prevent source fragmentation. DF=0 applies to IPv4 only and only
affects *on-path* fragmentation.
But that's not the transport's job.
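For reference, the existing per-socket control on Linux for this
network flag is IP_MTU_DISCOVER; a sketch (IP_PMTUDISC_DONT clears DF
on IPv4, so both the local stack and routers may fragment):

    #include <sys/socket.h>
    #include <netinet/in.h>   /* IP_MTU_DISCOVER, IP_PMTUDISC_* */

    /* DF=0: permit fragmentation below the transport. */
    int allow_network_frag(int fd)
    {
        int val = IP_PMTUDISC_DONT;
        return setsockopt(fd, IPPROTO_IP, IP_MTU_DISCOVER,
                          &val, sizeof(val));
    }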
I disagree. If CCMPS >= datagram > MPS, the stack needs to know
whether to source fragment (3) or, for IPv4, whether to allow network
fragmentation (4). Potentially you could discard if neither (3) nor (4)
is allowed.
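In code form, the decision I mean is roughly this (names are
illustrative only, not a proposed API):

    enum send_action {
        SEND_NATIVE,        /* fits within MPS, no fragmentation */
        SEND_SOURCE_FRAG,   /* case (3): fragment at the sender */
        SEND_DF0,           /* case (4): IPv4, DF=0, routers may frag */
        SEND_DISCARD        /* neither (3) nor (4) permitted */
    };

    enum send_action classify(int size, int mps, int ccmps,
                              int allow_source, int allow_network)
    {
        if (size <= mps)
            return SEND_NATIVE;
        if (size > ccmps)
            return SEND_DISCARD;      /* over the hard limit */
        if (allow_source)
            return SEND_SOURCE_FRAG;
        if (allow_network)
            return SEND_DF0;          /* IPv4 only */
        return SEND_DISCARD;
    }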
Consider this:
- transport has max and native message sizes
- network has max and native message sizes
- link has max and native message sizes
Every layer has these. Sometimes they're the same (when a layer doesn't
support frag/reassembly, e.g., UDP,
UDP does support fragmentation.
Ethernet)
... Which is link layer.
, sometimes they're not
(IP, TCP). Sometimes they're unlimited (TCP has no max message size, AFAICT).
TCP isn't datagram either - and is stream-based - so segments are not
necessarily packets.
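The two-numbers-per-layer point could be tabulated like this; the
figures are common Ethernet/IPv4 textbook values, purely for
illustration:

    struct layer_sizes {
        const char *layer;
        int max_msg;     /* largest deliverable at all */
        int native_msg;  /* largest deliverable without fragmentation */
    };

    static const struct layer_sizes example[] = {
        { "link (Ethernet)",  1500,  1500 },  /* no frag: max == native */
        { "network (IPv4)",  65535,  1500 },  /* reassembly raises max */
        { "transport (UDP)", 65507,  1472 },  /* IP's limits, less headers */
    };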
(4) Apps need to be able to ask the stack to send datagrams larger
than the current MPS, but NOT if this results in source fragmentation.
Apps can control only transport fragmentation. They can't and shouldn't
see or control network or link fragmentation UNLESS transport lets that
pass through as a transport behavior.
If we were talking about TCP-like protocols that would be fine, but I'm
talking about datagram protocols where the PDU being sent is a datagram.
Such packets need to be sent with DF=1. This is not expected as the
default; the stack needs to be told to enable this - for UDP it would
be needed to perform PMTUD. That's, I think, what was just called
"native transmission desired".
The issue is that this is an interaction between the app and transport.
It has nothing to do with the network or link layers - unless the
transport wants it to. It's up to the transport to decide whether to try
to "pass through" the native network size. It's up to the network layer
to decide whether to "pass through" the native link size.
E.g., for IPv6, the lowest values to the answers above are:
A) 1500B, including IP header and options
B) 1280B, including IP header and options
That necessarily means that IPv6 over IPv6 cannot truthfully answer (B)
- it HAS to require the lower IPv6 to fragment and reassemble. Otherwise
it would be reporting a value that would make it no longer compliant
with RFC2460.
Sure, MPS would then be less than 1280 less headers, etc.
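For example, for UDP on a path pinned at the IPv6 minimum MTU:
MPS = 1280 - 40 (IPv6 header) - 8 (UDP header) = 1232 bytes, and less
again with extension headers or tunnel overhead.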
So we need to be careful about this - there really aren't 4 values here.
There are only two - the max and "native" *as reported by* the next
layer down.
There are two from the stack (MPS, CCMPS).
There is one control towards the stack (source fragment, allow
fragmentation below the transport for IPv4, or discard if not sendable).
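Summarized as a purely hypothetical API surface (none of these names
exist in any current stack):

    enum oversize_policy {
        DG_DISCARD,        /* drop datagrams larger than MPS */
        DG_SOURCE_FRAG,    /* fragment at the sender (3) */
        DG_NETWORK_FRAG    /* IPv4 only: send with DF=0 (4) */
    };

    int dg_get_mps(int fd);    /* value from the stack: MPS */
    int dg_get_ccmps(int fd);  /* value from the stack: CCMPS */
    int dg_set_oversize_policy(int fd, enum oversize_policy p);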
There never has been and never can be a way that an app can solely
manage PMTUD to match network *unless* the transport passes that
information through. There never has been and never can be a way an app
can match to a link layer native MTU (otherwise, we'd be spinning MTUs
down to 48B for ATM).
To be clear, I see "MTU" as an IP-layer parameter. The ATM cell is, in
my mind, a link frame (an HDLC frame is also a link frame, etc.). To me,
the interface presented to the network layer (IP MTU) is that supplied
by the adaptation layer that runs over the link frames.
Joe
Gorry