On Sun, Jul 3, 2022, 2:56 PM Gyan Mishra <hayabusa...@gmail.com> wrote:

>
> Hi Tom
>
> Thanks for the info on the BIG TCP and jumbogram use case testing.  From
> that link it does sound like going to jumbograms with the example
> 185,000 byte payload yielded a 50% increase in throughput.
>
> Doing the math comparing the 9216 byte packets commonly used on servers
> today against the 185,000 MTU use case:
>
> 185,000 / 9216 ≈ 20 packets
>
> IPv6 + TCP + Ethernet header overhead = 78 bytes per packet
> (L2 worst case; at L3 the IPv6 + TCP overhead would be 60 bytes)
>
> 78 bytes x 20 = 1,560 bytes saved being transmitted over the wire
>
> 185,000 bytes over a 10G link = 0.148 ms
>
> (9216 + 78) x 20 packets = 185,880 bytes over a 10G link = 0.1487 ms
>
> From my calculations (please check my math), the performance gain is
> negligible, almost nil, with jumbograms.
>
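> A quick Python sketch to check that arithmetic (the 78 byte per packet
> overhead is the worst case L2 figure assumed above):
>
>     # Compare one 185,000 byte jumbogram against ~20 packets at a
>     # 9216 MTU over a 10G link, mirroring the numbers above.
>     LINK_BPS = 10e9
>     OVERHEAD = 78                 # IPv6 40 + TCP 20 + Ethernet 18
>     PKTS = round(185_000 / 9216)  # ~20 packets
>
>     saved = OVERHEAD * PKTS       # 1,560 header bytes saved
>     t_jumbo = 185_000 * 8 / LINK_BPS                   # ~0.1480 ms
>     t_9216 = (9216 + OVERHEAD) * PKTS * 8 / LINK_BPS   # ~0.1487 ms
>     print(saved, t_jumbo * 1e3, t_9216 * 1e3)
>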
> I think if you compare 1500 to jumbograms there are significant gains in
> performance; however, to be accurate you have to compare against what is
> used today on production DC servers, which is 9216.
>
> Please correct me if I am wrong in my calculations, but I don't see any
> performance gain going from the 9216 used on DC servers today to
> jumbograms.
>

Hi, Gyan,

Generally, I agree there are diminishing returns as MTU increases if you
are just considering the packet processing overhead. However, there are
potentially other benefits to a larger MTU. For instance, Google moved to
9K MTUs because that allows them to send two 4K pages in a packet that can
easily be page flipped to user space addresses, thereby achieving a very
efficient form of zero copy. Using jumbograms could conceivably allow even
larger pages to be conveyed over the network (e.g. huge pages in Linux are
2M). Since this is a host-side technique, it's not necessary for the
network to support larger MTUs; the NIC can perform receive segmentation
offload to give the host larger packets. RFC 2675 is a win here because we
can use a standard format to represent these jumbo packets that might be
created locally on receive (as opposed to using some non-standard custom
OS API).
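
For anyone who wants to see what that looks like, here is a minimal Python
sketch of the RFC 2675 encoding (illustrative only, not the kernel code;
the helper name is mine): the IPv6 Payload Length field is set to zero and
the real length is carried as a Jumbo Payload option inside a Hop-by-Hop
Options header:

import struct

def jumbo_hbh(next_header: int, jumbo_len: int) -> bytes:
    """Hop-by-Hop Options header carrying the RFC 2675 Jumbo Payload option.

    jumbo_len is the packet length in octets excluding the IPv6 header but
    including this extension header; it must be greater than 65,535.  The
    enclosing IPv6 header carries Payload Length = 0 and Next Header = 0
    (Hop-by-Hop) so receivers know to look for this option.
    """
    if not 65535 < jumbo_len <= 0xFFFFFFFF:
        raise ValueError("Jumbo Payload Length must be > 65,535 and < 2**32")
    return struct.pack(
        "!BBBBI",
        next_header,  # e.g. 6 for TCP
        0,            # Hdr Ext Len: 0 means the header is 8 octets long
        0xC2,         # Option Type: Jumbo Payload
        4,            # Opt Data Len: 4 octets of length follow
        jumbo_len,    # 32-bit Jumbo Payload Length
    )

A driver or GRO layer that aggregates beyond 64K can hand the stack packets
in this standard representation rather than inventing a private OS API,
which is the point above.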

Tom


> Thanks
>
> Gyan
>
> On Sun, Jul 3, 2022 at 2:52 PM Tom Herbert <t...@herbertland.com> wrote:
>
>>
>>
>> On Sat, Jul 2, 2022, 9:26 PM Gyan Mishra <hayabusa...@gmail.com> wrote:
>>
>>>
>>> I reviewed the draft and don’t support WG adoption.
>>>
>>> The main reason, as stated by others, is the minimal gain beyond the
>>> super jumbo frames greater than 9000 bytes that are supported today;
>>> of the router / switch vendors, for example, Cisco supports a maximum
>>> 9216 MTU and Juniper supports a 16000 MTU.  Most hosts (Windows, Mac
>>> and Linux) support up to a 9216 MTU.
>>>
>>> Servers for 5G MEC (Multi Access Edge Compute) on the wireless RAN
>>> radio network X-Haul are now closer to the user: 100G connected servers
>>> for network slice services instantiated at ultra low (microsecond)
>>> latency, as well as 100G connected, ultra low latency servers in MSDCs
>>> (Massively Scalable Data Centers), all using super jumbo MTUs set to
>>> 9000+.
>>>
>>> Keeping in mind that MPLS, IPsec and NVO overlays must be taken into
>>> account, the super jumbo server MTU must be less than the network
>>> underlay or overlay MTU, so let's say it is set to a maximum of 9116,
>>> with Cisco for now being the lowest common denominator in a network
>>> with Cisco and Juniper routers.
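>>>
>>> As a rough illustration of that budget (the per-encap overheads below
>>> are typical textbook figures, not vendor specific numbers):
>>>
>>>     # Server MTU must fit under the underlay MTU after the largest
>>>     # encapsulation in the path is added.
>>>     UNDERLAY_MTU = 9216
>>>     ENCAP_OVERHEAD = {
>>>         "mpls_2_labels": 2 * 4,   # 4 bytes per label
>>>         "vxlan_ipv4": 50,         # Eth 14 + IP 20 + UDP 8 + VXLAN 8
>>>         "ipsec_esp_tunnel": 73,   # rough worst case, cipher dependent
>>>     }
>>>     server_mtu = UNDERLAY_MTU - max(ENCAP_OVERHEAD.values())
>>>     print(server_mtu)   # ~9143 here; setting 9116 leaves extra margin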
>>>
>>> At this time, on transport networks with OTNGN DWDM packet or TDM over
>>> IP/MPLS, a single wavelength can be 400G, with 800G per wavelength soon
>>> to be standard.
>>>
>>> At current network speeds on the core and transport side, even given the
>>> higher bandwidths, we are still at super jumbo (> 9000), and no vendor is
>>> close to 64k (65535 bytes), which is the maximum super jumbo size.
>>>
>>> Jumbograms (RFC 2675), as discussed in the draft, cover > 64k up to
>>> ~4.2B bytes.
>>>
>>> From a network technology point of view, we are a long way off from
>>> coming anywhere close to using RFC 2675 jumbograms.
>>>
>>> Only once we get close to the maximum super jumbo MTU, and still feel
>>> we need larger buffers for better performance, would this draft be even
>>> remotely a possibility.
>>>
>>> Also, the big issue here is that if you do the math you do get
>>> tremendous gains in throughput and performance going from 1500 bytes to
>>> jumbo and then to super jumbo up to 9216.  The performance gains are not
>>> there to go even to the super jumbo maximum of 64k, which has not
>>> happened and more than likely never will.
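>>>
>>> Rough numbers for that diminishing-returns point (packets per second
>>> needed to fill a 100G link at each size; a quick Python sketch, header
>>> overhead ignored for simplicity):
>>>
>>>     LINK_BPS = 100e9
>>>     for size in (1500, 9216, 65_535):
>>>         pps = LINK_BPS / (size * 8)
>>>         print(size, round(pps / 1e6, 2), "Mpps")
>>>     # ~8.33 Mpps at 1500, ~1.36 Mpps at 9216, ~0.19 Mpps at 64k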
>>>
>>> However, given the cost of extra buffers in server NICs and router
>>> hardware, and that the performance gains are nominal, it's not worth the
>>> investment for router and server vendors to ever support more than what
>>> is supported today, which is 9216 on servers and on Cisco gear.
>>>
>>> So the bottom line is that RFC 2675 will never come to fruition and
>>> should really be deprecated or made obsolete.
>>>
>>
>> Gyan,
>>
>> Please take a look at the reference I provided to the work that makes
>> GRO/GSO use jumbograms. Routers are not the only devices that can make
>> productive use of RFC 2675; hosts can use it, as shown in the example. So
>> IMO, there's no need to deprecate the protocol as it is useful.
>>
>> Tom
>>
>>
>>> I believe this was discussed on 6MAN.
>>>
>>> Kind Regards
>>>
>>> Gyan
>>>
>>> On Sat, Jul 2, 2022 at 11:12 AM Bob Hinden <bob.hin...@gmail.com> wrote:
>>>
>>>> Hi,
>>>>
>>>> I agree with the other comments that this shouldn’t be adopted at this
>>>> point.
>>>>
>>>> Another point is that what I understand this to be proposing would
>>>> appear to have a non-trivial effect on current transport protocols, as
>>>> it will add delay to create the “parcels”.  I don’t see this issue
>>>> discussed in the draft, other than a pointer to some other, perhaps
>>>> similar, work.
>>>>
>>>> Bob
>>>>
>>>>
>>>> > On Jul 1, 2022, at 5:17 PM, Tommy Pauly <tpauly=
>>>> 40apple....@dmarc.ietf.org> wrote:
>>>> >
>>>> > I agree with the points being raised by Tom and Joel. I don’t think
>>>> this is something intarea should adopt at this point. If there’s going to
>>>> be further discussion on this, I’d want to see more explanation of who
>>>> would intend to support and deploy this solution to the problem.
>>>> >
>>>> > If this is a matter of sending fewer packets over a particular link
>>>> of the network, the use of a proxy or tunnel between hosts may equally well
>>>> solve the problem without needing to make changes at this layer.
>>>> >
>>>> > Thanks,
>>>> > Tommy
>>>> >
>>>> >> On Jul 1, 2022, at 5:06 PM, Tom Herbert <t...@herbertland.com> wrote:
>>>> >>
>>>> >> At this point, I don't see IP parcels as being a significant benefit
>>>> to host performance which, as I understand it, is the primary motivation.
>>>> While it's an interesting idea, I don't support adoption.
>>>> >>
>>>> >> A recent patch to the Linux kernel allows for GSO/GRO segments
>>>> greater than 64K, using RFC 2675 jumbograms for reassembly, so the
>>>> limitations that were discussed on the list have been addressed in an
>>>> implementation. There is a nice writeup at
>>>> https://lwn.net/Articles/884104/.
>>>> >>
>>>> >> As Joel mentions, moving any sort of reassembly into network devices
>>>> is complex and problematic. For instance, if a middlebox is trying to
>>>> perform reassembly of packets for a flow not addressed to it, it is
>>>> implicitly requiring that all packets of the flow go through the device
>>>> performing reassembly, which is contrary to the end-to-end model. Also,
>>>> if reassembly requires buffering of messages then that creates a memory
>>>> requirement on middleboxes; hosts are in a better position to do reassembly
>>>> since they are only providing the service for themselves, as opposed to
>>>> some number of devices behind a middlebox.
>>>> >>
>>>> >> Tom
>>>> >>
>>>> >>
>>>> >>
>>>> >>
>>>> >> On Wed, Jun 22, 2022 at 12:25 PM Juan Carlos Zuniga (juzuniga)
>>>> <juzuniga=40cisco....@dmarc.ietf.org> wrote:
>>>> >> Dear IntArea WG,
>>>> >>
>>>> >>
>>>> >>
>>>> >> We are starting a 2-week call for adoption of the IP-Parcels draft:
>>>> >>
>>>> >>
>>>> https://www.ietf.org/archive/id/draft-templin-intarea-parcels-10.html
>>>> >>
>>>> >>
>>>> >>
>>>> >> The document has been discussed for some time and it has received
>>>> multiple comments.
>>>> >>
>>>> >>
>>>> >>
>>>> >> If you have an opinion on whether this document should be adopted by
>>>> the IntArea WG please indicate it on the list by the end of Wednesday July
>>>> 6th.
>>>> >>
>>>> >>
>>>> >>
>>>> >> Thanks,
>>>> >>
>>>> >>
>>>> >>
>>>> >> Juan-Carlos & Wassim
>>>> >>
>>>> >> (IntArea WG chairs)
>>>> >>
>>>> >>
>>>> >>
>>>> >
>>>>
>>>>
>>> --
>>>
>>> <http://www.verizon.com/>
>>>
>>> Gyan Mishra
>>>
>>> Network Solutions Architect
>>>
>>> Email gyan.s.mis...@verizon.com
>>>
>>> M 301 502-1347
>>>
>>>
>> --
>
> <http://www.verizon.com/>
>
> Gyan Mishra
>
> Network Solutions Architect
>
> Email gyan.s.mis...@verizon.com
>
> M 301 502-1347
>
>
_______________________________________________
Int-area mailing list
Int-area@ietf.org
https://www.ietf.org/mailman/listinfo/int-area
