Re: [Int-area] Jumbograms [was: Call for WG adoption of draft-templin-intarea-parcels-10]

2022-07-03 Thread Richard Li
>Thanks Brian for 6MAN clarification on RFC 2365 that it has been implemented 
>for very specialized environments

You mean RFC 2675, not RFC 2365, don't you?


Richard


From: Int-area  On Behalf Of Gyan Mishra
Sent: Sunday, July 3, 2022 5:43 AM
To: Brian E Carpenter 
Cc: int-area@ietf.org
Subject: Re: [Int-area] Jumbograms [was: Call for WG adoption of 
draft-templin-intarea-parcels-10]


Thanks Brian for 6MAN clarification on RFC 2365 that it has been implemented 
for very specialized environments.

I agree it does no harm to anyone who doesn't use it.

If you have a link to the application where it was implemented, it would be 
greatly appreciated.

Thanks

Gyan

On Sun, Jul 3, 2022 at 1:08 AM Brian E Carpenter 
<brian.e.carpen...@gmail.com> wrote:
Hi Gyan,

On 03-Jul-22 16:25, Gyan Mishra wrote:
...
> So bottom line is RFC 2675 would never come to fruition and should really be 
> deprecated or made obsolete.

Why? Firstly, it has come to fruition, as an earlier message in this thread 
said. Secondly, it was intentionally designed for very special environments 
with unusual hardware, rather than for any commodity environment. Thirdly, it 
does no harm whatever to anyone that doesn't use it, so there is no reason 
whatever to deprecate it.

> I believe this was discussed on 6MAN.

I believe the conclusion was to leave it as is.

Brian
--


Gyan Mishra

Network Solutions Architect

Email gyan.s.mis...@verizon.com

M 301 502-1347

___
Int-area mailing list
Int-area@ietf.org
https://www.ietf.org/mailman/listinfo/int-area


Re: [Int-area] Call for WG adoption of draft-templin-intarea-parcels-10

2022-07-03 Thread Gyan Mishra
Hi Tom

That makes sense: a host-side optimization, taking advantage of host-localized
use of the jumbogram standard with 64k segmentation chunks before packetizing
onto the wire, for host endpoint interoperability.

Thanks

Gyan

On Sun, Jul 3, 2022 at 6:17 PM Tom Herbert  wrote:

>
>
> On Sun, Jul 3, 2022, 2:56 PM Gyan Mishra  wrote:
>
>>
>> Hi Tom
>>
>> Thanks for the info on BIG TCP and the jumbogram use case testing.  From
>> that link it does sound like performance increases going to jumbograms:
>> the example 185,000-byte payload yielded a 50% increase in throughput.
>>
>> Doing the math, compare the 9216-byte packet commonly used on servers
>> today against the 185,000-byte MTU use case.
>>
>> 185,000 / 9216 ≈ 20 packets
>>
>> IPv6 + TCP + Ethernet header overhead = 78 bytes
>> (L2 MTU worst case; at L3 the overhead would be 60)
>>
>> 78 bytes x 20 = 1,560 bytes saved being transmitted over the wire
>>
>> 185,000 bytes over a 10G link: 0.148 ms
>>
>> (9216 + 78) x 20 packets = 185,880 bytes over a 10G link: 0.1487 ms
>>
>> From my calculations (please check my math), the performance gain with
>> jumbograms is negligible, almost nil.
>>
>> I think if you compare 1500 to jumbograms there are significant gains in
>> performance; however, to be accurate you have to compare against what is
>> used today in production DC servers, which is 9216.
>>
>> Please correct me if I am wrong in my calculations, but I don’t see any
>> performance gain going from the 9216 used on DC servers today to using
>> jumbograms.
>>
>
> Hi, Gyan,
>
> Generally, I agree there are diminishing returns as MTU increases if you
> are just considering the packet processing overhead. However, there are
> potentially other benefits to a larger MTU. For instance, Google moved to
> 9K MTUs because that allows them to send two 4K pages in a packet that can
> easily be page flipped to user-space addresses, thereby achieving a very
> efficient form of zero copy. Using jumbograms could conceivably allow even
> larger pages to be conveyed over the network (e.g., huge pages in Linux
> are 2M). Since this is a host-side technique, it's not necessary for the
> network to support larger MTUs, as the NIC can perform receive
> segmentation offload to give the host larger packets. RFC 2675 is a win
> here because we can use a standard format to represent these jumbo
> packets that might be created locally on receive (as opposed to using
> some non-standard custom OS API).
>
> Tom
>
>
>> Thanks
>>
>> Gyan
>>
>> On Sun, Jul 3, 2022 at 2:52 PM Tom Herbert  wrote:
>>
>>>
>>>
>>> On Sat, Jul 2, 2022, 9:26 PM Gyan Mishra  wrote:
>>>

 I reviewed the draft and don’t support WG adoption.

 The main reason, as stated by others, is the minimal gain from super jumbo
 frames greater than the 9000 bytes supported today; most router/switch
 vendors, for example Cisco, support a maximum 9216 MTU, and Juniper
 supports a 16000 MTU.  Most hosts (Windows, macOS, and Linux) support up
 to a 9216 MTU.

 Servers for 5G MEC (Multi-Access Edge Compute) on the wireless RAN X-haul
 network are now closer to the user: 100G-connected servers for network
 slice services instantiated at ultra-low (microsecond) latency, as well as
 MSDC (Massively Scalable Data Center) 100G-connected servers at ultra-low
 latency, all using super jumbo MTUs set to 9000+.

 Keeping in mind that MPLS, IPsec, and NVO overlays must be taken into
 account, the super jumbo server MTU must be less than the network underlay
 or overlay MTU, so let’s say it is set to a maximum of 9116 for now, Cisco
 being the lowest common denominator in a network with Cisco and Juniper
 routers.

 At this time, on transport networks with OTN/DWDM packet or TDM over
 IP/MPLS, a single wavelength can be 400G, and 800G per wavelength will
 soon be standard.

 At current network speeds on the core and transport side, even given the
 higher bandwidths, we are still at super jumbo (> 9000), and no vendor is
 close to 64k (65535), the maximum super jumbo size.

 Jumbograms (RFC 2675), as discussed in the draft, range from > 64k up to
 about 4.2 billion bytes.

 From a network technology point of view, we are a long way from coming
 anywhere close to using RFC 2675 jumbograms.

 Only when we get close to the maximum super jumbo MTU, and still feel we
 need larger buffers for better performance, would this draft be even
 remotely a possibility.

 Also, the big issue here is that if you do the math you do get tremendous
 gains in throughput and performance going from 1500 bytes to jumbo and
 then super jumbo up to 9216.  The performance gains are not there to go
 all the way to the super jumbo maximum of 64k, which has not happened and
 more than likely never will.

 However, the cost of extra buffers on server NICs and router hardware
 shows that the performance gains are nominal, and it’s not 

Re: [Int-area] Call for WG adoption of draft-templin-intarea-parcels-10

2022-07-03 Thread Tom Herbert
On Sun, Jul 3, 2022, 2:56 PM Gyan Mishra  wrote:

>
> Hi Tom
>
> Thanks for the info on BIG TCP and the jumbogram use case testing.  From
> that link it does sound like performance increases going to jumbograms:
> the example 185,000-byte payload yielded a 50% increase in throughput.
>
> Doing the math, compare the 9216-byte packet commonly used on servers
> today against the 185,000-byte MTU use case.
>
> 185,000 / 9216 ≈ 20 packets
>
> IPv6 + TCP + Ethernet header overhead = 78 bytes
> (L2 MTU worst case; at L3 the overhead would be 60)
>
> 78 bytes x 20 = 1,560 bytes saved being transmitted over the wire
>
> 185,000 bytes over a 10G link: 0.148 ms
>
> (9216 + 78) x 20 packets = 185,880 bytes over a 10G link: 0.1487 ms
>
> From my calculations (please check my math), the performance gain with
> jumbograms is negligible, almost nil.
>
> I think if you compare 1500 to jumbograms there are significant gains in
> performance; however, to be accurate you have to compare against what is
> used today in production DC servers, which is 9216.
>
> Please correct me if I am wrong in my calculations, but I don’t see any
> performance gain going from the 9216 used on DC servers today to using
> jumbograms.
>

Hi, Gyan,

Generally, I agree there are diminishing returns as MTU increases if you
are just considering the packet processing overhead. However, there are
potentially other benefits to a larger MTU. For instance, Google moved to
9K MTUs because that allows them to send two 4K pages in a packet that can
easily be page flipped to user-space addresses, thereby achieving a very
efficient form of zero copy. Using jumbograms could conceivably allow even
larger pages to be conveyed over the network (e.g., huge pages in Linux are
2M). Since this is a host-side technique, it's not necessary for the
network to support larger MTUs, as the NIC can perform receive segmentation
offload to give the host larger packets. RFC 2675 is a win here because we
can use a standard format to represent these jumbo packets that might be
created locally on receive (as opposed to using some non-standard custom
OS API).
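For reference, the standard format in question is the RFC 2675 Jumbo Payload
Hop-by-Hop option. Below is a minimal sketch of how such a header is laid out
(addresses zeroed, field values per the RFC); it is an illustration of the wire
format, not any particular stack's implementation:

```python
import struct

def build_jumbogram_header(jumbo_payload_len: int, next_header: int = 6) -> bytes:
    """IPv6 fixed header plus a Hop-by-Hop Jumbo Payload option (RFC 2675).

    jumbo_payload_len: length of the packet in octets, excluding the 40-byte
    IPv6 header but *including* this 8-octet Hop-by-Hop extension header.
    """
    if not (65536 <= jumbo_payload_len <= 0xFFFFFFFF):
        raise ValueError("jumbograms carry 65,536 to 4,294,967,295 octets")

    # IPv6 fixed header: version 6; Payload Length MUST be 0 for a jumbogram;
    # Next Header = 0 (Hop-by-Hop Options); hop limit 64; zeroed addresses.
    ver_tc_flow = 6 << 28
    ipv6 = struct.pack("!IHBB16s16s", ver_tc_flow, 0, 0, 64, bytes(16), bytes(16))

    # Hop-by-Hop header: Next Header (6 = TCP), Hdr Ext Len = 0 (8 octets),
    # then the Jumbo Payload option: Type 0xC2, Opt Data Len 4, 32-bit length.
    hbh = struct.pack("!BBBBI", next_header, 0, 0xC2, 4, jumbo_payload_len)
    return ipv6 + hbh

# Example: a 185,000-byte TCP payload plus the 8-octet HBH header itself.
hdr = build_jumbogram_header(185_000 + 8)
assert len(hdr) == 48
```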

Tom


> Thanks
>
> Gyan
>
> On Sun, Jul 3, 2022 at 2:52 PM Tom Herbert  wrote:
>
>>
>>
>> On Sat, Jul 2, 2022, 9:26 PM Gyan Mishra  wrote:
>>
>>>
>>> I reviewed the draft and don’t support WG adoption.
>>>
>>> The main reason, as stated by others, is the minimal gain from super
>>> jumbo frames greater than the 9000 bytes supported today; most
>>> router/switch vendors, for example Cisco, support a maximum 9216 MTU,
>>> and Juniper supports a 16000 MTU.  Most hosts (Windows, macOS, and
>>> Linux) support up to a 9216 MTU.
>>>
>>> Servers for 5G MEC (Multi-Access Edge Compute) on the wireless RAN
>>> X-haul network are now closer to the user: 100G-connected servers for
>>> network slice services instantiated at ultra-low (microsecond) latency,
>>> as well as MSDC (Massively Scalable Data Center) 100G-connected servers
>>> at ultra-low latency, all using super jumbo MTUs set to 9000+.
>>>
>>> Keeping in mind that MPLS, IPsec, and NVO overlays must be taken into
>>> account, the super jumbo server MTU must be less than the network
>>> underlay or overlay MTU, so let’s say it is set to a maximum of 9116 for
>>> now, Cisco being the lowest common denominator in a network with Cisco
>>> and Juniper routers.
>>>
>>> At this time, on transport networks with OTN/DWDM packet or TDM over
>>> IP/MPLS, a single wavelength can be 400G, and 800G per wavelength will
>>> soon be standard.
>>>
>>> At current network speeds on the core and transport side, even given the
>>> higher bandwidths, we are still at super jumbo (> 9000), and no vendor
>>> is close to 64k (65535), the maximum super jumbo size.
>>>
>>> Jumbograms (RFC 2675), as discussed in the draft, range from > 64k up to
>>> about 4.2 billion bytes.
>>>
>>> From a network technology point of view, we are a long way from coming
>>> anywhere close to using RFC 2675 jumbograms.
>>>
>>> Only when we get close to the maximum super jumbo MTU, and still feel we
>>> need larger buffers for better performance, would this draft be even
>>> remotely a possibility.
>>>
>>> Also, the big issue here is that if you do the math you do get
>>> tremendous gains in throughput and performance going from 1500 bytes to
>>> jumbo and then super jumbo up to 9216.  The performance gains are not
>>> there to go all the way to the super jumbo maximum of 64k, which has not
>>> happened and more than likely never will.
>>>
>>> However, the cost of extra buffers on server NICs and router hardware
>>> shows that the performance gains are nominal, and it’s not worth the
>>> investment by router and server vendors to ever support more than what
>>> is supported today: 9216 on servers and what Cisco supports today.
>>>
>>> So the bottom line is that RFC 2675 will never come to fruition and
>>> should really be deprecated or made obsolete.
>>>
>>
>> Gyan,
>>
>> Please take a look at the reference I provided to the work to make
>> GRO/GSO jumbograms. Routers are not the 

Re: [Int-area] Call for WG adoption of draft-templin-intarea-parcels-10

2022-07-03 Thread Gyan Mishra
Hi Tom

Thanks for the info on BIG TCP and the jumbogram use case testing.  From that
link it does sound like performance increases going to jumbograms: the
example 185,000-byte payload yielded a 50% increase in throughput.

Doing the math, compare the 9216-byte packet commonly used on servers today
against the 185,000-byte MTU use case.

185,000 / 9216 ≈ 20 packets

IPv6 + TCP + Ethernet header overhead = 78 bytes
(L2 MTU worst case; at L3 the overhead would be 60)

78 bytes x 20 = 1,560 bytes saved being transmitted over the wire

185,000 bytes over a 10G link: 0.148 ms

(9216 + 78) x 20 packets = 185,880 bytes over a 10G link: 0.1487 ms

From my calculations (please check my math), the performance gain with
jumbograms is negligible, almost nil.

I think if you compare 1500 to jumbograms there are significant gains in
performance; however, to be accurate you have to compare against what is
used today in production DC servers, which is 9216.

Please correct me if I am wrong in my calculations, but I don’t see any
performance gain going from the 9216 used on DC servers today to using
jumbograms.
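The back-of-the-envelope arithmetic above can be written down in a few lines.
The 78-byte per-packet header overhead and the 10G link rate are the
assumptions from this thread, not measured values:

```python
import math

HDR_BYTES = 78    # assumed IPv6 + TCP + Ethernet overhead per packet
LINK_BPS = 10e9   # 10G link

def wire_time_ms(payload_bytes: int, chunk: int) -> float:
    """Serialization time to send payload_bytes in chunk-sized packets."""
    pkts = math.ceil(payload_bytes / chunk)
    total_bits = (payload_bytes + pkts * HDR_BYTES) * 8
    return total_bits / LINK_BPS * 1000

jumbogram   = wire_time_ms(185_000, 185_000)  # one BIG TCP-style jumbogram
super_jumbo = wire_time_ms(185_000, 9216)     # ~21 packets at a 9216 MTU

# The header bytes avoided shave well under 1% off the wire time.
saving = (super_jumbo - jumbogram) / super_jumbo
```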

Thanks

Gyan

On Sun, Jul 3, 2022 at 2:52 PM Tom Herbert  wrote:

>
>
> On Sat, Jul 2, 2022, 9:26 PM Gyan Mishra  wrote:
>
>>
>> I reviewed the draft and don’t support WG adoption.
>>
>> The main reason, as stated by others, is the minimal gain from super
>> jumbo frames greater than the 9000 bytes supported today; most
>> router/switch vendors, for example Cisco, support a maximum 9216 MTU, and
>> Juniper supports a 16000 MTU.  Most hosts (Windows, macOS, and Linux)
>> support up to a 9216 MTU.
>>
>> Servers for 5G MEC (Multi-Access Edge Compute) on the wireless RAN X-haul
>> network are now closer to the user: 100G-connected servers for network
>> slice services instantiated at ultra-low (microsecond) latency, as well
>> as MSDC (Massively Scalable Data Center) 100G-connected servers at
>> ultra-low latency, all using super jumbo MTUs set to 9000+.
>>
>> Keeping in mind that MPLS, IPsec, and NVO overlays must be taken into
>> account, the super jumbo server MTU must be less than the network
>> underlay or overlay MTU, so let’s say it is set to a maximum of 9116 for
>> now, Cisco being the lowest common denominator in a network with Cisco
>> and Juniper routers.
>>
>> At this time, on transport networks with OTN/DWDM packet or TDM over
>> IP/MPLS, a single wavelength can be 400G, and 800G per wavelength will
>> soon be standard.
>>
>> At current network speeds on the core and transport side, even given the
>> higher bandwidths, we are still at super jumbo (> 9000), and no vendor is
>> close to 64k (65535), the maximum super jumbo size.
>>
>> Jumbograms (RFC 2675), as discussed in the draft, range from > 64k up to
>> about 4.2 billion bytes.
>>
>> From a network technology point of view, we are a long way from coming
>> anywhere close to using RFC 2675 jumbograms.
>>
>> Only when we get close to the maximum super jumbo MTU, and still feel we
>> need larger buffers for better performance, would this draft be even
>> remotely a possibility.
>>
>> Also, the big issue here is that if you do the math you do get tremendous
>> gains in throughput and performance going from 1500 bytes to jumbo and
>> then super jumbo up to 9216.  The performance gains are not there to go
>> all the way to the super jumbo maximum of 64k, which has not happened and
>> more than likely never will.
>>
>> However, the cost of extra buffers on server NICs and router hardware
>> shows that the performance gains are nominal, and it’s not worth the
>> investment by router and server vendors to ever support more than what is
>> supported today: 9216 on servers and what Cisco supports today.
>>
>> So the bottom line is that RFC 2675 will never come to fruition and
>> should really be deprecated or made obsolete.
>>
>
> Gyan,
>
> Please take a look at the reference I provided to the work to make GRO/GSO
> jumbograms. Routers are not the only devices that can make productive use
> of RFC 2675; hosts can use it, as shown in the example. So IMO there's no
> need to deprecate the protocol, as it is useful.
>
> Tom
>
>
>> I believe this was discussed on 6MAN.
>>
>> Kind Regards
>>
>> Gyan
>>
>> On Sat, Jul 2, 2022 at 11:12 AM Bob Hinden  wrote:
>>
>>> Hi,
>>>
>>> I agree with the other comments that this shouldn’t be adopted at this
>>> point.
>>>
>>> Another point: what I understand this to be proposing would appear to
>>> have a non-trivial effect on current transport protocols, as it will add
>>> delay to create the “parcels”.  I don’t see this issue discussed in the
>>> draft, other than pointing to some other, perhaps similar, work.
>>>
>>> Bob
>>>
>>>
>>> > On Jul 1, 2022, at 5:17 PM, Tommy Pauly >> 40apple@dmarc.ietf.org> wrote:
>>> >
>>> > I agree with the points being raised by Tom and Joel. I don’t think
>>> this is something intarea should adopt at this point. If there’s going to
>>> be further discussion on this, I’d want to see more explanation of who
>>> would intend to support and deploy this 

Re: [Int-area] Jumbograms [was: Call for WG adoption of draft-templin-intarea-parcels-10]

2022-07-03 Thread Gyan Mishra
Thanks

Gyan

On Sun, Jul 3, 2022 at 4:44 PM Brian E Carpenter <
brian.e.carpen...@gmail.com> wrote:

> Gyan,
>
> https://mailarchive.ietf.org/arch/msg/int-area/gNA1peWM46q1upmSybUkcd9MUtY
>
> There was also this report sometime last year (look for slide 8):
>
> https://netdevconf.info/0x15/session.html?BIG-TCP
>
> Regards
> Brian Carpenter
>
> On 04-Jul-22 00:42, Gyan Mishra wrote:
> >
> > Thanks Brian for 6MAN clarification on RFC 2365 that it has been
> implemented for very specialized environments.
> >
> > I agree it does no harm to anyone who doesn’t use it.
> >
> > If you have a link to the application where it was implemented, it
> > would be greatly appreciated.
> >
> > Thanks
> >
> > Gyan
> >
> > On Sun, Jul 3, 2022 at 1:08 AM Brian E Carpenter
> > <brian.e.carpen...@gmail.com> wrote:
> >
> > Hi Gyan,
> >
> > On 03-Jul-22 16:25, Gyan Mishra wrote:
> > ...
> >  > So bottom line is RFC 2675 would never come to fruition and
> should really be deprecated or made obsolete.
> >
> > Why? Firstly, it has come to fruition, as an earlier message in this
> thread said. Secondly, it was intentionally designed for very special
> environments with unusual hardware, rather than for any commodity
> environment. Thirdly, it does no harm whatever to anyone that doesn't use
> it, so there is no reason whatever to deprecate it.
> >
> >  > I believe this was discussed on 6MAN.
> >
> > I believe the conclusion was to leave it as is.
> >
> >  Brian
> >
> > --
> >
> > 
> >
> > Gyan Mishra
> >
> > Network Solutions Architect
> >
> > Email gyan.s.mis...@verizon.com
> >
> > M 301 502-1347
> >
> >
>
-- 



Gyan Mishra

Network Solutions Architect

Email gyan.s.mis...@verizon.com

M 301 502-1347


Re: [Int-area] Jumbograms [was: Call for WG adoption of draft-templin-intarea-parcels-10]

2022-07-03 Thread Brian E Carpenter

Gyan,

https://mailarchive.ietf.org/arch/msg/int-area/gNA1peWM46q1upmSybUkcd9MUtY

There was also this report sometime last year (look for slide 8):

https://netdevconf.info/0x15/session.html?BIG-TCP

Regards
   Brian Carpenter

On 04-Jul-22 00:42, Gyan Mishra wrote:


Thanks Brian for 6MAN clarification on RFC 2365 that it has been implemented 
for very specialized environments.

I agree it does no harm to anyone who doesn’t use it.

If you have a link to the application where it was implemented, it would be 
greatly appreciated.

Thanks

Gyan

On Sun, Jul 3, 2022 at 1:08 AM Brian E Carpenter <brian.e.carpen...@gmail.com> wrote:

Hi Gyan,

On 03-Jul-22 16:25, Gyan Mishra wrote:
...
 > So bottom line is RFC 2675 would never come to fruition and should 
really be deprecated or made obsolete.

Why? Firstly, it has come to fruition, as an earlier message in this thread 
said. Secondly, it was intentionally designed for very special environments 
with unusual hardware, rather than for any commodity environment. Thirdly, it 
does no harm whatever to anyone that doesn't use it, so there is no reason 
whatever to deprecate it.

 > I believe this was discussed on 6MAN.

I believe the conclusion was to leave it as is.

     Brian

--



Gyan Mishra

Network Solutions Architect

Email gyan.s.mis...@verizon.com

M 301 502-1347





Re: [Int-area] Call for WG adoption of draft-templin-intarea-parcels-10

2022-07-03 Thread Tom Herbert
On Sat, Jul 2, 2022, 9:26 PM Gyan Mishra  wrote:

>
> I reviewed the draft and don’t support WG adoption.
>
> The main reason, as stated by others, is the minimal gain from super jumbo
> frames greater than the 9000 bytes supported today; most router/switch
> vendors, for example Cisco, support a maximum 9216 MTU, and Juniper
> supports a 16000 MTU.  Most hosts (Windows, macOS, and Linux) support up
> to a 9216 MTU.
>
> Servers for 5G MEC (Multi-Access Edge Compute) on the wireless RAN X-haul
> network are now closer to the user: 100G-connected servers for network
> slice services instantiated at ultra-low (microsecond) latency, as well as
> MSDC (Massively Scalable Data Center) 100G-connected servers at ultra-low
> latency, all using super jumbo MTUs set to 9000+.
>
> Keeping in mind that MPLS, IPsec, and NVO overlays must be taken into
> account, the super jumbo server MTU must be less than the network underlay
> or overlay MTU, so let’s say it is set to a maximum of 9116 for now, Cisco
> being the lowest common denominator in a network with Cisco and Juniper
> routers.
>
> At this time, on transport networks with OTN/DWDM packet or TDM over
> IP/MPLS, a single wavelength can be 400G, and 800G per wavelength will
> soon be standard.
>
> At current network speeds on the core and transport side, even given the
> higher bandwidths, we are still at super jumbo (> 9000), and no vendor is
> close to 64k (65535), the maximum super jumbo size.
>
> Jumbograms (RFC 2675), as discussed in the draft, range from > 64k up to
> about 4.2 billion bytes.
>
> From a network technology point of view, we are a long way from coming
> anywhere close to using RFC 2675 jumbograms.
>
> Only when we get close to the maximum super jumbo MTU, and still feel we
> need larger buffers for better performance, would this draft be even
> remotely a possibility.
>
> Also, the big issue here is that if you do the math you do get tremendous
> gains in throughput and performance going from 1500 bytes to jumbo and
> then super jumbo up to 9216.  The performance gains are not there to go
> all the way to the super jumbo maximum of 64k, which has not happened and
> more than likely never will.
>
> However, the cost of extra buffers on server NICs and router hardware
> shows that the performance gains are nominal, and it’s not worth the
> investment by router and server vendors to ever support more than what is
> supported today: 9216 on servers and what Cisco supports today.
>
> So the bottom line is that RFC 2675 will never come to fruition and should
> really be deprecated or made obsolete.
>

Gyan,

Please take a look at the reference I provided to the work to make GRO/GSO
jumbograms. Routers are not the only devices that can make productive use
of RFC 2675; hosts can use it, as shown in the example. So IMO there's no
need to deprecate the protocol, as it is useful.

Tom


> I believe this was discussed on 6MAN.
>
> Kind Regards
>
> Gyan
>
> On Sat, Jul 2, 2022 at 11:12 AM Bob Hinden  wrote:
>
>> Hi,
>>
>> I agree with the other comments that this shouldn’t be adopted at this
>> point.
>>
>> Another point: what I understand this to be proposing would appear to
>> have a non-trivial effect on current transport protocols, as it will add
>> delay to create the “parcels”.  I don’t see this issue discussed in the
>> draft, other than pointing to some other, perhaps similar, work.
>>
>> Bob
>>
>>
>> > On Jul 1, 2022, at 5:17 PM, Tommy Pauly > 40apple@dmarc.ietf.org> wrote:
>> >
>> > I agree with the points being raised by Tom and Joel. I don’t think
>> this is something intarea should adopt at this point. If there’s going to
>> be further discussion on this, I’d want to see more explanation of who
>> would intend to support and deploy this solution to the problem.
>> >
>> > If this is a matter of sending fewer packets over a particular link of
>> the network, the use of a proxy or tunnel between hosts may equally well
>> solve the problem without needing to make changes at this layer.
>> >
>> > Thanks,
>> > Tommy
>> >
>> >> On Jul 1, 2022, at 5:06 PM, Tom Herbert  wrote:
>> >>
>> >> At this point, I don't see IP parcels as being a significant benefit
>> to host performance which, as I understand it, is the primary motivation.
>> While it's an interesting idea, I don't support adoption.
>> >>
>> >> A recent patch to the Linux kernel allows for GSO/GRO segments greater
>> than 64K, using RFC2675 Jumbograms to reassemble so those limitations which
>> were discussed on the list have been addressed in implementation. There is
>> a nice writeup in https://lwn.net/Articles/884104/.
>> >>
>> >> As Joel mentions moving any sort of reassembly into network devices is
>> complex and problematic. For instance, if a middlebox is trying to perform
>> reassembly of packets for a flow not addressed to it, it's implicitly
>> requiring that all packets of the flow that go through the device perform
>> reassembly which is contrary to the end-to-end model. Also, if reassembly
>> requires buffering of messages then 

Re: [Int-area] Jumbograms [was: Call for WG adoption of draft-templin-intarea-parcels-10]

2022-07-03 Thread Gyan Mishra
Thanks Brian for 6MAN clarification on RFC 2365 that it has been
implemented for very specialized environments.

I agree it does no harm to anyone who doesn’t use it.

If you have a link to the application where it was implemented, it would
be greatly appreciated.

Thanks

Gyan

On Sun, Jul 3, 2022 at 1:08 AM Brian E Carpenter <
brian.e.carpen...@gmail.com> wrote:

> Hi Gyan,
>
> On 03-Jul-22 16:25, Gyan Mishra wrote:
> ...
> > So bottom line is RFC 2675 would never come to fruition and should
> really be deprecated or made obsolete.
>
> Why? Firstly, it has come to fruition, as an earlier message in this
> thread said. Secondly, it was intentionally designed for very special
> environments with unusual hardware, rather than for any commodity
> environment. Thirdly, it does no harm whatever to anyone that doesn't use
> it, so there is no reason whatever to deprecate it.
>
> > I believe this was discussed on 6MAN.
>
> I believe the conclusion was to leave it as is.
>
> Brian
>
> --



Gyan Mishra

Network Solutions Architect

Email gyan.s.mis...@verizon.com

M 301 502-1347