Hi,
We have run a test for an RPC workload with 1MB IO sizes, and collected
the tcp_default_output() len (length) during the first pass of the output loop.
In such a scenario, where the application frequently introduces small
pauses (since the next large IO is only sent after the corresponding
What is the link speed that you're working with?
A long time ago, when I worked for a now-defunct 10GbE NIC vendor, I
experimented with the benefits of TSO as we varied the max TSO size. I cannot
recall the platform (it could have been OSX, Solaris, FreeBSD or Linux). At
the time (~2006?)
On Fri, Feb 2, 2024 at 1:21 AM Scheffenegger, Richard wrote:
>
> Hi,
>
> We have run a test for an RPC workload with 1MB IO sizes, and collected the
> tcp_default_output() len (length) during the first pass of the output loop.
>
> In such a scenario, where the application frequently introduces small
On Fri, Feb 2, 2024, at 9:05 PM, Rick Macklem wrote:
> > But the page size is only 4K on most platforms. So while an M_EXTPGS mbuf
> > can hold 5 pages (..from memory, too lazy to do the math right now) and
> > reduces socket buffer mbuf chain lengths by a factor of 10 or so (2k vs 20k
> >
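The "factor of 10 or so" quoted above can be checked with quick arithmetic. This is only a sketch using the figures mentioned in the thread (4K pages, 5 pages per M_EXTPGS mbuf, 2K standard clusters); the real chain lengths depend on how the buffers were built.

```python
# Chain-length comparison for a 1 Mbyte socket buffer: standard 2K mbuf
# clusters vs. M_EXTPGS (unmapped, multi-page) mbufs, using the figures
# quoted in the thread.
IO_SIZE = 1024 * 1024          # 1 Mbyte of data
CLUSTER = 2 * 1024             # standard 2K mbuf cluster
EXTPGS = 5 * 4 * 1024          # one M_EXTPGS mbuf: 5 x 4K pages = 20K

cluster_chain = IO_SIZE // CLUSTER      # 512 mbufs at 2K each
extpgs_chain = -(-IO_SIZE // EXTPGS)    # 52 mbufs at 20K each (ceiling)
print(cluster_chain, extpgs_chain, cluster_chain / extpgs_chain)
```

The ratio comes out just under 10, which matches the "factor of 10 or so (2k vs 20k)" in the quote.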
On Fri, Feb 2, 2024, at 6:13 PM, Rick Macklem wrote:
> A factor here is the if_hw_tsomaxsegcount limit. For example, a 1Mbyte NFS
> write request or read reply will result in a 514-element mbuf chain. Each of
> these (mostly 2K mbuf clusters) is a non-contiguous data segment. (I suspect
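The 514-element figure works out with back-of-the-envelope arithmetic. This is a sketch, not taken from the thread: the exact number of header mbufs at the front of the chain depends on how the RPC/NFS request is constructed, and two is assumed here only to match the quoted total.

```python
# Rough mbuf-chain arithmetic for a 1 Mbyte NFS write request or read
# reply, assuming the bulk data sits in standard 2K mbuf clusters.
IO_SIZE = 1024 * 1024        # 1 Mbyte payload
MCLBYTES = 2048              # standard mbuf cluster size on FreeBSD

data_mbufs = IO_SIZE // MCLBYTES   # 512 clusters for the payload
header_mbufs = 2                   # assumed RPC/NFS header mbufs
chain_len = data_mbufs + header_mbufs
print(chain_len)                   # 514, the figure quoted above
```

Since each cluster is a separate, non-contiguous segment to the NIC, a chain this long runs straight into typical if_hw_tsomaxsegcount limits.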
On Fri, Feb 2, 2024 at 4:48 PM Drew Gallatin wrote:
>
>
>
> On Fri, Feb 2, 2024, at 6:13 PM, Rick Macklem wrote:
>
> A factor here is the if_hw_tsomaxsegcount limit. For example, a 1Mbyte NFS
> write request or read reply will result in a 514-element mbuf chain. Each of
> these (mostly 2K
On Fri, Feb 2, 2024 at 6:20 PM Drew Gallatin wrote:
>
>
>
> On Fri, Feb 2, 2024, at 9:05 PM, Rick Macklem wrote:
>
> > But the page size is only 4K on most platforms. So while an M_EXTPGS mbuf
> > can hold 5 pages (..from memory, too lazy to do the math right now) and
> > reduces socket buffer