Erik Nordmark wrote:

> I wasn't just concerned about the complexity in the driver - I am 
> concerned about the total system complexity caused by the MDT 
> implementation.  The amount of code that needs to know about M_MULTIDATA 
> is scary, and in many cases there are different code paths to deal with 
> those which makes understanding, supporting, and bug fixing much more 
> complex.


I think one reason for the above is that we must remain
backward compatible, so we need to keep the good old
path around forever.  The sad truth is that we will
always be limited by the existing mblk construct if
we cannot accept different code paths.  Note that I
am not promoting multiple code paths.


> Architecturally it makes more sense to have everything about GLD just 
> view everything as TCP LSO. In the case the hardware doesn't handle LSO 
> it is quite efficient to convert the LSO format to an "MDT format". By 
> this I mean take LSO's 'one TCP/IP header, one large payload' into 
> 'multiple TCP/IP headers, separate payloads but on the same pages'. That 
> means you'd get the performance benefit of doing DMA/IOMMU setup for the 
> single large payload and page with N TCP/IP headers.


As Jim stated, the question is whether we want to do
the above given the already known problems.  For example,
suppose TCP wants to do better PMTUd and wants to change
the segment size on the fly.  In order to recover faster
in case the PMTU has not actually changed, it decides to
send alternating small and big segments.  I don't think
the above GLD LSO scheme allows this easily; TCP would
need to do multiple sends, just as it does today.  And I
suspect the GLD LSO scheme still won't solve the issues
I raised in my previous email.  So maybe we should just
do the simple thing, forget about this GLD LSO idea, and
make the code path simple and fast enough.



-- 

                                                K. Poon.
                                                [EMAIL PROTECTED]

_______________________________________________
networking-discuss mailing list
[email protected]
