Again, I'm the last person to fight against code cleanup, but it's not
quite as cut-and-dried as you're making it out to be.  BTW, most of the
points you make are also strong arguments for making GLDv3 public, which
I'm quite in support of.

 >     1) these drivers have a lot of cut-n-paste code... e.g. DLPI 
 > handling in hme, eri, probably contribute over 5K lines _each_.  Most of 
 > that code is also duplicated in both GLDv2 and in nemo.  More lines of 
 > code == more chances for bugs == more support headaches.

Yes, we're well aware of the swill.  If you look closely, you'll also find
all sorts of one-off bugfixes in various drivers for various customer
escalations -- we need to be careful not to regress any of those, and, when
necessary, to pull those fixes into the framework or into other drivers.
(As an example of this sort of oddity, check out eri_mk_mblk_tail_space(),
which has now spread to some other drivers.)
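
That said, the mechanical part of a nemo conversion is well understood:
the per-driver DLPI handling goes away, and the driver instead hands the
mac layer a table of entry points via mac_register().  Very roughly -- and
hedging, since the exact structure layouts depend on the gate, and every
xx_* name below is a hypothetical stand-in for a driver like hme or eri:

    #include <sys/types.h>
    #include <sys/errno.h>
    #include <sys/stream.h>
    #include <sys/ddi.h>
    #include <sys/sunddi.h>
    #include <sys/ethernet.h>
    #include <sys/mac.h>
    #include <sys/mac_ether.h>

    /* Hypothetical per-instance state for a legacy driver ("xx"). */
    typedef struct xx {
            dev_info_t      *xx_dip;
            mac_handle_t    xx_mh;
            uint8_t         xx_curraddr[ETHERADDRL];
    } xx_t;

    /* The GLDv3 entry points; these subsume the hand-rolled DLPI code. */
    int             xx_m_stat(void *, uint_t, uint64_t *);
    int             xx_m_start(void *);
    void            xx_m_stop(void *);
    int             xx_m_setpromisc(void *, boolean_t);
    int             xx_m_multicst(void *, boolean_t, const uint8_t *);
    int             xx_m_unicst(void *, const uint8_t *);
    mblk_t          *xx_m_tx(void *, mblk_t *);
    boolean_t       xx_m_getcapab(void *, mac_capab_t, void *);

    static mac_callbacks_t xx_m_callbacks;

    static int
    xx_mac_register(xx_t *xxp)
    {
            mac_register_t  *macp;
            int             err;

            if ((macp = mac_alloc(MAC_VERSION)) == NULL)
                    return (EINVAL);

            /* One callback table replaces per-driver DLPI/kstat/ndd glue. */
            xx_m_callbacks.mc_callbacks = MC_GETCAPAB;
            xx_m_callbacks.mc_getstat = xx_m_stat;
            xx_m_callbacks.mc_start = xx_m_start;
            xx_m_callbacks.mc_stop = xx_m_stop;
            xx_m_callbacks.mc_setpromisc = xx_m_setpromisc;
            xx_m_callbacks.mc_multicst = xx_m_multicst;
            xx_m_callbacks.mc_unicst = xx_m_unicst;
            xx_m_callbacks.mc_tx = xx_m_tx;
            xx_m_callbacks.mc_getcapab = xx_m_getcapab;

            macp->m_type_ident = MAC_PLUGIN_IDENT_ETHER;
            macp->m_driver = xxp;
            macp->m_dip = xxp->xx_dip;
            macp->m_src_addr = xxp->xx_curraddr;
            macp->m_callbacks = &xx_m_callbacks;
            macp->m_min_sdu = 0;
            macp->m_max_sdu = ETHERMTU;

            err = mac_register(macp, &xxp->xx_mh);
            mac_free(macp);
            return (err);
    }

The point being that DLPI, fastpath, checksum-offload negotiation, dladm
visibility, and the common kstats then come from the framework rather than
from thousands of lines of per-driver code.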

 >     2) nemo-ification automatically gets quite a bit of "free" 
 > performance in fewer CPU cycles used (direct function calls, etc.)

Quite possibly, but the lion's share of these are not high-performance
drivers.
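
To be clear about where those cycles would come from: the data paths turn
into direct function calls.  As a hedged sketch (xxp, xx_readq, and xx_mh
are hypothetical names), the receive side goes from a putnext(9F) per
matching stream to a single call into the mac layer:

    /* Legacy DLPI/STREAMS receive: queue the packet upstream. */
    putnext(xxp->xx_readq, mp);

    /* GLDv3 receive: a direct call into the framework. */
    mac_rx(xxp->xx_mh, NULL, mp);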

 >     3) it gets us one step closer to being able to eliminate legacy 
 > APIs... possibly making the _frameworks_ simpler.  (For example, some 
 > Consolidation Private and Contracted Private interfaces for things like 
 > HW checksum offload, DLPI fastpath, etc. can pretty much get eradicated 
 > once the last Sun consumers of them go away.)

It doesn't really matter how much closer we get unless we can deal with
the third-party driver problem (and some third parties won't even consider
a GLDv3 conversion until they don't have to support S8/S9).

 >     4) centralizing functionality for stuff like DLPI handling 
 > reduces long-term support costs for drivers like eri, hme, etc.

Yes, this is why we did GLD in the first place.

 >     5) unifies administrative tasks... e.g. these drivers can adopt 
 > things like Brussels, will "just work" with dladm (they don't today!) etc.

That will "just work" with Clearview UV too.  In fact, all of the legacy
drivers will just work with dladm, no porting required.

 >     6) ultimately leading us one step closer towards nixing "ndd" (see 
 > point #3 above, again)  (Also removing duplicated kstat and ndd code in 
 > _these_ drivers.)

See the third-party driver problem again.

 >     7) paves the way for these drivers to support additional crossbow 
 > features like Multiple Unicast Address support, interrupt blanking 
 > (which may or may not be useful with 100Mb links... but imagine an older 
 > system like an E450 with a few dozen 100Mbit links on it...)

We get this with Clearview UV.

 >     8) as another item on #2, nemoification gets free 
 > multi-data-transmit (and receive!), yielding better performance as well.

Likewise.

 >     9) ultimately, also eradicating special case code in places like 
 > SunVTS and other test suites, that have special cases for devices like 
 > hme, gem, etc. (e.g. using custom ioctls to set loopback modes.)

That seems orthogonal to the GLDv3 work.  GLDv3 does not specify the
loopback testing model AFAIK.
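
For reference, the special cases in question are the private LB_* loopback
ioctls (the <sys/netlb.h> interface that SunVTS and friends drive), which
each of these drivers answers by hand in its ioctl path.  A hedged sketch,
with the xx_ name hypothetical; a GLDv3 conversion neither adds nor removes
this code:

    #include <sys/types.h>
    #include <sys/netlb.h>          /* private loopback-test interface */

    /* Recognize the loopback ioctls that the test suites send us. */
    boolean_t
    xx_is_loopback_ioctl(int cmd)
    {
            switch (cmd) {
            case LB_GET_INFO_SIZE:  /* size of our lb_property_t array */
            case LB_GET_INFO:       /* the loopback modes we support */
            case LB_GET_MODE:       /* which mode is currently selected */
            case LB_SET_MODE:       /* select one, e.g. internal MAC loopback */
                    return (B_TRUE);
            default:
                    return (B_FALSE);
            }
    }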

 >   10) making these drivers DLPI style 1, bringing us much closer to 
 > removing the long standing DLPI style 2 "bug".

Again, this is part of Clearview UV.  All drivers will have DLPI style 1
nodes in /dev/net.  No porting necessary.

 > Finally, for most of these legacy drivers, the effort to convert is 
 > quite small.  On the order of a man day.  Seriously.  With all these 
 > benefits at such a low cost, why _wouldn't_ we do it?

The cost is not just (or even primarily) in the porting effort.  It's in
the regression testing, the code churn, and the possible bugs that fall out
of the work.  Again, there's nothing I like better than clean code.  But
given that Clearview UV will benefit all non-GLDv3 drivers without porting
them and also allow a number of framework simplifications, and that
third-party drivers will stand in the way of ripping out legacy support,
I'd rather see us focus on looking ahead than on rewriting history.

--
meem