I'm working on a unified 10/100/1000 MII layer, and so far I'm pretty
pleased. As a result of my work, iprb gains (for example) full
Brussels support, including dynamic adjustments to link parameters. I
expect other drivers to benefit from this as well, and for the work to
allow elimination of a lot of redundancy in NIC drivers.
There are still some kinks, but one of the problems I'm running into is
a few drivers (hme is an excellent example) that have all kinds of
non-standard tunables or hacks for bizarro buggy hardware configurations.
A good example is the "link-pulse-disabled" property. I can't see any
reason why anyone would ever want to disable the link pulse. hme forces
the card into 10 Mbps half duplex operation when this is done. Are
there 10 Mbps hubs out there that are both still in use, and don't
support the link pulse (which is required by the standard!)? I'd just
as soon eliminate this hackery.
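For reference, the legacy knob is just a boolean property in the driver's
configuration file. A minimal sketch of the form it takes (the property name
is the one hme actually honors; the rest of the file is omitted):

```
# /kernel/drv/hme.conf (legacy form -- proposed for removal)
# Setting this disables link pulses and forces the card into
# 10 Mbps half duplex operation:
link-pulse-disabled=1;
```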
Another example is the workaround in hme for a bug in the Synoptics
28115 switch firmware. Apparently the solution, which is only
available with the QS6612 PHY, is to disable the scrambler for 20 usec
and then to restart it. No other operating system I can find has this
workaround. (The Synoptics 28115 was EOL'd over 10 years ago, I note!)
Do we still need to keep this hack in the code? I've left it in place
for now, but it really amounts to just moving the code around, and I
have no way to test it.
There were hacks in the Intel code for 82555 bugs as well. A good
example is the "AutoPolarity" setting. All of the other operating
systems I've looked at just leave the device in autopolarity mode. The
old code tries to detect a bug in the 82555 that potentially causes the
PHY to falsely detect a reverse polarity when combined with a short
cable and certain other PHYs. I've removed it; instead, if
AutoPolarity fails for a user, they can simply turn it off using a
driver.conf tunable. (One idea would be to just turn off autopolarity
altogether -- it's only used at 10 Mbps and hence IMO of limited
utility. These days, if I have a cable reversed, I'd rather have a hard
failure to connect than a successful connection at 10 Mbps on what I
presumed would be a 100 Mbps link. In fact, I might just do this unless
someone can describe to me why this is a bad idea. I was concerned
about breaking users with working configurations on upgrade, but --
frankly, their configurations are already busted and there probably
aren't that many people that would be impacted by this -- everyone uses
100 Mbps now, right? And if not, they at least hopefully use the right
kind of cable!) The PhyErrataFrequency problem described in iprb(7D)
is another case in point -- in that case I've left it out. (The code
looked like an attempt to correct parallel detection problems with
certain longer cable lengths and 10 Mbps links. But, notably, no such
workarounds exist in any of the other open source operating systems I've
looked at.)
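In case it helps picture the escape hatch mentioned above: it would just be
an ordinary driver.conf boolean, something like the following. (The property
name here is made up for illustration -- the post doesn't settle on one.)

```
# /kernel/drv/iprb.conf (sketch only; property name is hypothetical)
# Disable 10Base-T automatic polarity correction on the i82555:
autopolarity-disable=1;
```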
The upshot of all this is that I'd really like to hear from folks
(especially those likely to be upset over any such change!) about the
possibility of removing as much of this legacy stuff as possible. I
really believe that if these devices are working well for Linux and *BSD
users, then they should work well for us when programmed in a similar
fashion.
So, specifically, here's the proposed set of changes (so far):
* Removal of "PhyErrataFrequency" tuning in iprb. If your i82555
doesn't work right for you, upgrade to 100 Mbps. :-) (Or configure a
"forced" mode instead.) I don't think this will be much of a problem.
(affects iprb)
* Removal of "AutoPolarity" handling of 10Base-T links on i82555
(mostly affects iprb) -- as I already indicated, I think this is
probably harmful anyway.
* Removal of "link-pulse-disabled" handling in hme/qfe.
Additionally, I'd like to remove the "ForceSpeedDuplex" driver.conf
tunables from iprb. Instead, this driver can use the same tunable
mechanisms that all other drivers use (including full support via
Brussels and dladm!)
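Concretely, here's what a "forced" configuration looks like with the
standard dladm link properties instead of ForceSpeedDuplex. (The link
name iprb0 is a placeholder; adv_autoneg_cap and en_100fdx_cap are the
usual MII capability properties -- Solaris-only commands, shown for
illustration.)

```
# Old: ForceSpeedDuplex=... in /kernel/drv/iprb.conf (going away)
# New: force 100 Mbps full duplex via dladm link properties
dladm set-linkprop -p adv_autoneg_cap=0 iprb0
dladm set-linkprop -p en_100fdx_cap=1 iprb0

# ...and to return to autonegotiation later:
dladm set-linkprop -p adv_autoneg_cap=1 iprb0
```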
I actually want to eliminate the various driver.conf tunables (that seem
to be specific to each driver anyway) for link mode configuration in
each of the drivers that I convert to the common MII layer. Some
customers might notice that they lose their "forced" configuration
settings on upgrade, but these are easily replaced by the admin using
dladm anyway (and if at least one user decides that it's time to switch
to autonegotiation instead as a result of this, then I'll feel like the
change was worth it! :-)
So, thoughts?
Oh yeah, one other thing: this is only for Solaris Nevada/OpenSolaris.
I have no plans to backport any of this to S10 (at least not for
*existing* drivers.)
- Garrett
_______________________________________________
networking-discuss mailing list
[email protected]