Nicolas Droux wrote:
Garrett D'Amore wrote:
Nicolas Droux wrote:
Garrett D'Amore wrote:
I've gone ahead and done a conversion of dmfe to GLDv3 (nemo). I
actually have a dmfe device on SPARC (in this case it's on a SPARC
laptop), so I figured this would be beneficial.
I'd like to have folks review the work at
http://cr.grommit.com/~gdamore/dmfe_gldv3/webrev
Great to see another driver ported to Nemo.
Want to do the others as well... eri and hme are next on my hit
list. They will get a _lot_ smaller as a result. There have been
requests for qfe (and also for the qfe sources to be opened), with
support for qfe on x86. However, those sources are not open yet, for
reasons beyond my understanding at the moment.
I would suggest focusing on the ones that are still heavily used
rather than the ones for legacy devices, and instead spending time on
making the GLDv3 interface officially public so that more
"interesting" drivers can be ported to GLDv3 :-)
"Heavily used" depends on what systems you have at hand. hme and eri are
_widely_ deployed, and _widely_ used. E.g. all UltraSPARC II/IIi/IIe
based systems shipped with either eri or hme. This includes "big"
systems like E10k, E6500, etc.
The other "popular" nic is qfe, but that code doesn't live in ON where I
can readily access it. (The argument for doing qfe is probably quite
compelling, as a lot of qfe NICs were sold, even well after gigabit
started to become popular.)
In the meantime, converting the legacy NICs is actually pretty darned
easy. It takes only ~1 day of coding effort to convert hme or eri. (I
did an hme->gldv2 conversion 4 years ago, which nobody picked up, but
it's starting to look interesting again now with nemo. I've already had
one person ask me about doing an hme conversion, and several asking for qfe.)
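(For anyone wondering what the boilerplate part of a nemo conversion
looks like: the bulk of it is just registering with the mac layer at
attach time. A rough sketch follows; the xx_* names and soft-state
fields are made up, and the real dmfe/hme code will differ in details.

    #include <sys/mac.h>
    #include <sys/mac_ether.h>
    #include <sys/ethernet.h>

    /*
     * Hypothetical attach-time registration for a GLDv3 (nemo) NIC
     * driver.  xx_t, xx_m_callbacks and the mc_* entry points it
     * points to are placeholders declared elsewhere in the driver.
     */
    static int
    xx_mac_register(xx_t *xxp)
    {
            mac_register_t  *macp;
            int             err;

            if ((macp = mac_alloc(MAC_VERSION)) == NULL)
                    return (EINVAL);

            macp->m_type_ident = MAC_PLUGIN_IDENT_ETHER;
            macp->m_driver = xxp;                   /* driver soft state */
            macp->m_dip = xxp->xx_dip;              /* devinfo node */
            macp->m_src_addr = xxp->xx_curraddr;    /* current MAC address */
            macp->m_callbacks = &xx_m_callbacks;    /* mc_start/mc_stop/mc_tx/... */
            macp->m_min_sdu = 0;
            macp->m_max_sdu = ETHERMTU;

            err = mac_register(macp, &xxp->xx_mh);
            mac_free(macp);
            return (err);
    }

Most of the remaining work is deleting the hand-rolled DLPI handling
that the mac layer now does for you, which is why the drivers shrink
so much.)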
Likewise for gem. Cassini is probably the most pressing conversion,
but also the most painful/risky to do, because it supports so many
features. I might be willing to convert it, if I had hardware
readily available.
Yeah, that would be more challenging. Also, ce does not live in ON,
and I don't think it has been open sourced.
Right. Those are the reasons why I'm not chomping at the bit to do ce
right now. If I were reasonably confident that my work to convert ce
would result in ce getting into ON, then I'd probably just go for it.
But I don't have that feeling right now.
FYI, my own afe driver, which is GLDv2 (Solaris 8 DDI only), is soon
to be integrated. I'm not converting it to nemo until _after_
integration, as I do not want to cause a reset of the testing that has
already taken place.
That's unfortunate. If the end goal is to have a GLDv3 version, the
appropriate retesting will have to be done anyway. Those extra cycles
could be spent now and the conversion gotten out of the way. (This
points to another problem: the level of pain needed to properly QA a
driver before integration into ON. That should really be automated to
lower the barrier to entry for new drivers in ON.)
The testing cycles have _already_ been spent, is my point. So any
change at this point causes a test reset. I think the barrier to entry
for a "new" driver is higher than modifications to an existing driver.
Once it goes in, I'll quickly adapt it, because there are a bunch of
GLDv3 features I want to make use of, including things like
mac_link_update(), multiaddress support, etc.
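mac_link_update() in particular is a nice win: the driver just notifies
the mac layer when the PHY state changes, rather than consumers having
to poll for it. A minimal sketch of what I have in mind, with
hypothetical xx_* names and a made-up xx_mii_linkup() helper:

    #include <sys/mac.h>

    /*
     * Hypothetical link check, called from the interrupt handler or a
     * periodic MII timer.  xx_t, xx_mh, xx_linkstate and
     * xx_mii_linkup() are placeholders.
     */
    static void
    xx_check_link(xx_t *xxp)
    {
            link_state_t state;

            state = xx_mii_linkup(xxp) ? LINK_STATE_UP : LINK_STATE_DOWN;

            if (state != xxp->xx_linkstate) {
                    xxp->xx_linkstate = state;
                    /* let the mac layer propagate the event upstream */
                    mac_link_update(xxp->xx_mh, state);
            }
    }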
Are there any sample tests that I can use to validate the
MULTIADDRESS capability? I'm thinking that I'd add that as an RFE
after I commit the initial nemo conversion.
Unfortunately, not today. One way to do this would be to use your
driver on a system running Crossbow and create VNICs to exercise that
code.
Okay, I will probably need to start playing around with VNICs soon, anyway.
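For my own notes: as I understand it, the driver side of that
capability hangs off the mc_getcapab() entry point, with the driver
filling in a multiaddress_capab_t when asked for
MAC_CAPAB_MULTIADDRESS. The sketch below is from memory of how other
drivers of that vintage do it, so treat the field names as assumptions
and check <sys/mac.h> before copying anything; the xx_* names are
hypothetical.

    #include <sys/mac.h>

    /*
     * Hypothetical mc_getcapab() handling for MAC_CAPAB_MULTIADDRESS.
     * Field names follow my recollection of multiaddress_capab_t;
     * verify against <sys/mac.h> on a current build.
     */
    static boolean_t
    xx_m_getcapab(void *arg, mac_capab_t cap, void *cap_data)
    {
            xx_t *xxp = arg;

            switch (cap) {
            case MAC_CAPAB_MULTIADDRESS: {
                    multiaddress_capab_t *mmacp = cap_data;

                    mmacp->maddr_naddr = xxp->xx_unicst_total;   /* hw slots */
                    mmacp->maddr_naddrfree = xxp->xx_unicst_avail;
                    mmacp->maddr_flag = 0;
                    mmacp->maddr_handle = xxp;
                    mmacp->maddr_add = xx_unicst_add;       /* placeholders */
                    mmacp->maddr_remove = xx_unicst_remove;
                    mmacp->maddr_modify = xx_unicst_modify;
                    mmacp->maddr_get = xx_unicst_get;
                    return (B_TRUE);
            }
            default:
                    return (B_FALSE);
            }
    }

VNICs created over the interface should then land in those extra
unicast slots instead of forcing the device into promiscuous mode,
which seems like a reasonable way to see those paths get exercised.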
Btw, have you reviewed the actual code? I'm looking for reviewers
that I can list in an RTI....
Not yet, but I will.
Thanks.
-- Garrett
Nicolas.