Sunay Tripathi wrote:
> Garrett D'Amore wrote:
>> Am I reading this correctly, in that polling for receive packets is
>> only used if the NIC has more than a single rx ring?
>
> Yes. The issue is that we can't let one consumer (which can be a
> service or a VNIC tied to a virtual machine) own the entire NIC.
> So if the NIC has only one Rx ring, we never put that in poll
> mode and let the packets always fly till the S/W classifier and
> soft ring set (SRS), which mimics Rx rings in a pseudo H/W layer.
> At that point, the SRS can still be put in polling mode since that
> is unique to the VNIC or the service.
>
> Now having said that, this is probably suboptimal for small packet
> forwarding performance. We can still put the entire NIC in poll
> mode, but what would be a good way to figure this out (I don't
> necessarily like the idea of adding a tunable)? Can we do that
> by adding a property to the NIC? Is there a more intuitive way?
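To make the placement decision Sunay describes concrete, here is a
minimal C sketch, assuming a Solaris-style kernel context: poll the
hardware ring directly only when it can be dedicated to one consumer,
otherwise confine polling to the SRS. Every name in it (nic_t, srs_t,
srs_set_poll_point) is hypothetical, not the actual crossbow source.

    /*
     * Hypothetical sketch of the decision described above; this is
     * NOT the crossbow code, and all names here are made up.
     */
    #include <sys/types.h>

    typedef struct nic {
        uint_t      n_rx_rings;     /* h/w rx rings this NIC exposes */
    } nic_t;

    typedef struct srs {
        nic_t       *srs_nic;
        boolean_t   srs_hw_poll;    /* poll the h/w ring directly? */
        boolean_t   srs_sw_poll;    /* poll only at the SRS layer? */
    } srs_t;

    /*
     * Decide where dynamic polling may happen for one consumer
     * (a VNIC or a service) bound to this SRS.
     */
    static void
    srs_set_poll_point(srs_t *srs)
    {
        if (srs->srs_nic->n_rx_rings > 1) {
            /*
             * Multiple h/w rings: one ring can be dedicated to
             * this consumer and polled without starving others.
             */
            srs->srs_hw_poll = B_TRUE;
            srs->srs_sw_poll = B_FALSE;
        } else {
            /*
             * Single shared ring: interrupts stay on and packets
             * go through the s/w classifier; polling is confined
             * to the SRS, which is unique to this consumer.
             */
            srs->srs_hw_poll = B_FALSE;
            srs->srs_sw_poll = B_TRUE;
        }
    }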
When inbound packets arrive faster than the upper layers can process
them, this would be the point to do that. So when the squeues or soft
rings (or whatever the crossbow analog is) fill up, we may as well put
the NIC in polling mode. Take it back out of polling mode when you
catch back up (i.e. you are able to verify that the ring is no longer
full and the poll returns no packets). If there is no soft ring in the
middle, then simply let the NIC tell you that its hardware ring is
full... perhaps the NIC driver can *ask* to be put into polling mode
in this situation?

-- Garrett

>
>> Most of the commonly available NICs only have a single receive ring.
>> (Exceptions are nxge, ce, some models of bge, bnx, and certain Intel
>> NICs.) Notably the e1000g devices found on current Niagara hardware
>> only have a single receive ring.
>
> Correct.
>
>> When the rx ring gets overfull, it would be nice to be able to
>> dynamically switch to polling somehow.
>
> Agreed. If you have any suggestions, let me know.
>
> Thanks,
> Sunay
>
>> -- Garrett
>>
>> Sunay Tripathi wrote:
>>> Guys,
>>>
>>> We have a basic implementation in place for this work:
>>> http://www.opensolaris.org/os/project/crossbow/Design_softringset.txt
>>>
>>> Basically the idea is to keep the data paths very tight, always do
>>> dynamic polling when possible (independent of the workload) while
>>> utilizing more than one CPU to parallelize the work load. As part
>>> of this work, I am trying to see what can be done for small packet
>>> forwarding performance and also latency (btw, there are two
>>> projects coming online soon to specifically target forwarding and
>>> latency).
>>>
>>> So if people have needs/suggestions/stakes in this area, I would
>>> recommend reading the above document and diving in. As for how to
>>> test some of these things, you can use 'ttcp'
>>> (http://sd.wareonearth.com/~phil/net/ttcp/) on a back-to-back
>>> setup for starters (10Gb NICs might be better). Use the '-D'
>>> option with small writes (64 bytes) to disable Nagle and actually
>>> send small packets on the wire.
>>>
>>> What I typically use:
>>> server: ./ttcp -s -r -v -u -b 262144 -l 64 -n 500000
>>> client: ./ttcp -D -s -v -u -t <hostname> -b 262144 -l 64 -n 500000
>>>
>>> This is with UDP, and if you snoop the wire, you will see the small
>>> packets going by.
>>>
>>> Cheers,
>>> Sunay
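To make Garrett's watermark suggestion concrete, here is a minimal C
sketch of the switch-to-polling heuristic, again assuming a
Solaris-style kernel context. All names, fields, and thresholds
(soft_ring_t, SR_HIWAT, the driver entry points) are hypothetical,
not the crossbow or GLD API.

    /*
     * Hypothetical sketch of the backpressure heuristic discussed in
     * this thread; every name and threshold is made up.
     */
    #include <sys/types.h>

    #define SR_HIWAT    1024    /* backlog that triggers poll mode */

    typedef struct soft_ring {
        uint_t      sr_cnt;      /* packets currently backlogged */
        boolean_t   sr_polling;  /* h/w ring currently in poll mode */
        void        *sr_hw;      /* opaque driver handle */
        /* hypothetical driver entry points */
        void        (*sr_intr_disable)(void *);
        void        (*sr_intr_enable)(void *);
        void        *(*sr_poll)(void *); /* chain, NULL if empty */
    } soft_ring_t;

    /* On enqueue: switch to polling once the backlog crosses hiwat. */
    static void
    soft_ring_enqueue_check(soft_ring_t *sr)
    {
        if (!sr->sr_polling && sr->sr_cnt >= SR_HIWAT) {
            sr->sr_intr_disable(sr->sr_hw);
            sr->sr_polling = B_TRUE;
        }
    }

    /* Poll thread: fall back to interrupts once caught up. */
    static void
    soft_ring_poll_cycle(soft_ring_t *sr)
    {
        void *chain = sr->sr_poll(sr->sr_hw);

        if (chain == NULL && sr->sr_cnt == 0) {
            /* Backlog cleared and poll came back empty: re-arm. */
            sr->sr_polling = B_FALSE;
            sr->sr_intr_enable(sr->sr_hw);
            return;
        }
        /* otherwise deliver 'chain' to the consumer and poll again */
    }

The exit condition mirrors Garrett's wording: leave poll mode only
when the backlog is gone and a poll returns no packets. His
single-ring fallback, where the driver itself *asks* to be polled,
would amount to invoking the same switch from the driver's ring-full
path rather than from the soft ring.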