On Wed, Dec 10, 2014 at 11:41:05AM +0900, Ryota Ozaki wrote:
> On Wed, Dec 10, 2014 at 11:29 AM, Thor Lancelot Simon <[email protected]> wrote:
> > On Wed, Dec 10, 2014 at 11:19:52AM +0900, Ryota Ozaki wrote:
> >> > On Tue, Dec 9, 2014 at 1:17 PM, Thor Lancelot Simon <[email protected]>
> >> > wrote:
> >> >>
> >> >> Can you try increasing HZ?
> >> >
> >> > Thank you for the suggestion. Which HZ is good for the purpose?
> >> > 1000 was not good for my environment (KVM for now) and I'm trying
> >> > other HZ (500, 200, etc.).
> >> I couldn't get good results on this approach...
> >
> > It may be poorly suited to virtualized platforms.
>
> Okay, I'll try it on a physical machine.
>
> BTW, could you tell me how increasing HZ affects vioif and softint?
It should decrease the upper bound on the latency to run the softint. As I
understand the "fast softint" stuff, though, it probably does not decrease
the lower bound.

How many streams of network traffic are you using in your forwarding test,
and are they TCP or UDP? In my experience, extreme latency sensitivity of
forwarding throughput numbers is generally a feature of single-stream TCP
tests. It can also indicate that you are not finishing all pending work
each time the softint runs, which would be a bug in my opinion (device
driver softints should dequeue and handle _all_ pending work for the device
except in unusual cases), though one that probably only impacts performance,
not correctness.

For what it's worth, I know it's possible to get excellent forwarding
performance from pure-polled network device drivers (I converted several
NetBSD and FreeBSD drivers to this mode of operation at a former employer),
but that does require a high value of HZ.

It used to be the case that some values of HZ were more efficient than
others because there was a lookup table to avoid division in some critical
code -- I believe it was powers-of-2 plus special cases for 60 and 100 --
but I can't seem to find that code any more.

Thor
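
P.S. To make the "handle all pending work" point concrete, here is a rough,
untested sketch. All of the names (vioif_rxsoft, vioif_rx_process,
vioif_rx_intr_enable, the softc fields) are invented for illustration; this
is not the actual vioif code, just the loop shape I have in mind:

/*
 * Illustration only: invented names, not the real vioif driver.
 * The softint keeps draining until the device reports no completed
 * work left, and only then re-enables the hardware interrupt.
 */
static void
vioif_rxsoft(void *arg)
{
	struct vioif_softc *sc = arg;
	bool more;

	do {
		/* Handle every RX descriptor completed so far. */
		more = vioif_rx_process(sc);
	} while (more);

	/*
	 * Re-arm the device interrupt.  A real driver would also check
	 * for work that arrived between the last pass and this point.
	 */
	vioif_rx_intr_enable(sc);
}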
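
Along the same lines, a pure-polled driver in the sense I mean would look
roughly like this, using the standard callout(9) interface; again the xx_*
names and fields are made up:

#include <sys/callout.h>

/*
 * Sketch of hz-driven polling (invented xx_* names).  The poll routine
 * reschedules itself one tick ahead, so the worst-case extra latency for
 * a newly arrived packet is about 1/hz: 10ms at HZ=100, 1ms at HZ=1000.
 */
static void
xx_poll(void *arg)
{
	struct xx_softc *sc = arg;

	/* Drain everything completed since the previous tick. */
	while (xx_rx_process(sc))
		continue;
	xx_tx_reclaim(sc);

	/* Run again on the next clock tick. */
	callout_schedule(&sc->sc_poll_ch, 1);
}

The attach routine would callout_init() the callout and kick off the first
pass with callout_reset(); that part is omitted here.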
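
And on the HZ-dependent efficiency: I still can't find the table I was
remembering, but the general trick is the one below. This is purely
illustrative, not the historical code; the point is just that special-casing
common HZ values turns a divide by a run-time tick length into a divide by
a compile-time constant, which the compiler strength-reduces to a multiply
and shift:

/*
 * Purely illustrative -- not the old lookup table.  Each special case
 * divides by a compile-time constant, so no run-time division is needed
 * on those paths; only the default case pays for a real divide.
 */
static inline unsigned int
us_to_ticks(unsigned int us, int hz)
{
	switch (hz) {
	case 100:
		return us / (1000000 / 100);
	case 1000:
		return us / (1000000 / 1000);
	default:
		return us / (unsigned int)(1000000 / hz);
	}
}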
