Mark McLoughlin wrote:
> The way I see this (continuing with your example figures) playing out
> is:
>
> - If we have a packet rate of <2.5K packets/sec, we essentially have
>   zero added latency - each packet causes a vmexit and the packet is
>   dispatched immediately
>
> - As soon as we go above 2.5K, we add, on average, an additional
>   ~400us delay to each packet
>
> - This is almost identical to our current scheme with an 800us timer,
>   except that flushes are typically triggered by a vmexit instead of
>   the timer expiring
>
> I don't think this is the effect you're looking for? Am I missing
> something?

No.  While it's what my description implies, it's not what I want.

Let's step back for a bit.  What do we want?

Let's assume the virtio queue is connected to a real queue. The guest->host scenario is easier, and less important.

So:

1. We never want a situation where the host queue is empty but the guest queue has unkicked entries. That will drop us below line rate and add latency.

2. We want to avoid situations where the host queue is non-empty and we kick the guest queue anyway. That won't improve latency, and will increase cpu utilization.

The exception to 2: if the host queue is close to depletion, then we _do_ want the kick, to avoid violating the first requirement (which is more important).

Do these seem sane as high-level goals? If so, we can think about how to implement them.
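One way the kick decision could key off host-queue occupancy rather than a timer (a minimal sketch; the struct, names, and low-water threshold are all hypothetical, not existing virtio code):

```c
#include <stdbool.h>

/* Hypothetical sketch of the kick policy described above. */
struct host_queue {
    unsigned int len;        /* entries currently queued on the host side */
    unsigned int low_water;  /* "close to depletion" threshold */
};

static bool should_kick(const struct host_queue *q)
{
    /* Goal 1: never leave the host queue empty while the guest holds
     * unkicked entries, so kick when it is empty or nearly drained. */
    if (q->len <= q->low_water)
        return true;

    /* Goal 2: the host queue still has plenty of work; a kick now
     * would cost a vmexit without improving latency, so suppress it. */
    return false;
}
```

A real implementation would have to pick the low-water mark from the queue depth and drain rate; the sketch only illustrates that the decision depends on host-queue occupancy, not on a timer.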

--
error compiling committee.c: too many arguments to function

