On Fri, Apr 29, 2011 at 11:53 AM, Sasha Levin <[email protected]> wrote:
> On Fri, 2011-04-29 at 15:13 +0800, Asias He wrote:
>> On 04/29/2011 02:52 PM, Pekka Enberg wrote:
>> > Please make that IRQ latency fix a separate patch. Don't we need to do
>> > it for TX path as well, btw?
>> >
>>
>> No. We only need it in RX path. Sasha's threadpool patch breaks this.
>> I'm just moving it back.
>>
>
> I've moved the kvm__irq_line() call out because what would happen
> sometimes is the following:
>  - Call kvm__irq_line() to signal completion.
>  - virt_queue__available() would return true.
>  - readv() call blocks.
>
> I figured it happens because we catch the virt_queue while it's being
> updated by the guest and end up using a stale state of it.
>
> --
>
> Sasha.
>
>

So, if I understand everything correctly -- giving the virtio devices
separate IRQs introduced some race conditions on read/write operations
between host and guest, and adding the thread pool revealed them,
right? (Because previously we were doing all the work inside the I/O
path on the guest side.)
