On Thu, Jun 21, 2012 at 11:23 PM, Jan Kiszka <jan.kis...@siemens.com> wrote:
> On 2012-06-21 16:49, Liu Ping Fan wrote:
>> Nowadays, we use qemu_mutex_lock_iothread()/qemu_mutex_unlock_iothread() to
>> protect against races when the emulated devices are accessed by both the
>> vcpu threads and the iothread.
>>
>> But this lock is too big. We can break it down.
>> These patches separate the CPUArchState's protection from that of the other
>> devices, so we can have a per-CPU lock for each CPUArchState instead of the
>> big lock.
>
> Anything that reduces lock dependencies is generally welcome. But can
> you specify in more detail what you gain, and under which conditions?
>
In fact, there are several steps to breaking down the QEMU big lock. This
step just aims to shrink the code region protected by
qemu_mutex_lock_iothread()/qemu_mutex_unlock_iothread(). I am working on
the subsequent steps, which focus on breaking down the big lock around the
handle_{io,mmio} calls.
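
For clarity, the general direction is lock splitting, roughly as in the
standalone pthread sketch below (illustrative only, not the actual patches;
the structure and function names are made up):

#include <pthread.h>

static pthread_mutex_t big_lock = PTHREAD_MUTEX_INITIALIZER;

typedef struct Device {
    pthread_mutex_t lock;   /* per-object lock, initialized elsewhere */
    unsigned int reg;
} Device;

/* Today: every device access from a vcpu thread or the iothread is
 * serialized behind the single big lock. */
static void mmio_write_big_lock(Device *d, unsigned int val)
{
    pthread_mutex_lock(&big_lock);
    d->reg = val;
    pthread_mutex_unlock(&big_lock);
}

/* Goal: only the object actually being touched is locked, so threads
 * working on different devices (or different CPUArchStates) no longer
 * contend with each other. */
static void mmio_write_fine_grained(Device *d, unsigned int val)
{
    pthread_mutex_lock(&d->lock);
    d->reg = val;
    pthread_mutex_unlock(&d->lock);
}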

Thanks and regards,
pingfan

> I'm skeptical that this is the right area to start with. With the in-kernel
> irqchip enabled, no CPUArchState field is touched during normal
> operations (unless I missed something subtle in the past). At the same
> time, this locking is unfortunately fairly complex and invasive, so not
> "cheap" to integrate.
>
> IMO more interesting is breaking out some I/O path, e.g. from a NIC to a
> network backend, and getting it processed in a separate thread without
> touching the BQL (Big QEMU Lock). I have experimental patches for this
> here, but they need rebasing and polishing.
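
For illustration, such a dedicated I/O thread might look roughly like the
pthread sketch below (hypothetical names; backend_send() is only a stand-in
for the real network backend call, and this is not the experimental patches
mentioned above):

#include <pthread.h>
#include <stdbool.h>

#define RING_SIZE 64

typedef struct TxQueue {
    pthread_mutex_t lock;        /* protects only this queue */
    pthread_cond_t  cond;
    void           *pkts[RING_SIZE];
    unsigned int    head, tail;  /* tail = producer, head = consumer */
    bool            stop;
} TxQueue;

/* Stand-in for the real backend call; it must not require the BQL. */
void backend_send(void *pkt);

/* Dedicated thread: drains packets queued by the emulated NIC and hands
 * them to the network backend, holding only the queue's own lock. */
static void *nic_tx_thread(void *opaque)
{
    TxQueue *q = opaque;

    pthread_mutex_lock(&q->lock);
    while (!q->stop) {
        while (q->head == q->tail && !q->stop) {
            pthread_cond_wait(&q->cond, &q->lock);
        }
        while (q->head != q->tail) {
            void *pkt = q->pkts[q->head++ % RING_SIZE];
            pthread_mutex_unlock(&q->lock);  /* drop the lock around the send */
            backend_send(pkt);               /* no BQL involved anywhere */
            pthread_mutex_lock(&q->lock);
        }
    }
    pthread_mutex_unlock(&q->lock);
    return NULL;
}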
>
> Thanks,
> Jan
>
> --
> Siemens AG, Corporate Technology, CT T DE IT 1
> Corporate Competence Center Embedded Linux
