On 03/01/2012 12:30 PM, Takuya Yoshikawa wrote:
This patch series is the result of the integration of my dirty logging
optimization work, including preparation for the new GET_DIRTY_LOG API,
and the attempt to get rid of controversial synchronize_srcu_expedited().
1 - KVM: MMU: Split the main body of rmap_write_protect() off from others
Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp wrote:
v2: changed to protect masked pages
Live migration gets a bit faster than v1.
I have noticed that this version is much faster than version 1 when
nr-dirty-pages = 16K, 32K, 64K.
So I have updated the description of PATCH 3 a bit: please
Takuya
=== from v1
This patch series is the result of the integration of my dirty logging
optimization work, including preparation for the new GET_DIRTY_LOG API,
and the attempt to get rid of controversial synchronize_srcu_expedited().
On 02/23/2012 03:25 PM, Peter Zijlstra wrote:
On Thu, 2012-02-23 at 20:33 +0900, Takuya Yoshikawa wrote:
- Stop allocating extra dirty bitmap buffer area
According to Peter, mmu_notifier has become preemptible. If we can
change mmu_lock from spin_lock to mutex_lock, as Avi said before, this
would be straightforward because we can
Avi Kivity a...@redhat.com wrote:
There will be an inversion for sure, if __put_user() faults and triggers
an mmu notifier (perhaps directly, perhaps through an allocation that
triggers a swap).
Ah, I did not notice that possibility.
Thanks,
Takuya