On 03/09/2012 02:08 AM, Marcelo Tosatti wrote:
So, it would be helpful if you could apply the patch series so that I
can work on top of it: although I cannot use servers with 100GB of
memory now, migrating a guest with 16GB of memory or so may be possible
later; I need to reserve servers for that.
On Thu, Mar 08, 2012 at 10:35:45AM +0900, Takuya Yoshikawa wrote:
Marcelo Tosatti <mtosa...@redhat.com> wrote:
What is worrying are large memory cases: think of the 50GB slot case.
100ms hold time is pretty bad (and reacquiring the lock is relatively
simple).
OK, I basically agree.
But let me explain one thing before deciding what I should do next.
Marcelo Tosatti <mtosa...@redhat.com> wrote:
Partly yes: my method mainly depends on the number of dirty pages,
not slot size.
But it is not a new problem: traversing all shadow pages for that
also takes linearly increasing time.
It was not necessary to read the bitmap under mmu_lock
Marcelo Tosatti <mtosa...@redhat.com> wrote:
We cannot drop the lock.
Do you mean doing a TLB flush each time before dropping the lock?
Yes, only if there is contention (see cond_resched_lock).
But how can we conditionally call kvm_flush_remote_tlbs() from
inside __cond_resched_lock right
On Sat, Mar 03, 2012 at 02:21:48PM +0900, Takuya Yoshikawa wrote:
We have seen some problems with the current implementation of
get_dirty_log(), which uses synchronize_srcu_expedited() for updating
dirty bitmaps; e.g. it is noticeable that this sometimes gives us
ms-order latency when we use VGA displays.
Marcelo Tosatti <mtosa...@redhat.com> wrote:
+ spin_lock(&kvm->mmu_lock);
It is not clear why mmu_lock is needed. Dropping it across the xchg loop
should be similar to srcu implementation, in that concurrent updates
will be visible only on the next get_dirty call? Well, it is necessary
Marcelo Tosatti <mtosa...@redhat.com> wrote:
If we do not mind scanning the bitmap twice, we can decouple the
xchg loop and write protection, but it will be a bit slower, and in
any case we need to hold mmu_lock until TLB is flushed.
Why is it necessary to scan twice? Simply continuing to