On 04/18/2013 09:29 PM, Marcelo Tosatti wrote:
> On Thu, Apr 18, 2013 at 10:03:06AM -0300, Marcelo Tosatti wrote:
>> On Thu, Apr 18, 2013 at 12:00:16PM +0800, Xiao Guangrong wrote:
>>>>
>>>> What is the justification for this?
>>>
>>> We want the rmap of the memslot being deleted to be remove-only; that
>>> is needed for unmapping the rmap outside of mmu-lock.
>>>
>>> ==
>>> 1) do not corrupt the rmap
>>> 2) keep pte-list-descs available
>>> 3)
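The two requirements quoted above can be sketched in plain C. The following is a simplified, userspace-only model (the names are illustrative, not the real KVM symbols): removal unlinks a pte-list-desc but does not free it, so a lockless walker that already holds a pointer to the desc can still follow its `next` field without touching freed memory.

```c
/* Toy model of a "remove-only" rmap: descs are unlinked but kept
 * allocated so concurrent lockless walkers stay safe. Illustrative
 * names only; this is not the kernel implementation. */
#include <assert.h>
#include <stddef.h>

struct pte_list_desc {
    unsigned long spte;          /* the mapping this desc tracks */
    struct pte_list_desc *next;
};

/* Remove-only: unlink the desc but keep it allocated ("keep
 * pte-list-descs available"), so a reader holding a pointer to it
 * can still follow ->next. The caller frees it only after all
 * walkers are known to have finished. */
struct pte_list_desc *
rmap_remove(struct pte_list_desc **head, unsigned long spte)
{
    struct pte_list_desc **p = head;

    while (*p) {
        if ((*p)->spte == spte) {
            struct pte_list_desc *victim = *p;
            *p = victim->next;   /* unlink; do NOT free yet */
            return victim;
        }
        p = &(*p)->next;
    }
    return NULL;
}

int rmap_count(struct pte_list_desc *head)
{
    int n = 0;

    for (; head; head = head->next)
        n++;
    return n;
}
```

Because removal never writes to the victim desc itself, a walker that raced with the removal still sees a consistent `next` pointer; this is the sense in which the rmap is "not corrupted" for readers running outside of mmu-lock.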
On 04/18/2013 08:05 AM, Marcelo Tosatti wrote:
> On Tue, Apr 16, 2013 at 02:32:50PM +0800, Xiao Guangrong wrote:
>> The current kvm_mmu_zap_all is really slow - it holds mmu-lock while
>> walking and zapping all shadow pages one by one, and it also needs to
>> zap every guest page's rmap and every shadow page's parent spte list.
>> In particular, things become worse if the guest uses more memory or
>> vcpus. It is not good for scalability.
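The patch series under discussion avoids this per-page walk under mmu-lock. One way to sketch the underlying idea in userspace C is a global generation number that makes every existing shadow page obsolete in O(1), with obsolete pages reclaimed lazily afterwards. All names below are illustrative, not the actual KVM code:

```c
/* Toy sketch of generation-based "zap all": instead of zapping each
 * shadow page under mmu-lock, bump a global generation; any page
 * carrying a stale generation is treated as invalid and can be
 * reclaimed lazily. Illustrative names only. */
#include <assert.h>
#include <stdbool.h>

struct toy_shadow_page {
    unsigned long mmu_valid_gen;   /* generation when the page was created */
};

struct toy_kvm {
    unsigned long mmu_valid_gen;   /* current valid generation */
};

/* Create a shadow page tagged with the current generation. */
void toy_sp_init(struct toy_kvm *kvm, struct toy_shadow_page *sp)
{
    sp->mmu_valid_gen = kvm->mmu_valid_gen;
}

/* O(1): every existing shadow page becomes stale at once, instead of
 * walking and zapping them one by one. */
void toy_zap_all(struct toy_kvm *kvm)
{
    kvm->mmu_valid_gen++;
}

/* Lazy check used at lookup/reclaim time. */
bool toy_sp_is_obsolete(const struct toy_kvm *kvm,
                        const struct toy_shadow_page *sp)
{
    return sp->mmu_valid_gen != kvm->mmu_valid_gen;
}
```

The design point is that the expensive work (freeing rmaps and parent spte lists) no longer has to happen inside one long mmu-lock hold; the generation bump invalidates everything immediately, and cleanup can proceed incrementally.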