On 06/30/2010 04:18 PM, Alexander Graf wrote:
Book3s has suffered from my really bad shadow MMU implementation so far, so
I finally got around to implementing a combined hash and list mechanism that
allows for much faster lookup of mapped pages.
To show that it really is faster, I tried to run simple process spawning
code inside the guest with and [...]
On 07/01/2010 11:18 AM, Alexander Graf wrote:
How does dirty bitmap flushing work on x86 at the moment? I loop through all
mapped pages and flush the ones that match the range of the region I need to
flush. But wouldn't it be a lot more efficient to have an hlist in the memslot
and loop through that?
On 07/01/2010 01:00 PM, Alexander Graf wrote:
But doesn't that mean that you still need to loop through all the hvas
that you want to invalidate?

It does.

Wouldn't it speed up dirty bitmap flushing a lot if we'd just have a
simple linked list of all sPTEs belonging to that memslot?
On 07/01/2010 03:28 PM, Alexander Graf wrote:
Wouldn't it speed up dirty bitmap flushing a lot if we'd just have a
simple linked list of all sPTEs belonging to that memslot?

The complexity is O(pages_in_slot) + O(sptes_for_slot).
Usually, every page is mapped at least once, so [...]
On 07/01/2010 03:52 PM, Alexander Graf wrote:
Don't you use lazy spte updates?

We do, but given enough time, the guest will touch its entire memory.

Oh, so that's the major difference. On PPC we have the HTAB with a
fraction of all the mapped pages in it. We don't have a [...]
On Thu, 2010-07-01 at 14:52 +0200, Alexander Graf wrote:
Page ageing is difficult. The HTAB has a hardware-set referenced bit,
but we don't have a guarantee that the entry is still there when we look
for it. Something else could have overwritten it by then, but the entry
could still be [...]
On Thu, 2010-07-01 at 16:42 +0300, Avi Kivity wrote:
So I think the only reasonable way to implement page ageing is to unmap
pages. And that's slow, because it means we have to map them again on
access. Bleks. Or we could look for the HTAB entry and only unmap them
if the entry is moot.