On Wed, 2012-03-28 at 11:37 +0200, Avi Kivity wrote:

> > Now I see that x86 just seems to flush everything, which is quite heavy
> > handed considering how often cirrus does it, but maybe it doesn't have a
> > choice (lack of reverse mapping from GPA ?).
> 
> We do have a reverse mapping, so we could easily flush just a single
> slot.  The reason it hasn't been done is that slot changes are very are
> on x86.  They're usually only done by 16-bit software; 32-bit software
> just maps the entire framebuffer BAR and accesses it directly.  It's
> also usually done in a tight loop, so flushing everything doesn't have a
> large impact (and with a 20-bit address space, you couldn't cause a
> large impact if you wanted to - memory is all of 256 pages).

Right ... except that it definitely seems to be happening here with
cirrusfb in the guest kernel :-)

Every time it does an imageblit it switches the BAR to emulation and
back to direct mapping when the "upload" of the image is complete.

The X cirrus driver doesn't seem to trigger that (at least it didn't in
my limited testing so far) so it may not be using host data blits ...
I'll have to compare what they do in more detail.

In any case, we are doing something wrong on kvm powerpc; I just wanted
to make sure my analysis was correct.

> > I also noticed that powerpc ... doesn't do anything :-) Ooops....
> >
> > So all translations still present in the TLB will remain there, all
> > translations present in the MMU hash table as well, etc...
> >
> > Now, in order to implement that properly and efficiently, we would need
> > to at least get the GPA (if not the whole memslot).
> >
> > Do you have any objection (provided I didn't completely misunderstand
> > something which is quite possible) to us adding that argument to
> > kvm_arch_flush_shadow() ? We can easily put in a small patch adding that
> > as an unused argument, and later get the proper implementation for
> > powerpc.
> 
> Sure, it makes sense.
> 
> > Another thing I noticed while at it is that my version of
> > __kvm_set_memory_region() appears to call kvm_iommu_map_pages() whenever
> > adding a memslot ... but never does the opposite unmapping when that
> > memory slot is removed.... isn't that potentially an issue ?
> 
> It is.  Alex?

Cheers,
Ben.


