Andrea Arcangeli wrote:
> On Wed, Apr 02, 2008 at 02:16:41PM +0300, Avi Kivity wrote:
>
>> Ugh, there's still mark_page_accessed() and SetPageDirty().
>>
>
> btw, just as PG_dirty is only set if the spte is writeable,
> mark_page_accessed should only run if the accessed bit is set in the
> spte.
Avi Kivity wrote:
> Andrea Arcangeli wrote:
>> On Wed, Apr 02, 2008 at 12:50:50PM +0300, Avi Kivity wrote:
>>
>>> Isn't it faster though? We don't need to pull in the cacheline
>>> containing the struct page anymore.
>>>
>>
>> Exactly, not only that, get_user_pages is likely a bit slower than we
>> need for just kvm pte lookup.
On Wed, Apr 02, 2008 at 01:50:19PM +0200, Andrea Arcangeli wrote:
>     if (pfn_valid(pfn)) {
>         page = pfn_to_page(pfn);
>         if (!PageReserved(page)) {
>             BUG_ON(page_count(page) != 1);
>             if (is_writeable_pte(*spte))
>
On Wed, Apr 02, 2008 at 02:16:41PM +0300, Avi Kivity wrote:
> Ugh, there's still mark_page_accessed() and SetPageDirty().
btw, just as PG_dirty is only set if the spte is writeable,
mark_page_accessed should only run if the accessed bit is set in the
spte. It doesn't matter now as nobody could possib
Avi Kivity wrote:
> Andrea Arcangeli wrote:
>> On Wed, Apr 02, 2008 at 12:50:50PM +0300, Avi Kivity wrote:
>>
>>> Isn't it faster though? We don't need to pull in the cacheline
>>> containing the struct page anymore.
>>>
>>
>> Exactly, not only that, get_user_pages is likely a bit slower than we
>> need for just kvm pte lookup.
Andrea Arcangeli wrote:
> On Wed, Apr 02, 2008 at 12:50:50PM +0300, Avi Kivity wrote:
>
>> Isn't it faster though? We don't need to pull in the cacheline containing
>> the struct page anymore.
>>
>
> Exactly, not only that, get_user_pages is likely a bit slower than we
> need for just kvm pte lookup.
On Wed, Apr 02, 2008 at 12:50:50PM +0300, Avi Kivity wrote:
> Isn't it faster though? We don't need to pull in the cacheline containing
> the struct page anymore.
Exactly, not only that, get_user_pages is likely a bit slower than we
need for just kvm pte lookup. GRU uses follow_page directly bec
Andrea Arcangeli wrote:
> On Wed, Apr 02, 2008 at 07:32:35AM +0300, Avi Kivity wrote:
>
>> It ought to work. gfn_to_hfn() (old gfn_to_page) will still need to take a
>> refcount if possible.
>>
>
> This reminds me: with mmu notifiers we could implement gfn_to_hfn only
> with follow_page and skip the refcounting on the struct page.
On Wed, Apr 02, 2008 at 07:32:35AM +0300, Avi Kivity wrote:
> It ought to work. gfn_to_hfn() (old gfn_to_page) will still need to take a
> refcount if possible.
This reminds me: with mmu notifiers we could implement gfn_to_hfn only
with follow_page and skip the refcounting on the struct page.
I
Anthony Liguori wrote:
> What about switching the KVM MMU code to use hfn_t instead of struct
> page? The initial conversion is pretty straight forward as the places
> where you actually need a struct page you can always get it from
> pfn_to_page() (like in kvm_release_page_dirty).
>
> We can th
Anthony Liguori wrote:
>
> You could get away with supporting reserved RAM another way though.
> If we refactored the MMU to use hfn_t instead of struct page, you
> would then need a mechanism to mmap() reserved ram into userspace
> (similar to ioremap I guess). In fact, you may be able to jus
On Tue, Apr 01, 2008 at 10:22:51PM +0300, Avi Kivity wrote:
> It's just something we discussed, not code.
Yes, the pfn_valid check should skip all refcounting for mmio regions
without a struct page. But gfn_to_page can't work without a struct
page, so some change will be needed there. With the res
Andrea Arcangeli wrote:
> On Tue, Apr 01, 2008 at 01:21:37PM -0500, Anthony Liguori wrote:
>
>> return a page, not an HPA. I haven't looked too deeply yet, but my
>> suspicion is that to properly support mapping in VM_IO pages will require
>> some general refactoring since we always assume that a struct page exists
>> for any
On Tue, Apr 01, 2008 at 01:21:37PM -0500, Anthony Liguori wrote:
> return a page, not an HPA. I haven't looked too deeply yet, but my
> suspicion is that to properly support mapping in VM_IO pages will require
> some general refactoring since we always assume that a struct page exists
> for any
Avi Kivity wrote:
> Ben-Ami Yassour1 wrote:
>
>>
>>
>
> Not enough. How do you know if this calling process has permissions to
> access that pci device (I retract my previous "pci passthrough is as
> rootish as you get" remark).
>
>
>> What do you think? Given that the shadow page
Ben-Ami Yassour1 wrote:
>>
>> Can you explain why you're not using the regular memory slot mechanism?
>> i.e. have userspace mmap(/dev/mem) and create a memslot containing that
>> at the appropriate guest physical address?
>>
>>
> Our initial approach was to mmap /sys/bus/pci/devices/.../resou
Avi Kivity <[EMAIL PROTECTED]> wrote on 01/04/2008 16:30:00:
> [EMAIL PROTECTED] wrote:
> > From: Ben-Ami Yassour <[EMAIL PROTECTED]>
> >
> > Enable a guest to access a device's memory mapped I/O regions directly.
> > Userspace sends the mmio regions that the guest can access. On the first
> > p
Anthony Liguori wrote:
> I looked at Andrea's patches and I didn't see any special handling for
> non-RAM pages. Something Muli mentioned that kept them from doing
> /sys/devices/pci/.../region to begin with was the fact that IO pages do
> not have a struct page backing them so get_user_pages()
Andrea Arcangeli wrote:
> On Tue, Apr 01, 2008 at 10:20:49AM -0500, Anthony Liguori wrote:
>
>> Which is apparently entirely unnecessary as we already have
>> /sys/bus/pci/.../region. It's just a matter of checking if a vma is VM_IO
>> and then dealing with the subsequent reference counting i
Andrea Arcangeli wrote:
> On Tue, Apr 01, 2008 at 06:18:07PM +0100, Daniel P. Berrange wrote:
>
>> and very few application domains are allowed to access them. The KVM/QEMU
>> policy will not allow this for example. Basically on the X server, HAL and
>> dmidecode have access in current policy. I
Avi Kivity wrote:
> Anthony Liguori wrote:
>> Avi Kivity wrote:
>>> [EMAIL PROTECTED] wrote:
>>>
>>>> From: Ben-Ami Yassour <[EMAIL PROTECTED]>
>>>>
>>>> Enable a guest to access a device's memory mapped I/O regions directly.
>>>> Userspace sends the mmio regions that the guest can access.
On Tue, Apr 01, 2008 at 08:10:31PM +0200, Andrea Arcangeli wrote:
> On Tue, Apr 01, 2008 at 06:18:07PM +0100, Daniel P. Berrange wrote:
> > and very few application domains are allowed to access them. The KVM/QEMU
> > policy will not allow this for example. Basically on the X server, HAL and
> > dm
On Tue, Apr 01, 2008 at 10:20:49AM -0500, Anthony Liguori wrote:
> Which is apparently entirely unnecessary as we already have
> /sys/bus/pci/.../region. It's just a matter of checking if a vma is VM_IO
> and then dealing with the subsequent reference counting issues as Avi
> points out.
Do yo
On Tue, Apr 01, 2008 at 06:18:07PM +0100, Daniel P. Berrange wrote:
> and very few application domains are allowed to access them. The KVM/QEMU
> policy will not allow this for example. Basically on the X server, HAL and
> dmidecode have access in current policy. It would be undesirable to have to
On Tue, Apr 01, 2008 at 08:03:14PM +0300, Avi Kivity wrote:
> Anthony Liguori wrote:
> > Avi Kivity wrote:
> >> [EMAIL PROTECTED] wrote:
> >>
> >>> From: Ben-Ami Yassour <[EMAIL PROTECTED]>
> >>>
> >>> Enable a guest to access a device's memory mapped I/O regions directly.
> >>> Userspace sends t
Anthony Liguori wrote:
>>
>> Regardless of whether we can use /dev/mem, I think we should
>> introduce a new char device anyway. We only need to mmap() MMIO
>> regions which are mapped by the PCI bus, presumably, the kernel
>> should know about these mappings. The driver should only allow
>>
Anthony Liguori wrote:
> Avi Kivity wrote:
>> [EMAIL PROTECTED] wrote:
>>
>>> From: Ben-Ami Yassour <[EMAIL PROTECTED]>
>>>
>>> Enable a guest to access a device's memory mapped I/O regions directly.
>>> Userspace sends the mmio regions that the guest can access. On the
>>> first
>>> page fault
Anthony Liguori wrote:
> Avi Kivity wrote:
>> [EMAIL PROTECTED] wrote:
>>
>>> From: Ben-Ami Yassour <[EMAIL PROTECTED]>
>>>
>>> Enable a guest to access a device's memory mapped I/O regions directly.
>>> Userspace sends the mmio regions that the guest can access. On the
>>> first
>>> page fault
Avi Kivity wrote:
> [EMAIL PROTECTED] wrote:
>
>> From: Ben-Ami Yassour <[EMAIL PROTECTED]>
>>
>> Enable a guest to access a device's memory mapped I/O regions directly.
>> Userspace sends the mmio regions that the guest can access. On the first
>> page fault for an access to an mmio address the
[EMAIL PROTECTED] wrote:
> From: Ben-Ami Yassour <[EMAIL PROTECTED]>
>
> Enable a guest to access a device's memory mapped I/O regions directly.
> Userspace sends the mmio regions that the guest can access. On the first
> page fault for an access to an mmio address the host translates the gva to
>
From: Ben-Ami Yassour <[EMAIL PROTECTED]>
Enable a guest to access a device's memory mapped I/O regions directly.
Userspace sends the mmio regions that the guest can access. On the first
page fault for an access to an mmio address the host translates the gva to hpa,
and updates the sptes.
Signed-
31 matches