On 09/01/2009 12:13 AM, Stephen Donnelly wrote:
> I'm totally confused now.

Sorry about that. The issue is the BUG in gfn_to_pfn where the pfn is
not calculated correctly after looking up the vma. I still don't see
how to get the physical address from the vma, since vm_pgoff is zero,
and the vm_ops are not filled. The vma does not seem to store [...]

On Mon, Aug 31, 2009 at 8:44 PM, Avi Kivity wrote:
> On 08/31/2009 01:33 AM, Stephen Donnelly wrote:
>> We can't duplicate mm/ in kvm. However, mm/memory.c says:
>>
>>  * The way we recognize COWed pages within VM_PFNMAP mappings is through the
>>  * rules set up by "remap_pfn_range()": the vma will have the VM_PFNMAP bit
>>  * set, and the vm_pgoff will [...]

On Thu, Aug 20, 2009 at 12:14 AM, Avi Kivity wrote:
> On 08/13/2009 07:07 AM, Stephen Donnelly wrote:
>> npages = get_user_pages_fast(addr, 1, 1, page); returns -EFAULT,
>> presumably because (vma->vm_flags & (VM_IO | VM_PFNMAP)).
>>
>> It takes the unlikely branch, and checks the vma, but I do [...]

On 08/13/2009 07:07 AM, Stephen Donnelly wrote:
> A less intrusive, but uglier, alternative is to call
> qemu_ram_alloc() and then mmap(MAP_FIXED) on top of that.

I did try this, but ended up with a BUG on the host in
/var/lib/dkms/kvm/84/build/x86/kvm_main.c:1266 gfn_to_pfn(), on the
line "B [...]

On Wed, Jul 29, 2009 at 11:06 AM, Stephen Donnelly wrote:
> On Tue, Jul 28, 2009 at 8:54 PM, Avi Kivity wrote:
>> On 07/28/2009 12:32 AM, Stephen Donnelly wrote:
>> You need a variant of qemu_ram_alloc() that accepts an fd and offset and
>> mmaps that.

I had a go at this, creating qemu_ram_mmap() [...]

On 07/30/2009 02:52 AM, Cam Macdonell wrote:
> You need a variant of qemu_ram_alloc() that accepts an fd and offset
> and mmaps that. A less intrusive, but uglier, alternative is to call
> qemu_ram_alloc() and then mmap(MAP_FIXED) on top of that.

Hi Avi,
I noticed that the region of memory being [...]

On 07/28/2009 12:32 AM, Stephen Donnelly wrote:
> What I don't understand is how to turn the host address returned from
> mmap into a ram_addr_t to pass to pci_register_bar.

Memory must be allocated using the qemu RAM functions.

That seems to be the problem. The memory cannot be allocated [...]

Hi Cam,
> Sorry I haven't answered your email from last Thursday. I'll answer it
> shortly.

Thanks, I'm still chipping away at it slowly.

>> On Thu, Jul 9, 2009 at 6:01 PM, Cam Macdonell wrote:
>>> The memory for the device is allocated as a POSIX shared memory object
>>> and then mmapped on to the allocated BAR [...]

Stephen Donnelly wrote:
> Hi Cam,

Hi Steve,
Sorry I haven't answered your email from last Thursday. I'll answer it
shortly.

On Thu, Jul 9, 2009 at 6:01 PM, Cam Macdonell wrote:
The memory for the device is allocated as a POSIX shared memory object
and then mmapped on to the allocated BAR [...]

On Sat, Jul 11, 2009 at 5:03 AM, Cam Macdonell wrote:
> Oops, I realize now that I passed the driver patch both times. Here is the
> old patch.
>
> http://patchwork.kernel.org/patch/22363/
>
> What are you compiling against? The git tree or a particular version? The
> above patch won't compile against [...]

On Thu, Jul 9, 2009 at 6:01 PM, Cam Macdonell wrote:
>> Is there a corresponding qemu patch for the backend to the guest pci
>> driver?
>
> Oops right. For some reason I can't find my driver patch in patchwork.
>
> http://kerneltrap.org/mailarchive/linux-kvm/2009/5/7/5665734

Thanks for the link, I h [...]

On 07/09/2009 12:33 AM, Stephen Donnelly wrote:
> Shared memory is fully coherent. You can use the ordinary x86 bus lock
> operations for concurrent read-modify-write access, and the memory barrier
> instructions to prevent reordering. Just like ordinary shared memory.

Okay, I think I was con [...]

On 8-Jul-09, at 4:01 PM, Stephen Donnelly wrote:
> On Thu, Jul 9, 2009 at 9:45 AM, Cam Macdonell wrote:
>> Hi Stephen,
>> Here is the latest patch that supports interrupts. I am currently
>> working on a broadcast mechanism that should be ready fairly soon.
>> http://patchwork.kernel.org/patch/22368/

Avi Kivity wrote:
> On 07/08/2009 01:23 AM, Stephen Donnelly wrote:
>> Also it appears that PCI IO memory (cpu_register_io_memory) is
>> provided via access functions, like the pci config space?
> It can also use ordinary RAM (for example, vga maps its framebuffer
> as a PCI BAR).

So host memory is exported [...]

On Thu, Jul 9, 2009 at 9:45 AM, Cam Macdonell wrote:
> Hi Stephen,
>
> Here is the latest patch that supports interrupts. I am currently working
> on a broadcast mechanism that should be ready fairly soon.
>
> http://patchwork.kernel.org/patch/22368/
>
> I have some test scripts that can demonstrate [...]

> Shared memory is fully coherent. You can use the ordinary x86 bus lock
> operations for concurrent read-modify-write access, and the memory barrier
> instructions to prevent reordering. Just like ordinary shared memory.
Okay, I think I was confused by the 'dirty' code. Is that just to do
with [...]

On 07/08/2009 01:23 AM, Stephen Donnelly wrote:
> Also it appears that PCI IO memory (cpu_register_io_memory) is
> provided via access functions, like the pci config space?

It can also use ordinary RAM (for example, vga maps its framebuffer as a
PCI BAR).

So host memory is exported [...]

On Mon, Jul 6, 2009 at 7:38 PM, Avi Kivity wrote:
>> I see virtio_pci uses cpu_physical_memory_map() which provides either
>> read or write mappings and notes "Use only for reads OR writes - not
>> for read-modify-write operations."
>
> Right, these are for unidirectional transient DMA.

Okay, as [...]

On 07/06/2009 01:41 AM, Stephen Donnelly wrote:
I am looking at how to do memory mapped IO between host and guests
under kvm. I expect to use the PCI emulation layer to present a PCI
device to the guest.
I see virtio_pci uses cpu_physical_memory_map() which provides either
read or write mappings and notes "Use only for reads OR writes - not
for read-modify-write operations."