Hi Stefano,

On 10/08/15 13:03, Stefano Stabellini wrote:
>> +		xen_pfn = xen_page_to_pfn(page);
>> +	}
>> +	fn(pfn_to_gfn(xen_pfn++), data);
>
> What is the purpose of incrementing xen_pfn here?

Because the Linux page is split into multiple xen_pfn, we need to pass
the next 4KB Xen PFN to fn on each iteration rather than the same one
every time.
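For reference, that increment only makes sense as part of a loop that
walks every 4KB Xen frame backing a Linux page. A minimal sketch of the
pattern, reusing XEN_PFN_PER_PAGE, xen_page_to_pfn() and pfn_to_gfn()
from the quoted hunks; the surrounding function is a hypothetical
reconstruction, not the actual patch:

static void for_each_xen_gfn(struct page **pages, unsigned int nr_pages,
                             void (*fn)(unsigned long gfn, void *data),
                             void *data)
{
        unsigned long xen_pfn = 0;
        unsigned int i;

        for (i = 0; i < nr_pages * XEN_PFN_PER_PAGE; i++) {
                /* Crossed into the next Linux page: refresh the base PFN. */
                if ((i % XEN_PFN_PER_PAGE) == 0)
                        xen_pfn = xen_page_to_pfn(pages[i / XEN_PFN_PER_PAGE]);

                /*
                 * One Linux page covers XEN_PFN_PER_PAGE consecutive 4KB
                 * Xen frames, hence the post-increment.
                 */
                fn(pfn_to_gfn(xen_pfn++), data);
        }
}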
On 07/08/15 17:46, Julien Grall wrote:
> The hypercall interface (as well as the toolstack) is always using 4KB
> page granularity. When the toolstack asks to map a series of guest
> PFNs in a batch, it expects the pages to be mapped contiguously in its
> virtual memory.
>
> When Linux is using 64KB page granularity, the privcmd driver will
> have to map multiple Xen PFNs in a single Linux page.
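For concreteness, the mismatch is fixed arithmetic: with 64KB Linux
pages, each Linux page spans 16 of the 4KB frames the hypercall
interface deals in. A sketch of the relevant definitions
(XEN_PFN_PER_PAGE appears in the quoted code; the XEN_PAGE_* names are
assumed here):

#define XEN_PAGE_SHIFT   12     /* the hypercall ABI always uses 4KB frames */
#define XEN_PAGE_SIZE    (1UL << XEN_PAGE_SHIFT)
#define XEN_PFN_PER_PAGE (PAGE_SIZE / XEN_PAGE_SIZE)   /* 16 when PAGE_SIZE is 64KB */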
On 10/08/15 13:03, Stefano Stabellini wrote:
> On Fri, 7 Aug 2015, Julien Grall wrote:
>> -	rc = HYPERVISOR_memory_op(XENMEM_add_to_physmap_range, &xatp);
>> -	return rc < 0 ? rc : err;
>> +	for (i = 0; i < nr_gfn; i++) {
>> +		if ((i % XEN_PFN_PER_PAGE) == 0) {
>> +			page =
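The quoted hunk is cut short by the archive. A sketch of the shape such
a loop takes; i, nr_gfn, page, rc, err and XEN_PFN_PER_PAGE come from
the quoted code and xen_page_to_pfn() from the other hunk in this
thread, while pages[], gfn[] and map_one_gfn() are hypothetical names
for illustration, not the actual patch:

for (i = 0; i < nr_gfn; i++) {
        /* Every XEN_PFN_PER_PAGE iterations, move on to the next Linux page. */
        if ((i % XEN_PFN_PER_PAGE) == 0) {
                page = pages[i / XEN_PFN_PER_PAGE];
                xen_pfn = xen_page_to_pfn(page);
        }

        /* Map one 4KB guest frame into the current 4KB slot of that page. */
        rc = map_one_gfn(gfn[i], xen_pfn + (i % XEN_PFN_PER_PAGE), domid);
        if (rc < 0)
                err = rc;
}
return err;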