Re: [Xen-devel] [V9 3/3] Differentiate IO/mem resources tracked by ioreq server

2016-01-06 Thread Jan Beulich
>>> On 31.12.15 at 10:33,  wrote:
> On 12/21/2015 10:45 PM, Jan Beulich wrote:
> On 15.12.15 at 03:05,  wrote:
>>> @@ -2593,6 +2597,16 @@ struct hvm_ioreq_server 
>>> *hvm_select_ioreq_server(struct domain *d,
>>>   type = (p->type == IOREQ_TYPE_PIO) ?
>>>   HVMOP_IO_RANGE_PORT : HVMOP_IO_RANGE_MEMORY;
>>>   addr = p->addr;
>>> +if ( type == HVMOP_IO_RANGE_MEMORY )
>>> +{
>>> + ram_page = get_page_from_gfn(d, p->addr >> PAGE_SHIFT,
>>> +  &p2mt, P2M_UNSHARE);
>>> + if ( p2mt == p2m_mmio_write_dm )
>>> + type = HVMOP_IO_RANGE_WP_MEM;
>>> +
>>> + if ( ram_page )
>>> + put_page(ram_page);
>>> +}
>>
>> You evaluate the page's current type here - what if it subsequently
>> changes? I don't think it is appropriate to leave the hypervisor at
>> the mercy of the device model here.
> 
> Well, I do not quite understand your concern. :)
> Here, get_page_from_gfn() is used to determine whether the addr is MMIO
> or write-protected RAM. If this p2m type is changed, the change would be
> triggered by the guest and device model, e.g. this RAM is no longer
> supposed to be used as the graphic translation table. And it should be
> fine. But I also wonder: is there any other routine more appropriate for
> getting a p2m type from the gfn?

No, the question isn't the choice of method to retrieve the
current type, but the lack of measures against the retrieved
type becoming stale by the time you actually use it.

>>> --- a/xen/include/asm-x86/hvm/domain.h
>>> +++ b/xen/include/asm-x86/hvm/domain.h
>>> @@ -48,8 +48,8 @@ struct hvm_ioreq_vcpu {
>>>   bool_t   pending;
>>>   };
>>>
>>> -#define NR_IO_RANGE_TYPES (HVMOP_IO_RANGE_PCI + 1)
>>> -#define MAX_NR_IO_RANGES  256
>>> +#define NR_IO_RANGE_TYPES (HVMOP_IO_RANGE_WP_MEM + 1)
>>> +#define MAX_NR_IO_RANGES  8192
>>
>> I'm sure I've objected before to this universal bumping of the limit:
>> Even if I were to withdraw my objection to the higher limit on the
>> new kind of tracked resource, I would continue to object to all
>> other resources getting their limits bumped too.
>>
> 
> Hah. So how about we keep MAX_NR_IO_RANGES as 256, and use a new value,
> say MAX_NR_WR_MEM_RANGES, set to 8192 in this patch? :)

That would at least limit the damage to the newly introduced type.
But I suppose you realize it would still be a resource consumption
concern. In order for this not to become a security issue, you
might e.g. stay with the conservative old limit and allow a command
line or, even better, guest config file override (effectively having
the admin state their consent to the higher resource use).

Jan


___
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel


Re: [Xen-devel] [V9 3/3] Differentiate IO/mem resources tracked by ioreq server

2016-01-06 Thread Paul Durrant
> -Original Message-
> From: Jan Beulich [mailto:jbeul...@suse.com]
> Sent: 06 January 2016 08:59
> To: Zhang Yu
> Cc: Andrew Cooper; Paul Durrant; Wei Liu; Ian Jackson; Stefano Stabellini;
> Kevin Tian; zhiyuan...@intel.com; Shuai Ruan; xen-devel@lists.xen.org; Keir
> (Xen.org)
> Subject: Re: [Xen-devel] [V9 3/3] Differentiate IO/mem resources tracked by
> ioreq server
> 
> >>> On 31.12.15 at 10:33, <yu.c.zh...@linux.intel.com> wrote:
> > On 12/21/2015 10:45 PM, Jan Beulich wrote:
> >>>>> On 15.12.15 at 03:05, <shuai.r...@linux.intel.com> wrote:
> >>> @@ -2593,6 +2597,16 @@ struct hvm_ioreq_server
> *hvm_select_ioreq_server(struct domain *d,
> >>>   type = (p->type == IOREQ_TYPE_PIO) ?
> >>>   HVMOP_IO_RANGE_PORT : HVMOP_IO_RANGE_MEMORY;
> >>>   addr = p->addr;
> >>> +if ( type == HVMOP_IO_RANGE_MEMORY )
> >>> +{
> >>> + ram_page = get_page_from_gfn(d, p->addr >> PAGE_SHIFT,
> >>> +  &p2mt, P2M_UNSHARE);
> >>> + if ( p2mt == p2m_mmio_write_dm )
> >>> + type = HVMOP_IO_RANGE_WP_MEM;
> >>> +
> >>> + if ( ram_page )
> >>> + put_page(ram_page);
> >>> +}
> >>
> >> You evaluate the page's current type here - what if it subsequently
> >> changes? I don't think it is appropriate to leave the hypervisor at
> >> the mercy of the device model here.
> >
> > Well, I do not quite understand your concern. :)
> > Here, get_page_from_gfn() is used to determine whether the addr is MMIO
> > or write-protected RAM. If this p2m type is changed, the change would be
> > triggered by the guest and device model, e.g. this RAM is no longer
> > supposed to be used as the graphic translation table. And it should be
> > fine. But I also wonder: is there any other routine more appropriate for
> > getting a p2m type from the gfn?
> 
> No, the question isn't the choice of method to retrieve the
> current type, but the lack of measures against the retrieved
> type becoming stale by the time you actually use it.
> 

I don't think that issue is specific to this code. AFAIK nothing in the I/O 
emulation system protects against a type change whilst a request is in flight.
Also, what are the consequences of a change? Only that the wrong range type is 
selected and the emulation goes to the wrong place. This may be a problem for 
the VM but should cause no other problems.

  Paul

> >>> --- a/xen/include/asm-x86/hvm/domain.h
> >>> +++ b/xen/include/asm-x86/hvm/domain.h
> >>> @@ -48,8 +48,8 @@ struct hvm_ioreq_vcpu {
> >>>   bool_t   pending;
> >>>   };
> >>>
> >>> -#define NR_IO_RANGE_TYPES (HVMOP_IO_RANGE_PCI + 1)
> >>> -#define MAX_NR_IO_RANGES  256
> >>> +#define NR_IO_RANGE_TYPES (HVMOP_IO_RANGE_WP_MEM + 1)
> >>> +#define MAX_NR_IO_RANGES  8192
> >>
> >> I'm sure I've objected before to this universal bumping of the limit:
> >> Even if I were to withdraw my objection to the higher limit on the
> >> new kind of tracked resource, I would continue to object to all
> >> other resources getting their limits bumped too.
> >>
> >
> > Hah. So how about we keep MAX_NR_IO_RANGES as 256, and use a new
> value,
> > say MAX_NR_WR_MEM_RANGES, set to 8192 in this patch? :)
> 
> That would at least limit the damage to the newly introduced type.
> But I suppose you realize it would still be a resource consumption
> concern. In order for this not to become a security issue, you
> might e.g. stay with the conservative old limit and allow a command
> line or, even better, guest config file override (effectively having
> the admin state their consent to the higher resource use).
> 
> Jan




Re: [Xen-devel] [V9 3/3] Differentiate IO/mem resources tracked by ioreq server

2016-01-06 Thread Jan Beulich
>>> On 06.01.16 at 10:44, <paul.durr...@citrix.com> wrote:
>>  -Original Message-
>> From: Jan Beulich [mailto:jbeul...@suse.com]
>> Sent: 06 January 2016 08:59
>> To: Zhang Yu
>> Cc: Andrew Cooper; Paul Durrant; Wei Liu; Ian Jackson; Stefano Stabellini;
>> Kevin Tian; zhiyuan...@intel.com; Shuai Ruan; xen-devel@lists.xen.org; Keir
>> (Xen.org)
>> Subject: Re: [Xen-devel] [V9 3/3] Differentiate IO/mem resources tracked by
>> ioreq server
>> 
>> >>> On 31.12.15 at 10:33, <yu.c.zh...@linux.intel.com> wrote:
>> > On 12/21/2015 10:45 PM, Jan Beulich wrote:
>> >>>>> On 15.12.15 at 03:05, <shuai.r...@linux.intel.com> wrote:
>> >>> @@ -2593,6 +2597,16 @@ struct hvm_ioreq_server
>> *hvm_select_ioreq_server(struct domain *d,
>> >>>   type = (p->type == IOREQ_TYPE_PIO) ?
>> >>>   HVMOP_IO_RANGE_PORT : HVMOP_IO_RANGE_MEMORY;
>> >>>   addr = p->addr;
>> >>> +if ( type == HVMOP_IO_RANGE_MEMORY )
>> >>> +{
>> >>> + ram_page = get_page_from_gfn(d, p->addr >> PAGE_SHIFT,
>> >>> +  &p2mt, P2M_UNSHARE);
>> >>> + if ( p2mt == p2m_mmio_write_dm )
>> >>> + type = HVMOP_IO_RANGE_WP_MEM;
>> >>> +
>> >>> + if ( ram_page )
>> >>> + put_page(ram_page);
>> >>> +}
>> >>
>> >> You evaluate the page's current type here - what if it subsequently
>> >> changes? I don't think it is appropriate to leave the hypervisor at
>> >> the mercy of the device model here.
>> >
>> > Well, I do not quite understand your concern. :)
>> > Here, get_page_from_gfn() is used to determine whether the addr is MMIO
>> > or write-protected RAM. If this p2m type is changed, the change would be
>> > triggered by the guest and device model, e.g. this RAM is no longer
>> > supposed to be used as the graphic translation table. And it should be
>> > fine. But I also wonder: is there any other routine more appropriate for
>> > getting a p2m type from the gfn?
>> 
>> No, the question isn't the choice of method to retrieve the
>> current type, but the lack of measures against the retrieved
>> type becoming stale by the time you actually use it.
> 
> I don't think that issue is specific to this code. AFAIK nothing in the I/O 
> emulation system protects against a type change whilst a request is in 
> flight.
> Also, what are the consequences of a change? Only that the wrong range type 
> is selected and the emulation goes to the wrong place. This may be a problem 
> for the VM but should cause no other problems.

Okay, I buy this argument, but I think it would help if that was spelled
out this way in the commit message.

Jan




Re: [Xen-devel] [V9 3/3] Differentiate IO/mem resources tracked by ioreq server

2016-01-06 Thread Yu, Zhang



On 1/6/2016 4:59 PM, Jan Beulich wrote:

On 31.12.15 at 10:33,  wrote:

On 12/21/2015 10:45 PM, Jan Beulich wrote:

On 15.12.15 at 03:05,  wrote:

--- a/xen/include/asm-x86/hvm/domain.h
+++ b/xen/include/asm-x86/hvm/domain.h
@@ -48,8 +48,8 @@ struct hvm_ioreq_vcpu {
   bool_t   pending;
   };

-#define NR_IO_RANGE_TYPES (HVMOP_IO_RANGE_PCI + 1)
-#define MAX_NR_IO_RANGES  256
+#define NR_IO_RANGE_TYPES (HVMOP_IO_RANGE_WP_MEM + 1)
+#define MAX_NR_IO_RANGES  8192


I'm sure I've objected before to this universal bumping of the limit:
Even if I were to withdraw my objection to the higher limit on the
new kind of tracked resource, I would continue to object to all
other resources getting their limits bumped too.



Hah. So how about we keep MAX_NR_IO_RANGES as 256, and use a new value,
say MAX_NR_WR_MEM_RANGES, set to 8192 in this patch? :)


That would at least limit the damage to the newly introduced type.
But I suppose you realize it would still be a resource consumption
concern. In order for this not to become a security issue, you
might e.g. stay with the conservative old limit and allow a command
line or, even better, guest config file override (effectively having
the admin state their consent to the higher resource use).


Thanks, Jan. I'll try to use the guest config file to set this limit. :)

Yu


Jan




Re: [Xen-devel] [V9 3/3] Differentiate IO/mem resources tracked by ioreq server

2016-01-06 Thread Yu, Zhang



On 1/6/2016 5:58 PM, Jan Beulich wrote:

On 06.01.16 at 10:44, <paul.durr...@citrix.com> wrote:

  -Original Message-
From: Jan Beulich [mailto:jbeul...@suse.com]
Sent: 06 January 2016 08:59
To: Zhang Yu
Cc: Andrew Cooper; Paul Durrant; Wei Liu; Ian Jackson; Stefano Stabellini;
Kevin Tian; zhiyuan...@intel.com; Shuai Ruan; xen-devel@lists.xen.org; Keir
(Xen.org)
Subject: Re: [Xen-devel] [V9 3/3] Differentiate IO/mem resources tracked by
ioreq server


On 31.12.15 at 10:33, <yu.c.zh...@linux.intel.com> wrote:

On 12/21/2015 10:45 PM, Jan Beulich wrote:

On 15.12.15 at 03:05, <shuai.r...@linux.intel.com> wrote:

@@ -2593,6 +2597,16 @@ struct hvm_ioreq_server

*hvm_select_ioreq_server(struct domain *d,

   type = (p->type == IOREQ_TYPE_PIO) ?
   HVMOP_IO_RANGE_PORT : HVMOP_IO_RANGE_MEMORY;
   addr = p->addr;
+if ( type == HVMOP_IO_RANGE_MEMORY )
+{
+ ram_page = get_page_from_gfn(d, p->addr >> PAGE_SHIFT,
+  &p2mt, P2M_UNSHARE);
+ if ( p2mt == p2m_mmio_write_dm )
+ type = HVMOP_IO_RANGE_WP_MEM;
+
+ if ( ram_page )
+ put_page(ram_page);
+}


You evaluate the page's current type here - what if it subsequently
changes? I don't think it is appropriate to leave the hypervisor at
the mercy of the device model here.


Well, I do not quite understand your concern. :)
Here, get_page_from_gfn() is used to determine whether the addr is MMIO
or write-protected RAM. If this p2m type is changed, the change would be
triggered by the guest and device model, e.g. this RAM is no longer
supposed to be used as the graphic translation table. And it should be
fine. But I also wonder: is there any other routine more appropriate for
getting a p2m type from the gfn?


No, the question isn't the choice of method to retrieve the
current type, but the lack of measures against the retrieved
type becoming stale by the time you actually use it.


I don't think that issue is specific to this code. AFAIK nothing in the I/O
emulation system protects against a type change whilst a request is in
flight.
Also, what are the consequences of a change? Only that the wrong range type
is selected and the emulation goes to the wrong place. This may be a problem
for the VM but should cause no other problems.


Okay, I buy this argument, but I think it would help if that was spelled
out this way in the commit message.


Thank you, Paul & Jan. :)
A note will be added to explain this in the commit message in next
version.

Yu


Jan




Re: [Xen-devel] [V9 3/3] Differentiate IO/mem resources tracked by ioreq server

2015-12-31 Thread Yu, Zhang



On 12/21/2015 10:45 PM, Jan Beulich wrote:

On 15.12.15 at 03:05,  wrote:

--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -935,6 +935,9 @@ static void hvm_ioreq_server_free_rangesets(struct 
hvm_ioreq_server *s,
  rangeset_destroy(s->range[i]);
  }

+static const char *io_range_name[ NR_IO_RANGE_TYPES ] =


const


OK. Thanks.




+{"port", "mmio", "pci", "wp-ed memory"};


As brief as possible, but still understandable - e.g. "wp-mem"?



Got it. Thanks.


@@ -2593,6 +2597,16 @@ struct hvm_ioreq_server *hvm_select_ioreq_server(struct 
domain *d,
  type = (p->type == IOREQ_TYPE_PIO) ?
  HVMOP_IO_RANGE_PORT : HVMOP_IO_RANGE_MEMORY;
  addr = p->addr;
+if ( type == HVMOP_IO_RANGE_MEMORY )
+{
+ ram_page = get_page_from_gfn(d, p->addr >> PAGE_SHIFT,
+  &p2mt, P2M_UNSHARE);
+ if ( p2mt == p2m_mmio_write_dm )
+ type = HVMOP_IO_RANGE_WP_MEM;
+
+ if ( ram_page )
+ put_page(ram_page);
+}


You evaluate the page's current type here - what if it subsequently
changes? I don't think it is appropriate to leave the hypervisor at
the mercy of the device model here.



Well, I do not quite understand your concern. :)
Here, get_page_from_gfn() is used to determine whether the addr is MMIO
or write-protected RAM. If this p2m type is changed, the change would be
triggered by the guest and device model, e.g. this RAM is no longer
supposed to be used as the graphic translation table. And it should be
fine. But I also wonder: is there any other routine more appropriate for
getting a p2m type from the gfn?


--- a/xen/include/asm-x86/hvm/domain.h
+++ b/xen/include/asm-x86/hvm/domain.h
@@ -48,8 +48,8 @@ struct hvm_ioreq_vcpu {
  bool_t   pending;
  };

-#define NR_IO_RANGE_TYPES (HVMOP_IO_RANGE_PCI + 1)
-#define MAX_NR_IO_RANGES  256
+#define NR_IO_RANGE_TYPES (HVMOP_IO_RANGE_WP_MEM + 1)
+#define MAX_NR_IO_RANGES  8192


I'm sure I've objected before to this universal bumping of the limit:
Even if I were to withdraw my objection to the higher limit on the
new kind of tracked resource, I would continue to object to all
other resources getting their limits bumped too.



Hah. So how about we keep MAX_NR_IO_RANGES as 256, and use a new value,
say MAX_NR_WR_MEM_RANGES, set to 8192 in this patch? :)

Thanks a lot & happy new year!


Yu


Jan




Re: [Xen-devel] [V9 3/3] Differentiate IO/mem resources tracked by ioreq server

2015-12-21 Thread Jan Beulich
>>> On 15.12.15 at 03:05,  wrote:
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -935,6 +935,9 @@ static void hvm_ioreq_server_free_rangesets(struct 
> hvm_ioreq_server *s,
>  rangeset_destroy(s->range[i]);
>  }
>  
> +static const char *io_range_name[ NR_IO_RANGE_TYPES ] =

const

> +{"port", "mmio", "pci", "wp-ed memory"};

As brief as possible, but still understandable - e.g. "wp-mem"?

> @@ -2593,6 +2597,16 @@ struct hvm_ioreq_server 
> *hvm_select_ioreq_server(struct domain *d,
>  type = (p->type == IOREQ_TYPE_PIO) ?
>  HVMOP_IO_RANGE_PORT : HVMOP_IO_RANGE_MEMORY;
>  addr = p->addr;
> +if ( type == HVMOP_IO_RANGE_MEMORY )
> +{
> + ram_page = get_page_from_gfn(d, p->addr >> PAGE_SHIFT,
> +  &p2mt, P2M_UNSHARE);
> + if ( p2mt == p2m_mmio_write_dm )
> + type = HVMOP_IO_RANGE_WP_MEM;
> +
> + if ( ram_page )
> + put_page(ram_page);
> +}

You evaluate the page's current type here - what if it subsequently
changes? I don't think it is appropriate to leave the hypervisor at
the mercy of the device model here.

> --- a/xen/include/asm-x86/hvm/domain.h
> +++ b/xen/include/asm-x86/hvm/domain.h
> @@ -48,8 +48,8 @@ struct hvm_ioreq_vcpu {
>  bool_t   pending;
>  };
>  
> -#define NR_IO_RANGE_TYPES (HVMOP_IO_RANGE_PCI + 1)
> -#define MAX_NR_IO_RANGES  256
> +#define NR_IO_RANGE_TYPES (HVMOP_IO_RANGE_WP_MEM + 1)
> +#define MAX_NR_IO_RANGES  8192

I'm sure I've objected before to this universal bumping of the limit:
Even if I were to withdraw my objection to the higher limit on the
new kind of tracked resource, I would continue to object to all
other resources getting their limits bumped too.

Jan




Re: [Xen-devel] [V9 3/3] Differentiate IO/mem resources tracked by ioreq server

2015-12-19 Thread Tian, Kevin
> From: Shuai Ruan [mailto:shuai.r...@linux.intel.com]
> Sent: Tuesday, December 15, 2015 10:05 AM
> 
> From: Yu Zhang 
> 
> Currently in ioreq server, guest write-protected ram pages are
> tracked in the same rangeset with device mmio resources. Yet
> unlike device mmio, which can be in big chunks, the guest write-
> protected pages may be discrete ranges of 4K bytes each. This
> patch uses a separate rangeset for the guest ram pages.
> 
> Note: Previously, a new hypercall or subop was suggested to map
> write-protected pages into the ioreq server. However, it turned out
> the handler of this new hypercall would be almost the same as the
> existing pair - HVMOP_[un]map_io_range_to_ioreq_server, and there's
> already a type parameter in this hypercall. So no new hypercall
> defined, only a new type is introduced.
> 
> Signed-off-by: Yu Zhang 
> Acked-by: Wei Liu 
> Acked-by: Ian Campbell 
> Signed-off-by: Shuai Ruan 

Reviewed-by: Kevin Tian 



[Xen-devel] [V9 3/3] Differentiate IO/mem resources tracked by ioreq server

2015-12-14 Thread Shuai Ruan
From: Yu Zhang 

Currently in ioreq server, guest write-protected ram pages are
tracked in the same rangeset with device mmio resources. Yet
unlike device mmio, which can be in big chunks, the guest write-
protected pages may be discrete ranges of 4K bytes each. This
patch uses a separate rangeset for the guest ram pages.

Note: Previously, a new hypercall or subop was suggested to map
write-protected pages into the ioreq server. However, it turned out
the handler of this new hypercall would be almost the same as the
existing pair - HVMOP_[un]map_io_range_to_ioreq_server, and there's
already a type parameter in this hypercall. So no new hypercall
defined, only a new type is introduced.

Signed-off-by: Yu Zhang 
Acked-by: Wei Liu 
Acked-by: Ian Campbell 
Signed-off-by: Shuai Ruan 
---
 tools/libxc/include/xenctrl.h| 31 
 tools/libxc/xc_domain.c  | 61 
 xen/arch/x86/hvm/hvm.c   | 27 +++---
 xen/include/asm-x86/hvm/domain.h |  4 +--
 xen/include/public/hvm/hvm_op.h  |  1 +
 5 files changed, 118 insertions(+), 6 deletions(-)

diff --git a/tools/libxc/include/xenctrl.h b/tools/libxc/include/xenctrl.h
index 01a6dda..1a08f69 100644
--- a/tools/libxc/include/xenctrl.h
+++ b/tools/libxc/include/xenctrl.h
@@ -2023,6 +2023,37 @@ int xc_hvm_unmap_io_range_from_ioreq_server(xc_interface 
*xch,
 int is_mmio,
 uint64_t start,
 uint64_t end);
+/**
+ * This function registers a range of write-protected memory for emulation.
+ *
+ * @parm xch a handle to an open hypervisor interface.
+ * @parm domid the domain id to be serviced
+ * @parm id the IOREQ Server id.
+ * @parm start start of range
+ * @parm end end of range (inclusive).
+ * @return 0 on success, -1 on failure.
+ */
+int xc_hvm_map_wp_mem_range_to_ioreq_server(xc_interface *xch,
+domid_t domid,
+ioservid_t id,
+xen_pfn_t start,
+xen_pfn_t end);
+
+/**
+ * This function deregisters a range of write-protected memory for emulation.
+ *
+ * @parm xch a handle to an open hypervisor interface.
+ * @parm domid the domain id to be serviced
+ * @parm id the IOREQ Server id.
+ * @parm start start of range
+ * @parm end end of range (inclusive).
+ * @return 0 on success, -1 on failure.
+ */
+int xc_hvm_unmap_wp_mem_range_from_ioreq_server(xc_interface *xch,
+domid_t domid,
+ioservid_t id,
+xen_pfn_t start,
+xen_pfn_t end);
 
 /**
  * This function registers a PCI device for config space emulation.
diff --git a/tools/libxc/xc_domain.c b/tools/libxc/xc_domain.c
index 96506d5..41c5ae2 100644
--- a/tools/libxc/xc_domain.c
+++ b/tools/libxc/xc_domain.c
@@ -1543,6 +1543,67 @@ int xc_hvm_unmap_io_range_from_ioreq_server(xc_interface 
*xch, domid_t domid,
 return rc;
 }
 
+int xc_hvm_map_wp_mem_range_to_ioreq_server(xc_interface *xch,
+domid_t domid,
+ioservid_t id,
+xen_pfn_t start,
+xen_pfn_t end)
+{
+DECLARE_HYPERCALL;
+DECLARE_HYPERCALL_BUFFER(xen_hvm_io_range_t, arg);
+int rc;
+
+arg = xc_hypercall_buffer_alloc(xch, arg, sizeof(*arg));
+if ( arg == NULL )
+return -1;
+
+hypercall.op = __HYPERVISOR_hvm_op;
+hypercall.arg[0] = HVMOP_map_io_range_to_ioreq_server;
+hypercall.arg[1] = HYPERCALL_BUFFER_AS_ARG(arg);
+
+arg->domid = domid;
+arg->id = id;
+arg->type = HVMOP_IO_RANGE_WP_MEM;
+arg->start = start;
+arg->end = end;
+
+rc = do_xen_hypercall(xch, &hypercall);
+
+xc_hypercall_buffer_free(xch, arg);
+return rc;
+}
+
+int xc_hvm_unmap_wp_mem_range_from_ioreq_server(xc_interface *xch,
+domid_t domid,
+ioservid_t id,
+xen_pfn_t start,
+xen_pfn_t end)
+{
+DECLARE_HYPERCALL;
+DECLARE_HYPERCALL_BUFFER(xen_hvm_io_range_t, arg);
+int rc;
+
+arg = xc_hypercall_buffer_alloc(xch, arg, sizeof(*arg));
+if ( arg == NULL )
+return -1;
+
+hypercall.op = __HYPERVISOR_hvm_op;
+hypercall.arg[0] = HVMOP_unmap_io_range_from_ioreq_server;
+hypercall.arg[1] = HYPERCALL_BUFFER_AS_ARG(arg);
+
+arg->domid = domid;
+