>>> On 27.01.16 at 08:02, wrote:
> On 1/26/2016 7:24 PM, Jan Beulich wrote:
>> On 26.01.16 at 08:59, wrote:
>>> On 1/22/2016 7:43 PM, Jan Beulich wrote:
>>>> On 22.01.16 at 04:20, wrote:
>>>>> @@ -2601,6 +2605,16 @@ struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
>>>>>         type = (p->type == IOREQ_TYPE_PIO) ?
>>>>>                HVMOP_IO_RANGE_PORT : HVMOP_IO_RANGE_MEMORY;
>>>>>         addr = p->addr;
>
Currently in the ioreq server, guest write-protected ram pages are
tracked in the same rangeset as device mmio resources. Yet
unlike device mmio, which can come in big chunks, the guest write-
protected pages may be discrete ranges of 4K bytes each. This
patch uses a separate rangeset for the guest ram pages.