Re: [Xen-devel] [RFC Patch v4 7/8] x86/hvm: bump the number of pages of shadow memory

2018-04-18 Thread Jan Beulich
>>> On 18.04.18 at 13:39,  wrote:
> On Wed, Apr 18, 2018 at 02:53:03AM -0600, Jan Beulich wrote:
>>>>> On 06.12.17 at 08:50,  wrote:
>>> Each vcpu of an HVM guest consumes at least one shadow page. Currently,
>>> only 256 pages (for the HAP case) are pre-allocated as shadow memory at
>>> the beginning; these would run out if the guest has more than 256 vcpus,
>>> and guest creation would fail. Bump the number of shadow pages to
>>> 2 * HVM_MAX_VCPUS for the HAP case and 8 * HVM_MAX_VCPUS for the shadow
>>> case.
>>> 
>>> This patch won't lead to more memory consumption, because the size of
>>> shadow memory will be adjusted via XEN_DOMCTL_SHADOW_OP_SET_ALLOCATION
>>> according to the size of guest memory and the number of vcpus.
>>
>> I don't understand this: What's the purpose of bumping the values if it
>> won't lead to higher memory consumption? Afaict there'd be higher
>> consumption at least transiently. And I don't see why this would need
>> doing independent of the intended vCPU count in the guest. I guess you
>> want to base your series on top of Andrew's max-vCPU-s adjustments
>> (which sadly didn't become ready in time for 4.11).
> 
> The situation here is that some pages are pre-allocated as P2M pages for
> domain initialization. After vCPU creation, the total number of P2M pages
> is adjusted via the domctl interface. Before vCPU creation, this domctl
> is unusable because of the check in paging_domctl():
>  if ( unlikely(d->vcpu == NULL) || unlikely(d->vcpu[0] == NULL) )

Hence my reference to Andrew's series.

> When the number of a guest's vCPUs is small, the pre-allocated pages are
> enough. But they won't be if the number of vCPUs is bigger than 256. Each
> vCPU uses at least one P2M page when it is created; see
> construct_vmcs()->hap_update_paging_modes().

Hmm, that might be a problem perhaps to be addressed in Andrew's series
then, as the implication would be that the amount of shadow/p2m memory
also needs to be set right by XEN_DOMCTL_createdomain (which iirc the
series doesn't do so far).

Jan




Re: [Xen-devel] [RFC Patch v4 7/8] x86/hvm: bump the number of pages of shadow memory

2018-04-18 Thread Andrew Cooper
On 18/04/18 12:39, Chao Gao wrote:
> On Wed, Apr 18, 2018 at 02:53:03AM -0600, Jan Beulich wrote:
>>>>> On 06.12.17 at 08:50,  wrote:
>>> Each vcpu of an HVM guest consumes at least one shadow page. Currently,
>>> only 256 pages (for the HAP case) are pre-allocated as shadow memory at
>>> the beginning; these would run out if the guest has more than 256 vcpus,
>>> and guest creation would fail. Bump the number of shadow pages to
>>> 2 * HVM_MAX_VCPUS for the HAP case and 8 * HVM_MAX_VCPUS for the shadow
>>> case.
>>>
>>> This patch won't lead to more memory consumption, because the size of
>>> shadow memory will be adjusted via XEN_DOMCTL_SHADOW_OP_SET_ALLOCATION
>>> according to the size of guest memory and the number of vcpus.
>> I don't understand this: What's the purpose of bumping the values if it
>> won't lead to higher memory consumption? Afaict there'd be higher
>> consumption at least transiently. And I don't see why this would need
>> doing independent of the intended vCPU count in the guest. I guess you
>> want to base your series on top of Andrew's max-vCPU-s adjustments
>> (which sadly didn't become ready in time for 4.11).
> The situation here is that some pages are pre-allocated as P2M pages for
> domain initialization. After vCPU creation, the total number of P2M pages
> is adjusted via the domctl interface. Before vCPU creation, this domctl
> is unusable because of the check in paging_domctl():
>  if ( unlikely(d->vcpu == NULL) || unlikely(d->vcpu[0] == NULL) )
>
> When the number of a guest's vCPUs is small, the pre-allocated pages are
> enough. But they won't be if the number of vCPUs is bigger than 256. Each
> vCPU uses at least one P2M page when it is created; see
> construct_vmcs()->hap_update_paging_modes().

I've also found with XTF tests that the current minimum shadow
calculations are insufficient for single-vcpu domains with 128M of RAM.

I haven't had time to fix the algorithm, and XTF is using "shadow_memory=4"
as a bodge workaround.
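
To illustrate the shape of the problem, a sketch of a purely per-vCPU
lower bound (the helper name and constants below are hypothetical, not
the actual formula; iirc the real lower bound is
shadow_min_acceptable_pages() in xen/arch/x86/mm/shadow/common.c). Such
a bound never scales with guest RAM, which is how a single-vcpu 128M
domain can exceed it:

static unsigned int min_shadow_pages(unsigned int nr_vcpus,
                                     unsigned long nr_guest_pages)
{
    unsigned int per_vcpu = 128 * nr_vcpus;       /* assumed vCPU term */
    unsigned int per_ram  = nr_guest_pages / 256; /* assumed RAM term */

    /* Taking whichever term is larger would cover the failing case. */
    return per_vcpu > per_ram ? per_vcpu : per_ram;
}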

~Andrew


Re: [Xen-devel] [RFC Patch v4 7/8] x86/hvm: bump the number of pages of shadow memory

2018-04-18 Thread Chao Gao
On Wed, Apr 18, 2018 at 02:53:03AM -0600, Jan Beulich wrote:
>>>> On 06.12.17 at 08:50,  wrote:
>> Each vcpu of an HVM guest consumes at least one shadow page. Currently,
>> only 256 pages (for the HAP case) are pre-allocated as shadow memory at
>> the beginning; these would run out if the guest has more than 256 vcpus,
>> and guest creation would fail. Bump the number of shadow pages to
>> 2 * HVM_MAX_VCPUS for the HAP case and 8 * HVM_MAX_VCPUS for the shadow
>> case.
>> 
>> This patch won't lead to more memory consumption, because the size of
>> shadow memory will be adjusted via XEN_DOMCTL_SHADOW_OP_SET_ALLOCATION
>> according to the size of guest memory and the number of vcpus.
>
> I don't understand this: What's the purpose of bumping the values if it
> won't lead to higher memory consumption? Afaict there'd be higher
> consumption at least transiently. And I don't see why this would need
> doing independent of the intended vCPU count in the guest. I guess you
> want to base your series on top of Andrew's max-vCPU-s adjustments
> (which sadly didn't become ready in time for 4.11).

The situation here is that some pages are pre-allocated as P2M pages for
domain initialization. After vCPU creation, the total number of P2M pages
is adjusted via the domctl interface. Before vCPU creation, this domctl
is unusable because of the check in paging_domctl():
 if ( unlikely(d->vcpu == NULL) || unlikely(d->vcpu[0] == NULL) )
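
For context, that check sits near the top of paging_domctl()
(xen/arch/x86/mm/paging.c); sketched from memory here, with the
diagnostic text approximate:

    if ( unlikely(d->vcpu == NULL) || unlikely(d->vcpu[0] == NULL) )
    {
        /* Bails out before any vCPU exists, so the toolstack cannot
         * grow the pool via XEN_DOMCTL_SHADOW_OP_SET_ALLOCATION in
         * advance: the pre-allocated pages alone must carry the
         * domain through vCPU creation. */
        gdprintk(XENLOG_INFO, "Paging op on a domain with no vcpus\n");
        return -EINVAL;
    }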

When the number of a guest's vCPUs is small, the pre-allocated pages are
enough. But they won't be if the number of vCPUs is bigger than 256. Each
vCPU uses at least one P2M page when it is created; see
construct_vmcs()->hap_update_paging_modes().
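
A sketch of that chain (the symbol names are real ones from this era
of the tree; the body below is paraphrased, not quoted):
construct_vmcs() switches the new vCPU into the current paging mode,
which for HAP reaches hap_update_paging_modes(), and building the
vCPU's monitor table there draws from the shadow pool:

static void hap_update_paging_modes(struct vcpu *v)
{
    ...
    if ( pagetable_is_null(v->arch.monitor_table) )
    {
        /* At least one page per vCPU comes out of the pool here, so a
         * 256-page pre-allocation cannot cover more than 256 vCPUs. */
        mfn_t mmfn = hap_make_monitor_table(v);
        v->arch.monitor_table = pagetable_from_mfn(mmfn);
    }
    ...
}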

Thanks
Chao


Re: [Xen-devel] [RFC Patch v4 7/8] x86/hvm: bump the number of pages of shadow memory

2018-04-18 Thread Jan Beulich
>>> On 06.12.17 at 08:50,  wrote:
> Each vcpu of an HVM guest consumes at least one shadow page. Currently,
> only 256 pages (for the HAP case) are pre-allocated as shadow memory at
> the beginning; these would run out if the guest has more than 256 vcpus,
> and guest creation would fail. Bump the number of shadow pages to
> 2 * HVM_MAX_VCPUS for the HAP case and 8 * HVM_MAX_VCPUS for the shadow
> case.
> 
> This patch won't lead to more memory consumption, because the size of
> shadow memory will be adjusted via XEN_DOMCTL_SHADOW_OP_SET_ALLOCATION
> according to the size of guest memory and the number of vcpus.
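
As a sketch, the quoted bump amounts to something like the following
against hap_enable() in xen/arch/x86/mm/hap/hap.c (illustrative, not
the actual hunk; shadow_enable() would take the analogous
8 * HVM_MAX_VCPUS value):

    /* Grab enough p2m memory to make p2m_alloc_table() happy. */
    if ( old_pages == 0 )
    {
        paging_lock(d);
-       rv = hap_set_allocation(d, 256, NULL);
+       rv = hap_set_allocation(d, 2 * HVM_MAX_VCPUS, NULL);
        if ( rv != 0 )
            goto out;
        paging_unlock(d);
    }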

I don't understand this: What's the purpose of bumping the values if it
won't lead to higher memory consumption? Afaict there'd be higher
consumption at least transiently. And I don't see why this would need
doing independent of the intended vCPU count in the guest. I guess you
want to base your series on top of Andrew's max-vCPU-s adjustments
(which sadly didn't become ready in time for 4.11).
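
For reference, the adjustment the description relies on is the
toolstack issuing XEN_DOMCTL_SHADOW_OP_SET_ALLOCATION once the real
sizes are known. A hedged sketch of the hypercall shape (fields per
xen/include/public/domctl.h of this era; do_domctl() is libxc's
internal helper, so out-of-tree callers would go through
xc_shadow_control() instead):

/* Needs xenctrl.h, plus libxc-internal xc_private.h for do_domctl(). */
static int set_shadow_allocation(xc_interface *xch, uint32_t domid,
                                 unsigned int shadow_mb)
{
    struct xen_domctl domctl = {
        .cmd = XEN_DOMCTL_shadow_op,
        .domain = domid,
        .u.shadow_op = {
            .op = XEN_DOMCTL_SHADOW_OP_SET_ALLOCATION,
            .mb = shadow_mb,    /* pool size to set, in MiB */
        },
    };

    return do_domctl(xch, &domctl);   /* error handling elided */
}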

Jan




Re: [Xen-devel] [RFC Patch v4 7/8] x86/hvm: bump the number of pages of shadow memory

2018-02-27 Thread George Dunlap
On 12/06/2017 07:50 AM, Chao Gao wrote:
> Each vcpu of an HVM guest consumes at least one shadow page. Currently,
> only 256 pages (for the HAP case) are pre-allocated as shadow memory at
> the beginning; these would run out if the guest has more than 256 vcpus,
> and guest creation would fail. Bump the number of shadow pages to
> 2 * HVM_MAX_VCPUS for the HAP case and 8 * HVM_MAX_VCPUS for the shadow
> case.
> 
> This patch won't lead to more memory consumption, because the size of
> shadow memory will be adjusted via XEN_DOMCTL_SHADOW_OP_SET_ALLOCATION
> according to the size of guest memory and the number of vcpus.
> 
> Signed-off-by: Chao Gao 

Acked-by: George Dunlap 
