On Mon, Feb 26, 2018 at 01:26:42AM -0700, Jan Beulich wrote:
>>>> On 23.02.18 at 19:11, <roger....@citrix.com> wrote:
>> On Wed, Dec 06, 2017 at 03:50:14PM +0800, Chao Gao wrote:
>>> Signed-off-by: Chao Gao <chao....@intel.com>
>>> ---
>>>  xen/include/public/hvm/hvm_info_table.h | 2 +-
>>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>> 
>>> diff --git a/xen/include/public/hvm/hvm_info_table.h b/xen/include/public/hvm/hvm_info_table.h
>>> index 08c252e..6833a4c 100644
>>> --- a/xen/include/public/hvm/hvm_info_table.h
>>> +++ b/xen/include/public/hvm/hvm_info_table.h
>>> @@ -32,7 +32,7 @@
>>>  #define HVM_INFO_PADDR       ((HVM_INFO_PFN << 12) + HVM_INFO_OFFSET)
>>>  
>>>  /* Maximum we can support with current vLAPIC ID mapping. */
>>> -#define HVM_MAX_VCPUS        128
>>> +#define HVM_MAX_VCPUS        512
>> 
>> Wow, that looks like a pretty big jump. I certainly don't have access
>> to any box with this number of vCPUs, so that's going to be quite hard
>> to test. What's the reasoning behind this bump? Is hardware with 512
>> ways expected soon-ish?
>> 
>> Also osstest is not even able to test the current limit, so I would
>> maybe bump this to 256, but as I've expressed on other occasions I
>> don't feel comfortable with having a vCPU limit that the current test
>> system doesn't have hardware to test with.
>
>I think implementation limit and supported limit need to be clearly
>distinguished here. Therefore I'd put the question the other way
>around: What's causing the limit to be 512, rather than 1024,
>4096, or even 4G-1 (x2APIC IDs are 32 bits wide, after all)?

TBH, I have no idea. When choosing a value, what first came to my mind
was 288, because Intel's Xeon Phi platform has 288 physical threads, and
some customers want to use this new platform for HPC cloud. Furthermore,
they request support for a big VM to which almost all computing and
device resources are assigned; they just use virtualization technology
to manage the machines. In the end I chose 512 because I feel much
better if the limit is a power of 2.

You are asking: as these patches remove limitations imposed by some
components, which component is the next bottleneck, and how many vCPUs
does it allow? Maybe the bottleneck is the use case itself; no one is
requesting support for more than 288 at this moment. So which value do
you prefer, 288 or 512? Or do you think I should find the next
bottleneck in Xen's implementation?

Thanks
Chao

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xenproject.org
https://lists.xenproject.org/mailman/listinfo/xen-devel