mlsorensen commented on issue #6744:
URL: https://github.com/apache/cloudstack/issues/6744#issuecomment-1251359590
```
I currently can't see how this might happen, as the shares aren't the only
factor in CloudStack's allocation decisions, if they are used for them at all.
Each deployed instance using the compute offering will block at least 1 core.
```
Thanks for your reply @Hudratronium. Shares themselves aren't used for
allocation decisions, as they're a KVM implementation detail. However, setting
1MHz as your CPU speed in the offering in an effort to manipulate the resulting
shares means that an 8 core VM will only subtract 8MHz of capacity from your
multi-GHz host.
Unless something has changed, CloudStack will not block 1 physical core on
the host per vCPU. The CloudStack allocators simply sum the MHz available on
the host (host cores * speed), sum the MHz required to run the offering, and
subtract each instance's required MHz from the MHz available on the hypervisor
host. I know that at some point in the last decade someone added the concept of
"CPU cores" as a distinct resource to CloudStack, visible on the dashboard, but
you can easily exceed that number.
There is a condition that ensures the number of vCPUs is <= the number of
physical cores on the host, but you could deploy any number of 8vCPU, 1MHz VMs
on an 8 physical core host. As a side note, it seems kind of broken to me
(or should at least be configurable) to require the host to have as many
physical cores as the VM has vCPUs, but I digress :-)
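The allocation logic described above can be sketched roughly like this. This is a minimal illustration, not actual CloudStack code; the function and parameter names are mine:

```python
# Hypothetical sketch of the CloudStack CPU allocation check.
# Capacity is tracked in MHz: total = host cores * core speed, and each
# placed instance subtracts vcpus * offering speed.

def host_fits(host_cores: int, core_mhz: int, allocated_mhz: int,
              vm_vcpus: int, vm_mhz: int,
              overprovision_factor: float = 1.0) -> bool:
    """Return True if the VM can be placed on the host."""
    # The guard described above: the VM may not have more vCPUs than
    # the host has physical cores, regardless of MHz headroom.
    if vm_vcpus > host_cores:
        return False
    total_mhz = host_cores * core_mhz * overprovision_factor
    required_mhz = vm_vcpus * vm_mhz
    return allocated_mhz + required_mhz <= total_mhz

# An 8 vCPU / 1 MHz offering only asks for 8 MHz of a 22400 MHz host:
print(host_fits(8, 2800, 0, 8, 1))      # True
print(host_fits(8, 2800, 22398, 8, 1))  # False: only 2 MHz of headroom left
```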
See this example. I have created an 8vCPU 1MHz service offering, and I have
an 8 core KVM host:
```
> list hosts id=617cb1f7-7729-49d8-96ec-f412aca539a4
filter=cpuallocatedvalue,cpunumber,cpusockets,cpuspeed,cpuwithoverprovisioning
{
"count": 1,
"host": [
{
"cpuallocatedvalue": 16,
"cpunumber": 8,
"cpusockets": 1,
"cpuspeed": 2800,
"cpuwithoverprovisioning": "22400"
}
]
}
```
See 8 CPUs * 2800MHz = 22400MHz total CPU to allocate. It has 16MHz allocated
right now. If I look at which VMs are allocated, I have two of these 8vCPU 1MHz
instances (thus 16MHz allocated).
```
> list virtualmachines hostid=617cb1f7-7729-49d8-96ec-f412aca539a4
filter=cpunumber,cpuspeed,instancename
{
"count": 2,
"virtualmachine": [
{
"cpunumber": 8,
"cpuspeed": 1,
"instancename": "i-2-131-VM"
},
{
"cpunumber": 8,
"cpuspeed": 1,
"instancename": "i-2-130-VM"
}
]
}
```
I could launch potentially thousands of these 1MHz VMs with the 22400MHz the
host has (it would run out of memory first).
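That figure follows directly from the MHz arithmetic (a back-of-the-envelope check, assuming no other resource limits apply):

```python
total_mhz = 8 * 2800             # host capacity: 22400 MHz
per_vm_mhz = 8 * 1               # each 8 vCPU / 1 MHz VM consumes 8 MHz
print(total_mhz // per_vm_mhz)   # 2800 such VMs before CPU is exhausted
```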
```
I don't think that the approach will work out, as the overall value for
shares can in theory be '262144' for each domain, which means each domain
would get the same CPU time. '262144' doesn't represent the actual available
CPU time; it's used to generate a 'proportional weighted share', not weighting
the actual available resources but rather the share between the different
domains. A short example based on the libvirt docs:
```
Yes, this is why scaling down should work fine, as the proportion is what
matters, not the raw value. Just scaling down by a factor of 100 when defining
the shares in libvirt should work. The main issue is lack of granularity, as
anything lower than a 100MHz offering would still get 1 share. Probably not a
real issue :-)
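A scaled-down shares computation could look like the sketch below. This is illustrative only: the raw vcpus * MHz product as a shares value is the KVM detail discussed above, and the divide-by-100 scaling (plus clamping) is the proposal, not existing CloudStack behavior:

```python
def scaled_shares(vcpus: int, speed_mhz: int, scale: int = 100) -> int:
    """Scale the raw vcpus*MHz product down so large offerings stay well
    under libvirt's shares ceiling (262144), clamping to 1 so that tiny
    offerings still get a nonzero weight."""
    return max(1, (vcpus * speed_mhz) // scale)

print(scaled_shares(8, 2000))  # 160 shares instead of a raw 16000
print(scaled_shares(8, 1))     # 1 -- the granularity floor mentioned above
```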
```
But keep in mind that this is only relevant when you start overprovisioning.
At least the CloudStack docs state that without overprovisioning the values
for 'shares' aren't of importance.
```
It's also relevant (or should be, if it is working properly) with the CPU cap
enabled on service offerings. This limits the VM's CPU to a certain amount of
runtime per CFS period regardless of whether there is contention on the system.
This can be useful for providing a consistent experience: for example, you can
limit VM performance to 1/2 of a physical core, and in theory it would always
deliver the same level of performance so long as you don't overprovision beyond
2x, rather than bursting to consume whole physical cores when the host is idle.
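The cap maps onto the CFS bandwidth mechanism that libvirt exposes via `<cputune>` period/quota values. A rough sketch of the arithmetic (the helper is mine, not a CloudStack or libvirt function; note that libvirt applies the quota per vCPU thread, so this models a single-vCPU case):

```python
def cfs_quota_us(period_us: int, core_fraction: float) -> int:
    """Microseconds of runtime per CFS period that cap a vCPU to the
    given fraction of one physical core, regardless of contention."""
    return int(period_us * core_fraction)

# Cap at half a physical core with the common 100ms period:
print(cfs_quota_us(100_000, 0.5))  # 50000
# Roughly the libvirt XML equivalent (illustrative):
#   <cputune>
#     <period>100000</period>
#     <quota>50000</quota>
#   </cputune>
```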
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]