Hi All,
We are currently working on the NUMA tune feature for oVirt 3.5. This
feature allows the user to configure vCPU pinning according to the host
CPU/NUMA topology to get the best performance for the created VM. However,
this will impact the current vCPU pinning function, and we are looking for a
way to resolve the conflict. We would like to start an open discussion on the
developer list about VM CPU pinning (the current design) and NUMA CPU pinning
(the new feature we are working on):
Background:
Concept:
1. VM CPU pinning: the existing feature in current oVirt, which allows the
user to configure VM vCPU pinning through oVirt; the user can configure vCPU
pinning independently, without NUMA tune.
2. NUMA CPU pinning: allows the user to configure vCPU pinning according to
the host NUMA topology to get the best performance for the created VM.
3. vNode: virtual NUMA node ( configured by the user )
4. pNode: host physical NUMA node ( obtained from the host capabilities )
Notice:
1. The NUMA tuning feature and the CPU pinning feature are independent in
libvirt ( the oVirt backend ).
2. The user can configure VM CPU pinning individually, without any NUMA tune
setup.
3. To get optimized VM performance with the NUMA tuning feature, the user
needs to configure both VM NUMA tuning ( vNode pin-to-pNode & tuning mode )
and VM CPU pinning ( NUMA included ); otherwise the VM will have low
performance.
We now have two proposals for this issue. Please give us your comments and
feedback, thanks.
Solution 1:
GUI:
Transform between the CPU pinning text and the structure used in the NUMA CPU
pinning configuration page, and then save the CPU pinning text to the current
VM CPU pinning field of the VM.
Restful:
Transform between the CPU pinning text and the structure used in the RESTful
NUMA CPU pinning, and then save the CPU pinning text to the current VM CPU
pinning field of the VM.
Broker:
Remove the temporary solution ( see the current implementation below ) and
follow the previous cpupin configuration procedure.
Solution 2:
GUI:
If the current VM CPU pinning is configured, the user will get a warning
message when opening the NUMA CPU pinning configuration page. If he continues
to configure NUMA CPU pinning and saves the data, the current VM CPU pinning
configuration will be cleared.
The NUMA CPU pinning data will be saved in its own new structure, without
changing the current VM CPU pinning configuration.
Restful:
Configure NUMA CPU pinning and save the data with the new NUMA CPU pinning
structure.
Entity and Database:
Separate NUMA CPU pinning entities and data structures.
Broker:
The NUMA CPU pinning configuration is considered first; if it is not
configured, the current VM CPU pinning is used.
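The broker precedence in Solution 2 could be sketched like this (a minimal
Python sketch with hypothetical names, not the actual engine/VDSM code; the
pinning string format shown in the comments is an assumption for
illustration):

```python
def effective_cpu_pinning(numa_cpu_pinning, vm_cpu_pinning):
    """Solution 2 precedence: prefer the NUMA CPU pinning structure when it
    is configured, otherwise fall back to the current VM CPU pinning."""
    if numa_cpu_pinning:              # e.g. {0: '0-3', 1: '4-7'} per vNode
        return numa_cpu_pinning
    return vm_cpu_pinning             # e.g. '0#0_1#1-3' (current text format)

# Fallback case: no NUMA CPU pinning configured, keep the legacy string.
print(effective_cpu_pinning(None, '0#0_1#1-3'))  # → 0#0_1#1-3
```

The point of this shape is that the legacy string and the new structure never
need to be kept in sync; the broker simply picks one source of truth per VM.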
Solution 1 keeps the CPU pinning data consistent, but the code logic is very
complex. Solution 2 is more adaptable and has a better data structure; this is
the way we prefer.
We would appreciate any comments, or a better solution if you have one.
Below is the current implementation, for your reference:
VM CPU pinning feature
GUI:
The user inputs the vCPU pinning configuration data as formatted text.
Restful:
The user configures the CpuTune and VCpuPin models, with a mapper to the CPU
pinning text.
Entity and Database:
CPU pinning is a String property.
Broker:
Generate the cputune structure ( libvirt format ) from the CPU pinning string
property.
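For example, assuming the current pinning text uses vCPU#cpuset pairs joined
by '_' (e.g. "0#0_1#1-3"; the exact format is an assumption here), the broker
would map it to cputune XML roughly as follows:

```xml
<!-- pinning string "0#0_1#1-3" -->
<cputune>
  <vcpupin vcpu='0' cpuset='0'/>
  <vcpupin vcpu='1' cpuset='1-3'/>
</cputune>
```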
NUMA tuning feature
GUI:
The user can drag & drop a virtual NUMA node onto a host NUMA node ( pin to a
node, or remove the pin ).
Restful:
Add/update/remove a virtual NUMA node with a property for the pinned host NUMA
node index.
Design a NUMACPU model under the NUMA node, for extension.
Entity and Database:
Separate NUMA node entities ( vNode extends pNode ) and stored procedures.
Broker:
Generate the numatune and cpu/numa structures ( libvirt format ) from the NUMA
node entities.
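As a sketch of that broker output, here are the numatune and <cpu><numa>
elements in libvirt format for two vNodes pinned to pNodes 0 and 1 (the cell
IDs, CPU ranges, and memory sizes are only example values; memory is in KiB):

```xml
<numatune>
  <!-- vNode 0 -> pNode 0, vNode 1 -> pNode 1 -->
  <memnode cellid='0' mode='strict' nodeset='0'/>
  <memnode cellid='1' mode='strict' nodeset='1'/>
</numatune>
<cpu>
  <numa>
    <!-- virtual NUMA topology: two vNodes, 2 vCPUs and 1 GiB each -->
    <cell id='0' cpus='0-1' memory='1048576'/>
    <cell id='1' cpus='2-3' memory='1048576'/>
  </numa>
</cpu>
```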
Temporary solution ( to handle Notice 3 )
Broker:
Generate the right cputune structure ( libvirt format ) from the NUMA node
entities ( vNode pinned to pNode ).
Limitation:
This cputune structure will not get the best performance out of the vNodes.
Best Regards,
Jason Liao
_______________________________________________
Devel mailing list
[email protected]
http://lists.ovirt.org/mailman/listinfo/devel