Re: [RFC 06/10] x86/domain: guard svm specific functions with AMD_SVM

2023-02-14 Thread Xenia Ragiadakou



On 2/14/23 18:24, Jan Beulich wrote:

On 13.02.2023 15:57, Xenia Ragiadakou wrote:

The functions svm_load_segs() and svm_load_segs_prefetch() are AMD-V specific,
so guard their calls in common code with AMD_SVM.

Since AMD_SVM depends on HVM, guarding with AMD_SVM alone is sufficient.

No functional change intended.

Signed-off-by: Xenia Ragiadakou 


With whatever the final name of the Kconfig control is going to be
Acked-by: Jan Beulich 

Thinking about it, both here and in the earlier patch it may be worth
considering switching to IS_ENABLED() while making these
adjustments.


Ok will do. Thanks.



Jan


--
Xenia



Re: [RFC 06/10] x86/domain: guard svm specific functions with AMD_SVM

2023-02-14 Thread Jan Beulich
On 13.02.2023 15:57, Xenia Ragiadakou wrote:
> The functions svm_load_segs() and svm_load_segs_prefetch() are AMD-V specific,
> so guard their calls in common code with AMD_SVM.
> 
> Since AMD_SVM depends on HVM, guarding with AMD_SVM alone is sufficient.
> 
> No functional change intended.
> 
> Signed-off-by: Xenia Ragiadakou 

With whatever the final name of the Kconfig control is going to be
Acked-by: Jan Beulich 

Thinking about it, both here and in the earlier patch it may be worth
considering switching to IS_ENABLED() while making these
adjustments.

Jan



[RFC 06/10] x86/domain: guard svm specific functions with AMD_SVM

2023-02-13 Thread Xenia Ragiadakou
The functions svm_load_segs() and svm_load_segs_prefetch() are AMD-V specific,
so guard their calls in common code with AMD_SVM.

Since AMD_SVM depends on HVM, guarding with AMD_SVM alone is sufficient.

No functional change intended.

Signed-off-by: Xenia Ragiadakou 
---
 xen/arch/x86/domain.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index db3ebf062d..576a410f4f 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -1628,7 +1628,7 @@ static void load_segments(struct vcpu *n)
 if ( !(n->arch.flags & TF_kernel_mode) )
 SWAP(gsb, gss);
 
-#ifdef CONFIG_HVM
+#ifdef CONFIG_AMD_SVM
 if ( cpu_has_svm && (uregs->fs | uregs->gs) <= 3 )
 fs_gs_done = svm_load_segs(n->arch.pv.ldt_ents, LDT_VIRT_START(n),
n->arch.pv.fs_base, gsb, gss);
@@ -1951,7 +1951,7 @@ static void __context_switch(void)
 
 write_ptbase(n);
 
-#if defined(CONFIG_PV) && defined(CONFIG_HVM)
+#if defined(CONFIG_PV) && defined(CONFIG_AMD_SVM)
 /* Prefetch the VMCB if we expect to use it later in the context switch */
 if ( cpu_has_svm && is_pv_64bit_domain(nd) && !is_idle_domain(nd) )
 svm_load_segs_prefetch();
-- 
2.37.2