On 2025-11-06 17:26, Grygorii Strashko wrote:
From: Grygorii Strashko <[email protected]>

Xen uses the following pattern for the raw_x_guest() functions:

#define raw_copy_to_guest(dst, src, len)        \
    (is_hvm_vcpu(current) ?                     \
     copy_to_user_hvm((dst), (src), (len)) :    \
     copy_to_guest_pv(dst, src, len))

This pattern works depending on CONFIG_PV/CONFIG_HVM as:
- PV=y and HVM=y
   Proper guest access function is selected depending on domain type.
- PV=y and HVM=n
   Only PV domains are possible. is_hvm_domain/vcpu() will be constified to
   "false", so the compiler can optimize out the HVM-specific code.
- PV=n and HVM=y
   Only HVM domains are possible, but is_hvm_domain/vcpu() will not be
   constified, so the compiler cannot optimize out the PV-specific code.
- PV=n and HVM=n
   No guests should be possible. The code will still follow the PV path.

Rework the raw_x_guest() code to use static inline functions which account
for the above PV/HVM configurations, with the main intention of optimizing
the code for the (PV=n and HVM=y) case.

For the case (PV=n and HVM=n), return the "len" value to indicate failure
(no guests should be possible in this case, which means no access to guest
memory should ever happen).
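For illustration, the reworked shape can be sketched as a self-contained
program (the CONFIG_* values, IS_ENABLED(), is_hvm_vcpu() and the copy
helpers are all stubbed here to model the Kconfig behaviour outside the Xen
tree; the real patch uses Xen's own definitions):

```c
#include <assert.h>
#include <string.h>

/* Stand-ins for Kconfig results; flip these to model the four cases. */
#define CONFIG_PV  0
#define CONFIG_HVM 1
#define IS_ENABLED(x) (x)        /* simplified stand-in for Xen's macro */

/* Toy stand-in for is_hvm_vcpu(current). */
static int is_hvm_vcpu_stub(void) { return CONFIG_HVM; }

/* Stubs modelling copy_to_user_hvm()/copy_to_guest_pv(): return the
 * number of bytes NOT copied, i.e. 0 on success. */
static unsigned int copy_to_user_hvm(void *dst, const void *src,
                                     unsigned int len)
{ memcpy(dst, src, len); return 0; }

static unsigned int copy_to_guest_pv(void *dst, const void *src,
                                     unsigned int len)
{ memcpy(dst, src, len); return 0; }

static inline unsigned int raw_copy_to_guest(void *dst, const void *src,
                                             unsigned int len)
{
    if ( IS_ENABLED(CONFIG_HVM) &&
         (!IS_ENABLED(CONFIG_PV) || is_hvm_vcpu_stub()) )
        return copy_to_user_hvm(dst, src, len);
    if ( IS_ENABLED(CONFIG_PV) )
        return copy_to_guest_pv(dst, src, len);
    return len;  /* PV=n and HVM=n: no guests, report full failure */
}
```

With PV=n and HVM=y as set above, both IS_ENABLED() checks collapse at
compile time, so only the HVM call survives; with PV=n and HVM=n both
branches are dead and the function reduces to "return len".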

Finally build arch/x86/usercopy.c only for PV=y.

The measured (bloat-o-meter) improvement for (PV=n and HVM=y) case is:
   add/remove: 2/9 grow/shrink: 2/90 up/down: 1678/-32560 (-30882)
   Total: Before=1937092, After=1906210, chg -1.59%

Signed-off-by: Grygorii Strashko <[email protected]>
[[email protected]: Suggested to use static inline functions vs macro combinations]
Suggested-by: Teddy Astie <[email protected]>

I think Teddy's Suggested-by should go before your SoB.

---

diff --git a/xen/arch/x86/include/asm/guest_access.h b/xen/arch/x86/include/asm/guest_access.h
index 69716c8b41bb..576eac9722e6 100644
--- a/xen/arch/x86/include/asm/guest_access.h
+++ b/xen/arch/x86/include/asm/guest_access.h
@@ -13,26 +13,64 @@
  #include <asm/hvm/guest_access.h>
 /* Raw access functions: no type checking. */
-#define raw_copy_to_guest(dst, src, len)        \
-    (is_hvm_vcpu(current) ?                     \
-     copy_to_user_hvm((dst), (src), (len)) :    \
-     copy_to_guest_pv(dst, src, len))
-#define raw_copy_from_guest(dst, src, len)      \
-    (is_hvm_vcpu(current) ?                     \
-     copy_from_user_hvm((dst), (src), (len)) :  \
-     copy_from_guest_pv(dst, src, len))
-#define raw_clear_guest(dst,  len)              \
-    (is_hvm_vcpu(current) ?                     \
-     clear_user_hvm((dst), (len)) :             \
-     clear_guest_pv(dst, len))
-#define __raw_copy_to_guest(dst, src, len)      \
-    (is_hvm_vcpu(current) ?                     \
-     copy_to_user_hvm((dst), (src), (len)) :    \
-     __copy_to_guest_pv(dst, src, len))
-#define __raw_copy_from_guest(dst, src, len)    \
-    (is_hvm_vcpu(current) ?                     \
-     copy_from_user_hvm((dst), (src), (len)) :  \
-     __copy_from_guest_pv(dst, src, len))
+static inline unsigned int raw_copy_to_guest(void *to, const void *src,

Maybe s/to/dst/ to keep this consistent with the rest?

+                                             unsigned int len)
+{
+    if ( IS_ENABLED(CONFIG_HVM) &&
+         (!IS_ENABLED(CONFIG_PV) || is_hvm_vcpu(current)) )

Since this is repeated, maybe put into a helper like use_hvm_access(current)?
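Something along these lines, perhaps (the helper name follows your
suggestion and is hypothetical; the struct vcpu, is_hvm_vcpu() and
IS_ENABLED() below are minimal stand-ins so the sketch compiles on its
own):

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-ins for Kconfig results; flip these to model the four cases. */
#define CONFIG_PV  1
#define CONFIG_HVM 1
#define IS_ENABLED(x) (x)        /* simplified stand-in for Xen's macro */

struct vcpu { bool hvm; };       /* toy vcpu, just enough for the check */
static bool is_hvm_vcpu(const struct vcpu *v) { return v->hvm; }

/* Hypothetical helper centralising the repeated PV/HVM dispatch check;
 * collapses to a compile-time constant when only one mode is built. */
static inline bool use_hvm_access(const struct vcpu *v)
{
    return IS_ENABLED(CONFIG_HVM) &&
           (!IS_ENABLED(CONFIG_PV) || is_hvm_vcpu(v));
}
```

Each raw_x_guest() function could then start with
"if ( use_hvm_access(current) )" instead of repeating the compound
condition.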

Thanks,
Jason
