On 20.10.25 12:32, Chenyi Qiang wrote:


On 10/17/2025 11:13 PM, David Hildenbrand wrote:
On 17.10.25 10:14, Chenyi Qiang wrote:
Currently, private memory and shared memory have different backends in
CoCo VMs. Users can specify hugetlbfs as the backend for shared memory,
while private memory, backed by guest_memfd, only supports 4K page size.
In this case, ram_block->page_size differs from the host page size,
which triggers the assertion when getting the block size. Relax the
restriction to allow shared memory to use a hugetlbfs backend.

Fixes: 5d6483edaa92 ("ram-block-attributes: Introduce RamBlockAttributes to manage RAMBlock with guest_memfd")
Signed-off-by: Chenyi Qiang <[email protected]>
---
   system/ram-block-attributes.c | 7 ++++---
   1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/system/ram-block-attributes.c b/system/ram-block-attributes.c
index 68e8a027032..0f39ccf9090 100644
--- a/system/ram-block-attributes.c
+++ b/system/ram-block-attributes.c
@@ -28,10 +28,11 @@ ram_block_attributes_get_block_size(const RamBlockAttributes *attr)
      * Because page conversion could be manipulated in the size of at least 4K
        * or 4K aligned, Use the host page size as the granularity to track the
        * memory attribute.
+     * When hugetlbfs is used as the backend of shared memory,
+     * ram_block->page_size differs from the host page size, so it is not
+     * appropriate to use ram_block->page_size here.
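
For illustration, a minimal standalone sketch (not QEMU code; the 2M
figure is an assumed hugetlbfs page size) of the mismatch described in
the commit message:

#include <assert.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    size_t host_page_size = (size_t)sysconf(_SC_PAGESIZE); /* typically 4K */
    size_t backend_page_size = 2 * 1024 * 1024; /* assumed 2M hugetlbfs page */

    printf("host page size:    %zu\n", host_page_size);
    printf("backend page size: %zu\n", backend_page_size);

    /* Mirrors the assertion the patch relaxes: it fires whenever the
     * shared memory backend uses pages larger than the host page size. */
    assert(backend_page_size == host_page_size);
    return 0;
}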

But are we sure everything else is working as expected and that this is not a 
check that prevents other code from doing the wrong thing?

I think so. The block size must be 4K because page conversion can happen
at 4K granularity, and we use a bitmap to track the status.

Indeed.
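
As a toy model of that tracking scheme (the names and the shared/private
bit encoding are illustrative, not QEMU's):

#include <stdio.h>
#include <stdlib.h>

#define BLOCK_SIZE 4096ULL /* conversion granularity: 4K */

int main(void)
{
    unsigned long long region_size = 2ULL * 1024 * 1024; /* one 2M huge page */
    unsigned long long nbits = region_size / BLOCK_SIZE; /* 512 tracked blocks */

    /* One bit per 4K block; say set = shared, clear = private. */
    unsigned char *bitmap = calloc((size_t)((nbits + 7) / 8), 1);
    if (!bitmap) {
        return 1;
    }
    printf("%llu blocks of %llu bytes tracked within one 2M huge page\n",
           nbits, BLOCK_SIZE);
    free(bitmap);
    return 0;
}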

I originally missed the hugetlb case, so I added an assert() here. But
using hugetlb as the shared memory backend was already allowed before
the shared device assignment patches were introduced.


I recall that punching holes was problematic, as the VM shares/unshares
4K chunks.

I can see that kvm_convert_memory() will skip ram_block_discard_range()
when using the hugetlb backend. That causes double memory consumption
(*). Any other problems?

Right.
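
A small standalone probe of the hole-punching limitation discussed above
(assumes a Linux host with hugetlb pages configured; it reports the
outcome rather than asserting a particular errno):

#define _GNU_SOURCE
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    /* Hugetlb-backed memfd; requires huge pages set up on the host. */
    int fd = memfd_create("hugetlb-probe", MFD_HUGETLB);
    if (fd < 0) {
        perror("memfd_create(MFD_HUGETLB)");
        return 1;
    }
    /* hugetlb files must be sized in whole huge pages (2M assumed). */
    if (ftruncate(fd, 2 * 1024 * 1024) < 0) {
        perror("ftruncate");
        return 1;
    }
    /* Try to discard a single 4K chunk, as a 4K shared->private
     * conversion would want to; on hugetlbfs, discard effectively works
     * at huge-page granularity, so the backing huge page stays
     * allocated. */
    int ret = fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
                        0, 4096);
    printf("4K punch on hugetlb fd: ret=%d (%s)\n",
           ret, ret ? strerror(errno) : "ok");
    close(fd);
    return 0;
}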


What we should be doing is unifying the retrieval of the block size in ram_block_attributes_create() as well. That's where we allocate it.

So either

a) Use qemu_real_host_page_size() everywhere.

b) Use ram_block_attributes_get_block_size() everywhere.

Could be done in a separate patch.
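
A rough standalone sketch of what option (a) could look like; the
attributes_* names are hypothetical stand-ins, and the real
qemu_real_host_page_size() is mimicked here via sysconf():

#include <stdio.h>
#include <unistd.h>

/* Stand-in for QEMU's qemu_real_host_page_size(). */
static size_t real_host_page_size(void)
{
    return (size_t)sysconf(_SC_PAGESIZE);
}

/* Option (a): the getter derives the block size from the host page size
 * alone and no longer consults ram_block->page_size. */
static size_t attributes_block_size(void)
{
    return real_host_page_size();
}

/* The create path sizes its bitmap from the same source of truth. */
static size_t attributes_bitmap_bits(size_t region_size)
{
    return region_size / attributes_block_size();
}

int main(void)
{
    size_t region = 2 * 1024 * 1024; /* e.g. one 2M hugetlb-backed block */
    printf("block size: %zu, bitmap bits: %zu\n",
           attributes_block_size(), attributes_bitmap_bits(region));
    return 0;
}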

--
Cheers

David / dhildenb

