On 5/9/2025 4:18 PM, David Hildenbrand wrote:
>>>>
>>>> Signed-off-by: Chenyi Qiang <chenyi.qi...@intel.com>
>>>
>>> <...>
>>>
>>>> +
>>>> +int ram_block_attribute_realize(RamBlockAttribute *attr, MemoryRegion *mr)
>>>> +{
>>>> +    uint64_t shared_bitmap_size;
>>>> +    const int block_size = qemu_real_host_page_size();
>>>> +    int ret;
>>>> +
>>>> +    shared_bitmap_size = ROUND_UP(mr->size, block_size) / block_size;
>>>> +
>>>> +    attr->mr = mr;
>>>> +    ret = memory_region_set_generic_state_manager(mr,
>>>> +                                               GENERIC_STATE_MANAGER(attr));
>>>> +    if (ret) {
>>>> +        return ret;
>>>> +    }
>>>> +    attr->shared_bitmap_size = shared_bitmap_size;
>>>> +    attr->shared_bitmap = bitmap_new(shared_bitmap_size);
>>>
>>> The above introduces a bitmap to track the private/shared state of each
>>> 4 KiB page. While functional, this could lead to significant memory
>>> consumption for large RAM blocks managed by guest_memfd.
>>>
>>> Have you considered an alternative like a Maple Tree or a generic
>>> interval tree? Both are often more memory-efficient for tracking ranges
>>> of contiguous states.
>>
>> Maybe that's not necessary. The memory overhead is only 1 bit per 4 KiB
>> page (1/(4096*8) ~= 0.003%), which I don't think is too much.
> 
> It's certainly not optimal.
> 
> IIRC, QEMU already maintains 3 dirty bitmaps in
> ram_list.dirty_memory (DIRTY_MEMORY_NUM = 3) for guest RAM.
> 
> With KVM, we also allocate yet another dirty bitmap when
> KVM_MEM_LOG_DIRTY_PAGES is enabled.
> 
> Assuming a 4 TiB VM, a single bitmap should be 128 MiB.
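
Right, that figure checks out: 4 TiB / 4 KiB = 2^30 pages, and at 1 bit
per page that is 2^27 bytes = 128 MiB per bitmap.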

OK. So this is a long-standing issue that could be optimized in many
places. I think it needs more effort to evaluate the benefits of such a
change, so for now maybe treat it as future work.
