On 23.05.2025 12:27, Oleksii Kurochko wrote:
> On 5/20/25 4:38 PM, Jan Beulich wrote:
>> On 09.05.2025 17:57, Oleksii Kurochko wrote:
>>> --- a/xen/arch/riscv/p2m.c
>>> +++ b/xen/arch/riscv/p2m.c
>>> @@ -1,4 +1,12 @@
>>>   #include <xen/domain_page.h>
>>> +/*
>>> + * Because of general_preempt_check() from xen/sched.h which uses
>>> + * local_events_need_delivery(), but the latter is declared in <asm/event.h>.
>>> + * Hence <xen/event.h> needs to be included here before xen/sched.h.
>>> + *
>>> + * Shouldn't be xen/event.h be included in <xen/sched.h>?
>>> + */
>>> +#include <xen/event.h>
>> The question doesn't belong here; such could be put in the post-commit-
>> message area. And the answer depends on what dependency you found missing.
> 
> It is needed for local_events_need_delivery() which is used by 
> general_preempt_check()
> in p2m_set_allocation(). Otherwise, the following issue will occur:
> 
> In file included from ././include/xen/config.h:17,
>                   from <command-line>:
> arch/riscv/p2m.c: In function 'p2m_set_allocation':
> ./include/xen/sched.h:941:36: error: implicit declaration of function 
> 'local_events_need_delivery' [-Werror=implicit-function-declaration]
>    941 |         (!is_idle_vcpu(current) && local_events_need_delivery())    \
>        |                                    ^~~~~~~~~~~~~~~~~~~~~~~~~~
> ./include/xen/compiler.h:26:43: note: in definition of macro 'unlikely'
>     26 | #define unlikely(x)   __builtin_expect(!!(x),0)
>        |                                           ^
> arch/riscv/p2m.c:244:27: note: in expansion of macro 'general_preempt_check'
>    244 |         if ( preempted && general_preempt_check() )
>        |                           ^~~~~~~~~~~~~~~~~~~~~
> cc1: all warnings being treated as errors

In which case my answer to your question is "No". Others may take a different
perspective. (xen/sched.h being included virtually everywhere, imo we want to
avoid adding dependencies there which aren't strictly necessary to keep things
building.)

>>> @@ -166,3 +176,60 @@ int p2m_init(struct domain *d)
>>>   
>>>       return 0;
>>>   }
>>> +
>>> +/*
>>> + * Set the pool of pages to the required number of pages.
>>> + * Returns 0 for success, non-zero for failure.
>>> + * Call with d->arch.paging.lock held.
>>> + */
>>> +int p2m_set_allocation(struct domain *d, unsigned long pages, bool *preempted)
>>> +{
>>> +    struct page_info *pg;
>>> +
>>> +    ASSERT(spin_is_locked(&d->arch.paging.lock));
>>> +
>>> +    for ( ; ; )
>>> +    {
>>> +        if ( d->arch.paging.p2m_total_pages < pages )
>>> +        {
>>> +            /* Need to allocate more memory from domheap */
>>> +            pg = alloc_domheap_page(d, MEMF_no_owner);
>>> +            if ( pg == NULL )
>>> +            {
>>> +                printk(XENLOG_ERR "Failed to allocate P2M pages.\n");
>>> +                return -ENOMEM;
>>> +            }
>>> +            ACCESS_ONCE(d->arch.paging.p2m_total_pages) =
>>> +                d->arch.paging.p2m_total_pages + 1;
>> Looks like you copied this from Arm, but this code is bogus: Using
>> ACCESS_ONCE() just on the lhs is pretty pointless. Once also used on the
>> rhs the expression can easily become
>>
>>                  ACCESS_ONCE(d->arch.paging.p2m_total_pages) += 1;
>>
>> or even
>>
>>                  ACCESS_ONCE(d->arch.paging.p2m_total_pages)++;
>>
>> .
>>
>>> +            page_list_add_tail(pg, &d->arch.paging.p2m_freelist);
>>> +        }
>>> +        else if ( d->arch.paging.p2m_total_pages > pages )
>>> +        {
>>> +            /* Need to return memory to domheap */
>>> +            pg = page_list_remove_head(&d->arch.paging.p2m_freelist);
>>> +            if( pg )
>>> +            {
>>> +                ACCESS_ONCE(d->arch.paging.p2m_total_pages) =
>>> +                    d->arch.paging.p2m_total_pages - 1;
>> Same here then, obviously.
>>
>>> +                free_domheap_page(pg);
>>> +            }
>>> +            else
>>> +            {
>>> +                printk(XENLOG_ERR
>>> +                       "Failed to free P2M pages, P2M freelist is empty.\n");
>>> +                return -ENOMEM;
>>> +            }
>>> +        }
>>> +        else
>>> +            break;
>>> +
>>> +        /* Check to see if we need to yield and try again */
>>> +        if ( preempted && general_preempt_check() )
>> While it's this way on both Arm and x86, I wonder how useful it is
>> to check on every iteration, especially when freeing pages back to the
>> buddy allocator.
> 
> IIUC, a preemption request could happen in both cases. Destroying a domain
> could also take a long time, so to avoid blocking the hypervisor when
> something more urgent needs handling, it makes sense to have this check for
> the case of freeing pages back to the buddy allocator as well.

The question wasn't whether to check, but how frequently. The check itself is
consuming processing time, too, so one generally wants to balance the number
of checks against the size of the resulting time window without any check.

Jan
