On 26.05.2025 20:44, Oleksii Kurochko wrote:
> On 5/22/25 4:46 PM, Jan Beulich wrote:
>> On 21.05.2025 18:03, Oleksii Kurochko wrote:
>>> +    /* Allocate MMIO resource array */
>>> +    imsic_cfg.mmios = xzalloc_array(struct imsic_mmios, nr_mmios);
>> How large can this and ...
>>
>>> +    if ( !imsic_cfg.mmios )
>>> +    {
>>> +        rc = -ENOMEM;
>>> +        goto imsic_init_err;
>>> +    }
>>> +
>>> +    imsic_cfg.msi = xzalloc_array(struct imsic_msi, nr_parent_irqs);
>> ... this array grow (in principle)?
> 
> Roughly speaking, this is the number of processors. The highest number of
> processors I have seen on the market was 32, but that was over a year ago
> when I last checked.

Unless there's an architectural limit, I don't think it's a good idea to
take as reference what's available at present. But yes, ...

>>   I think you're aware that in principle
>> new code is expected to use xvmalloc() and friends unless there are specific
>> reasons speaking against that.
> 
> Oh, missed 'v'...

... adding the missing 'v' will take care of my concern. Provided of
course this isn't running too early for vmalloc() to be usable just yet.
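For illustration only, a rough sketch of what the substitution would look
like, assuming the xvmalloc() family (xvzalloc_array() in particular) is
usable at this point of boot:

    /* Sketch: same allocations, done via the xvmalloc() family. */
    imsic_cfg.mmios = xvzalloc_array(struct imsic_mmios, nr_mmios);
    if ( !imsic_cfg.mmios )
    {
        rc = -ENOMEM;
        goto imsic_init_err;
    }

    imsic_cfg.msi = xvzalloc_array(struct imsic_msi, nr_parent_irqs);

Freeing on the error path would then of course need to use xvfree()
instead of xfree().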

>>> +    if ( !imsic_cfg.msi )
>>> +    {
>>> +        rc = -ENOMEM;
>>> +        goto imsic_init_err;
>>> +    }
>>> +
>>> +    /* Check MMIO register sets */
>>> +    for ( unsigned int i = 0; i < nr_mmios; i++ )
>>> +    {
>>> +        if ( !alloc_cpumask_var(&imsic_cfg.mmios[i].cpus) )
>>> +        {
>>> +            rc = -ENOMEM;
>>> +            goto imsic_init_err;
>>> +        }
>>> +
>>> +        rc = dt_device_get_address(node, i, &imsic_cfg.mmios[i].base_addr,
>>> +                                   &imsic_cfg.mmios[i].size);
>>> +        if ( rc )
>>> +        {
>>> +            printk(XENLOG_ERR "%s: unable to parse MMIO regset %u\n",
>>> +                   node->name, i);
>>> +            goto imsic_init_err;
>>> +        }
>>> +
>>> +        base_addr = imsic_cfg.mmios[i].base_addr;
>>> +        base_addr &= ~(BIT(imsic_cfg.guest_index_bits +
>>> +                           imsic_cfg.hart_index_bits +
>>> +                           IMSIC_MMIO_PAGE_SHIFT, UL) - 1);
>>> +        base_addr &= ~((BIT(imsic_cfg.group_index_bits, UL) - 1) <<
>>> +                       imsic_cfg.group_index_shift);
>>> +        if ( base_addr != imsic_cfg.base_addr )
>>> +        {
>>> +            rc = -EINVAL;
>>> +            printk(XENLOG_ERR "%s: address mismatch for regset %u\n",
>>> +                   node->name, i);
>>> +            goto imsic_init_err;
>>> +        }
>> Maybe just for my own understanding: There's no similar check for the
>> sizes to match / be consistent wanted / needed?
> 
> If you are speaking about imsic_cfg.mmios[i].size, then it fully depends on
> what the h/w will provide, IMO. So I don't know what the possible range for
> imsic_cfg.mmios[i].size is.

Well, all I can say is that it feels odd that you sanity-check base_addr
but permit effectively any size.
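Purely as a hypothetical sketch of what I mean (the expected per-regset
size used here is my assumption derived from the index bits, not something
checked against the spec or your binding):

        /*
         * Hypothetical: require each regset to at least cover the full
         * guest/hart index range. The expected size is an assumption.
         */
        if ( imsic_cfg.mmios[i].size <
             BIT(imsic_cfg.guest_index_bits + imsic_cfg.hart_index_bits +
                 IMSIC_MMIO_PAGE_SHIFT, UL) )
        {
            rc = -EINVAL;
            printk(XENLOG_ERR "%s: unexpected size for regset %u\n",
                   node->name, i);
            goto imsic_init_err;
        }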

>>> @@ -18,6 +19,18 @@ static inline unsigned long cpuid_to_hartid(unsigned long cpuid)
>>>       return pcpu_info[cpuid].hart_id;
>>>   }
>>>   
>>> +static inline unsigned long hartid_to_cpuid(unsigned long hartid)
>>> +{
>>> +    for ( unsigned int cpuid = 0; cpuid < ARRAY_SIZE(pcpu_info); cpuid++ )
>>> +    {
>>> +        if ( hartid == cpuid_to_hartid(cpuid) )
>>> +            return cpuid;
>>> +    }
>>> +
>>> +    /* hartid isn't valid for some reason */
>>> +    return NR_CPUS;
>>> +}
>> Considering the values being returned, why's the function's return type
>> "unsigned long"?
> 
> The mhartid register is MXLEN bits wide, so theoretically hart IDs can range
> from 0 to 2^MXLEN - 1, and we could have the same number of Xen CPU IDs.
> MXLEN is 32 for RV32 and 64 for RV64.

Yet the return value here is always constrained by NR_CPUS, isn't it?
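I.e. something along these lines would be enough (just a sketch, keeping
your function body as-is and only narrowing the return type):

static inline unsigned int hartid_to_cpuid(unsigned long hartid)
{
    for ( unsigned int cpuid = 0; cpuid < ARRAY_SIZE(pcpu_info); cpuid++ )
    {
        if ( hartid == cpuid_to_hartid(cpuid) )
            return cpuid;
    }

    /* hartid isn't valid for some reason. */
    return NR_CPUS;
}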

Jan
