On 15.04.2025 15:39, Oleksii Kurochko wrote:
> On 4/10/25 5:53 PM, Jan Beulich wrote:
>> On 08.04.2025 17:57, Oleksii Kurochko wrote:
>>> +{
>>> +    const __be32 *cell;
>>> +    int ac;
>>> +    uint32_t len;
>>> +
>>> +    ac = dt_n_addr_cells(cpun);
>>> +    cell = dt_get_property(cpun, "reg", &len);
>>> +    if ( !cell || !ac || ((sizeof(*cell) * ac * (thread + 1)) > len) )
>>> +        return ~0ULL;
>> I'm sorry for my lack of DT knowledge, but what's "thread" representing here?
>> You only pass in 0 below, so it's unclear whether it's what one might expect
>> (the thread number on a multi-threaded core).
> 
> Based on the DT specification alone, the `reg` value could refer to either a
> CPU or a thread ID:
> ```
> The value of reg is a <prop-encoded-array> that defines a unique CPU/thread id
> for the CPU/threads represented by the CPU node. If a CPU supports more than
> one thread (i.e. multiple streams of execution) the reg property is an array
> with 1 element per thread.
> ```
> 
> My understanding is that the term /thread/ was used in the Linux kernel to
> cover both cases. When SMT isn't supported, the CPU can be considered to have
> a single thread. For example, RISC-V uses the term /hardware thread/ to
> describe a hart (i.e., a CPU).
> 
> Interestingly, the Linux kernel always uses thread = 0.
> 
> We could potentially drop this ambiguity and introduce an ASSERT() to check
> that the `reg` property contains only one entry, representing the HART (CPU)
> ID:
> ```
> Software can determine the number of threads by dividing the size of reg by
> the parent node's #address-cells. If `reg` has more than one entry, it would
> simply mean SMT support is required.
> ```
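> 
> Something along these lines, perhaps (only a sketch; the function name
> dt_get_cpu_hwid() is just for illustration, and it reuses the helpers from
> the patch context above, with dt_read_number() folding the big-endian cells
> into a number):
> ```
> static uint64_t dt_get_cpu_hwid(const struct dt_device_node *cpun)
> {
>     const __be32 *cell;
>     int ac = dt_n_addr_cells(cpun);
>     uint32_t len;
> 
>     cell = dt_get_property(cpun, "reg", &len);
>     if ( !cell || !ac || (len < sizeof(*cell) * ac) )
>         return ~0ULL;
> 
>     /* Number of threads = size of "reg" / parent's #address-cells. */
>     ASSERT(len / (sizeof(*cell) * ac) == 1);
> 
>     return dt_read_number(cell, ac);
> }
> ```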
> 
> Does that approach make sense, or should we stick with the current 
> implementation?

If extra enabling is required to make multi-thread CPUs work, then panic()ing
(not so much ASSERT()ing) may make sense, for the time being. Better would be
if we could use all threads in a system right away.
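I.e. in the sketch above the ASSERT() would become something along these lines
(again only a sketch; dt_node_full_name() merely to identify the offending
node):
```
    /* One "reg" entry per thread; refuse multi-threaded CPU nodes for now. */
    if ( len / (sizeof(*cell) * ac) > 1 )
        panic("%s: SMT (multiple \"reg\" entries) not supported yet\n",
              dt_node_full_name(cpun));
```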

Jan
