On 29.07.2024 18:32, oleksii.kuroc...@gmail.com wrote:
> On Mon, 2024-07-29 at 17:28 +0200, Jan Beulich wrote:
>> On 24.07.2024 17:31, Oleksii Kurochko wrote:
>>> +static inline unsigned int smp_processor_id(void)
>>> +{
>>> +    unsigned int id;
>>> +
>>> +    id = get_processor_id();
>>> +
>>> +    /*
>>> +     * Technically the hartid can be greater than what a uint can hold.
>>> +     * If such a system were to exist, we will need to change
>>> +     * the smp_processor_id() API to be unsigned long instead of
>>> +     * unsigned int.
>>> +     */
>>> +    BUG_ON(id > UINT_MAX);
>>
>> Compilers may complain about this condition being always false. But: Why
>> do you check against UINT_MAX, not against NR_CPUS?
> Because a hart ID theoretically could be greater than what unsigned int
> can hold, and thereby NR_CPUS could also be greater than what unsigned
> int can hold (or can't it?).
Well, there are two aspects here. On a system with billions of harts, we
wouldn't be able to bring up all of them anyway. NR_CPUS is presently
limited to about 16k. But then I have no idea whether hart IDs need to be
all consecutive. On other hardware (x86 for example), the analogous APIC
IDs don't need to be. Hence I could see there being large hart IDs (unless
that's excluded somewhere in the spec), which you would then map to
consecutive Xen CPU IDs, all having relatively small values (less than
NR_CPUS).

If hart IDs can be sparse and wider than 32 bits, then I'd suggest using
an appropriately typed array right away for the Xen -> hart translation.
For the hart -> Xen translation, if also needed, an array may then not be
appropriate to use.

Jan
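
A rough sketch of the translation being suggested above (the array and
helper names are made up for illustration, not existing Xen interfaces):
the dense Xen CPU ID indexes an unsigned long array holding the possibly
sparse hart ID, while the reverse direction has to search.

/*
 * Illustrative sketch only: Xen CPU IDs are dense and < NR_CPUS, while
 * hart IDs may be sparse and wider than 32 bits, hence unsigned long.
 */
unsigned long cpuid_to_hartid[NR_CPUS];

/* Xen CPU ID -> hart ID: a plain array lookup suffices. */
static inline unsigned long cpu_to_hartid(unsigned int cpu)
{
    return cpuid_to_hartid[cpu];
}

/*
 * hart ID -> Xen CPU ID: with sparse hart IDs an array indexed by hart
 * ID isn't suitable, so fall back to a linear search over the map.
 */
static inline unsigned int hartid_to_cpu(unsigned long hartid)
{
    unsigned int cpu;

    for ( cpu = 0; cpu < NR_CPUS; cpu++ )
        if ( cpuid_to_hartid[cpu] == hartid )
            return cpu;

    return NR_CPUS; /* not found */
}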