Trying a different way to avoid the ARM disclaimer. But just to make sure, this email does NOT contain any confidential information.
On 05/04/2017, 01:25, "Dmitry Eremin-Solenikov" <dmitry.ereminsoleni...@linaro.org> wrote:

>On 04.04.2017 23:52, Ola Liljedahl wrote:
>> Sending from my ARM email account, I hope Outlook does not mess up the
>> format.
>>
>> On 04/04/2017, 22:21, "Dmitry Eremin-Solenikov"
>> <dmitry.ereminsoleni...@linaro.org> wrote:
>>
>>> On 04.04.2017 21:48, Brian Brooks wrote:
>>>> Signed-off-by: Ola Liljedahl <ola.liljed...@arm.com>
>>>> Reviewed-by: Brian Brooks <brian.bro...@arm.com>
>>>> Reviewed-by: Honnappa Nagarahalli <honnappa.nagaraha...@arm.com>
>>>>
>>>> +/*****************************************************************************
>>>> + * bitset abstract data type
>>>> + *****************************************************************************/
>>>> +/* This could be a struct of scalars to support larger bit sets */
>>>> +
>>>> +#if ATOM_BITSET_SIZE <= 32
>>>
>>> Maybe I missed, where did you set this macro?
>> In odp_config_internal.h.
>> It is a build-time configuration.
>>
>>> Also, why do you need several versions of bitset? Can you stick to one
>>> size that fits all?
>> Some 32-bit archs (ARMv7a, x86) will only support 64-bit atomics (AFAIK).
>> Only x86-64 and ARMv8a support 128-bit atomics (and compiler support for
>> 128-bit atomics on ARMv8a is a bit lacking...).
>> Other architectures might only support 32-bit atomic operations.
>
>What will be the major outcome of settling on the 64-bit atomics?

The size of the bitset determines the maximum number of threads, the
maximum number of scheduler groups and the maximum number of reorder
contexts (per thread).

>> I think the user should have control over this, but if you think we
>> should just select the maximum value supported by the architecture in
>> question and thus skip one build configuration, I am open to this. We
>> will still need separate versions for 32/64/128 bits because there are
>> slight differences in the syntax and implementation. Such are the
>> vagaries of the C standard (and GCC extensions).
>>
>>> Any real reason for the following defines? Why do you need them?
>> The functions were added as they were needed, e.g. in
>> odp_schedule_scalable.c.
>> I don't think any of them are unused anymore, but I can double-check
>> that.
>
>Well. Maybe I should rephrase my question: why do you think that it's
>better to have bitset_andn(a, b) rather than just a & ~b?

The atomic bitset is an abstract data type. The implementation does not
have to use a scalar word. Alternative implementation paths exist, e.g.
using a struct with multiple words and performing the requested operation
one word at a time (this is OK but perhaps not well documented).

>
>--
>With best wishes
>Dmitry
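To make the build-time selection concrete, here is a minimal sketch of how a macro like ATOM_BITSET_SIZE (set in odp_config_internal.h) can pick the underlying scalar type at compile time. The typedef name and the example value 64 are illustrative assumptions, not the actual ODP code:

```c
#include <stdint.h>

/* Example build-time configuration; in ODP this would come from
 * odp_config_internal.h, not be hard-coded here. */
#define ATOM_BITSET_SIZE 64

/* Select the narrowest scalar type wide enough for the configured
 * bitset size. 128-bit support depends on the compiler providing
 * __int128 (x86-64 and ARMv8a, per the discussion above). */
#if ATOM_BITSET_SIZE <= 32
typedef uint32_t bitset_t;
#elif ATOM_BITSET_SIZE <= 64
typedef uint64_t bitset_t;
#elif defined(__SIZEOF_INT128__) && ATOM_BITSET_SIZE <= 128
typedef unsigned __int128 bitset_t;
#else
#error Unsupported ATOM_BITSET_SIZE
#endif
```

As discussed above, the three branches cannot be fully unified because the 32/64/128-bit cases differ slightly in syntax and in which atomic operations the compiler supports.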
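To illustrate the abstract-data-type argument: if callers write bitset_andn(a, b) instead of a & ~b, the representation can later change to a struct of multiple words without touching any call site. The sketch below is a hypothetical multi-word variant (the struct layout, word count, and function body are my assumptions, not the ODP implementation, and it ignores atomicity for brevity):

```c
#include <stdint.h>

/* Hypothetical multi-word bitset representation; a larger
 * BITSET_WORDS would support more threads/groups/contexts. */
#define BITSET_WORDS 2

typedef struct {
    uint64_t w[BITSET_WORDS];
} bitset_t;

/* Return a with every bit that is set in b cleared, processing
 * one word at a time. Call sites stay identical whether the
 * representation is a scalar or this struct. */
static inline bitset_t bitset_andn(bitset_t a, bitset_t b)
{
    bitset_t r;

    for (int i = 0; i < BITSET_WORDS; i++)
        r.w[i] = a.w[i] & ~b.w[i];
    return r;
}
```

With a scalar representation the same function would simply be `return a & ~b;`, which is presumably why the named operation looks redundant at first glance.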