Thank you, Andre, for the patch. It looks good overall.

> On Nov 12, 2024, at 4:02 PM, Andre Muezerie <andre...@linux.microsoft.com> 
> wrote:
> 
> ../lib/rcu/rte_rcu_qsbr.c(101): warning C4334: '<<': result of 32-bit
> shift implicitly converted to 64 bits (was 64-bit shift intended?)
> ../lib/rcu/rte_rcu_qsbr.c(107): warning C4334: '<<': result of 32-bit
> shift implicitly converted to 64 bits (was 64-bit shift intended?)
> ../lib/rcu/rte_rcu_qsbr.c(145): warning C4334: '<<': result of 32-bit
> shift implicitly converted to 64 bits (was 64-bit shift intended?)
> 
> These warnings are issued by the MSVC compiler. Since the result is
> stored in a variable of type uint64_t, it makes sense to shift a
> 64-bit value rather than shift a 32-bit value and have the compiler
> widen the result implicitly to 64 bits.
> UINT64_C is used in the fix as it is the portable way to define a 64-bit
> constant (the width implied by the UL suffix is architecture dependent:
> unsigned long is only 32 bits on LLP64 platforms such as 64-bit Windows).
> 
> Signed-off-by: Andre Muezerie <andre...@linux.microsoft.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagaraha...@arm.com>

> ---
> lib/rcu/rte_rcu_qsbr.c | 12 ++++++------
> 1 file changed, 6 insertions(+), 6 deletions(-)
> 
> diff --git a/lib/rcu/rte_rcu_qsbr.c b/lib/rcu/rte_rcu_qsbr.c
> index 09a14a15f1..1d19d1dc95 100644
> --- a/lib/rcu/rte_rcu_qsbr.c
> +++ b/lib/rcu/rte_rcu_qsbr.c
> @@ -99,12 +99,12 @@ rte_rcu_qsbr_thread_register(struct rte_rcu_qsbr *v, 
> unsigned int thread_id)
> 
> /* Add the thread to the bitmap of registered threads */
> old_bmap = rte_atomic_fetch_or_explicit(__RTE_QSBR_THRID_ARRAY_ELM(v, i),
> - (1UL << id), rte_memory_order_release);
> + (UINT64_C(1) << id), rte_memory_order_release);
> 
> /* Increment the number of threads registered only if the thread was not 
> already
> * registered
> */
> - if (!(old_bmap & (1UL << id)))
> + if (!(old_bmap & (UINT64_C(1) << id)))
> rte_atomic_fetch_add_explicit(&v->num_threads, 1, rte_memory_order_relaxed);
> 
> return 0;
> @@ -137,12 +137,12 @@ rte_rcu_qsbr_thread_unregister(struct rte_rcu_qsbr *v, 
> unsigned int thread_id)
> * reporting threads.
> */
> old_bmap = rte_atomic_fetch_and_explicit(__RTE_QSBR_THRID_ARRAY_ELM(v, i),
> - ~(1UL << id), rte_memory_order_release);
> + ~(UINT64_C(1) << id), rte_memory_order_release);
> 
> /* Decrement the number of threads unregistered only if the thread was not 
> already
> * unregistered
> */
> - if (old_bmap & (1UL << id))
> + if (old_bmap & (UINT64_C(1) << id))
> rte_atomic_fetch_sub_explicit(&v->num_threads, 1, rte_memory_order_relaxed);
> 
> return 0;
> @@ -198,7 +198,7 @@ rte_rcu_qsbr_dump(FILE *f, struct rte_rcu_qsbr *v)
> t = rte_ctz64(bmap);
> fprintf(f, "%u ", id + t);
> 
> - bmap &= ~(1UL << t);
> + bmap &= ~(UINT64_C(1) << t);
> }
> }
> 
> @@ -225,7 +225,7 @@ rte_rcu_qsbr_dump(FILE *f, struct rte_rcu_qsbr *v)
> rte_atomic_load_explicit(
> &v->qsbr_cnt[id + t].lock_cnt,
> rte_memory_order_relaxed));
> - bmap &= ~(1UL << t);
> + bmap &= ~(UINT64_C(1) << t);
> }
> }
> 
> -- 
> 2.34.1
> 