Hi Colin,

On 10/28/2016 08:11 PM, Colin King wrote:
> From: Colin Ian King <colin.k...@canonical.com>
>
> The left shift amount is sop->sem_num % 64, which is up to 63, so
> ensure we are shifting a ULL rather than a 32 bit value.

Good catch, thanks.

> CoverityScan CID#1372862 "Bad bit shift operation"
>
> Fixes: 7c24530cb4e3c0ae ("ipc/sem: optimize perform_atomic_semop()")
> Signed-off-by: Colin Ian King <colin.k...@canonical.com>
> ---
>  ipc/sem.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/ipc/sem.c b/ipc/sem.c
> index ebd18a7..ca4aa23 100644
> --- a/ipc/sem.c
> +++ b/ipc/sem.c
> @@ -1839,7 +1839,7 @@ SYSCALL_DEFINE4(semtimedop, int, semid, struct sembuf __user *, tsops,
>  	max = 0;
>  	for (sop = sops; sop < sops + nsops; sop++) {
> -		unsigned long mask = 1 << ((sop->sem_num) % BITS_PER_LONG);
> +		unsigned long mask = 1ULL << ((sop->sem_num) % BITS_PER_LONG);
Why 1ULL? Is 1UL not sufficient?
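
For illustration only (not part of the patch): a minimal userspace sketch
of the width question, assuming an LP64 target and using a local
BITS_PER_LONG define as a stand-in for the kernel macro.

#include <stdio.h>

#define BITS_PER_LONG ((int)(8 * sizeof(long)))

int main(void)
{
	unsigned int sem_num = 63;	/* worst case named in the commit message */

	/* A plain 1 is an int, so "1 << 63" would be undefined behaviour. */

	/* 1UL matches the width of the unsigned long mask; the shift amount
	 * is reduced modulo BITS_PER_LONG, so it never exceeds that width. */
	unsigned long mask_ul = 1UL << (sem_num % BITS_PER_LONG);

	/* 1ULL is at least 64 bits on every target, so a shift of up to 63
	 * is defined even where long is only 32 bits. */
	unsigned long long mask_ull = 1ULL << (sem_num % 64);

	printf("1UL  mask: %#lx\n", mask_ul);
	printf("1ULL mask: %#llx\n", mask_ull);
	return 0;
}

With the modulo by BITS_PER_LONG in the patched line, the shift count stays
below the width of unsigned long either way; the distinction would only
matter if the count could reach 63 while long is 32 bits.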

--
    Manfred
