On 12/04/2018 04:39 PM, Bart Van Assche wrote:
> On Tue, 2018-12-04 at 16:08 -0500, Waiman Long wrote:
>> On 12/03/2018 07:28 PM, Bart Van Assche wrote:
>>> Cc: Peter Zijlstra <[email protected]>
>>> Cc: Waiman Long <[email protected]>
>>> Cc: Johannes Berg <[email protected]>
>>> Signed-off-by: Bart Van Assche <[email protected]>
>>> ---
>>>  kernel/locking/lockdep.c | 9 +++++++++
>>>  1 file changed, 9 insertions(+)
>>>
>>> diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
>>> index c936fce5b9d7..b4772e5fc176 100644
>>> --- a/kernel/locking/lockdep.c
>>> +++ b/kernel/locking/lockdep.c
>>> @@ -727,6 +727,15 @@ static bool assign_lock_key(struct lockdep_map *lock)
>>>  {
>>>     unsigned long can_addr, addr = (unsigned long)lock;
>>>  
>>> +   /*
>>> +    * lockdep_free_key_range() assumes that struct lock_class_key
>>> +    * objects do not overlap. Since we use the address of lock
>>> +    * objects as class keys for static locks, verify that the
>>> +    * size of lock_class_key objects does not exceed the size of
>>> +    * the smallest lock object.
>>> +    */
>>> +   BUILD_BUG_ON(sizeof(struct lock_class_key) > sizeof(raw_spinlock_t));
>>> +
>>>     if (__is_kernel_percpu_address(addr, &can_addr))
>>>             lock->key = (void *)can_addr;
>>>     else if (__is_module_percpu_address(addr, &can_addr))
>> I don't understand what this check is for. lock_class_key and spinlock
>> are different objects. Their relative sizes shouldn't matter.
> Hi Waiman,
>
> Peter asked me to add this check.
>
> Bart.

I haven't finished reviewing all your patches yet. Maybe one of the
subsequent patches requires this check. If that is the case, you should
move this patch to after the one that needs it.
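
For reference, here is my reading of the reasoning in the patch comment,
as a minimal userspace sketch. The fake_* types and their sizes below are
made-up stand-ins, not the real kernel definitions: static locks use
their own address as the class key, so if the key type were larger than
the smallest lock type, the key region derived from one static lock would
spill into the next lock object, violating the non-overlap assumption
that the comment attributes to lockdep_free_key_range().

    #include <stdio.h>

    /* Made-up stand-ins; the real sizes depend on config and arch. */
    struct fake_raw_spinlock { unsigned int val; };      /* small lock */
    struct fake_lock_class_key { void *subkeys[2]; };    /* larger key */

    /* Two static locks laid out back to back, as they might sit in .data. */
    static struct {
            struct fake_raw_spinlock a;
            struct fake_raw_spinlock b;
    } locks;

    int main(void)
    {
            /* For static locks, the lock's own address serves as its key. */
            unsigned long key_a = (unsigned long)&locks.a;
            unsigned long key_b = (unsigned long)&locks.b;

            /*
             * If the key type is larger than the smallest lock type, the
             * region [key_a, key_a + sizeof(key)) reaches into the next
             * lock object, i.e. the two keys overlap.
             */
            if (key_a + sizeof(struct fake_lock_class_key) > key_b)
                    printf("overlap: key region of locks.a covers locks.b\n");
            else
                    printf("keys do not overlap\n");
            return 0;
    }

With sizeof(fake_lock_class_key) <= sizeof(fake_raw_spinlock) the test
above cannot fire, which appears to be exactly what the BUILD_BUG_ON
enforces for the real types.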

-Longman
