On 6/3/20 9:04 AM, Linhaifeng wrote:
> 
> 
>> -----Original Message-----
>> From: Ben Pfaff [mailto:b...@ovn.org]
>> Sent: Wednesday, June 3, 2020 1:26 PM
>> To: Linhaifeng <haifeng....@huawei.com>
>> Cc: Yanqin Wei <yanqin....@arm.com>; d...@openvswitch.org; nd
>> <n...@arm.com>; Lilijun (Jerry) <jerry.lili...@huawei.com>; chenchanghu
>> <chenchan...@huawei.com>; Lichunhe <lichu...@huawei.com>
>> Subject: Re: [ovs-dev] [PATCH v2] ovs rcu: update rcu pointer first
>>
>> On Wed, Jun 03, 2020 at 01:22:52AM +0000, Linhaifeng wrote:
>>>
>>>
>>> -----Original Message-----
>>> From: Ben Pfaff [mailto:b...@ovn.org]
>>> Sent: Wednesday, June 3, 2020 1:28 AM
>>> To: Linhaifeng <haifeng....@huawei.com>
>>> Cc: Yanqin Wei <yanqin....@arm.com>; d...@openvswitch.org; nd
>>> <n...@arm.com>; Lilijun (Jerry) <jerry.lili...@huawei.com>; chenchanghu
>>> <chenchan...@huawei.com>; Lichunhe <lichu...@huawei.com>
>>> Subject: Re: [ovs-dev] [PATCH v2] ovs rcu: update rcu pointer first
>>>
>>> On Tue, Jun 02, 2020 at 07:27:59AM +0000, Linhaifeng wrote:
>>>> We should update the RCU pointer first and then use ovsrcu_postpone to
>>>> free the old one; otherwise we may cause a use-after-free.
>>>> E.g., a reader indicates a momentary quiescent state and accesses the old
>>>> pointer after the writer postpones freeing it but before the writer sets
>>>> the new pointer.
>>>>
>>>> Signed-off-by: Linhaifeng <haifeng....@huawei.com>
>>>
>>> I don't see how that's possible, since the writer hasn't quiesced.
>>>
>>> I think the logic is as follows.  Could you help me find out where it is
>>> incorrect?
>>>
>>> 1.1 -> 1.2 -> 3.1 -> 3.2 -> 2.1 -> 2.2 -> 2.3 -> 2.1 -> 1.3 -> 1.4 ->
>>> 3.3 -> 2.2 (use-after-free)
>>>
>>> Writer:
>>> 1.1 use postpone to free old pointer
>>> 1.2 flush cbsets to flushed_cbsets
>>> 1.3 update new pointer
>>> 1.4 quiesced
>>>
>>> Reader:
>>> 2.1. read pointer
>>> 2.2. use pointer
>>> 2.3. quiesced
>>>
>>> RCU thread:
>>> 3.1 pop flushed_cbsets
>>> 3.2 ovsrcu_synchronize
>>> 3.3 call all cb to free
>>
>> So you're saying this:
>>
>>     1.1 use postpone to free old pointer (A)
>>     1.2 flush cbsets to flushed_cbsets
>>
>>     3.1 pop flushed_cbsets
>>     3.2 ovsrcu_synchronize
>>
>>     2.1. read pointer (A)
>>     2.2. use pointer (A)
>>     2.3. quiesced
>>
>>     2.1. read pointer (A)
>>
>>     1.3 update new pointer (B)
>>     1.4 quiesced
>>
>>     3.3 call all cb to free (A)
>>
>>     2.2. use pointer (A)
>>
>> Wow, you are absolutely right.  This had never occurred to me.  Thank you!
>> I'll review your patch.
> 
> Yes, it's really unlikely to happen.  If it did happen, it would also be
> hard to find the reason, so I suggest making it a rule for using RCU.

I agree that there is an issue here, but I think that we should not force
users to call ovsrcu_set() before ovsrcu_postpone().  Current users don't
do anything illegal, since from their point of view the pointer must not be
freed before the next grace period.

To me it looks like the main issue is the existence of point 1.2, i.e.
flushing cbsets while the writer has not quiesced yet, and we need to fix
this inside the RCU library itself.  For example, we could avoid flushing
inside ovsrcu_postpone() by making cbs[16] a dynamically allocated array
and growing it with x2nrealloc instead of flushing.
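
Roughly something like the sketch below (untested and simplified from
lib/ovs-rcu.c; MIN_CBS is just a placeholder name for the initial
allocation size):

struct ovsrcu_cbset {
    struct ovs_list list_node;
    struct ovsrcu_cb *cbs;      /* Was: struct ovsrcu_cb cbs[16]; */
    size_t n_allocated;
    int n_cbs;
};

void
ovsrcu_postpone__(void (*function)(void *aux), void *aux)
{
    struct ovsrcu_perthread *perthread = ovsrcu_perthread_get();
    struct ovsrcu_cbset *cbset;
    struct ovsrcu_cb *cb;

    cbset = perthread->cbset;
    if (!cbset) {
        cbset = perthread->cbset = xmalloc(sizeof *perthread->cbset);
        cbset->cbs = xmalloc(MIN_CBS * sizeof *cbset->cbs);
        cbset->n_allocated = MIN_CBS;
        cbset->n_cbs = 0;
    }

    if (cbset->n_cbs == cbset->n_allocated) {
        /* Grow the per-thread array instead of flushing, so the cbset
         * stays with this thread until it actually quiesces. */
        cbset->cbs = x2nrealloc(cbset->cbs, &cbset->n_allocated,
                                sizeof *cbset->cbs);
    }

    cb = &cbset->cbs[cbset->n_cbs++];
    cb->function = function;
    cb->aux = aux;
}

This way nothing reaches flushed_cbsets before the writer quiesces, so the
RCU thread can't start a grace period that ends before the writer publishes
the new pointer.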

Thoughts?

Regarding the patch itself:

> diff --git a/lib/ovs-rcu.h b/lib/ovs-rcu.h
> index ecc4c9201..98c238aea 100644
> --- a/lib/ovs-rcu.h
> +++ b/lib/ovs-rcu.h
> @@ -118,10 +118,10 @@
>   *     void
>   *     change_flow(struct flow *new_flow)
>   *     {
> + *         struct flow *old_flow = ovsrcu_get_protected(struct flow *, &flowp)

ovsrcu_get_protected() cannot be used outside of the mutex-protected
critical section.

>   *         ovs_mutex_lock(&mutex);
> - *         ovsrcu_postpone(free,
> - *                         ovsrcu_get_protected(struct flow *, &flowp));
>   *         ovsrcu_set(&flowp, new_flow);
> + *         ovsrcu_postpone(free, old_flow);
>   *         ovs_mutex_unlock(&mutex);
>   *     }
>   
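
With the postpone/set order swapped, the fetch of the old pointer should
also move under the mutex, i.e. the comment example would read something
like this (sketch only):

 *     void
 *     change_flow(struct flow *new_flow)
 *     {
 *         struct flow *old_flow;
 *
 *         ovs_mutex_lock(&mutex);
 *         old_flow = ovsrcu_get_protected(struct flow *, &flowp);
 *         ovsrcu_set(&flowp, new_flow);
 *         ovsrcu_postpone(free, old_flow);
 *         ovs_mutex_unlock(&mutex);
 *     }
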
Best regards, Ilya Maximets.