On 18.03.2016 13:47, Xie, Huawei wrote:
> On 3/18/2016 6:39 PM, Ilya Maximets wrote:
>>
>> On 18.03.2016 13:27, Xie, Huawei wrote:
>>> On 3/18/2016 6:23 PM, Ilya Maximets wrote:
>>>> On 18.03.2016 13:08, Xie, Huawei wrote:
>>>>> On 2/24/2016 7:47 PM, Ilya Maximets wrote:
>>>>>>                   * Wait until it's our turn to add our buffer
>>>>>> @@ -979,7 +979,7 @@ rte_vhost_dequeue_burst(struct virtio_net *dev, uint16_t queue_id,
>>>>>>                  entry_success++;
>>>>>>          }
>>>>>>  
>>>>>> -        rte_compiler_barrier();
>>>>>> +        rte_smp_rmb();
>>>>> smp_rmb()?
>>>> There is no such function 'smp_rmb' in DPDK.
>>>> But:
>>>> .../arch/arm/rte_atomic.h:#define rte_smp_rmb() rte_rmb()
>>>> .../arch/ppc_64/rte_atomic.h:#define rte_smp_rmb() rte_compiler_barrier()
>>>> .../arch/tile/rte_atomic.h:#define rte_smp_rmb() rte_compiler_barrier()
>>>> .../arch/x86/rte_atomic.h:#define rte_smp_rmb() rte_compiler_barrier()
>>> I mean shouldn't it be rte_smp_wmb()?
>> No. Here we need to be sure that the copying of data from the
>> descriptor to our local mbuf has completed before
>> 'vq->used->idx += entry_success;'.
>>
>> A read memory barrier gives us that.
>>
>> In the other places write barriers are used because the copying is
>> performed in the opposite direction.
> 
> What about the update to the used ring?

The next line is the only place where vq->used->idx is accessed, so
there should be no issue.
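
For illustration, here is a minimal sketch of the ordering being discussed
(this is not the actual rte_vhost_dequeue_burst() code; the function and
parameter names below are made up for the example): the reads that copy the
descriptor payload into the local mbuf must complete before the used index
update becomes visible.

#include <stdint.h>
#include <stddef.h>
#include <rte_atomic.h>
#include <rte_memcpy.h>

/* Hypothetical helper, only to show where the barrier sits. */
static inline void
dequeue_publish_sketch(volatile uint16_t *used_idx, uint16_t entry_success,
		       void *mbuf_data, const void *desc_addr, size_t len)
{
	/* Read the descriptor payload into our local mbuf. */
	rte_memcpy(mbuf_data, desc_addr, len);

	/*
	 * Make sure the copy above has finished before the used index
	 * update below can be observed by the other side.
	 */
	rte_smp_rmb();

	/* Publish the consumed entry count, as in the quoted hunk. */
	*used_idx += entry_success;
}
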

>>>>>>          vq->used->idx += entry_success;
>>>>>>          vhost_log_used_vring(dev, vq, offsetof(struct vring_used, idx),
>>>>>>                          sizeof(vq->used->idx));
>>>>>> -- 2.5.0
