20/03/2024 10:31, fengchengwen:
> Reviewed-by: Chengwen Feng <fengcheng...@huawei.com>
> 
> On 2024/3/20 15:23, Wenwu Ma wrote:
> > The rte_dma_dev structure must be aligned to a cache line, but the
> > pointer returned by malloc may not be. When memset is used to clear
> > the rte_dma_dev object, this can cause a segmentation fault on the
> > clang x86 platform.
> > 
> > This is because clang implements the memset with the "vmovaps"
> > instruction, which requires its memory operand (the rte_dma_dev
> > object) to be aligned on a 16-byte boundary; otherwise a
> > general-protection exception (#GP) is generated.
> > 
> > Therefore, either additional memory is allocated so the object can be
> > re-aligned, or the cache-line alignment requirement is dropped from
> > rte_dma_dev. This patch chooses the former option to fix the issue.
> > 
> > Fixes: b36970f2e13e ("dmadev: introduce DMA device library")
> > Cc: sta...@dpdk.org
> > 
> > Signed-off-by: Wenwu Ma <wenwux...@intel.com>
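
For illustration only (the actual hunk is not quoted here), the
over-allocate and re-align approach described in the commit message
would look roughly like the sketch below; "devs" and "devs_alloc" are
made-up names, only RTE_PTR_ALIGN and RTE_CACHE_LINE_SIZE are existing
DPDK macros:

#include <errno.h>
#include <stdlib.h>
#include <string.h>

#include <rte_common.h>	/* RTE_PTR_ALIGN, RTE_CACHE_LINE_SIZE */

static void *devs_unaligned;	/* raw malloc() return value, kept for free() */
static void *devs;		/* cache-line-aligned view used afterwards */

static int
devs_alloc(size_t size)
{
	/* over-allocate so the pointer can be rounded up to a cache line */
	devs_unaligned = malloc(size + RTE_CACHE_LINE_SIZE - 1);
	if (devs_unaligned == NULL)
		return -ENOMEM;

	devs = RTE_PTR_ALIGN(devs_unaligned, RTE_CACHE_LINE_SIZE);
	/* safe now even if the compiler emits aligned vector stores (vmovaps) */
	memset(devs, 0, size);
	return 0;
}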

I keep thinking we should have a wrapper for aligned allocations,
with Windows support and a fallback to malloc + RTE_PTR_ALIGN.
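
Something like the rough sketch below, perhaps (the names are invented,
nothing like this exists in EAL today): _aligned_malloc() on Windows,
malloc() + RTE_PTR_ALIGN as the portable fallback, plus a matching free
helper since the two paths must be released differently.

#include <stdlib.h>
#include <string.h>
#ifdef _WIN32
#include <malloc.h>		/* _aligned_malloc(), _aligned_free() */
#endif

#include <rte_common.h>		/* RTE_PTR_ALIGN */

/* hypothetical helper: returns a zeroed, 'align'-byte aligned buffer;
 * '*unaligned' receives the pointer that must later be handed to
 * aligned_free(). */
static void *
aligned_zmalloc(size_t size, size_t align, void **unaligned)
{
	void *p;

#ifdef _WIN32
	p = _aligned_malloc(size, align);
	*unaligned = p;
#else
	/* fallback: over-allocate and round the pointer up */
	*unaligned = malloc(size + align - 1);
	p = (*unaligned == NULL) ? NULL : RTE_PTR_ALIGN(*unaligned, align);
#endif
	if (p != NULL)
		memset(p, 0, size);
	return p;
}

static void
aligned_free(void *unaligned)
{
#ifdef _WIN32
	_aligned_free(unaligned);
#else
	free(unaligned);
#endif
}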

Probably not a reason to block this patch, so applied, thanks.

