Hi All,

The DMA mapping currently works great on our PowerPC machines. It was a long road to get the new DMA mapping code working reliably on them.

P L E A S E  don't modify the working DMA mapping code. There are many other areas that need improvement. For us (first-level and second-level support) it is really laborious to track down your problematic code and patch it. Finding the problematic code takes a long time because we have to do it alongside our main work.

P L E A S E  test your code on PowerPC machines before you merge it into the mainline vanilla kernel.


On Tue, Mar 24, 2020 at 12:00:09PM +0530, Aneesh Kumar K.V wrote:
> dma_addr_t dma_direct_map_page(struct device *dev, struct page *page,
>         unsigned long offset, size_t size, enum dma_data_direction dir,
>         unsigned long attrs)
> {
>     phys_addr_t phys = page_to_phys(page) + offset;
>     dma_addr_t dma_addr = phys_to_dma(dev, phys);
>
>     if (unlikely(!dma_capable(dev, dma_addr, size, true)))
>         /* fall back to the IOMMU; returns DMA_MAPPING_ERROR on failure */
>         return iommu_map(dev, phys, size, dir, attrs);
>
>     return dma_addr;
> }

If powerpc hardware / firmware people really come up with crap that
stupid, you'll have to handle it yourself and will always pay the
indirect call penalty.
