On Tue, 2014-04-01 at 18:29 +0100, Catalin Marinas wrote:
> On Tue, Apr 01, 2014 at 05:10:57PM +0100, Jon Medhurst (Tixy) wrote:
> > On Mon, 2014-03-31 at 18:52 +0100, Catalin Marinas wrote:
> > > The following changes since commit 
> > > cfbf8d4857c26a8a307fb7cd258074c9dcd8c691:
> > > 
> > >   Linux 3.14-rc4 (2014-02-23 17:40:03 -0800)
> > > 
> > > are available in the git repository at:
> > > 
> > >   git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux 
> > > tags/arm64-upstream
> > > 
> > > for you to fetch changes up to 196adf2f3015eacac0567278ba538e3ffdd16d0e:
> > > 
> > >   arm64: Remove pgprot_dmacoherent() (2014-03-24 10:35:35 +0000)
> > 
> > I may have spotted a bug in commit 7363590d2c46 (arm64: Implement
> > coherent DMA API based on swiotlb), see my inline comment below...
> > 
> > [...]
> > > diff --git a/arch/arm64/mm/cache.S b/arch/arm64/mm/cache.S
> > > index 1ea9f26..97fcef5 100644
> > > --- a/arch/arm64/mm/cache.S
> > > +++ b/arch/arm64/mm/cache.S
> > > @@ -166,3 +166,81 @@ ENTRY(__flush_dcache_area)
> > >   dsb     sy
> > >   ret
> > >  ENDPROC(__flush_dcache_area)
> > > +
> > > +/*
> > > + *       __dma_inv_range(start, end)
> > > + *       - start   - virtual start address of region
> > > + *       - end     - virtual end address of region
> > > + */
> > > +__dma_inv_range:
> > > + dcache_line_size x2, x3
> > > + sub     x3, x2, #1
> > > + bic     x0, x0, x3
> > > + bic     x1, x1, x3
> > 
> > Why is the 'end' value in x1 above rounded down to be cache line
> > aligned? This means the cache invalidate won't include the cache
> > line containing the final bytes of the region, unless the end
> > happens to already be cache line aligned. This looks especially
> > suspect as the other two cache operations added in the same patch
> > (below) don't round the end down like that.
> 
> Cache invalidation is destructive, so we want to make sure that it
> doesn't affect anything beyond x1. But you are right, if either end of
> the buffer is not cache line aligned it can get it wrong. The fix is to
> use clean+invalidate on the unaligned ends:

Like the ARMv7 implementation does :-) However, I wonder, could the
Cache Writeback Granule (CWG) come into play here? If the CWG of the
further-out caches were bigger than that of the caches closer to the
CPU, then it would still cause data corruption. So for these region
ends, should we not be using the CWG size rather than the minimum
D-cache line size? On second thoughts, that wouldn't be safe either in
the converse case, where a closer cache has the bigger granule. We
would then need to first clean a CWG-sized region at each end using
the minimum cache line size, and then invalidate the lines by the same
method. But that leaves a window in which a write can happen between
the clean and the invalidate, again leading to data corruption. I hope
all this means I've either got rather confused or that cache
architectures are smart enough to cope automatically.
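
For reference, the CWG could presumably be read from CTR_EL0 in the
same way the existing dcache_line_size macro reads DminLine; an
untested sketch, with the cwg_line_size name made up here:

        .macro  cwg_line_size, reg, tmp
        mrs     \tmp, ctr_el0                   // read Cache Type Register
        ubfx    \tmp, \tmp, #24, #4             // CWG field: log2(words)
        mov     \reg, #4                        // bytes per word
        lsl     \reg, \reg, \tmp                // CWG size in bytes
        .endm

Note that CWG can also read as zero, meaning the granule isn't
reported, in which case I believe the architectural maximum of 2K
would have to be assumed.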

I also have a couple of comments on the specific changes below...

> 
> diff --git a/arch/arm64/mm/cache.S b/arch/arm64/mm/cache.S
> index c46f48b33c14..6a26bf1965d3 100644
> --- a/arch/arm64/mm/cache.S
> +++ b/arch/arm64/mm/cache.S
> @@ -175,10 +175,17 @@ ENDPROC(__flush_dcache_area)
>  __dma_inv_range:
>       dcache_line_size x2, x3
>       sub     x3, x2, #1
> -     bic     x0, x0, x3
> +     tst     x1, x3                          // end cache line aligned?
>       bic     x1, x1, x3
> -1:   dc      ivac, x0                        // invalidate D / U line
> -     add     x0, x0, x2
> +     b.eq    1f
> +     dc      civac, x1                       // clean & invalidate D / U line

That is actually cleaning the line containing the address one byte
past the end of the region. I'm not sure it matters though, because
that address still falls within the same minimum-cache-line-sized
region as the final bytes.

> +1:   tst     x0, x3                          // start cache line aligned?
> +     bic     x0, x0, x3
> +     b.eq    2f
> +     dc      civac, x0                       // clean & invalidate D / U line
> +     b       3f
> +2:   dc      ivac, x0                        // invalidate D / U line
> +3:   add     x0, x0, x2
>       cmp     x0, x1
>       b.lo    1b

The b.lo above obviously also needs changing, to branch to 2b rather
than 1b, so that later iterations don't pointlessly repeat the
alignment test on the now-aligned start address (see the reassembled
routine at the end of this mail).

>       dsb     sy
> 
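
Just so we're looking at the same thing, with your diff applied and
that branch changed the routine would then read roughly as below (my
reassembly of the hunks, untested):

__dma_inv_range:
        dcache_line_size x2, x3
        sub     x3, x2, #1
        tst     x1, x3                          // end cache line aligned?
        bic     x1, x1, x3
        b.eq    1f
        dc      civac, x1                       // clean & invalidate D / U line
1:      tst     x0, x3                          // start cache line aligned?
        bic     x0, x0, x3
        b.eq    2f
        dc      civac, x0                       // clean & invalidate D / U line
        b       3f
2:      dc      ivac, x0                        // invalidate D / U line
3:      add     x0, x0, x2
        cmp     x0, x1
        b.lo    2b                              // whole lines from here on
        dsb     sy
        ret
ENDPROC(__dma_inv_range)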

-- 
Tixy

