> Date: Sat, 15 Jan 2022 12:21:59 +0000
> From: Visa Hankala <[email protected]>
> 
> If the ending address in sfcc_cache_wbinv_range(), pa + len, is not
> cache-line-aligned, the function spins because len wraps around.
> The following patch fixes this.
> 
> There still is a subtle detail. If len is zero but pa is non-aligned,
> the function is not a no-op. However, shouldn't upper layers handle
> things so that zero length does not reach this deep?

Probably.  But one could also argue that even when the length is zero,
this should be a barrier.  And the barrier doesn't really hurt.  So I
don't think we need to worry about this.
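
To make that edge case concrete, assuming a hypothetical 64-byte line
size: a call with pa = 0x1010 and len = 0 gives end = 0x1010 and, after
the alignment step, pa = 0x1000, so the patched loop still issues the
leading fence and flushes the single line containing pa.  That is the
"not a no-op" behaviour you describe, and neither the fence nor the one
extra flush should do any harm beyond the extra work.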

I noticed that len has the type paddr_t.  That is clearly wrong; it
should probably be psize_t.  Can you fix that as well?
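
In other words, something like this (just a sketch of the prototype;
only the type of the second argument changes):

void	sfcc_cache_wbinv_range(paddr_t pa, psize_t len);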

> OK?

ok kettenis@


> Index: arch/riscv64/dev/sfcc.c
> ===================================================================
> RCS file: src/sys/arch/riscv64/dev/sfcc.c,v
> retrieving revision 1.2
> diff -u -p -r1.2 sfcc.c
> --- arch/riscv64/dev/sfcc.c   22 May 2021 17:07:28 -0000      1.2
> +++ arch/riscv64/dev/sfcc.c   15 Jan 2022 11:53:12 -0000
> @@ -96,15 +96,15 @@ void
>  sfcc_cache_wbinv_range(paddr_t pa, paddr_t len)
>  {
>       struct sfcc_softc *sc = sfcc_sc;
> +     paddr_t end;
>  
> -     len += pa & (sc->sc_line_size - 1);
> +     end = pa + len;
>       pa &= ~((paddr_t)sc->sc_line_size - 1);
>  
>       __asm volatile ("fence iorw,iorw" ::: "memory");
> -     while (len > 0) {
> +     while (pa < end) {
>               bus_space_write_8(sc->sc_iot, sc->sc_ioh, SFCC_FLUSH64, pa);
>               __asm volatile ("fence iorw,iorw" ::: "memory");
>               pa += sc->sc_line_size;
> -             len -= sc->sc_line_size;
>       }
>  }
> 
> 
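For anyone following along, here is a minimal standalone sketch of just
the loop arithmetic, not the driver code: the FLUSH64 write and the
fences are replaced by a counter, and the 64-byte line size and the
example range are made up.  It shows why the old bounds spin on a range
whose end is not line-aligned while the new bounds terminate.

#include <stdint.h>
#include <stdio.h>

#define LINE_SIZE	64

typedef uint64_t paddr_t;	/* stand-in for the kernel type */

/*
 * Old bounds: terminates only if the adjusted len is a multiple of
 * LINE_SIZE; otherwise len skips past zero and wraps around.
 */
static unsigned int
old_loop(paddr_t pa, paddr_t len)
{
	unsigned int flushes = 0;

	len += pa & (LINE_SIZE - 1);
	pa &= ~((paddr_t)LINE_SIZE - 1);
	while (len > 0 && flushes < 10) {	/* cap to show the runaway */
		flushes++;
		pa += LINE_SIZE;
		len -= LINE_SIZE;		/* wraps if end is unaligned */
	}
	return flushes;
}

/*
 * New bounds: always terminates, touching every line that overlaps
 * [pa, pa + len).
 */
static unsigned int
new_loop(paddr_t pa, paddr_t len)
{
	unsigned int flushes = 0;
	paddr_t end = pa + len;

	pa &= ~((paddr_t)LINE_SIZE - 1);
	while (pa < end) {
		flushes++;
		pa += LINE_SIZE;
	}
	return flushes;
}

int
main(void)
{
	/* Range 0x1000..0x1030 ends in the middle of a line. */
	printf("old: %u flushes (hit cap)\n", old_loop(0x1000, 0x30));
	printf("new: %u flushes\n", new_loop(0x1000, 0x30));
	return 0;
}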
