Re: [U-Boot] [PATCH] nvme: add more cache flushes

2019-10-17 Thread J. William Campbell

Hi All,
On 10/17/2019 12:12 AM, Patrick Wildt wrote:

On Thu, Oct 17, 2019 at 03:08:59PM +0800, Bin Meng wrote:

Hi Patrick,

On Thu, Oct 17, 2019 at 2:44 PM Patrick Wildt  wrote:

On Thu, Oct 17, 2019 at 10:55:11AM +0800, Bin Meng wrote:

Hi Patrick,

On Wed, Oct 16, 2019 at 11:35 PM Patrick Wildt  wrote:

On Wed, Oct 16, 2019 at 06:11:23PM +0800, Bin Meng wrote:

On Mon, Oct 14, 2019 at 7:11 PM Patrick Wildt  wrote:

On an i.MX8MQ our nvme driver doesn't completely work since we are
missing a few cache flushes.  One is the prp list, which is an extra
buffer that we need to flush before handing it to the hardware.  Also
the block read/write operations needs more cache flushes on this SoC.

Signed-off-by: Patrick Wildt 
---
  drivers/nvme/nvme.c | 15 +--
  1 file changed, 9 insertions(+), 6 deletions(-)

diff --git a/drivers/nvme/nvme.c b/drivers/nvme/nvme.c
index 2444e0270f..69d5e3eedc 100644
--- a/drivers/nvme/nvme.c
+++ b/drivers/nvme/nvme.c
@@ -123,6 +123,9 @@ static int nvme_setup_prps(struct nvme_dev *dev, u64 *prp2,
 }
 *prp2 = (ulong)dev->prp_pool;

+   flush_dcache_range((ulong)dev->prp_pool, (ulong)dev->prp_pool +
+  dev->prp_entry_num * sizeof(u64));
+
 return 0;
  }

@@ -717,9 +720,10 @@ static ulong nvme_blk_rw(struct udevice *udev, lbaint_t 
blknr,
 u16 lbas = 1 << (dev->max_transfer_shift - ns->lba_shift);
 u64 total_lbas = blkcnt;

-   if (!read)
-   flush_dcache_range((unsigned long)buffer,
-  (unsigned long)buffer + total_len);
+   flush_dcache_range((unsigned long)buffer,
+  (unsigned long)buffer + total_len);

Why we need this for read?

I'm no processor engineer, but I have read (and "seen") the following
on ARMs.  If I'm wrong. please correct me.

Since the buffer is usually allocated cache-aligned on the stack,
it is very possible that this buffer was previously still used
as it's supposed to be used: as stack.  Thus, the caches can still
be filled, and are not yet evicted/flushed to memory.  Now it is
possible that between the DMA access from the device and our cache
invalidation, the CPU cache writes back its contents to memory,
overwriting whatever the NVMe just gave us.

OK, this makes sense. So if we allocate the buffer from the heap, we
should only care about flush on write, right? Can you test this?

If you're talking about having a bounce buffer:  You'd need to flush
it once upon allocation, because that part of the heap could have also
be used earlier by someone else.


I was not talking about a bounce buffer. I mean a buffer allocated
from the heap and used directly for DMA.

Regards,
Bin

If you allocate a buffer from the heap, you still need to make sure
to flush it once, since there's still the chance that you have used
it shortly earlier.  But it's less of an issue as on the stack.


The "rules" for cache management of DMA buffers for non-cache-coherent 
CPUs are immutable. It doesn't matter where the memory came from 
(static, heap, or stack). It may be in cache, and it may be dirty. You 
can't know  it is for sure "clean". It is assumed that the DMA buffers 
are allocated to begin on a cache line boundary and are a multiple of a 
cache line in length. If this is not the case, references by the CPU to 
locations outside the buffer can make the cache state of the buffer 
dirty, which is an error. It is also required that there be no accesses 
to the DMA buffer by the CPU while DMA is in progress. This is normally 
true by default, and if it isn't true, it is an error. The rules are 
then as follows:


On write, before DMA is started, the cache lines corresponding to the DMA 
buffer addresses MUST be flushed to ensure the desired content is 
transferred from cache to RAM before the write DMA begins.


On read, before DMA is started, the cache lines corresponding to the DMA 
buffer addresses MUST be either invalidated or flushed to ensure that no 
cache-system-initiated writes to RAM will occur while the read DMA is in 
progress. Normally, invalidate is preferred because it is faster. 
However, for programming simplicity some drivers choose to flush before 
both read and write DMA is started. If that is the case, it is good 
practice to note that choice in a comment.


On write, after DMA is completed, no additional cache actions are required.

On read, after DMA is completed, the cache lines corresponding to the DMA 
buffer addresses MUST be invalidated. This is necessary to ensure that 
stale data in the cache will not be used instead of the new read data in 
RAM. If one elected to invalidate the cache before the read DMA started, 
one may wonder why a second invalidate is necessary. Since the buffer 
is not allowed to be referenced by the program in the interim, the cache 
should not contain any data from the DMA buffer area. The reason is that 
modern CPUs may speculatively load data from the DMA buffer into the cache 
while DMA is in progress. This can leave stale lines in the cache, so the 
buffer range must be invalidated again once the DMA has completed.
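
To make these rules concrete, here is a minimal sketch in U-Boot terms,
using flush_dcache_range()/invalidate_dcache_range(). The helper names
are invented for illustration, the buffer is assumed to start on a cache
line boundary and to be a multiple of ARCH_DMA_MINALIGN in length, and
the declarations were in <common.h> in trees of that era (newer trees
use <cpu_func.h>).

#include <common.h>	/* flush_dcache_range(), invalidate_dcache_range() */
#include <asm/cache.h>	/* ARCH_DMA_MINALIGN */

/* Rule 1: before a write DMA (CPU -> device), push dirty lines to RAM. */
static void dma_prepare_to_device(void *buf, size_t len)
{
	flush_dcache_range((unsigned long)buf, (unsigned long)buf + len);
}

/*
 * Rule 2: before a read DMA (device -> CPU), make sure no dirty lines can
 * be written back over the incoming data while the DMA is running.
 * Invalidate is the cheaper choice; a flush also satisfies the rule.
 */
static void dma_prepare_from_device(void *buf, size_t len)
{
	invalidate_dcache_range((unsigned long)buf, (unsigned long)buf + len);
}

/*
 * Rule 3: after a read DMA completes, invalidate again, because the CPU
 * may have speculatively refilled lines from the buffer in the meantime.
 * (After a write DMA, nothing further is needed.)
 */
static void dma_complete_from_device(void *buf, size_t len)
{
	invalidate_dcache_range((unsigned long)buf, (unsigned long)buf + len);
}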

Re: [U-Boot] [PATCH] nvme: add more cache flushes

2019-10-17 Thread Patrick Wildt
On Thu, Oct 17, 2019 at 03:08:59PM +0800, Bin Meng wrote:
> Hi Patrick,
> 
> On Thu, Oct 17, 2019 at 2:44 PM Patrick Wildt  wrote:
> >
> > On Thu, Oct 17, 2019 at 10:55:11AM +0800, Bin Meng wrote:
> > > Hi Patrick,
> > >
> > > On Wed, Oct 16, 2019 at 11:35 PM Patrick Wildt  wrote:
> > > >
> > > > On Wed, Oct 16, 2019 at 06:11:23PM +0800, Bin Meng wrote:
> > > > > On Mon, Oct 14, 2019 at 7:11 PM Patrick Wildt  
> > > > > wrote:
> > > > > >
> > > > > > On an i.MX8MQ our nvme driver doesn't completely work since we are
> > > > > > missing a few cache flushes.  One is the prp list, which is an extra
> > > > > > buffer that we need to flush before handing it to the hardware.  
> > > > > > Also
> > > > > > the block read/write operations needs more cache flushes on this 
> > > > > > SoC.
> > > > > >
> > > > > > Signed-off-by: Patrick Wildt 
> > > > > > ---
> > > > > >  drivers/nvme/nvme.c | 15 +--
> > > > > >  1 file changed, 9 insertions(+), 6 deletions(-)
> > > > > >
> > > > > > diff --git a/drivers/nvme/nvme.c b/drivers/nvme/nvme.c
> > > > > > index 2444e0270f..69d5e3eedc 100644
> > > > > > --- a/drivers/nvme/nvme.c
> > > > > > +++ b/drivers/nvme/nvme.c
> > > > > > @@ -123,6 +123,9 @@ static int nvme_setup_prps(struct nvme_dev 
> > > > > > *dev, u64 *prp2,
> > > > > > }
> > > > > > *prp2 = (ulong)dev->prp_pool;
> > > > > >
> > > > > > +   flush_dcache_range((ulong)dev->prp_pool, 
> > > > > > (ulong)dev->prp_pool +
> > > > > > +  dev->prp_entry_num * sizeof(u64));
> > > > > > +
> > > > > > return 0;
> > > > > >  }
> > > > > >
> > > > > > @@ -717,9 +720,10 @@ static ulong nvme_blk_rw(struct udevice *udev, 
> > > > > > lbaint_t blknr,
> > > > > > u16 lbas = 1 << (dev->max_transfer_shift - ns->lba_shift);
> > > > > > u64 total_lbas = blkcnt;
> > > > > >
> > > > > > -   if (!read)
> > > > > > -   flush_dcache_range((unsigned long)buffer,
> > > > > > -  (unsigned long)buffer + 
> > > > > > total_len);
> > > > > > +   flush_dcache_range((unsigned long)buffer,
> > > > > > +  (unsigned long)buffer + total_len);
> > > > >
> > > > > Why we need this for read?
> > > >
> > > > I'm no processor engineer, but I have read (and "seen") the following
> > > > on ARMs.  If I'm wrong. please correct me.
> > > >
> > > > Since the buffer is usually allocated cache-aligned on the stack,
> > > > it is very possible that this buffer was previously still used
> > > > as it's supposed to be used: as stack.  Thus, the caches can still
> > > > be filled, and are not yet evicted/flushed to memory.  Now it is
> > > > possible that between the DMA access from the device and our cache
> > > > invalidation, the CPU cache writes back its contents to memory,
> > > > overwriting whatever the NVMe just gave us.
> > >
> > > OK, this makes sense. So if we allocate the buffer from the heap, we
> > > should only care about flush on write, right? Can you test this?
> >
> > If you're talking about having a bounce buffer:  You'd need to flush
> > it once upon allocation, because that part of the heap could have also
> > be used earlier by someone else.
> >
> 
> I was not talking about a bounce buffer. I mean a buffer allocated
> from the heap and used directly for DMA.
> 
> Regards,
> Bin

If you allocate a buffer from the heap, you still need to make sure
to flush it once, since there's still the chance that you have used
it shortly before.  But it's less of an issue than on the stack.

Regards,
Patrick


Re: [U-Boot] [PATCH] nvme: add more cache flushes

2019-10-17 Thread Bin Meng
Hi Patrick,

On Thu, Oct 17, 2019 at 2:44 PM Patrick Wildt  wrote:
>
> On Thu, Oct 17, 2019 at 10:55:11AM +0800, Bin Meng wrote:
> > Hi Patrick,
> >
> > On Wed, Oct 16, 2019 at 11:35 PM Patrick Wildt  wrote:
> > >
> > > On Wed, Oct 16, 2019 at 06:11:23PM +0800, Bin Meng wrote:
> > > > On Mon, Oct 14, 2019 at 7:11 PM Patrick Wildt  wrote:
> > > > >
> > > > > On an i.MX8MQ our nvme driver doesn't completely work since we are
> > > > > missing a few cache flushes.  One is the prp list, which is an extra
> > > > > buffer that we need to flush before handing it to the hardware.  Also
> > > > > the block read/write operations needs more cache flushes on this SoC.
> > > > >
> > > > > Signed-off-by: Patrick Wildt 
> > > > > ---
> > > > >  drivers/nvme/nvme.c | 15 +--
> > > > >  1 file changed, 9 insertions(+), 6 deletions(-)
> > > > >
> > > > > diff --git a/drivers/nvme/nvme.c b/drivers/nvme/nvme.c
> > > > > index 2444e0270f..69d5e3eedc 100644
> > > > > --- a/drivers/nvme/nvme.c
> > > > > +++ b/drivers/nvme/nvme.c
> > > > > @@ -123,6 +123,9 @@ static int nvme_setup_prps(struct nvme_dev *dev, 
> > > > > u64 *prp2,
> > > > > }
> > > > > *prp2 = (ulong)dev->prp_pool;
> > > > >
> > > > > +   flush_dcache_range((ulong)dev->prp_pool, (ulong)dev->prp_pool 
> > > > > +
> > > > > +  dev->prp_entry_num * sizeof(u64));
> > > > > +
> > > > > return 0;
> > > > >  }
> > > > >
> > > > > @@ -717,9 +720,10 @@ static ulong nvme_blk_rw(struct udevice *udev, 
> > > > > lbaint_t blknr,
> > > > > u16 lbas = 1 << (dev->max_transfer_shift - ns->lba_shift);
> > > > > u64 total_lbas = blkcnt;
> > > > >
> > > > > -   if (!read)
> > > > > -   flush_dcache_range((unsigned long)buffer,
> > > > > -  (unsigned long)buffer + total_len);
> > > > > +   flush_dcache_range((unsigned long)buffer,
> > > > > +  (unsigned long)buffer + total_len);
> > > >
> > > > Why we need this for read?
> > >
> > > I'm no processor engineer, but I have read (and "seen") the following
> > > on ARMs.  If I'm wrong. please correct me.
> > >
> > > Since the buffer is usually allocated cache-aligned on the stack,
> > > it is very possible that this buffer was previously still used
> > > as it's supposed to be used: as stack.  Thus, the caches can still
> > > be filled, and are not yet evicted/flushed to memory.  Now it is
> > > possible that between the DMA access from the device and our cache
> > > invalidation, the CPU cache writes back its contents to memory,
> > > overwriting whatever the NVMe just gave us.
> >
> > OK, this makes sense. So if we allocate the buffer from the heap, we
> > should only care about flush on write, right? Can you test this?
>
> If you're talking about having a bounce buffer:  You'd need to flush
> it once upon allocation, because that part of the heap could have also
> be used earlier by someone else.
>

I was not talking about a bounce buffer. I mean a buffer allocated
from the heap and used directly for DMA.

Regards,
Bin
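
A minimal sketch of what that could look like, assuming U-Boot's
memalign() and ARCH_DMA_MINALIGN (the function name is invented for
illustration): the allocation is cache-line aligned and the length is
rounded up so that no cache line is shared with unrelated data. Note
that the flush/invalidate rules still apply to such a buffer; the heap
only makes it less likely that its lines are dirty.

#include <malloc.h>	/* memalign(), free() */
#include <asm/cache.h>	/* ARCH_DMA_MINALIGN */

static void *alloc_dma_buffer(size_t len)
{
	/* Round the length up to a whole number of cache lines. */
	size_t aligned = (len + ARCH_DMA_MINALIGN - 1) &
			 ~(size_t)(ARCH_DMA_MINALIGN - 1);

	return memalign(ARCH_DMA_MINALIGN, aligned);
}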


Re: [U-Boot] [PATCH] nvme: add more cache flushes

2019-10-17 Thread Simon Goldschmidt
On Thu, Oct 17, 2019 at 8:44 AM Patrick Wildt  wrote:
>
> On Thu, Oct 17, 2019 at 10:55:11AM +0800, Bin Meng wrote:
> > Hi Patrick,
> >
> > On Wed, Oct 16, 2019 at 11:35 PM Patrick Wildt  wrote:
> > >
> > > On Wed, Oct 16, 2019 at 06:11:23PM +0800, Bin Meng wrote:
> > > > On Mon, Oct 14, 2019 at 7:11 PM Patrick Wildt  wrote:
> > > > >
> > > > > On an i.MX8MQ our nvme driver doesn't completely work since we are
> > > > > missing a few cache flushes.  One is the prp list, which is an extra
> > > > > buffer that we need to flush before handing it to the hardware.  Also
> > > > > the block read/write operations needs more cache flushes on this SoC.
> > > > >
> > > > > Signed-off-by: Patrick Wildt 
> > > > > ---
> > > > >  drivers/nvme/nvme.c | 15 +--
> > > > >  1 file changed, 9 insertions(+), 6 deletions(-)
> > > > >
> > > > > diff --git a/drivers/nvme/nvme.c b/drivers/nvme/nvme.c
> > > > > index 2444e0270f..69d5e3eedc 100644
> > > > > --- a/drivers/nvme/nvme.c
> > > > > +++ b/drivers/nvme/nvme.c
> > > > > @@ -123,6 +123,9 @@ static int nvme_setup_prps(struct nvme_dev *dev, 
> > > > > u64 *prp2,
> > > > > }
> > > > > *prp2 = (ulong)dev->prp_pool;
> > > > >
> > > > > +   flush_dcache_range((ulong)dev->prp_pool, (ulong)dev->prp_pool 
> > > > > +
> > > > > +  dev->prp_entry_num * sizeof(u64));
> > > > > +
> > > > > return 0;
> > > > >  }
> > > > >
> > > > > @@ -717,9 +720,10 @@ static ulong nvme_blk_rw(struct udevice *udev, 
> > > > > lbaint_t blknr,
> > > > > u16 lbas = 1 << (dev->max_transfer_shift - ns->lba_shift);
> > > > > u64 total_lbas = blkcnt;
> > > > >
> > > > > -   if (!read)
> > > > > -   flush_dcache_range((unsigned long)buffer,
> > > > > -  (unsigned long)buffer + total_len);
> > > > > +   flush_dcache_range((unsigned long)buffer,
> > > > > +  (unsigned long)buffer + total_len);
> > > >
> > > > Why we need this for read?
> > >
> > > I'm no processor engineer, but I have read (and "seen") the following
> > > on ARMs.  If I'm wrong. please correct me.
> > >
> > > Since the buffer is usually allocated cache-aligned on the stack,
> > > it is very possible that this buffer was previously still used
> > > as it's supposed to be used: as stack.  Thus, the caches can still
> > > be filled, and are not yet evicted/flushed to memory.  Now it is
> > > possible that between the DMA access from the device and our cache
> > > invalidation, the CPU cache writes back its contents to memory,
> > > overwriting whatever the NVMe just gave us.
> >
> > OK, this makes sense. So if we allocate the buffer from the heap, we
> > should only care about flush on write, right? Can you test this?
>
> If you're talking about having a bounce buffer:  You'd need to flush
> it once upon allocation, because that part of the heap could have also
> be used earlier by someone else.

And this is exactly what common/bouncebuf.c does ;-)

Regards,
Simon
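
For reference, a rough sketch of how the common/bouncebuf.c helpers are
typically used (API recalled from include/bouncebuf.h, so please
double-check the header; the function name here is illustrative only).
bounce_buffer_start() substitutes an aligned bounce buffer when needed
and does the pre-DMA cache maintenance; bounce_buffer_stop() does the
post-DMA work and, for reads, copies the data back to the caller's
buffer.

#include <bouncebuf.h>

static int read_with_bounce_buffer(void *user_buf, size_t len)
{
	struct bounce_buffer bb;
	int ret;

	/* GEN_BB_READ: data flows from the device into memory. */
	ret = bounce_buffer_start(&bb, user_buf, len, GEN_BB_READ);
	if (ret)
		return ret;

	/* ... program the controller with bb.bounce_buffer as the DMA target ... */

	bounce_buffer_stop(&bb);
	return 0;
}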

>
> Best regards,
> Patrick
>
> > >
> > > > > +   invalidate_dcache_range((unsigned long)buffer,
> > > > > +   (unsigned long)buffer + total_len);
> > > > >
> > > > > c.rw.opcode = read ? nvme_cmd_read : nvme_cmd_write;
> > > > > c.rw.flags = 0;
> > > > > @@ -755,9 +759,8 @@ static ulong nvme_blk_rw(struct udevice *udev, 
> > > > > lbaint_t blknr,
> > > > > buffer += lbas << ns->lba_shift;
> > > > > }
> > > > >
> > > > > -   if (read)
> > > > > -   invalidate_dcache_range((unsigned long)buffer,
> > > > > -   (unsigned long)buffer + 
> > > > > total_len);
> > > > > +   invalidate_dcache_range((unsigned long)buffer,
> > > > > +   (unsigned long)buffer + total_len);
> > > >
> > > > Why we need this for write?
> > >
> > > That's a good point.  After the transaction, if it was a read then
> > > we need to invalidate it, as we might have speculatively read it.
> > > On a write, we don't care about its contents.  I will test it w/o
> > > this chunk and report back.
> > >
> >
> > Regards,
> > Bin


Re: [U-Boot] [PATCH] nvme: add more cache flushes

2019-10-17 Thread Patrick Wildt
On Thu, Oct 17, 2019 at 10:55:11AM +0800, Bin Meng wrote:
> Hi Patrick,
> 
> On Wed, Oct 16, 2019 at 11:35 PM Patrick Wildt  wrote:
> >
> > On Wed, Oct 16, 2019 at 06:11:23PM +0800, Bin Meng wrote:
> > > On Mon, Oct 14, 2019 at 7:11 PM Patrick Wildt  wrote:
> > > >
> > > > On an i.MX8MQ our nvme driver doesn't completely work since we are
> > > > missing a few cache flushes.  One is the prp list, which is an extra
> > > > buffer that we need to flush before handing it to the hardware.  Also
> > > > the block read/write operations needs more cache flushes on this SoC.
> > > >
> > > > Signed-off-by: Patrick Wildt 
> > > > ---
> > > >  drivers/nvme/nvme.c | 15 +--
> > > >  1 file changed, 9 insertions(+), 6 deletions(-)
> > > >
> > > > diff --git a/drivers/nvme/nvme.c b/drivers/nvme/nvme.c
> > > > index 2444e0270f..69d5e3eedc 100644
> > > > --- a/drivers/nvme/nvme.c
> > > > +++ b/drivers/nvme/nvme.c
> > > > @@ -123,6 +123,9 @@ static int nvme_setup_prps(struct nvme_dev *dev, 
> > > > u64 *prp2,
> > > > }
> > > > *prp2 = (ulong)dev->prp_pool;
> > > >
> > > > +   flush_dcache_range((ulong)dev->prp_pool, (ulong)dev->prp_pool +
> > > > +  dev->prp_entry_num * sizeof(u64));
> > > > +
> > > > return 0;
> > > >  }
> > > >
> > > > @@ -717,9 +720,10 @@ static ulong nvme_blk_rw(struct udevice *udev, 
> > > > lbaint_t blknr,
> > > > u16 lbas = 1 << (dev->max_transfer_shift - ns->lba_shift);
> > > > u64 total_lbas = blkcnt;
> > > >
> > > > -   if (!read)
> > > > -   flush_dcache_range((unsigned long)buffer,
> > > > -  (unsigned long)buffer + total_len);
> > > > +   flush_dcache_range((unsigned long)buffer,
> > > > +  (unsigned long)buffer + total_len);
> > >
> > > Why we need this for read?
> >
> > I'm no processor engineer, but I have read (and "seen") the following
> > on ARMs.  If I'm wrong. please correct me.
> >
> > Since the buffer is usually allocated cache-aligned on the stack,
> > it is very possible that this buffer was previously still used
> > as it's supposed to be used: as stack.  Thus, the caches can still
> > be filled, and are not yet evicted/flushed to memory.  Now it is
> > possible that between the DMA access from the device and our cache
> > invalidation, the CPU cache writes back its contents to memory,
> > overwriting whatever the NVMe just gave us.
> 
> OK, this makes sense. So if we allocate the buffer from the heap, we
> should only care about flush on write, right? Can you test this?

If you're talking about having a bounce buffer:  You'd need to flush
it once upon allocation, because that part of the heap could also have
been used earlier by someone else.

Best regards,
Patrick

> >
> > > > +   invalidate_dcache_range((unsigned long)buffer,
> > > > +   (unsigned long)buffer + total_len);
> > > >
> > > > c.rw.opcode = read ? nvme_cmd_read : nvme_cmd_write;
> > > > c.rw.flags = 0;
> > > > @@ -755,9 +759,8 @@ static ulong nvme_blk_rw(struct udevice *udev, 
> > > > lbaint_t blknr,
> > > > buffer += lbas << ns->lba_shift;
> > > > }
> > > >
> > > > -   if (read)
> > > > -   invalidate_dcache_range((unsigned long)buffer,
> > > > -   (unsigned long)buffer + 
> > > > total_len);
> > > > +   invalidate_dcache_range((unsigned long)buffer,
> > > > +   (unsigned long)buffer + total_len);
> > >
> > > Why we need this for write?
> >
> > That's a good point.  After the transaction, if it was a read then
> > we need to invalidate it, as we might have speculatively read it.
> > On a write, we don't care about its contents.  I will test it w/o
> > this chunk and report back.
> >
> 
> Regards,
> Bin


Re: [U-Boot] [PATCH] nvme: add more cache flushes

2019-10-16 Thread Bin Meng
Hi Patrick,

On Wed, Oct 16, 2019 at 11:35 PM Patrick Wildt  wrote:
>
> On Wed, Oct 16, 2019 at 06:11:23PM +0800, Bin Meng wrote:
> > On Mon, Oct 14, 2019 at 7:11 PM Patrick Wildt  wrote:
> > >
> > > On an i.MX8MQ our nvme driver doesn't completely work since we are
> > > missing a few cache flushes.  One is the prp list, which is an extra
> > > buffer that we need to flush before handing it to the hardware.  Also
> > > the block read/write operations needs more cache flushes on this SoC.
> > >
> > > Signed-off-by: Patrick Wildt 
> > > ---
> > >  drivers/nvme/nvme.c | 15 +--
> > >  1 file changed, 9 insertions(+), 6 deletions(-)
> > >
> > > diff --git a/drivers/nvme/nvme.c b/drivers/nvme/nvme.c
> > > index 2444e0270f..69d5e3eedc 100644
> > > --- a/drivers/nvme/nvme.c
> > > +++ b/drivers/nvme/nvme.c
> > > @@ -123,6 +123,9 @@ static int nvme_setup_prps(struct nvme_dev *dev, u64 
> > > *prp2,
> > > }
> > > *prp2 = (ulong)dev->prp_pool;
> > >
> > > +   flush_dcache_range((ulong)dev->prp_pool, (ulong)dev->prp_pool +
> > > +  dev->prp_entry_num * sizeof(u64));
> > > +
> > > return 0;
> > >  }
> > >
> > > @@ -717,9 +720,10 @@ static ulong nvme_blk_rw(struct udevice *udev, 
> > > lbaint_t blknr,
> > > u16 lbas = 1 << (dev->max_transfer_shift - ns->lba_shift);
> > > u64 total_lbas = blkcnt;
> > >
> > > -   if (!read)
> > > -   flush_dcache_range((unsigned long)buffer,
> > > -  (unsigned long)buffer + total_len);
> > > +   flush_dcache_range((unsigned long)buffer,
> > > +  (unsigned long)buffer + total_len);
> >
> > Why we need this for read?
>
> I'm no processor engineer, but I have read (and "seen") the following
> on ARMs.  If I'm wrong. please correct me.
>
> Since the buffer is usually allocated cache-aligned on the stack,
> it is very possible that this buffer was previously still used
> as it's supposed to be used: as stack.  Thus, the caches can still
> be filled, and are not yet evicted/flushed to memory.  Now it is
> possible that between the DMA access from the device and our cache
> invalidation, the CPU cache writes back its contents to memory,
> overwriting whatever the NVMe just gave us.

OK, this makes sense. So if we allocate the buffer from the heap, we
should only care about flush on write, right? Can you test this?

>
> > > +   invalidate_dcache_range((unsigned long)buffer,
> > > +   (unsigned long)buffer + total_len);
> > >
> > > c.rw.opcode = read ? nvme_cmd_read : nvme_cmd_write;
> > > c.rw.flags = 0;
> > > @@ -755,9 +759,8 @@ static ulong nvme_blk_rw(struct udevice *udev, 
> > > lbaint_t blknr,
> > > buffer += lbas << ns->lba_shift;
> > > }
> > >
> > > -   if (read)
> > > -   invalidate_dcache_range((unsigned long)buffer,
> > > -   (unsigned long)buffer + 
> > > total_len);
> > > +   invalidate_dcache_range((unsigned long)buffer,
> > > +   (unsigned long)buffer + total_len);
> >
> > Why we need this for write?
>
> That's a good point.  After the transaction, if it was a read then
> we need to invalidate it, as we might have speculatively read it.
> On a write, we don't care about its contents.  I will test it w/o
> this chunk and report back.
>

Regards,
Bin


Re: [U-Boot] [PATCH] nvme: add more cache flushes

2019-10-16 Thread Patrick Wildt
On Wed, Oct 16, 2019 at 06:11:23PM +0800, Bin Meng wrote:
> On Mon, Oct 14, 2019 at 7:11 PM Patrick Wildt  wrote:
> >
> > On an i.MX8MQ our nvme driver doesn't completely work since we are
> > missing a few cache flushes.  One is the prp list, which is an extra
> > buffer that we need to flush before handing it to the hardware.  Also
> > the block read/write operations needs more cache flushes on this SoC.
> >
> > Signed-off-by: Patrick Wildt 
> > ---
> >  drivers/nvme/nvme.c | 15 +--
> >  1 file changed, 9 insertions(+), 6 deletions(-)
> >
> > diff --git a/drivers/nvme/nvme.c b/drivers/nvme/nvme.c
> > index 2444e0270f..69d5e3eedc 100644
> > --- a/drivers/nvme/nvme.c
> > +++ b/drivers/nvme/nvme.c
> > @@ -123,6 +123,9 @@ static int nvme_setup_prps(struct nvme_dev *dev, u64 
> > *prp2,
> > }
> > *prp2 = (ulong)dev->prp_pool;
> >
> > +   flush_dcache_range((ulong)dev->prp_pool, (ulong)dev->prp_pool +
> > +  dev->prp_entry_num * sizeof(u64));
> > +
> > return 0;
> >  }
> >
> > @@ -717,9 +720,10 @@ static ulong nvme_blk_rw(struct udevice *udev, 
> > lbaint_t blknr,
> > u16 lbas = 1 << (dev->max_transfer_shift - ns->lba_shift);
> > u64 total_lbas = blkcnt;
> >
> > -   if (!read)
> > -   flush_dcache_range((unsigned long)buffer,
> > -  (unsigned long)buffer + total_len);
> > +   flush_dcache_range((unsigned long)buffer,
> > +  (unsigned long)buffer + total_len);
> 
> Why we need this for read?

I'm no processor engineer, but I have read (and "seen") the following
on ARMs.  If I'm wrong, please correct me.

Since the buffer is usually allocated cache-aligned on the stack,
it is very possible that this buffer was previously still used
as it's supposed to be used: as stack.  Thus, the caches can still
be filled, and are not yet evicted/flushed to memory.  Now it is
possible that between the DMA access from the device and our cache
invalidation, the CPU cache writes back its contents to memory,
overwriting whatever the NVMe just gave us.
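
As an illustration of that scenario (a hypothetical sketch, not the
driver's actual code): callers usually get such a buffer via U-Boot's
ALLOC_CACHE_ALIGN_BUFFER(), and the lines backing it may still be dirty
from its earlier life as plain stack memory, so they must be cleaned or
invalidated before the device starts writing into it.

#include <common.h>	/* flush_dcache_range(), invalidate_dcache_range() */
#include <memalign.h>	/* ALLOC_CACHE_ALIGN_BUFFER() */

static void read_sector_example(void)
{
	ALLOC_CACHE_ALIGN_BUFFER(u8, buf, 512);

	/*
	 * Without this, a dirty line covering 'buf' could be evicted and
	 * written back to RAM *after* the NVMe has DMA'd the sector there,
	 * overwriting the freshly read data.
	 */
	flush_dcache_range((unsigned long)buf, (unsigned long)buf + 512);

	/* ... start the read DMA and wait for completion ... */

	/* Drop anything the CPU pulled in speculatively in the meantime. */
	invalidate_dcache_range((unsigned long)buf, (unsigned long)buf + 512);
}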

> > +   invalidate_dcache_range((unsigned long)buffer,
> > +   (unsigned long)buffer + total_len);
> >
> > c.rw.opcode = read ? nvme_cmd_read : nvme_cmd_write;
> > c.rw.flags = 0;
> > @@ -755,9 +759,8 @@ static ulong nvme_blk_rw(struct udevice *udev, lbaint_t 
> > blknr,
> > buffer += lbas << ns->lba_shift;
> > }
> >
> > -   if (read)
> > -   invalidate_dcache_range((unsigned long)buffer,
> > -   (unsigned long)buffer + total_len);
> > +   invalidate_dcache_range((unsigned long)buffer,
> > +   (unsigned long)buffer + total_len);
> 
> Why we need this for write?

That's a good point.  After the transaction, if it was a read then
we need to invalidate it, as we might have speculatively read it.
On a write, we don't care about its contents.  I will test it w/o
this chunk and report back.

Best regards,
Patrick

> >
> > return (total_len - temp_len) >> desc->log2blksz;
> >  }
> > --
> 
> Regards,
> Bin


Re: [U-Boot] [PATCH] nvme: add more cache flushes

2019-10-16 Thread Bin Meng
On Mon, Oct 14, 2019 at 7:11 PM Patrick Wildt  wrote:
>
> On an i.MX8MQ our nvme driver doesn't completely work since we are
> missing a few cache flushes.  One is the prp list, which is an extra
> buffer that we need to flush before handing it to the hardware.  Also
> the block read/write operations needs more cache flushes on this SoC.
>
> Signed-off-by: Patrick Wildt 
> ---
>  drivers/nvme/nvme.c | 15 +--
>  1 file changed, 9 insertions(+), 6 deletions(-)
>
> diff --git a/drivers/nvme/nvme.c b/drivers/nvme/nvme.c
> index 2444e0270f..69d5e3eedc 100644
> --- a/drivers/nvme/nvme.c
> +++ b/drivers/nvme/nvme.c
> @@ -123,6 +123,9 @@ static int nvme_setup_prps(struct nvme_dev *dev, u64 
> *prp2,
> }
> *prp2 = (ulong)dev->prp_pool;
>
> +   flush_dcache_range((ulong)dev->prp_pool, (ulong)dev->prp_pool +
> +  dev->prp_entry_num * sizeof(u64));
> +
> return 0;
>  }
>
> @@ -717,9 +720,10 @@ static ulong nvme_blk_rw(struct udevice *udev, lbaint_t 
> blknr,
> u16 lbas = 1 << (dev->max_transfer_shift - ns->lba_shift);
> u64 total_lbas = blkcnt;
>
> -   if (!read)
> -   flush_dcache_range((unsigned long)buffer,
> -  (unsigned long)buffer + total_len);
> +   flush_dcache_range((unsigned long)buffer,
> +  (unsigned long)buffer + total_len);

Why do we need this for read?

> +   invalidate_dcache_range((unsigned long)buffer,
> +   (unsigned long)buffer + total_len);
>
> c.rw.opcode = read ? nvme_cmd_read : nvme_cmd_write;
> c.rw.flags = 0;
> @@ -755,9 +759,8 @@ static ulong nvme_blk_rw(struct udevice *udev, lbaint_t 
> blknr,
> buffer += lbas << ns->lba_shift;
> }
>
> -   if (read)
> -   invalidate_dcache_range((unsigned long)buffer,
> -   (unsigned long)buffer + total_len);
> +   invalidate_dcache_range((unsigned long)buffer,
> +   (unsigned long)buffer + total_len);

Why do we need this for write?

>
> return (total_len - temp_len) >> desc->log2blksz;
>  }
> --

Regards,
Bin


[U-Boot] [PATCH] nvme: add more cache flushes

2019-10-14 Thread Patrick Wildt
On an i.MX8MQ our nvme driver doesn't completely work since we are
missing a few cache flushes.  One is the prp list, which is an extra
buffer that we need to flush before handing it to the hardware.  Also
the block read/write operations need more cache flushes on this SoC.

Signed-off-by: Patrick Wildt 
---
 drivers/nvme/nvme.c | 15 +--
 1 file changed, 9 insertions(+), 6 deletions(-)

diff --git a/drivers/nvme/nvme.c b/drivers/nvme/nvme.c
index 2444e0270f..69d5e3eedc 100644
--- a/drivers/nvme/nvme.c
+++ b/drivers/nvme/nvme.c
@@ -123,6 +123,9 @@ static int nvme_setup_prps(struct nvme_dev *dev, u64 *prp2,
}
*prp2 = (ulong)dev->prp_pool;
 
+   flush_dcache_range((ulong)dev->prp_pool, (ulong)dev->prp_pool +
+  dev->prp_entry_num * sizeof(u64));
+
return 0;
 }
 
@@ -717,9 +720,10 @@ static ulong nvme_blk_rw(struct udevice *udev, lbaint_t 
blknr,
u16 lbas = 1 << (dev->max_transfer_shift - ns->lba_shift);
u64 total_lbas = blkcnt;
 
-   if (!read)
-   flush_dcache_range((unsigned long)buffer,
-  (unsigned long)buffer + total_len);
+   flush_dcache_range((unsigned long)buffer,
+  (unsigned long)buffer + total_len);
+   invalidate_dcache_range((unsigned long)buffer,
+   (unsigned long)buffer + total_len);
 
c.rw.opcode = read ? nvme_cmd_read : nvme_cmd_write;
c.rw.flags = 0;
@@ -755,9 +759,8 @@ static ulong nvme_blk_rw(struct udevice *udev, lbaint_t 
blknr,
buffer += lbas << ns->lba_shift;
}
 
-   if (read)
-   invalidate_dcache_range((unsigned long)buffer,
-   (unsigned long)buffer + total_len);
+   invalidate_dcache_range((unsigned long)buffer,
+   (unsigned long)buffer + total_len);
 
return (total_len - temp_len) >> desc->log2blksz;
 }
-- 
2.23.0
