Re: [PATCH 1/2] mmc: let the dma map ops handle bouncing

2019-07-08 Thread Ulf Hansson
On Tue, 25 Jun 2019 at 11:21, Christoph Hellwig  wrote:
>
> Just like we do for all other block drivers.  Especially as the limit
> imposed at the moment might be way too pessimistic for iommus.
>
> Signed-off-by: Christoph Hellwig 

From your earlier reply, I decided to fold the following information
into the changelog, to clarify things a bit:

"This also means we are not going to set a bounce limit for the queue, in
case we have a dma mask. On most architectures it was never needed, the
major hold out was x86-32 with PAE, but that has been fixed by now."

Please tell me if you want me to change something.

Applied for next, thanks!

Kind regards
Uffe


> ---
>  drivers/mmc/core/queue.c | 7 ++-
>  1 file changed, 2 insertions(+), 5 deletions(-)
>
> diff --git a/drivers/mmc/core/queue.c b/drivers/mmc/core/queue.c
> index 3557d5c51141..e327f80ebe70 100644
> --- a/drivers/mmc/core/queue.c
> +++ b/drivers/mmc/core/queue.c
> @@ -350,18 +350,15 @@ static const struct blk_mq_ops mmc_mq_ops = {
>  static void mmc_setup_queue(struct mmc_queue *mq, struct mmc_card *card)
>  {
> struct mmc_host *host = card->host;
> -   u64 limit = BLK_BOUNCE_HIGH;
> unsigned block_size = 512;
>
> -   if (mmc_dev(host)->dma_mask && *mmc_dev(host)->dma_mask)
> -   limit = (u64)dma_max_pfn(mmc_dev(host)) << PAGE_SHIFT;
> -
> blk_queue_flag_set(QUEUE_FLAG_NONROT, mq->queue);
> blk_queue_flag_clear(QUEUE_FLAG_ADD_RANDOM, mq->queue);
> if (mmc_can_erase(card))
> mmc_queue_setup_discard(mq->queue, card);
>
> -   blk_queue_bounce_limit(mq->queue, limit);
> +   if (!mmc_dev(host)->dma_mask || !*mmc_dev(host)->dma_mask)
> +   blk_queue_bounce_limit(mq->queue, BLK_BOUNCE_HIGH);
> blk_queue_max_hw_sectors(mq->queue,
> min(host->max_blk_count, host->max_req_size / 512));
> blk_queue_max_segments(mq->queue, host->max_segs);
> --
> 2.20.1
>


[PATCH 1/2] mmc: let the dma map ops handle bouncing

2019-06-25 Thread Christoph Hellwig
Just like we do for all other block drivers.  Especially as the limit
imposed at the moment might be way too pessimistic for iommus.

Signed-off-by: Christoph Hellwig 
---
 drivers/mmc/core/queue.c | 7 ++-
 1 file changed, 2 insertions(+), 5 deletions(-)

diff --git a/drivers/mmc/core/queue.c b/drivers/mmc/core/queue.c
index 3557d5c51141..e327f80ebe70 100644
--- a/drivers/mmc/core/queue.c
+++ b/drivers/mmc/core/queue.c
@@ -350,18 +350,15 @@ static const struct blk_mq_ops mmc_mq_ops = {
 static void mmc_setup_queue(struct mmc_queue *mq, struct mmc_card *card)
 {
struct mmc_host *host = card->host;
-   u64 limit = BLK_BOUNCE_HIGH;
unsigned block_size = 512;
 
-   if (mmc_dev(host)->dma_mask && *mmc_dev(host)->dma_mask)
-   limit = (u64)dma_max_pfn(mmc_dev(host)) << PAGE_SHIFT;
-
blk_queue_flag_set(QUEUE_FLAG_NONROT, mq->queue);
blk_queue_flag_clear(QUEUE_FLAG_ADD_RANDOM, mq->queue);
if (mmc_can_erase(card))
mmc_queue_setup_discard(mq->queue, card);
 
-   blk_queue_bounce_limit(mq->queue, limit);
+   if (!mmc_dev(host)->dma_mask || !*mmc_dev(host)->dma_mask)
+   blk_queue_bounce_limit(mq->queue, BLK_BOUNCE_HIGH);
blk_queue_max_hw_sectors(mq->queue,
min(host->max_blk_count, host->max_req_size / 512));
blk_queue_max_segments(mq->queue, host->max_segs);
-- 
2.20.1

___
iommu mailing list
iommu@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/iommu


Re: [PATCH 1/2] mmc: let the dma map ops handle bouncing

2019-04-11 Thread Christoph Hellwig
On Thu, Apr 11, 2019 at 11:00:56AM +0200, Ulf Hansson wrote:
> > blk_queue_flag_clear(QUEUE_FLAG_ADD_RANDOM, mq->queue);
> > if (mmc_can_erase(card))
> > mmc_queue_setup_discard(mq->queue, card);
> >
> > -   blk_queue_bounce_limit(mq->queue, limit);
> > +   if (!mmc_dev(host)->dma_mask || !*mmc_dev(host)->dma_mask)
> > +   blk_queue_bounce_limit(mq->queue, BLK_BOUNCE_HIGH);
> 
> So this means we are not going to set a bounce limit for the queue, in
> case we have a dma mask.
> 
> Why isn't that needed anymore? What has changed?

On most architectures it was never needed; the major hold-out was x86-32
with PAE.  In general the dma_mask tells the DMA API layer what the
device supports, and if the physical addressing doesn't fit, the DMA
layer has to use bounce buffering like swiotlb (or dmabounce on arm32).
A couple of months ago I finally fixed x86-32 to also properly set up
swiotlb, and removed the block layer bounce buffering that wasn't for
highmem (which is about having a kernel mapping, not addressing) or ISA
DMA (which is not handled like everything else, but we'll get there).
But for some reason I missed mmc back then, so mmc right now is the
only remaining user of address-based block layer bouncing.


Re: [PATCH 1/2] mmc: let the dma map ops handle bouncing

2019-04-11 Thread Ulf Hansson
Hi Christoph,

On Thu, 11 Apr 2019 at 09:10, Christoph Hellwig  wrote:
>
> Just like we do for all other block drivers.  Especially as the limit
> imposed at the moment might be way too pessimistic for iommus.

I would appreciate some information in the changelog, as it's quite
unclear what this change really means.

>
> Signed-off-by: Christoph Hellwig 
> ---
>  drivers/mmc/core/queue.c | 7 ++-
>  1 file changed, 2 insertions(+), 5 deletions(-)
>
> diff --git a/drivers/mmc/core/queue.c b/drivers/mmc/core/queue.c
> index 7c364a9c4eeb..eb9c0692062c 100644
> --- a/drivers/mmc/core/queue.c
> +++ b/drivers/mmc/core/queue.c
> @@ -354,18 +354,15 @@ static const struct blk_mq_ops mmc_mq_ops = {
>  static void mmc_setup_queue(struct mmc_queue *mq, struct mmc_card *card)
>  {
> struct mmc_host *host = card->host;
> -   u64 limit = BLK_BOUNCE_HIGH;
> unsigned block_size = 512;
>
> -   if (mmc_dev(host)->dma_mask && *mmc_dev(host)->dma_mask)
> -   limit = (u64)dma_max_pfn(mmc_dev(host)) << PAGE_SHIFT;
> -
> blk_queue_flag_set(QUEUE_FLAG_NONROT, mq->queue);
> blk_queue_flag_clear(QUEUE_FLAG_ADD_RANDOM, mq->queue);
> if (mmc_can_erase(card))
> mmc_queue_setup_discard(mq->queue, card);
>
> -   blk_queue_bounce_limit(mq->queue, limit);
> +   if (!mmc_dev(host)->dma_mask || !*mmc_dev(host)->dma_mask)
> +   blk_queue_bounce_limit(mq->queue, BLK_BOUNCE_HIGH);

So this means we are not going to set a bounce limit for the queue, in
case we have a dma mask.

Why isn't that needed anymore? What has changed?

> blk_queue_max_hw_sectors(mq->queue,
> min(host->max_blk_count, host->max_req_size / 512));
> blk_queue_max_segments(mq->queue, host->max_segs);
> --
> 2.20.1
>

Kind regards
Uffe


[PATCH 1/2] mmc: let the dma map ops handle bouncing

2019-04-11 Thread Christoph Hellwig
Just like we do for all other block drivers.  Especially as the limit
imposed at the moment might be way too pessimistic for iommus.

Signed-off-by: Christoph Hellwig 
---
 drivers/mmc/core/queue.c | 7 ++-
 1 file changed, 2 insertions(+), 5 deletions(-)

diff --git a/drivers/mmc/core/queue.c b/drivers/mmc/core/queue.c
index 7c364a9c4eeb..eb9c0692062c 100644
--- a/drivers/mmc/core/queue.c
+++ b/drivers/mmc/core/queue.c
@@ -354,18 +354,15 @@ static const struct blk_mq_ops mmc_mq_ops = {
 static void mmc_setup_queue(struct mmc_queue *mq, struct mmc_card *card)
 {
struct mmc_host *host = card->host;
-   u64 limit = BLK_BOUNCE_HIGH;
unsigned block_size = 512;
 
-   if (mmc_dev(host)->dma_mask && *mmc_dev(host)->dma_mask)
-   limit = (u64)dma_max_pfn(mmc_dev(host)) << PAGE_SHIFT;
-
blk_queue_flag_set(QUEUE_FLAG_NONROT, mq->queue);
blk_queue_flag_clear(QUEUE_FLAG_ADD_RANDOM, mq->queue);
if (mmc_can_erase(card))
mmc_queue_setup_discard(mq->queue, card);
 
-   blk_queue_bounce_limit(mq->queue, limit);
+   if (!mmc_dev(host)->dma_mask || !*mmc_dev(host)->dma_mask)
+   blk_queue_bounce_limit(mq->queue, BLK_BOUNCE_HIGH);
blk_queue_max_hw_sectors(mq->queue,
min(host->max_blk_count, host->max_req_size / 512));
blk_queue_max_segments(mq->queue, host->max_segs);
-- 
2.20.1
