Re: [PATCH] mm: dmapool: use provided gfp flags for all dma_alloc_coherent() calls

2012-11-20 Thread Andrew Morton
On Tue, 20 Nov 2012 11:48:44 +0100
Marek Szyprowski  wrote:

> > As this patch is already in -next and is stuck there for two more
> > weeks I can't (or at least won't) merge this patch, so I can't help
> > with any of the above.
> 
> I will fix both issues in the next version of the patch. Would you like to
> merge it into your tree, or should I keep it in my dma-mapping tree?

The patch looks OK to me now (still wondering about the -stable
question though).

It would be a bit of a pain for me to carry a patch which conflicts
with one in linux-next, and this patch doesn't appear to conflict with
the other pending dmapool.c patches in -mm so you may as well keep it
in the dma-mapping tree now.

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: [PATCH] mm: dmapool: use provided gfp flags for all dma_alloc_coherent() calls

2012-11-20 Thread Marek Szyprowski

Hello,

On 11/19/2012 11:48 PM, Andrew Morton wrote:
> On Sun, 18 Nov 2012 19:18:46 -0500
> Jason Cooper  wrote:
>
> > I've added the maintainers for mm/*.  Hopefully they can let us know if
> > this is good for v3.8...
>
> As Marek has inexplicably put this patch into linux-next via his tree,
> we don't appear to be getting a say in the matter!

I've just put this patch into linux-next via my dma-mapping tree to give it
some testing asap and to check whether other changes to arm dma-mapping are
required.

> The patch looks good to me.  That open-coded wait loop predates the
> creation of the bitkeeper tree(!) but doesn't appear to be needed.  There
> will perhaps be some behavioural changes observable for GFP_KERNEL
> callers as dma_pool_alloc() will no longer dip into page reserves but I
> see nothing special about dma_pool_alloc() which justifies doing that
> anyway.
>
> The patch makes pool->waitq and its manipulation obsolete, but it
> failed to remove all that stuff.

Right, I missed that part; I will update it asap.

> The changelog failed to describe the problem which Soren reported.
> That should be included, and as the problem sounds fairly serious we
> might decide to backport the fix into -stable kernels.

Ok, I will extend the changelog.

> dma_pool_alloc()'s use of a local "struct dma_page *page" is
> distressing - MM developers very much expect a local called "page" to
> have type "struct page *".  But that's a separate issue.

I will prepare a separate patch cleaning that up. I was also a bit surprised
by the naming scheme, but it is probably related to the fact that this
code has not been touched much since very ancient times.

> As this patch is already in -next and is stuck there for two more
> weeks I can't (or at least won't) merge this patch, so I can't help
> with any of the above.

I will fix both issues in the next version of the patch. Would you like to
merge it into your tree, or should I keep it in my dma-mapping tree?

Best regards
--
Marek Szyprowski
Samsung Poland R&D Center


Re: [PATCH] mm: dmapool: use provided gfp flags for all dma_alloc_coherent() calls

2012-11-19 Thread Andrew Morton
On Sun, 18 Nov 2012 19:18:46 -0500
Jason Cooper  wrote:

> I've added the maintainers for mm/*.  Hopefully they can let us know if
> this is good for v3.8...

As Marek has inexplicably put this patch into linux-next via his tree,
we don't appear to be getting a say in the matter!

The patch looks good to me.  That open-coded wait loop predates the
creation of the bitkeeper tree(!) but doesn't appear to be needed.  There
will perhaps be some behavioural changes observable for GFP_KERNEL
callers as dma_pool_alloc() will no longer dip into page reserves but I
see nothing special about dma_pool_alloc() which justifies doing that
anyway.

The patch makes pool->waitq and its manipulation obsolete, but it
failed to remove all that stuff.
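
For reference, the leftover pieces in question look roughly like this (a
sketch against the pre-patch mm/dmapool.c, not the actual follow-up change;
exact context lines and hunk locations may differ):

```diff
--- a/mm/dmapool.c
+++ b/mm/dmapool.c
@@ struct dma_pool {		/* the pool */
 	size_t allocation;
 	char name[32];
-	wait_queue_head_t waitq;
 	struct list_head pools;
 };
@@ struct dma_pool *dma_pool_create(const char *name, ...)
-	init_waitqueue_head(&retval->waitq);
@@ void dma_pool_free(struct dma_pool *pool, void *vaddr, dma_addr_t dma)
-	if (waitqueue_active(&pool->waitq))
-		wake_up_locked(&pool->waitq);
```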

The changelog failed to describe the problem which Soren reported. 
That should be included, and as the problem sounds fairly serious we
might decide to backport the fix into -stable kernels.

dma_pool_alloc()'s use of a local "struct dma_page *page" is
distressing - MM developers very much expect a local called "page" to
have type "struct page *".  But that's a separate issue.

As this patch is already in -next and is stuck there for two more
weeks I can't (or at least won't) merge this patch, so I can't help
with any of the above.



Re: [PATCH] mm: dmapool: use provided gfp flags for all dma_alloc_coherent() calls

2012-11-18 Thread Jason Cooper
Marek,

I've added the maintainers for mm/*.  Hopefully they can let us know if
this is good for v3.8...

thx,

Jason.

On Thu, Nov 08, 2012 at 07:38:57AM +0100, Marek Szyprowski wrote:
> dmapool always calls dma_alloc_coherent() with GFP_ATOMIC flag, regardless
> of the flags provided by the caller. This causes excessive pruning of
> emergency memory pools without any good reason. This patch changes the code
> to correctly use gfp flags provided by the dmapool caller. This should
> solve the dmapool usage on ARM architecture, where GFP_ATOMIC DMA
> allocations can be served only from the special, very limited memory pool.
> 
> Reported-by: Soren Moch 
> Reported-by: Thomas Petazzoni 
> Signed-off-by: Marek Szyprowski 
> ---
>  mm/dmapool.c |   27 +++++++--------------------
>  1 file changed, 7 insertions(+), 20 deletions(-)
> 
> diff --git a/mm/dmapool.c b/mm/dmapool.c
> index c5ab33b..86de9b2 100644
> --- a/mm/dmapool.c
> +++ b/mm/dmapool.c
> @@ -62,8 +62,6 @@ struct dma_page {	/* cacheable header for 'allocation' bytes */
>  	unsigned int offset;
>  };
>  
> -#define	POOL_TIMEOUT_JIFFIES	((100 /* msec */ * HZ) / 1000)
> -
>  static DEFINE_MUTEX(pools_lock);
>  
>  static ssize_t
> @@ -227,7 +225,6 @@ static struct dma_page *pool_alloc_page(struct dma_pool *pool, gfp_t mem_flags)
>  		memset(page->vaddr, POOL_POISON_FREED, pool->allocation);
>  #endif
>  		pool_initialise_page(pool, page);
> -		list_add(&page->page_list, &pool->page_list);
>  		page->in_use = 0;
>  		page->offset = 0;
>  	} else {
> @@ -315,30 +312,21 @@ void *dma_pool_alloc(struct dma_pool *pool, gfp_t mem_flags,
>  	might_sleep_if(mem_flags & __GFP_WAIT);
>  
>  	spin_lock_irqsave(&pool->lock, flags);
> - restart:
>  	list_for_each_entry(page, &pool->page_list, page_list) {
>  		if (page->offset < pool->allocation)
>  			goto ready;
>  	}
> -	page = pool_alloc_page(pool, GFP_ATOMIC);
> -	if (!page) {
> -		if (mem_flags & __GFP_WAIT) {
> -			DECLARE_WAITQUEUE(wait, current);
>  
> -			__set_current_state(TASK_UNINTERRUPTIBLE);
> -			__add_wait_queue(&pool->waitq, &wait);
> -			spin_unlock_irqrestore(&pool->lock, flags);
> +	/* pool_alloc_page() might sleep, so temporarily drop &pool->lock */
> +	spin_unlock_irqrestore(&pool->lock, flags);
>  
> -			schedule_timeout(POOL_TIMEOUT_JIFFIES);
> +	page = pool_alloc_page(pool, mem_flags);
> +	if (!page)
> +		return NULL;
>  
> -			spin_lock_irqsave(&pool->lock, flags);
> -			__remove_wait_queue(&pool->waitq, &wait);
> -			goto restart;
> -		}
> -		retval = NULL;
> -		goto done;
> -	}
> +	spin_lock_irqsave(&pool->lock, flags);
>  
> +	list_add(&page->page_list, &pool->page_list);
>   ready:
>  	page->in_use++;
>  	offset = page->offset;
> @@ -348,7 +336,6 @@ void *dma_pool_alloc(struct dma_pool *pool, gfp_t mem_flags,
>  #ifdef	DMAPOOL_DEBUG
>  	memset(retval, POOL_POISON_ALLOCATED, pool->size);
>  #endif
> - done:
>  	spin_unlock_irqrestore(&pool->lock, flags);
>  	return retval;
>  }
> -- 
> 1.7.9.5
> 
> 
> ___
> linux-arm-kernel mailing list
> linux-arm-ker...@lists.infradead.org
> http://lists.infradead.org/mailman/listinfo/linux-arm-kernel


Re: [PATCH] mm: dmapool: use provided gfp flags for all dma_alloc_coherent() calls

2012-11-12 Thread Soeren Moch

On 12.11.2012 11:38, Andrew Lunn wrote:
> On Mon, Nov 12, 2012 at 10:48:02AM +0100, Soeren Moch wrote:
>> On 11.11.2012 18:22, Andrew Lunn wrote:
>>> On Thu, Nov 08, 2012 at 07:38:57AM +0100, Marek Szyprowski wrote:
>>>> dmapool always calls dma_alloc_coherent() with GFP_ATOMIC flag, regardless
>>>> of the flags provided by the caller. This causes excessive pruning of
>>>> emergency memory pools without any good reason. This patch changes the code
>>>> to correctly use gfp flags provided by the dmapool caller. This should
>>>> solve the dmapool usage on ARM architecture, where GFP_ATOMIC DMA
>>>> allocations can be served only from the special, very limited memory pool.
>>>>
>>>> Reported-by: Soren Moch 
>>
>> Please use
>> Reported-by: Soeren Moch 
>>
>>>> Reported-by: Thomas Petazzoni 
>>>> Signed-off-by: Marek Szyprowski 
>>>
>>> Tested-by: Andrew Lunn 
>>>
>>> I tested this on a Kirkwood QNAP after removing the call to
>>> init_dma_coherent_pool_size().
>>>
>>> Andrew
>>
>> Tested-by: Soeren Moch 
>>
>> Now I had a chance to test this patch on my Kirkwood guruplug
>> system with linux-3.6.6 . It is running much better now, but with the
>> original 256K coherent pool size I still see errors after several hours
>> of runtime:
>>
>> Nov 12 09:42:32 guru kernel: ERROR: 256 KiB atomic DMA coherent pool is too small!
>> Nov 12 09:42:32 guru kernel: Please increase it with coherent_pool= kernel parameter!
>
> Hi Soeren
>
> Could you tell us what DVB devices you are using.
>
> Thanks
> Andrew

from lsusb:
Bus 001 Device 005: ID 0ccd:00b2 TerraTec Electronic GmbH
Bus 001 Device 006: ID 2040:5200 Hauppauge
Bus 001 Device 009: ID 2304:0242 Pinnacle Systems, Inc.

If you want to check the drivers, I recommend starting with "em28xx".

Regards,
Soeren


Re: [PATCH] mm: dmapool: use provided gfp flags for all dma_alloc_coherent() calls

2012-11-12 Thread Andrew Lunn
On Mon, Nov 12, 2012 at 10:48:02AM +0100, Soeren Moch wrote:
> On 11.11.2012 18:22, Andrew Lunn wrote:
> > On Thu, Nov 08, 2012 at 07:38:57AM +0100, Marek Szyprowski wrote:
> >> dmapool always calls dma_alloc_coherent() with GFP_ATOMIC flag, regardless
> >> of the flags provided by the caller. This causes excessive pruning of
> >> emergency memory pools without any good reason. This patch changes the code
> >> to correctly use gfp flags provided by the dmapool caller. This should
> >> solve the dmapool usage on ARM architecture, where GFP_ATOMIC DMA
> >> allocations can be served only from the special, very limited memory pool.
> >>
> >> Reported-by: Soren Moch 
> Please use
> Reported-by: Soeren Moch 
> 
> >> Reported-by: Thomas Petazzoni 
> >> Signed-off-by: Marek Szyprowski 
> >
> > Tested-by: Andrew Lunn 
> >
> > I tested this on a Kirkwood QNAP after removing the call to
> > init_dma_coherent_pool_size().
> >
> > Andrew
> 
> Tested-by: Soeren Moch 
> 
> Now I had a chance to test this patch on my Kirkwood guruplug
> system with linux-3.6.6 . It is running much better now, but with the
> original 256K coherent pool size I still see errors after several hours
> of runtime:
> 
> Nov 12 09:42:32 guru kernel: ERROR: 256 KiB atomic DMA coherent pool is too small!
> Nov 12 09:42:32 guru kernel: Please increase it with coherent_pool= kernel parameter!

Hi Soeren

Could you tell us what DVB devices you are using.

Thanks
Andrew


Re: [PATCH] mm: dmapool: use provided gfp flags for all dma_alloc_coherent() calls

2012-11-12 Thread Soeren Moch

On 11.11.2012 18:22, Andrew Lunn wrote:
> On Thu, Nov 08, 2012 at 07:38:57AM +0100, Marek Szyprowski wrote:
>> dmapool always calls dma_alloc_coherent() with GFP_ATOMIC flag, regardless
>> of the flags provided by the caller. This causes excessive pruning of
>> emergency memory pools without any good reason. This patch changes the code
>> to correctly use gfp flags provided by the dmapool caller. This should
>> solve the dmapool usage on ARM architecture, where GFP_ATOMIC DMA
>> allocations can be served only from the special, very limited memory pool.

>>
>> Reported-by: Soren Moch 
Please use
Reported-by: Soeren Moch 

>> Reported-by: Thomas Petazzoni 
>> Signed-off-by: Marek Szyprowski 
>
> Tested-by: Andrew Lunn 
>
> I tested this on a Kirkwood QNAP after removing the call to
> init_dma_coherent_pool_size().
>
> Andrew

Tested-by: Soeren Moch 

Now I had a chance to test this patch on my Kirkwood guruplug
system with linux-3.6.6 . It is running much better now, but with the
original 256K coherent pool size I still see errors after several hours
of runtime:

Nov 12 09:42:32 guru kernel: ERROR: 256 KiB atomic DMA coherent pool is too small!
Nov 12 09:42:32 guru kernel: Please increase it with coherent_pool= kernel parameter!
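
For anyone hitting the same message: the atomic coherent pool size is set
with the coherent_pool= kernel boot parameter, as the error text suggests.
The value below is only an illustration, not a size recommended anywhere in
this thread:

```
# appended to the kernel command line in the bootloader configuration
coherent_pool=1M
```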


  Soeren

>> ---
>>  mm/dmapool.c |   27 +++++++--------------------
>>  1 file changed, 7 insertions(+), 20 deletions(-)
>>
>> diff --git a/mm/dmapool.c b/mm/dmapool.c
>> index c5ab33b..86de9b2 100644
>> --- a/mm/dmapool.c
>> +++ b/mm/dmapool.c
>> @@ -62,8 +62,6 @@ struct dma_page {	/* cacheable header for 'allocation' bytes */
>>  	unsigned int offset;
>>  };
>>  
>> -#define	POOL_TIMEOUT_JIFFIES	((100 /* msec */ * HZ) / 1000)
>> -
>>  static DEFINE_MUTEX(pools_lock);
>>  
>>  static ssize_t
>> @@ -227,7 +225,6 @@ static struct dma_page *pool_alloc_page(struct dma_pool *pool, gfp_t mem_flags)
>>  		memset(page->vaddr, POOL_POISON_FREED, pool->allocation);
>>  #endif
>>  		pool_initialise_page(pool, page);
>> -		list_add(&page->page_list, &pool->page_list);
>>  		page->in_use = 0;
>>  		page->offset = 0;
>>  	} else {
>> @@ -315,30 +312,21 @@ void *dma_pool_alloc(struct dma_pool *pool, gfp_t mem_flags,
>>  	might_sleep_if(mem_flags & __GFP_WAIT);
>>  
>>  	spin_lock_irqsave(&pool->lock, flags);
>> - restart:
>>  	list_for_each_entry(page, &pool->page_list, page_list) {
>>  		if (page->offset < pool->allocation)
>>  			goto ready;
>>  	}
>> -	page = pool_alloc_page(pool, GFP_ATOMIC);
>> -	if (!page) {
>> -		if (mem_flags & __GFP_WAIT) {
>> -			DECLARE_WAITQUEUE(wait, current);
>>  
>> -			__set_current_state(TASK_UNINTERRUPTIBLE);
>> -			__add_wait_queue(&pool->waitq, &wait);
>> -			spin_unlock_irqrestore(&pool->lock, flags);
>> +	/* pool_alloc_page() might sleep, so temporarily drop &pool->lock */
>> +	spin_unlock_irqrestore(&pool->lock, flags);
>>  
>> -			schedule_timeout(POOL_TIMEOUT_JIFFIES);
>> +	page = pool_alloc_page(pool, mem_flags);
>> +	if (!page)
>> +		return NULL;
>>  
>> -			spin_lock_irqsave(&pool->lock, flags);
>> -			__remove_wait_queue(&pool->waitq, &wait);
>> -			goto restart;
>> -		}
>> -		retval = NULL;
>> -		goto done;
>> -	}
>> +	spin_lock_irqsave(&pool->lock, flags);
>>  
>> +	list_add(&page->page_list, &pool->page_list);
>>   ready:
>>  	page->in_use++;
>>  	offset = page->offset;
>> @@ -348,7 +336,6 @@ void *dma_pool_alloc(struct dma_pool *pool, gfp_t mem_flags,
>>  #ifdef	DMAPOOL_DEBUG
>>  	memset(retval, POOL_POISON_ALLOCATED, pool->size);
>>  #endif
>> - done:
>>  	spin_unlock_irqrestore(&pool->lock, flags);
>>  	return retval;
>>  }
>> -- 
>> 1.7.9.5
>>

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: [PATCH] mm: dmapool: use provided gfp flags for all dma_alloc_coherent() calls

2012-11-12 Thread Soeren Moch

On 11.11.2012 18:22, Andrew Lunn wrote:
 On Thu, Nov 08, 2012 at 07:38:57AM +0100, Marek Szyprowski wrote:
 dmapool always calls dma_alloc_coherent() with GFP_ATOMIC flag, 
regardless

 the flags provided by the caller. This causes excessive pruning of
 emergency memory pools without any good reason. This patch changes 
the code

 to correctly use gfp flags provided by the dmapool caller. This should
 solve the dmapool usage on ARM architecture, where GFP_ATOMIC DMA
 allocations can be served only from the special, very limited memory 
pool.


 Reported-by: Soren Moch sm...@web.de
Please use
Reported-by: Soeren Moch sm...@web.de

 Reported-by: Thomas Petazzoni thomas.petazz...@free-electrons.com
 Signed-off-by: Marek Szyprowski m.szyprow...@samsung.com

 Tested-by: Andrew Lunn and...@lunn.ch

 I tested this on a Kirkwood QNAP after removing the call to
 init_dma_coherent_pool_size().

 Andrew

Tested-by: Soeren Moch sm...@web.de

Now I had a chance to test this patch on my Kirkwood guruplug
system with linux-3.6.6 . It is running much better now, but with the
original 256K coherent pool size I still see errors after several hours
of runtime:

Nov 12 09:42:32 guru kernel: ERROR: 256 KiB atomic DMA coherent pool is 
too small!
Nov 12 09:42:32 guru kernel: Please increase it with coherent_pool= 
kernel parameter!


  Soeren

 ---
  mm/dmapool.c |   27 +++
  1 file changed, 7 insertions(+), 20 deletions(-)

 diff --git a/mm/dmapool.c b/mm/dmapool.c
 index c5ab33b..86de9b2 100644
 --- a/mm/dmapool.c
 +++ b/mm/dmapool.c
 @@ -62,8 +62,6 @@ struct dma_page {/* cacheable header for 
'allocation' bytes */

  unsigned int offset;
  };

 -#definePOOL_TIMEOUT_JIFFIES((100 /* msec */ * HZ) / 1000)
 -
  static DEFINE_MUTEX(pools_lock);

  static ssize_t
 @@ -227,7 +225,6 @@ static struct dma_page *pool_alloc_page(struct 
dma_pool *pool, gfp_t mem_flags)

  memset(page-vaddr, POOL_POISON_FREED, pool-allocation);
  #endif
  pool_initialise_page(pool, page);
 -list_add(page-page_list, pool-page_list);
  page-in_use = 0;
  page-offset = 0;
  } else {
 @@ -315,30 +312,21 @@ void *dma_pool_alloc(struct dma_pool *pool, 
gfp_t mem_flags,

  might_sleep_if(mem_flags  __GFP_WAIT);

  spin_lock_irqsave(pool-lock, flags);
 - restart:
  list_for_each_entry(page, pool-page_list, page_list) {
  if (page-offset  pool-allocation)
  goto ready;
  }
 -page = pool_alloc_page(pool, GFP_ATOMIC);
 -if (!page) {
 -if (mem_flags  __GFP_WAIT) {
 -DECLARE_WAITQUEUE(wait, current);

 -__set_current_state(TASK_UNINTERRUPTIBLE);
 -__add_wait_queue(pool-waitq, wait);
 -spin_unlock_irqrestore(pool-lock, flags);
 +/* pool_alloc_page() might sleep, so temporarily drop 
pool-lock */

 +spin_unlock_irqrestore(pool-lock, flags);

 -schedule_timeout(POOL_TIMEOUT_JIFFIES);
 +page = pool_alloc_page(pool, mem_flags);
 +if (!page)
 +return NULL;

 -spin_lock_irqsave(pool-lock, flags);
 -__remove_wait_queue(pool-waitq, wait);
 -goto restart;
 -}
 -retval = NULL;
 -goto done;
 -}
 +spin_lock_irqsave(pool-lock, flags);

 +list_add(page-page_list, pool-page_list);
   ready:
  page-in_use++;
  offset = page-offset;
 @@ -348,7 +336,6 @@ void *dma_pool_alloc(struct dma_pool *pool, 
gfp_t mem_flags,

  #ifdefDMAPOOL_DEBUG
  memset(retval, POOL_POISON_ALLOCATED, pool-size);
  #endif
 - done:
  spin_unlock_irqrestore(pool-lock, flags);
  return retval;
  }
 --
 1.7.9.5


--
To unsubscribe from this list: send the line unsubscribe linux-kernel in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: [PATCH] mm: dmapool: use provided gfp flags for all dma_alloc_coherent() calls

2012-11-12 Thread Andrew Lunn
On Mon, Nov 12, 2012 at 10:48:02AM +0100, Soeren Moch wrote:
 On 11.11.2012 18:22, Andrew Lunn wrote:
  On Thu, Nov 08, 2012 at 07:38:57AM +0100, Marek Szyprowski wrote:
  dmapool always calls dma_alloc_coherent() with GFP_ATOMIC flag,
 regardless
  the flags provided by the caller. This causes excessive pruning of
  emergency memory pools without any good reason. This patch
 changes the code
  to correctly use gfp flags provided by the dmapool caller. This should
  solve the dmapool usage on ARM architecture, where GFP_ATOMIC DMA
  allocations can be served only from the special, very limited
 memory pool.
 
  Reported-by: Soren Moch sm...@web.de
 Please use
 Reported-by: Soeren Moch sm...@web.de
 
  Reported-by: Thomas Petazzoni thomas.petazz...@free-electrons.com
  Signed-off-by: Marek Szyprowski m.szyprow...@samsung.com
 
  Tested-by: Andrew Lunn and...@lunn.ch
 
  I tested this on a Kirkwood QNAP after removing the call to
  init_dma_coherent_pool_size().
 
  Andrew
 
 Tested-by: Soeren Moch sm...@web.de
 
 Now I had a chance to test this patch on my Kirkwood guruplug
 system with linux-3.6.6 . It is running much better now, but with the
 original 256K coherent pool size I still see errors after several hours
 of runtime:
 
 Nov 12 09:42:32 guru kernel: ERROR: 256 KiB atomic DMA coherent pool
 is too small!
 Nov 12 09:42:32 guru kernel: Please increase it with coherent_pool=
 kernel parameter!

Hi Soeren

Could you tell us what DVB devices you are using.

Thanks
Andrew
--
To unsubscribe from this list: send the line unsubscribe linux-kernel in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: [PATCH] mm: dmapool: use provided gfp flags for all dma_alloc_coherent() calls

2012-11-12 Thread Soeren Moch

On 12.11.2012 11:38, Andrew Lunn wrote:

On Mon, Nov 12, 2012 at 10:48:02AM +0100, Soeren Moch wrote:

On 11.11.2012 18:22, Andrew Lunn wrote:

On Thu, Nov 08, 2012 at 07:38:57AM +0100, Marek Szyprowski wrote:

dmapool always calls dma_alloc_coherent() with GFP_ATOMIC flag,

regardless

the flags provided by the caller. This causes excessive pruning of
emergency memory pools without any good reason. This patch

changes the code

to correctly use gfp flags provided by the dmapool caller. This should
solve the dmapool usage on ARM architecture, where GFP_ATOMIC DMA
allocations can be served only from the special, very limited

memory pool.

Reported-by: Soren Moch sm...@web.de

Please use
Reported-by: Soeren Moch sm...@web.de


Reported-by: Thomas Petazzoni thomas.petazz...@free-electrons.com
Signed-off-by: Marek Szyprowski m.szyprow...@samsung.com

Tested-by: Andrew Lunn and...@lunn.ch

I tested this on a Kirkwood QNAP after removing the call to
init_dma_coherent_pool_size().

 Andrew

Tested-by: Soeren Moch sm...@web.de

Now I had a chance to test this patch on my Kirkwood guruplug
system with linux-3.6.6 . It is running much better now, but with the
original 256K coherent pool size I still see errors after several hours
of runtime:

Nov 12 09:42:32 guru kernel: ERROR: 256 KiB atomic DMA coherent pool
is too small!
Nov 12 09:42:32 guru kernel: Please increase it with coherent_pool=
kernel parameter!

Hi Soeren

Could you tell us what DVB devices you are using.

Thanks
Andrew


from lsusb:
Bus 001 Device 005: ID 0ccd:00b2 TerraTec Electronic GmbH
Bus 001 Device 006: ID 2040:5200 Hauppauge
Bus 001 Device 009: ID 2304:0242 Pinnacle Systems, Inc.

If you want to check the drivers, I recommend to start with em28xx.

Regards,
Soeren
--
To unsubscribe from this list: send the line unsubscribe linux-kernel in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
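
[Editor's note: a minimal user-space model of the failure mode reported above. The names (dma_alloc_coherent_model, atomic_pool_left) are illustrative stand-ins, not kernel APIs: on ARM, GFP_ATOMIC coherent allocations are carved out of a small fixed atomic pool (sized by coherent_pool=), while GFP_KERNEL allocations go through the regular page allocator. Forcing every dmapool allocation to GFP_ATOMIC, as the old code did, drains that reserve; honouring the caller's gfp flags leaves it for genuine atomic contexts.]

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

enum gfp { GFP_KERNEL, GFP_ATOMIC };

static size_t atomic_pool_left = 256 * 1024;	/* default 256 KiB atomic pool */

/* returns true if the coherent allocation could be satisfied */
static bool dma_alloc_coherent_model(size_t size, enum gfp flags)
{
	if (flags == GFP_ATOMIC) {
		/* must come from the fixed reserve */
		if (size > atomic_pool_left)
			return false;	/* "atomic DMA coherent pool is too small!" */
		atomic_pool_left -= size;
		return true;
	}
	return true;	/* GFP_KERNEL: served from the normal page allocator */
}
```

In this toy model, repeated GFP_ATOMIC requests eventually fail once the reserve is exhausted, while GFP_KERNEL requests of the same size keep succeeding, which is exactly why passing the caller's flags through matters.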


Re: [PATCH] mm: dmapool: use provided gfp flags for all dma_alloc_coherent() calls

2012-11-11 Thread Andrew Lunn
On Thu, Nov 08, 2012 at 07:38:57AM +0100, Marek Szyprowski wrote:
> dmapool always calls dma_alloc_coherent() with GFP_ATOMIC flag, regardless of
> the flags provided by the caller. This causes excessive pruning of
> emergency memory pools without any good reason. This patch changes the code
> to correctly use gfp flags provided by the dmapool caller. This should
> solve the dmapool usage on ARM architecture, where GFP_ATOMIC DMA
> allocations can be served only from the special, very limited memory pool.
> 
> Reported-by: Soren Moch 
> Reported-by: Thomas Petazzoni 
> Signed-off-by: Marek Szyprowski 

Tested-by: Andrew Lunn 

I tested this on a Kirkwood QNAP after removing the call to
init_dma_coherent_pool_size().

Andrew


> ---
>  mm/dmapool.c |   27 +++++++--------------------
>  1 file changed, 7 insertions(+), 20 deletions(-)
> 
> diff --git a/mm/dmapool.c b/mm/dmapool.c
> index c5ab33b..86de9b2 100644
> --- a/mm/dmapool.c
> +++ b/mm/dmapool.c
> @@ -62,8 +62,6 @@ struct dma_page {	/* cacheable header for 'allocation' bytes */
>  	unsigned int offset;
>  };
>  
> -#define	POOL_TIMEOUT_JIFFIES	((100 /* msec */ * HZ) / 1000)
> -
>  static DEFINE_MUTEX(pools_lock);
>  
>  static ssize_t
> @@ -227,7 +225,6 @@ static struct dma_page *pool_alloc_page(struct dma_pool *pool, gfp_t mem_flags)
>  		memset(page->vaddr, POOL_POISON_FREED, pool->allocation);
>  #endif
>  		pool_initialise_page(pool, page);
> -		list_add(&page->page_list, &pool->page_list);
>  		page->in_use = 0;
>  		page->offset = 0;
>  	} else {
> @@ -315,30 +312,21 @@ void *dma_pool_alloc(struct dma_pool *pool, gfp_t mem_flags,
>  	might_sleep_if(mem_flags & __GFP_WAIT);
>  
>  	spin_lock_irqsave(&pool->lock, flags);
> - restart:
>  	list_for_each_entry(page, &pool->page_list, page_list) {
>  		if (page->offset < pool->allocation)
>  			goto ready;
>  	}
> -	page = pool_alloc_page(pool, GFP_ATOMIC);
> -	if (!page) {
> -		if (mem_flags & __GFP_WAIT) {
> -			DECLARE_WAITQUEUE(wait, current);
>  
> -			__set_current_state(TASK_UNINTERRUPTIBLE);
> -			__add_wait_queue(&pool->waitq, &wait);
> -			spin_unlock_irqrestore(&pool->lock, flags);
> +	/* pool_alloc_page() might sleep, so temporarily drop &pool->lock */
> +	spin_unlock_irqrestore(&pool->lock, flags);
>  
> -			schedule_timeout(POOL_TIMEOUT_JIFFIES);
> +	page = pool_alloc_page(pool, mem_flags);
> +	if (!page)
> +		return NULL;
>  
> -			spin_lock_irqsave(&pool->lock, flags);
> -			__remove_wait_queue(&pool->waitq, &wait);
> -			goto restart;
> -		}
> -		retval = NULL;
> -		goto done;
> -	}
> +	spin_lock_irqsave(&pool->lock, flags);
>  
> +	list_add(&page->page_list, &pool->page_list);
>   ready:
>  	page->in_use++;
>  	offset = page->offset;
> @@ -348,7 +336,6 @@ void *dma_pool_alloc(struct dma_pool *pool, gfp_t mem_flags,
>  #ifdef	DMAPOOL_DEBUG
>  	memset(retval, POOL_POISON_ALLOCATED, pool->size);
>  #endif
> - done:
>  	spin_unlock_irqrestore(&pool->lock, flags);
>  	return retval;
>  }
> -- 
> 1.7.9.5
> 




[PATCH] mm: dmapool: use provided gfp flags for all dma_alloc_coherent() calls

2012-11-07 Thread Marek Szyprowski
dmapool always calls dma_alloc_coherent() with GFP_ATOMIC flag, regardless of
the flags provided by the caller. This causes excessive pruning of
emergency memory pools without any good reason. This patch changes the code
to correctly use gfp flags provided by the dmapool caller. This should
solve the dmapool usage on ARM architecture, where GFP_ATOMIC DMA
allocations can be served only from the special, very limited memory pool.

Reported-by: Soren Moch 
Reported-by: Thomas Petazzoni 
Signed-off-by: Marek Szyprowski 
---
 mm/dmapool.c |   27 +++++++--------------------
 1 file changed, 7 insertions(+), 20 deletions(-)

diff --git a/mm/dmapool.c b/mm/dmapool.c
index c5ab33b..86de9b2 100644
--- a/mm/dmapool.c
+++ b/mm/dmapool.c
@@ -62,8 +62,6 @@ struct dma_page {	/* cacheable header for 'allocation' bytes */
 	unsigned int offset;
 };
 
-#define	POOL_TIMEOUT_JIFFIES	((100 /* msec */ * HZ) / 1000)
-
 static DEFINE_MUTEX(pools_lock);
 
 static ssize_t
@@ -227,7 +225,6 @@ static struct dma_page *pool_alloc_page(struct dma_pool *pool, gfp_t mem_flags)
 		memset(page->vaddr, POOL_POISON_FREED, pool->allocation);
 #endif
 		pool_initialise_page(pool, page);
-		list_add(&page->page_list, &pool->page_list);
 		page->in_use = 0;
 		page->offset = 0;
 	} else {
@@ -315,30 +312,21 @@ void *dma_pool_alloc(struct dma_pool *pool, gfp_t mem_flags,
 	might_sleep_if(mem_flags & __GFP_WAIT);
 
 	spin_lock_irqsave(&pool->lock, flags);
- restart:
 	list_for_each_entry(page, &pool->page_list, page_list) {
 		if (page->offset < pool->allocation)
 			goto ready;
 	}
-	page = pool_alloc_page(pool, GFP_ATOMIC);
-	if (!page) {
-		if (mem_flags & __GFP_WAIT) {
-			DECLARE_WAITQUEUE(wait, current);
 
-			__set_current_state(TASK_UNINTERRUPTIBLE);
-			__add_wait_queue(&pool->waitq, &wait);
-			spin_unlock_irqrestore(&pool->lock, flags);
+	/* pool_alloc_page() might sleep, so temporarily drop &pool->lock */
+	spin_unlock_irqrestore(&pool->lock, flags);
 
-			schedule_timeout(POOL_TIMEOUT_JIFFIES);
+	page = pool_alloc_page(pool, mem_flags);
+	if (!page)
+		return NULL;
 
-			spin_lock_irqsave(&pool->lock, flags);
-			__remove_wait_queue(&pool->waitq, &wait);
-			goto restart;
-		}
-		retval = NULL;
-		goto done;
-	}
+	spin_lock_irqsave(&pool->lock, flags);
 
+	list_add(&page->page_list, &pool->page_list);
  ready:
 	page->in_use++;
 	offset = page->offset;
@@ -348,7 +336,6 @@ void *dma_pool_alloc(struct dma_pool *pool, gfp_t mem_flags,
 #ifdef	DMAPOOL_DEBUG
 	memset(retval, POOL_POISON_ALLOCATED, pool->size);
 #endif
- done:
 	spin_unlock_irqrestore(&pool->lock, flags);
 	return retval;
 }
-- 
1.7.9.5

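
[Editor's note: a minimal user-space sketch of the locking pattern the patch adopts in dma_pool_alloc(). All names here (pool_lock, pool_grow, the toy struct pool) are hypothetical stand-ins, not kernel APIs: instead of allocating under the spinlock with GFP_ATOMIC and parking __GFP_WAIT callers on a waitqueue, the new code drops the lock around the possibly-sleeping allocation, then re-takes it to publish the new page on the pool's list.]

```c
#include <assert.h>
#include <stdlib.h>

struct page { struct page *next; };
struct pool { int lock_held; struct page *pages; };

static void pool_lock(struct pool *p)   { p->lock_held = 1; }
static void pool_unlock(struct pool *p) { p->lock_held = 0; }

/* stands in for pool_alloc_page() calling dma_alloc_coherent(..., mem_flags):
 * with sleeping gfp flags it may block, so it must never run under the lock */
static struct page *pool_alloc_page(struct pool *pool)
{
	assert(!pool->lock_held);	/* sleeping under a spinlock would be a bug */
	return calloc(1, sizeof(struct page));
}

static struct page *pool_grow(struct pool *pool)
{
	struct page *page;

	pool_lock(pool);
	/* ... search pool->pages for a page with free space, as in the patch ... */
	pool_unlock(pool);		/* drop the lock: the allocation may sleep */

	page = pool_alloc_page(pool);
	if (!page)
		return NULL;		/* no waitqueue/timeout retry loop any more */

	pool_lock(pool);		/* re-take the lock to publish the page */
	page->next = pool->pages;
	pool->pages = page;
	pool_unlock(pool);
	return page;
}
```

The list_add() moving out of pool_alloc_page() in the diff corresponds to the publish step here: the page is only linked into the list after the lock is re-taken, so concurrent allocators never observe a half-initialised page.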

