Re: [PATCH v2 00/21] Refine memblock API

2019-10-02 Thread Adam Ford
On Wed, Oct 2, 2019 at 2:36 AM Mike Rapoport  wrote:
>
> Hi Adam,
>
> On Tue, Oct 01, 2019 at 07:14:13PM -0500, Adam Ford wrote:
> > On Sun, Sep 29, 2019 at 8:33 AM Adam Ford  wrote:
> > >
> > > I am attaching two logs.  I know the mailing lists will be unhappy, but
> > > I don't want to spam a bunch of logs through the mailing list.
> > > The two logs show the differences between the working and non-working
> > > imx6q 3D accelerator when trying to run a simple glmark2-es2-drm demo.
> > >
> > > The only change between them is the 2 line code change you suggested.
> > >
> > > In both cases, I have cma=128M set in my bootargs.  Historically this
> > > has been sufficient, but cma=256M has not made a difference.
> > >
> >
> > Mike, any suggestions on how to move forward?
> > I was hoping to get the fixes tested and pushed before 5.4 is released,
> > if at all possible.
>
> I have a fix (below) that kinda restores the original behaviour, but I
> still would like to double-check to make sure it's not a band-aid and I
> haven't missed the actual root cause.
>
> Can you please send me your device tree definition and the output of
>
> cat /sys/kernel/debug/memblock/memory
>
> and
>
> cat /sys/kernel/debug/memblock/reserved
>
> Thanks!
>

Before the patch:

# cat /sys/kernel/debug/memblock/memory
   0: 0x10000000..0x8fffffff
# cat /sys/kernel/debug/memblock/reserved
   0: 0x10004000..0x10007fff
   1: 0x10100000..0x11ab141f
   2: 0x1fff1000..0x1fffcfff
   3: 0x2ee40000..0x2ef53fff
   4: 0x2ef56940..0x2ef56c43
   5: 0x2ef56c48..0x2fffefff
   6: 0x2ffff0c0..0x2ffff4d8
   7: 0x2ffff500..0x2ffff55f
   8: 0x2ffff580..0x2ffff703
   9: 0x2ffff740..0x2ffff918
  10: 0x2ffff940..0x2ffff9cf
  11: 0x2ffffa00..0x2ffffa0f
  12: 0x2ffffa40..0x2ffffa43
  13: 0x2ffffa80..0x2ffffad5
  14: 0x2ffffb00..0x2ffffb55
  15: 0x2ffffb80..0x2ffffbd5
  16: 0x2ffffc00..0x2ffffc4e
  17: 0x2ffffc50..0x2ffffc6a
  18: 0x2ffffc6c..0x2ffffce6
  19: 0x2ffffce8..0x2ffffd02
  20: 0x2ffffd04..0x2ffffd1e
  21: 0x2ffffd20..0x2ffffd3a
  22: 0x2ffffd3c..0x2ffffd56
  23: 0x2ffffd58..0x2ffffe30
  24: 0x2ffffe34..0x2ffffe4c
  25: 0x2ffffe50..0x2ffffe68
  26: 0x2ffffe6c..0x2ffffe84
  27: 0x2ffffe88..0x2ffffea0
  28: 0x2ffffea4..0x2ffffebc
  29: 0x2ffffec0..0x2ffffedf
  30: 0x2ffffee4..0x2ffffefc
  31: 0x2fffff00..0x2fffff13
  32: 0x2fffff28..0x2fffff4b
  33: 0x2fffff50..0x2fffff84
  34: 0x2fffff88..0x3fffffff


After the patch:
# cat /sys/kernel/debug/memblock/memory
   0: 0x10000000..0x8fffffff
# cat /sys/kernel/debug/memblock/reserved
   0: 0x10004000..0x10007fff
   1: 0x1010..0x11ab141f
   2: 0x1fff1000..0x1fffcfff
   3: 0x3eec0000..0x3efd3fff
   4: 0x3efd6940..0x3efd6c43
   5: 0x3efd6c48..0x3fffbfff
   6: 0x3fffc0c0..0x3fffc4d8
   7: 0x3fffc500..0x3fffc55f
   8: 0x3fffc580..0x3fffc703
   9: 0x3fffc740..0x3fffc918
  10: 0x3fffc940..0x3fffc9cf
  11: 0x3fffca00..0x3fffca0f
  12: 0x3fffca40..0x3fffca43
  13: 0x3fffca80..0x3fffca83
  14: 0x3fffcac0..0x3fffcb15
  15: 0x3fffcb40..0x3fffcb95
  16: 0x3fffcbc0..0x3fffcc15
  17: 0x3fffcc28..0x3fffcc72
  18: 0x3fffcc74..0x3fffcc8e
  19: 0x3fffcc90..0x3fffcd0a
  20: 0x3fffcd0c..0x3fffcd26
  21: 0x3fffcd28..0x3fffcd42
  22: 0x3fffcd44..0x3fffcd5e
  23: 0x3fffcd60..0x3fffcd7a
  24: 0x3fffcd7c..0x3fffce54
  25: 0x3fffce58..0x3fffce70
  26: 0x3fffce74..0x3fffce8c
  27: 0x3fffce90..0x3fffcea8
  28: 0x3fffceac..0x3fffcec4
  29: 0x3fffcec8..0x3fffcee0
  30: 0x3fffcee4..0x3fffcefc
  31: 0x3fffcf00..0x3fffcf1f
  32: 0x3fffcf28..0x3fffcf53
  33: 0x3fffcf68..0x3fffcf8b
  34: 0x3fffcf90..0x3fffcfac
  35: 0x3fffcfb0..0x3fffffff
  36: 0x80000000..0x8fffffff

> From 06529f861772b7dea2912fc2245debe4690139b8 Mon Sep 17 00:00:00 2001
> From: Mike Rapoport 
> Date: Wed, 2 Oct 2019 10:14:17 +0300
> Subject: [PATCH] mm: memblock: do not enforce current limit for memblock_phys*
>  family
>
> Until commit 92d12f9544b7 ("memblock: refactor internal allocation
> functions"), the maximal address for memblock allocations was forced to
> memblock.current_limit only for the allocation functions that return a
> virtual address. The changes introduced by that commit moved the limit
> enforcement into the allocation core and, as a result, the allocation
> functions that return a physical address also started to limit allocations
> to memblock.current_limit.
>
> This caused breakage of etnaviv GPU driver:
>
> [3.682347] etnaviv etnaviv: bound 130000.gpu (ops gpu_ops)
> [3.688669] etnaviv etnaviv: bound 134000.gpu (ops gpu_ops)
> [3.695099] etnaviv etnaviv: bound 2204000.gpu (ops gpu_ops)
> [3.700800] etnaviv-gpu 130000.gpu: model: GC2000, revision: 5108
> [3.723013] etnaviv-gpu 130000.gpu: command buffer outside valid memory window
> [3.731308] etnaviv-gpu 134000.gpu: model: GC320, revision: 5007
> [3.752437] etnaviv-gpu 134000.gpu: command buffer outside valid memory window
> [3.760583] etnaviv-gpu 2204000.gpu: model: GC355, revision: 1215
> [3.766766] etnaviv-gpu 2204000.gpu: Ignoring GPU with VG and FE2.0
>
> Restore the behaviour of the memblock_phys* family so that these functions
> will not enforce memblock.current_limit.

Re: [PATCH v2 00/21] Refine memblock API

2019-10-02 Thread Mike Rapoport
Hi Adam,

On Tue, Oct 01, 2019 at 07:14:13PM -0500, Adam Ford wrote:
> On Sun, Sep 29, 2019 at 8:33 AM Adam Ford  wrote:
> >
> > I am attaching two logs.  I know the mailing lists will be unhappy, but
> > I don't want to spam a bunch of logs through the mailing list.
> > The two logs show the differences between the working and non-working
> > imx6q 3D accelerator when trying to run a simple glmark2-es2-drm demo.
> >
> > The only change between them is the 2 line code change you suggested.
> >
> > In both cases, I have cma=128M set in my bootargs.  Historically this
> > has been sufficient, but cma=256M has not made a difference.
> >
> 
> Mike, any suggestions on how to move forward?
> I was hoping to get the fixes tested and pushed before 5.4 is released,
> if at all possible.

I have a fix (below) that kinda restores the original behaviour, but I
still would like to double-check to make sure it's not a band-aid and I
haven't missed the actual root cause.

Can you please send me your device tree definition and the output of 

cat /sys/kernel/debug/memblock/memory

and 

cat /sys/kernel/debug/memblock/reserved

Thanks!

From 06529f861772b7dea2912fc2245debe4690139b8 Mon Sep 17 00:00:00 2001
From: Mike Rapoport 
Date: Wed, 2 Oct 2019 10:14:17 +0300
Subject: [PATCH] mm: memblock: do not enforce current limit for memblock_phys*
 family

Until commit 92d12f9544b7 ("memblock: refactor internal allocation
functions"), the maximal address for memblock allocations was forced to
memblock.current_limit only for the allocation functions that return a
virtual address. The changes introduced by that commit moved the limit
enforcement into the allocation core and, as a result, the allocation
functions that return a physical address also started to limit allocations
to memblock.current_limit.

This caused breakage of etnaviv GPU driver:

[3.682347] etnaviv etnaviv: bound 130000.gpu (ops gpu_ops)
[3.688669] etnaviv etnaviv: bound 134000.gpu (ops gpu_ops)
[3.695099] etnaviv etnaviv: bound 2204000.gpu (ops gpu_ops)
[3.700800] etnaviv-gpu 130000.gpu: model: GC2000, revision: 5108
[3.723013] etnaviv-gpu 130000.gpu: command buffer outside valid memory window
[3.731308] etnaviv-gpu 134000.gpu: model: GC320, revision: 5007
[3.752437] etnaviv-gpu 134000.gpu: command buffer outside valid memory window
[3.760583] etnaviv-gpu 2204000.gpu: model: GC355, revision: 1215
[3.766766] etnaviv-gpu 2204000.gpu: Ignoring GPU with VG and FE2.0

Restore the behaviour of the memblock_phys* family so that these functions
will not enforce memblock.current_limit.

Fixes: 92d12f9544b7 ("memblock: refactor internal allocation functions")
Reported-by: Adam Ford 
Signed-off-by: Mike Rapoport 
---
 mm/memblock.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/mm/memblock.c b/mm/memblock.c
index 7d4f61a..c4b16ca 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -1356,9 +1356,6 @@ static phys_addr_t __init memblock_alloc_range_nid(phys_addr_t size,
 		align = SMP_CACHE_BYTES;
 	}
 
-	if (end > memblock.current_limit)
-		end = memblock.current_limit;
-
 again:
 	found = memblock_find_in_range_node(size, align, start, end, nid,
 					    flags);
@@ -1469,6 +1466,9 @@ static void * __init memblock_alloc_internal(
 	if (WARN_ON_ONCE(slab_is_available()))
 		return kzalloc_node(size, GFP_NOWAIT, nid);
 
+	if (max_addr > memblock.current_limit)
+		max_addr = memblock.current_limit;
+
 	alloc = memblock_alloc_range_nid(size, align, min_addr, max_addr, nid);
 
 	/* retry allocation without lower limit */
-- 
2.7.4
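
To make the restored semantics concrete, here is a small userspace toy model
(assumed names and example addresses, not the real memblock internals): only
the virtual-address path clamps its upper search bound to current_limit,
while the phys path may range over all of memory. That matches the dumps
above, where the large CMA reservation moves from below the lowmem limit to
the top of RAM once the clamp is lifted for phys allocations.

/* toy_memblock.c -- toy model of the clamping split; build with: cc toy_memblock.c */
#include <stdio.h>
#include <stdint.h>

typedef uint64_t phys_addr_t;

static const phys_addr_t current_limit = 0x40000000; /* e.g. top of lowmem */
static const phys_addr_t mem_end       = 0x90000000; /* e.g. top of DRAM */

/* Top-down search, standing in for memblock_find_in_range_node(). */
static phys_addr_t find_in_range(phys_addr_t size, phys_addr_t end)
{
	if (end > mem_end)
		end = mem_end;
	return end - size;	/* highest fit; no reservation tracking here */
}

/* memblock_phys_alloc()-like path: no current_limit clamp after this fix. */
static phys_addr_t phys_alloc(phys_addr_t size)
{
	return find_in_range(size, mem_end);
}

/* memblock_alloc()-like path: clamp so the result stays mappable. */
static phys_addr_t virt_alloc(phys_addr_t size)
{
	phys_addr_t end = mem_end;

	if (end > current_limit)
		end = current_limit;
	return find_in_range(size, end);
}

int main(void)
{
	printf("phys-style alloc lands at 0x%llx\n",
	       (unsigned long long)phys_alloc(0x1000));
	printf("virt-style alloc lands at 0x%llx\n",
	       (unsigned long long)virt_alloc(0x1000));
	return 0;
}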

 
> > adam
> >
> > On Sat, Sep 28, 2019 at 2:33 AM Mike Rapoport  wrote:
> > >
> > > On Thu, Sep 26, 2019 at 02:35:53PM -0500, Adam Ford wrote:
> > > > On Thu, Sep 26, 2019 at 11:04 AM Mike Rapoport  
> > > > wrote:
> > > > >
> > > > > Hi,
> > > > >
> > > > > On Thu, Sep 26, 2019 at 08:09:52AM -0500, Adam Ford wrote:
> > > > > > On Wed, Sep 25, 2019 at 10:17 AM Fabio Estevam  
> > > > > > wrote:
> > > > > > >
> > > > > > > On Wed, Sep 25, 2019 at 9:17 AM Adam Ford  
> > > > > > > wrote:
> > > > > > >
> > > > > > > > I tried cma=256M and noticed the cma dump at the beginning
> > > > > > > > didn't change.  Do we need to set up a reserved-memory node like
> > > > > > > > imx6ul-ccimx6ulsom.dtsi did?
> > > > > > >
> > > > > > > I don't think so.
> > > > > > >
> > > > > > > Were you able to identify the exact commit that caused such a
> > > > > > > regression?
> > > > > >
> > > > > > I was able to narrow it down to commit 92d12f9544b7 ("memblock: refactor
> > > > > > internal allocation functions") as the change that caused the regression
> > > > > > with Etnaviv.
> > > > >
> > > > >
> > > > > Can you please test with this change:
> > > > >
> > > >
> > > > That appears to have fixed my issue.  I am not sure what the impact
> > > > is, but is this a safe option?
> > >
> > > It's not really a fix; I just wanted to see how exactly 92d12f9544b7
> > > ("memblock: refactor internal allocation functions") broke your setup.

Re: [PATCH v2 00/21] Refine memblock API

2019-10-01 Thread Adam Ford
On Sun, Sep 29, 2019 at 8:33 AM Adam Ford  wrote:
>
> I am attaching two logs.  I know the mailing lists will be unhappy, but
> I don't want to spam a bunch of logs through the mailing list.
> The two logs show the differences between the working and non-working
> imx6q 3D accelerator when trying to run a simple glmark2-es2-drm demo.
>
> The only change between them is the 2 line code change you suggested.
>
> In both cases, I have cma=128M set in my bootargs.  Historically this
> has been sufficient, but cma=256M has not made a difference.
>

Mike, any suggestions on how to move forward?
I was hoping to get the fixes tested and pushed before 5.4 is released,
if at all possible.

> adam
>
> On Sat, Sep 28, 2019 at 2:33 AM Mike Rapoport  wrote:
> >
> > On Thu, Sep 26, 2019 at 02:35:53PM -0500, Adam Ford wrote:
> > > On Thu, Sep 26, 2019 at 11:04 AM Mike Rapoport  wrote:
> > > >
> > > > Hi,
> > > >
> > > > On Thu, Sep 26, 2019 at 08:09:52AM -0500, Adam Ford wrote:
> > > > > On Wed, Sep 25, 2019 at 10:17 AM Fabio Estevam  
> > > > > wrote:
> > > > > >
> > > > > > On Wed, Sep 25, 2019 at 9:17 AM Adam Ford  
> > > > > > wrote:
> > > > > >
> > > > > > > I tried cma=256M and noticed the cma dump at the beginning didn't
> > > > > > > change.  Do we need to set up a reserved-memory node like
> > > > > > > imx6ul-ccimx6ulsom.dtsi did?
> > > > > >
> > > > > > I don't think so.
> > > > > >
> > > > > > Were you able to identify the exact commit that caused such a
> > > > > > regression?
> > > > >
> > > > > I was able to narrow it down to commit 92d12f9544b7 ("memblock: refactor
> > > > > internal allocation functions") as the change that caused the regression
> > > > > with Etnaviv.
> > > >
> > > >
> > > > Can you please test with this change:
> > > >
> > >
> > > That appears to have fixed my issue.  I am not sure what the impact
> > > is, but is this a safe option?
> >
> > It's not really a fix; I just wanted to see how exactly 92d12f9544b7
> > ("memblock: refactor internal allocation functions") broke your setup.
> >
> > Can you share the dts you are using and the full kernel log?
> >
> > > adam
> > >
> > > > diff --git a/mm/memblock.c b/mm/memblock.c
> > > > index 7d4f61a..1f5a0eb 100644
> > > > --- a/mm/memblock.c
> > > > +++ b/mm/memblock.c
> > > > @@ -1356,9 +1356,6 @@ static phys_addr_t __init memblock_alloc_range_nid(phys_addr_t size,
> > > >  		align = SMP_CACHE_BYTES;
> > > >  	}
> > > > 
> > > > -	if (end > memblock.current_limit)
> > > > -		end = memblock.current_limit;
> > > > -
> > > >  again:
> > > >  	found = memblock_find_in_range_node(size, align, start, end, nid,
> > > >  					    flags);
> > > >
> > > > > I also noticed that if I create a reserved-memory node as was done in
> > > > > imx6ul-ccimx6ulsom.dtsi, the 3D seems to work again, but without it I
> > > > > was getting errors with or without 'cma=256M'.
> > > > > I don't have a problem using the reserved memory, but I am not
> > > > > sure what the amount should be.  I know that for 1080p video decoding
> > > > > I have historically used cma=128M, but with the 3D also needing some
> > > > > memory allocation, is that enough or should I use 256M?
> > > > >
> > > > > adam
> > > >
> > > > --
> > > > Sincerely yours,
> > > > Mike.
> > > >
> >
> > --
> > Sincerely yours,
> > Mike.
> >


Re: [PATCH v2 00/21] Refine memblock API

2019-09-29 Thread Adam Ford
I am attaching two logs.  I know the mailing lists will be unhappy, but
I don't want to spam a bunch of logs through the mailing list.
The two logs show the differences between the working and non-working
imx6q 3D accelerator when trying to run a simple glmark2-es2-drm demo.

The only change between them is the 2 line code change you suggested.

In both cases, I have cma=128M set in my bootargs.  Historically this
has been sufficient, but cma=256M has not made a difference.

adam

On Sat, Sep 28, 2019 at 2:33 AM Mike Rapoport  wrote:
>
> On Thu, Sep 26, 2019 at 02:35:53PM -0500, Adam Ford wrote:
> > On Thu, Sep 26, 2019 at 11:04 AM Mike Rapoport  wrote:
> > >
> > > Hi,
> > >
> > > On Thu, Sep 26, 2019 at 08:09:52AM -0500, Adam Ford wrote:
> > > > On Wed, Sep 25, 2019 at 10:17 AM Fabio Estevam  
> > > > wrote:
> > > > >
> > > > > On Wed, Sep 25, 2019 at 9:17 AM Adam Ford  wrote:
> > > > >
> > > > > > I tried cma=256M and noticed the cma dump at the beginning didn't
> > > > > > change.  Do we need to set up a reserved-memory node like
> > > > > > imx6ul-ccimx6ulsom.dtsi did?
> > > > >
> > > > > I don't think so.
> > > > >
> > > > > Were you able to identify the exact commit that caused such a
> > > > > regression?
> > > >
> > > > I was able to narrow it down to commit 92d12f9544b7 ("memblock: refactor
> > > > internal allocation functions") as the change that caused the regression
> > > > with Etnaviv.
> > >
> > >
> > > Can you please test with this change:
> > >
> >
> > That appears to have fixed my issue.  I am not sure what the impact
> > is, but is this a safe option?
>
> It's not really a fix; I just wanted to see how exactly 92d12f9544b7
> ("memblock: refactor internal allocation functions") broke your setup.
>
> Can you share the dts you are using and the full kernel log?
>
> > adam
> >
> > > diff --git a/mm/memblock.c b/mm/memblock.c
> > > index 7d4f61a..1f5a0eb 100644
> > > --- a/mm/memblock.c
> > > +++ b/mm/memblock.c
> > > @@ -1356,9 +1356,6 @@ static phys_addr_t __init memblock_alloc_range_nid(phys_addr_t size,
> > >  		align = SMP_CACHE_BYTES;
> > >  	}
> > > 
> > > -	if (end > memblock.current_limit)
> > > -		end = memblock.current_limit;
> > > -
> > >  again:
> > >  	found = memblock_find_in_range_node(size, align, start, end, nid,
> > >  					    flags);
> > >
> > > > I also noticed that if I create a reserved-memory node as was done in
> > > > imx6ul-ccimx6ulsom.dtsi, the 3D seems to work again, but without it I
> > > > was getting errors with or without 'cma=256M'.
> > > > I don't have a problem using the reserved memory, but I am not
> > > > sure what the amount should be.  I know that for 1080p video decoding
> > > > I have historically used cma=128M, but with the 3D also needing some
> > > > memory allocation, is that enough or should I use 256M?
> > > >
> > > > adam
> > >
> > > --
> > > Sincerely yours,
> > > Mike.
> > >
>
> --
> Sincerely yours,
> Mike.
>
Starting kernel ...

[0.000000] Booting Linux on physical CPU 0x0
[0.000000] Linux version 5.3.1-dirty (aford@aford-IdeaCentre-A730) (gcc version 8.3.0 (Buildroot 2019.02.5-00192-gcd72d5bf57-dirty)) #2 SMP Sun Sep 29 08:26:09 CDT 2019
[0.000000] CPU: ARMv7 Processor [412fc09a] revision 10 (ARMv7), cr=10c5387d
[0.000000] CPU: PIPT / VIPT nonaliasing data cache, VIPT aliasing instruction cache
[0.000000] OF: fdt: Machine model: Logic PD i.MX6QD SOM-M3
[0.000000] printk: debug: ignoring loglevel setting.
[0.000000] Memory policy: Data cache writealloc
[0.000000] cma: Reserved 128 MiB at 0x88000000
[0.000000] On node 0 totalpages: 524288
[0.000000]   Normal zone: 1536 pages used for memmap
[0.000000]   Normal zone: 0 pages reserved
[0.000000]   Normal zone: 196608 pages, LIFO batch:63
[0.000000]   HighMem zone: 327680 pages, LIFO batch:63
[0.000000] percpu: Embedded 21 pages/cpu s54632 r8192 d23192 u86016
[0.000000] pcpu-alloc: s54632 r8192 d23192 u86016 alloc=21*4096
[0.000000] pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 
[0.000000] Built 1 zonelists, mobility grouping on.  Total pages: 522752
[0.000000] Kernel command line: console=ttymxc0,115200 root=PARTUUID=60f4e103-02 rootwait rw ignore_loglevel cma=128M
[0.000000] Dentry cache hash table entries: 131072 (order: 7, 524288 bytes, linear)
[0.000000] Inode-cache hash table entries: 65536 (order: 6, 262144 bytes, linear)
[0.000000] mem auto-init: stack:off, heap alloc:off, heap free:off
[0.000000] Memory: 1922048K/2097152K available (12288K kernel code, 956K rwdata, 4252K rodata, 1024K init, 6920K bss, 44032K reserved, 131072K cma-reserved, 1179648K highmem)
[0.000000] SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
[0.000000] Running RCU self tests
[0.000000] rcu: Hierarchical RCU implementation.
[0.000000] rcu: RCU event tracing is enabled.
[0.000000] rcu: RCU lockdep checking is enabled.

Re: [PATCH v2 00/21] Refine memblock API

2019-09-28 Thread Mike Rapoport
On Thu, Sep 26, 2019 at 02:35:53PM -0500, Adam Ford wrote:
> On Thu, Sep 26, 2019 at 11:04 AM Mike Rapoport  wrote:
> >
> > Hi,
> >
> > On Thu, Sep 26, 2019 at 08:09:52AM -0500, Adam Ford wrote:
> > > On Wed, Sep 25, 2019 at 10:17 AM Fabio Estevam  wrote:
> > > >
> > > > On Wed, Sep 25, 2019 at 9:17 AM Adam Ford  wrote:
> > > >
> > > > > I tried cma=256M and noticed the cma dump at the beginning didn't
> > > > > change.  Do we need to set up a reserved-memory node like
> > > > > imx6ul-ccimx6ulsom.dtsi did?
> > > >
> > > > I don't think so.
> > > >
> > > > Were you able to identify the exact commit that caused such a
> > > > regression?
> > >
> > > I was able to narrow it down to commit 92d12f9544b7 ("memblock: refactor
> > > internal allocation functions") as the change that caused the regression
> > > with Etnaviv.
> >
> >
> > Can you please test with this change:
> >
> 
> That appears to have fixed my issue.  I am not sure what the impact
> is, but is this a safe option?

It's not really a fix; I just wanted to see how exactly 92d12f9544b7
("memblock: refactor internal allocation functions") broke your setup.

Can you share the dts you are using and the full kernel log?
 
> adam
> 
> > diff --git a/mm/memblock.c b/mm/memblock.c
> > index 7d4f61a..1f5a0eb 100644
> > --- a/mm/memblock.c
> > +++ b/mm/memblock.c
> > @@ -1356,9 +1356,6 @@ static phys_addr_t __init memblock_alloc_range_nid(phys_addr_t size,
> >  		align = SMP_CACHE_BYTES;
> >  	}
> > 
> > -	if (end > memblock.current_limit)
> > -		end = memblock.current_limit;
> > -
> >  again:
> >  	found = memblock_find_in_range_node(size, align, start, end, nid,
> >  					    flags);
> >
> > > I also noticed that if I create a reserved-memory node as was done in
> > > imx6ul-ccimx6ulsom.dtsi, the 3D seems to work again, but without it I
> > > was getting errors with or without 'cma=256M'.
> > > I don't have a problem using the reserved memory, but I am not
> > > sure what the amount should be.  I know that for 1080p video decoding
> > > I have historically used cma=128M, but with the 3D also needing some
> > > memory allocation, is that enough or should I use 256M?
> > >
> > > adam
> >
> > --
> > Sincerely yours,
> > Mike.
> >

-- 
Sincerely yours,
Mike.



Re: [PATCH v2 00/21] Refine memblock API

2019-09-26 Thread Adam Ford
On Thu, Sep 26, 2019 at 11:04 AM Mike Rapoport  wrote:
>
> Hi,
>
> On Thu, Sep 26, 2019 at 08:09:52AM -0500, Adam Ford wrote:
> > On Wed, Sep 25, 2019 at 10:17 AM Fabio Estevam  wrote:
> > >
> > > On Wed, Sep 25, 2019 at 9:17 AM Adam Ford  wrote:
> > >
> > > > I tried cma=256M and noticed the cma dump at the beginning didn't
> > > > change.  Do we need to set up a reserved-memory node like
> > > > imx6ul-ccimx6ulsom.dtsi did?
> > >
> > > I don't think so.
> > >
> > > Were you able to identify the exact commit that caused such a
> > > regression?
> >
> > I was able to narrow it down to commit 92d12f9544b7 ("memblock: refactor
> > internal allocation functions") as the change that caused the regression
> > with Etnaviv.
>
>
> Can you please test with this change:
>

That appears to have fixed my issue.  I am not sure what the impact
is, but is this a safe option?


adam

> diff --git a/mm/memblock.c b/mm/memblock.c
> index 7d4f61a..1f5a0eb 100644
> --- a/mm/memblock.c
> +++ b/mm/memblock.c
> @@ -1356,9 +1356,6 @@ static phys_addr_t __init memblock_alloc_range_nid(phys_addr_t size,
>  		align = SMP_CACHE_BYTES;
>  	}
> 
> -	if (end > memblock.current_limit)
> -		end = memblock.current_limit;
> -
>  again:
>  	found = memblock_find_in_range_node(size, align, start, end, nid,
>  					    flags);
>
> > I also noticed that if I create a reserved-memory node as was done in
> > imx6ul-ccimx6ulsom.dtsi, the 3D seems to work again, but without it I
> > was getting errors with or without 'cma=256M'.
> > I don't have a problem using the reserved memory, but I am not
> > sure what the amount should be.  I know that for 1080p video decoding
> > I have historically used cma=128M, but with the 3D also needing some
> > memory allocation, is that enough or should I use 256M?
> >
> > adam
>
> --
> Sincerely yours,
> Mike.
>


Re: [PATCH v2 00/21] Refine memblock API

2019-09-26 Thread Mike Rapoport
Hi,

On Thu, Sep 26, 2019 at 08:09:52AM -0500, Adam Ford wrote:
> On Wed, Sep 25, 2019 at 10:17 AM Fabio Estevam  wrote:
> >
> > On Wed, Sep 25, 2019 at 9:17 AM Adam Ford  wrote:
> >
> > > I tried cma=256M and noticed the cma dump at the beginning didn't
> > > change.  Do we need to set up a reserved-memory node like
> > > imx6ul-ccimx6ulsom.dtsi did?
> >
> > I don't think so.
> >
> > Were you able to identify the exact commit that caused such a
> > regression?
> 
> I was able to narrow it down to commit 92d12f9544b7 ("memblock: refactor
> internal allocation functions") as the change that caused the regression
> with Etnaviv.


Can you please test with this change:

diff --git a/mm/memblock.c b/mm/memblock.c
index 7d4f61a..1f5a0eb 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -1356,9 +1356,6 @@ static phys_addr_t __init memblock_alloc_range_nid(phys_addr_t size,
 		align = SMP_CACHE_BYTES;
 	}
 
-	if (end > memblock.current_limit)
-		end = memblock.current_limit;
-
 again:
 	found = memblock_find_in_range_node(size, align, start, end, nid,
 					    flags);
 
> I also noticed that if I create a reserved-memory node as was done in
> imx6ul-ccimx6ulsom.dtsi, the 3D seems to work again, but without it I
> was getting errors with or without 'cma=256M'.
> I don't have a problem using the reserved memory, but I am not
> sure what the amount should be.  I know that for 1080p video decoding
> I have historically used cma=128M, but with the 3D also needing some
> memory allocation, is that enough or should I use 256M?
> 
> adam

-- 
Sincerely yours,
Mike.



Re: [PATCH v2 00/21] Refine memblock API

2019-09-26 Thread Adam Ford
On Wed, Sep 25, 2019 at 10:17 AM Fabio Estevam  wrote:
>
> On Wed, Sep 25, 2019 at 9:17 AM Adam Ford  wrote:
>
> > I tried cma=256M and noticed the cma dump at the beginning didn't
> > change.  Do we need to set up a reserved-memory node like
> > imx6ul-ccimx6ulsom.dtsi did?
>
> I don't think so.
>
> Were you able to identify the exact commit that caused such a
> regression?

I was able to narrow it down to commit 92d12f9544b7 ("memblock: refactor
internal allocation functions") as the change that caused the regression
with Etnaviv.

I also noticed that if I create a reserved-memory node as was done in
imx6ul-ccimx6ulsom.dtsi, the 3D seems to work again, but without it I
was getting errors with or without 'cma=256M'.
I don't have a problem using the reserved memory, but I am not
sure what the amount should be.  I know that for 1080p video decoding
I have historically used cma=128M, but with the 3D also needing some
memory allocation, is that enough or should I use 256M?

adam
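
For reference, a reserved-memory CMA node in the spirit of the
imx6ul-ccimx6ulsom.dtsi node mentioned above might look like the sketch
below. The 256 MiB size and the alloc-ranges window are illustrative
assumptions rather than values from this thread; alloc-ranges pins the pool
low enough in DRAM that a GPU with a limited linear window can still reach it:

	reserved-memory {
		#address-cells = <1>;
		#size-cells = <1>;
		ranges;

		linux,cma {
			compatible = "shared-dma-pool";
			reusable;
			size = <0x10000000>;                    /* 256 MiB, placeholder */
			alloc-ranges = <0x10000000 0x30000000>; /* assumed DRAM base + window */
			linux,cma-default;
		};
	};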


Re: [PATCH v2 00/21] Refine memblock API

2019-09-25 Thread Fabio Estevam
On Wed, Sep 25, 2019 at 9:17 AM Adam Ford  wrote:

> I tried cma=256M and noticed the cma dump at the beginning didn't
> change.  Do we need to set up a reserved-memory node like
> imx6ul-ccimx6ulsom.dtsi did?

I don't think so.

Were you able to identify the exact commit that caused such a regression?


Re: [PATCH v2 00/21] Refine memblock API

2019-09-25 Thread Adam Ford
On Wed, Sep 25, 2019 at 7:12 AM Fabio Estevam  wrote:
>
> Hi Adam,
>
> On Wed, Sep 25, 2019 at 6:38 AM Adam Ford  wrote:
>
> > I know it's rather late, but this patch broke the Etnaviv 3D graphics
> > in my i.MX6Q.
> >
> > When I try to use the 3D, it returns some errors and the dmesg log
> > shows some memory allocation errors too:
> > [3.682347] etnaviv etnaviv: bound 130000.gpu (ops gpu_ops)
> > [3.688669] etnaviv etnaviv: bound 134000.gpu (ops gpu_ops)
> > [3.695099] etnaviv etnaviv: bound 2204000.gpu (ops gpu_ops)
> > [3.700800] etnaviv-gpu 130000.gpu: model: GC2000, revision: 5108
> > [3.723013] etnaviv-gpu 130000.gpu: command buffer outside valid memory window
> > [3.731308] etnaviv-gpu 134000.gpu: model: GC320, revision: 5007
> > [3.752437] etnaviv-gpu 134000.gpu: command buffer outside valid memory window
>
> This looks similar to what was reported at:
> https://bugs.freedesktop.org/show_bug.cgi?id=111789
>
> Does it help if you use the same suggestion and pass cma=256M in your
> kernel command line?

I tried cma=256M and noticed the cma dump at the beginning didn't
change.  Do we need to set up a reserved-memory node like
imx6ul-ccimx6ulsom.dtsi did?

adam


Re: [PATCH v2 00/21] Refine memblock API

2019-09-25 Thread Fabio Estevam
Hi Adam,

On Wed, Sep 25, 2019 at 6:38 AM Adam Ford  wrote:

> I know it's rather late, but this patch broke the Etnaviv 3D graphics
> in my i.MX6Q.
>
> When I try to use the 3D, it returns some errors and the dmesg log
> shows some memory allocation errors too:
> [3.682347] etnaviv etnaviv: bound 130000.gpu (ops gpu_ops)
> [3.688669] etnaviv etnaviv: bound 134000.gpu (ops gpu_ops)
> [3.695099] etnaviv etnaviv: bound 2204000.gpu (ops gpu_ops)
> [3.700800] etnaviv-gpu 130000.gpu: model: GC2000, revision: 5108
> [3.723013] etnaviv-gpu 130000.gpu: command buffer outside valid memory window
> [3.731308] etnaviv-gpu 134000.gpu: model: GC320, revision: 5007
> [3.752437] etnaviv-gpu 134000.gpu: command buffer outside valid memory window

This looks similar to what was reported at:
https://bugs.freedesktop.org/show_bug.cgi?id=111789

Does it help if you use the same suggestion and pass cma=256M in your
kernel command line?


Re: [PATCH v2 00/21] Refine memblock API

2019-09-25 Thread Adam Ford
On Mon, Jan 21, 2019 at 2:05 AM Mike Rapoport  wrote:
>
> Hi,
>
> The current memblock API is quite extensive and, more annoyingly,
> duplicated. Apart from the low-level functions that search for a free
> memory region and mark it as reserved, memblock provides three (well,
> two and a half) sets of functions to allocate memory. There are several
> overlapping functions that return a physical address and there are
> functions that return a virtual address. Those that return the virtual
> address may also clear the allocated memory. And, on top of all that, some
> allocators panic and some return NULL in case of error.
>
> This set tries to reduce the mess and trim down the number of memblock
> allocation methods.
>
> Patches 1-10 consolidate the functions that return the physical address
> of the allocated memory.
>
> Patches 11-13 are some trivial cleanups
>
> Patches 14-19 add checks for the return value of memblock_alloc*() and
> panic in case of errors. Patches 14-18 include some minor refactoring
> for better readability of the resulting code, and patch 19 is a
> mechanical addition of
>
> if (!ptr)
> panic();
>
> after memblock_alloc*() calls.
>
> And, finally, patches 20 and 21 remove the panic() calls from memblock and
> drop the _nopanic variants.
>
> v2 changes:
> * replace some more %lu with %zu
> * remove panics where they are not needed in s390 and in printk
> * collect Acked-by and Reviewed-by.
>
>
> Christophe Leroy (1):
>   powerpc: use memblock functions returning virtual address
>
> Mike Rapoport (20):
>   openrisc: prefer memblock APIs returning virtual address
>   memblock: replace memblock_alloc_base(ANYWHERE) with memblock_phys_alloc
>   memblock: drop memblock_alloc_base_nid()
>   memblock: emphasize that memblock_alloc_range() returns a physical address
>   memblock: memblock_phys_alloc_try_nid(): don't panic
>   memblock: memblock_phys_alloc(): don't panic
>   memblock: drop __memblock_alloc_base()
>   memblock: drop memblock_alloc_base()
>   memblock: refactor internal allocation functions
>   memblock: make memblock_find_in_range_node() and choose_memblock_flags() static
>   arch: use memblock_alloc() instead of memblock_alloc_from(size, align, 0)
>   arch: don't memset(0) memory returned by memblock_alloc()
>   ia64: add checks for the return value of memblock_alloc*()
>   sparc: add checks for the return value of memblock_alloc*()
>   mm/percpu: add checks for the return value of memblock_alloc*()
>   init/main: add checks for the return value of memblock_alloc*()
>   swiotlb: add checks for the return value of memblock_alloc*()
>   treewide: add checks for the return value of memblock_alloc*()
>   memblock: memblock_alloc_try_nid: don't panic
>   memblock: drop memblock_alloc_*_nopanic() variants
>
I know it's rather late, but this patch broke the Etnaviv 3D graphics
in my i.MX6Q.

When I try to use the 3D, it returns some errors and the dmesg log
shows some memory allocation errors too:
[3.682347] etnaviv etnaviv: bound 130000.gpu (ops gpu_ops)
[3.688669] etnaviv etnaviv: bound 134000.gpu (ops gpu_ops)
[3.695099] etnaviv etnaviv: bound 2204000.gpu (ops gpu_ops)
[3.700800] etnaviv-gpu 130000.gpu: model: GC2000, revision: 5108
[3.723013] etnaviv-gpu 130000.gpu: command buffer outside valid memory window
[3.731308] etnaviv-gpu 134000.gpu: model: GC320, revision: 5007
[3.752437] etnaviv-gpu 134000.gpu: command buffer outside valid memory window
[3.760583] etnaviv-gpu 2204000.gpu: model: GC355, revision: 1215
[3.766766] etnaviv-gpu 2204000.gpu: Ignoring GPU with VG and FE2.0
[3.776131] [drm] Initialized etnaviv 1.2.0 20151214 for etnaviv on minor 0

# glmark2-es2-drm
Error creating gpu
Error: eglCreateWindowSurface failed with error: 0x3009
Error: eglCreateWindowSurface failed with error: 0x3009
Error: CanvasGeneric: Invalid EGL state
Error: main: Could not initialize canvas


Before this patch:

[3.691995] etnaviv etnaviv: bound 130000.gpu (ops gpu_ops)
[3.698356] etnaviv etnaviv: bound 134000.gpu (ops gpu_ops)
[3.704792] etnaviv etnaviv: bound 2204000.gpu (ops gpu_ops)
[3.710488] etnaviv-gpu 130000.gpu: model: GC2000, revision: 5108
[3.733649] etnaviv-gpu 134000.gpu: model: GC320, revision: 5007
[3.756115] etnaviv-gpu 2204000.gpu: model: GC355, revision: 1215
[3.762250] etnaviv-gpu 2204000.gpu: Ignoring GPU with VG and FE2.0
[3.771432] [drm] Initialized etnaviv 1.2.0 20151214 for etnaviv on minor 0

and the 3D demos work without this patch.

I don't know enough about the i.MX6 or the 3D accelerator to know how
to fix it.
I am hoping someone in the know might have some suggestions.

>  arch/alpha/kernel/core_cia.c  |   5 +-
>  arch/alpha/kernel/core_marvel.c   |   6 +
>  arch/alpha/kernel/pci-noop.c  |  13 +-
>  arch/alpha/kernel/pci.c   |  11 +-
>  arch/alpha/kernel/pci_iommu.c |  16 +-
>  arch/alpha/kernel/setup.c |   2 +-

[PATCH v2 00/21] Refine memblock API

2019-01-21 Thread Mike Rapoport
Hi,

The current memblock API is quite extensive and, more annoyingly,
duplicated. Apart from the low-level functions that search for a free
memory region and mark it as reserved, memblock provides three (well,
two and a half) sets of functions to allocate memory. There are several
overlapping functions that return a physical address and there are
functions that return a virtual address. Those that return the virtual
address may also clear the allocated memory. And, on top of all that, some
allocators panic and some return NULL in case of error.

This set tries to reduce the mess and trim down the number of memblock
allocation methods.

Patches 1-10 consolidate the functions that return the physical address
of the allocated memory.

Patches 11-13 are some trivial cleanups

Patches 14-19 add checks for the return value of memblock_alloc*() and
panic in case of errors. Patches 14-18 include some minor refactoring
for better readability of the resulting code, and patch 19 is a
mechanical addition of

if (!ptr)
panic();

after memblock_alloc*() calls.
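
As a hedged illustration (an assumed call site, not code taken from the
series), the convention at a caller then becomes:

	/* illustrative __init caller; assumes <linux/memblock.h> and <linux/init.h> */
	static void __init early_table_init(void)
	{
		size_t size = PAGE_SIZE;
		/* memblock_alloc() returns zeroed memory, or NULL on failure,
		 * rather than panicking internally. */
		void *table = memblock_alloc(size, SMP_CACHE_BYTES);

		if (!table)
			panic("%s: Failed to allocate %zu bytes\n", __func__, size);
	}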

And, finally, patches 20 and 21 remove the panic() calls from memblock and
drop the _nopanic variants.

v2 changes:
* replace some more %lu with %zu
* remove panics where they are not needed in s390 and in printk
* collect Acked-by and Reviewed-by.


Christophe Leroy (1):
  powerpc: use memblock functions returning virtual address

Mike Rapoport (20):
  openrisc: prefer memblock APIs returning virtual address
  memblock: replace memblock_alloc_base(ANYWHERE) with memblock_phys_alloc
  memblock: drop memblock_alloc_base_nid()
  memblock: emphasize that memblock_alloc_range() returns a physical address
  memblock: memblock_phys_alloc_try_nid(): don't panic
  memblock: memblock_phys_alloc(): don't panic
  memblock: drop __memblock_alloc_base()
  memblock: drop memblock_alloc_base()
  memblock: refactor internal allocation functions
  memblock: make memblock_find_in_range_node() and choose_memblock_flags() static
  arch: use memblock_alloc() instead of memblock_alloc_from(size, align, 0)
  arch: don't memset(0) memory returned by memblock_alloc()
  ia64: add checks for the return value of memblock_alloc*()
  sparc: add checks for the return value of memblock_alloc*()
  mm/percpu: add checks for the return value of memblock_alloc*()
  init/main: add checks for the return value of memblock_alloc*()
  swiotlb: add checks for the return value of memblock_alloc*()
  treewide: add checks for the return value of memblock_alloc*()
  memblock: memblock_alloc_try_nid: don't panic
  memblock: drop memblock_alloc_*_nopanic() variants
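
One hedged sketch of what a patch like "arch: don't memset(0) memory
returned by memblock_alloc()" amounts to at a call site (illustrative
caller, not a hunk from the series):

	/* before: callers cleared the buffer themselves */
	ptr = memblock_alloc(size, align);
	memset(ptr, 0, size);

	/* after: memblock_alloc() already returns zeroed memory,
	 * so the explicit memset() is redundant and is dropped */
	ptr = memblock_alloc(size, align);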

 arch/alpha/kernel/core_cia.c  |   5 +-
 arch/alpha/kernel/core_marvel.c   |   6 +
 arch/alpha/kernel/pci-noop.c  |  13 +-
 arch/alpha/kernel/pci.c   |  11 +-
 arch/alpha/kernel/pci_iommu.c |  16 +-
 arch/alpha/kernel/setup.c |   2 +-
 arch/arc/kernel/unwind.c  |   3 +-
 arch/arc/mm/highmem.c |   4 +
 arch/arm/kernel/setup.c   |   6 +
 arch/arm/mm/init.c|   6 +-
 arch/arm/mm/mmu.c |  14 +-
 arch/arm64/kernel/setup.c |   8 +-
 arch/arm64/mm/kasan_init.c|  10 ++
 arch/arm64/mm/mmu.c   |   2 +
 arch/arm64/mm/numa.c  |   4 +
 arch/c6x/mm/dma-coherent.c|   4 +
 arch/c6x/mm/init.c|   4 +-
 arch/csky/mm/highmem.c|   5 +
 arch/h8300/mm/init.c  |   4 +-
 arch/ia64/kernel/mca.c|  25 +--
 arch/ia64/mm/contig.c |   8 +-
 arch/ia64/mm/discontig.c  |   4 +
 arch/ia64/mm/init.c   |  38 -
 arch/ia64/mm/tlb.c|   6 +
 arch/ia64/sn/kernel/io_common.c   |   3 +
 arch/ia64/sn/kernel/setup.c   |  12 +-
 arch/m68k/atari/stram.c   |   4 +
 arch/m68k/mm/init.c   |   3 +
 arch/m68k/mm/mcfmmu.c |   7 +-
 arch/m68k/mm/motorola.c   |   9 ++
 arch/m68k/mm/sun3mmu.c|   6 +
 arch/m68k/sun3/sun3dvma.c |   3 +
 arch/microblaze/mm/init.c |  10 +-
 arch/mips/cavium-octeon/dma-octeon.c  |   3 +
 arch/mips/kernel/setup.c  |   3 +
 arch/mips/kernel/traps.c  |   5 +-
 arch/mips/mm/init.c   |   5 +
 arch/nds32/mm/init.c  |  12 ++
 arch/openrisc/mm/init.c   |   5 +-
 arch/openrisc/mm/ioremap.c|   8 +-
 arch/powerpc/kernel/dt_cpu_ftrs.c |   8 +-
 arch/powerpc/kernel/irq.c |   5 -
 arch/powerpc/kernel/paca.c|   6 +-
 arch/powerpc/kernel/pci_32.c  |   3 +
 arch/powerpc/kernel/prom.c|   5 +-
 arch/powerpc/kernel/rtas.c